A Novel Strategy to Reconstruct NDVI Time-Series with High Temporal Resolution from MODIS Multi-Temporal Composite Products
Vegetation indices (VIs) derived from satellite imagery play a vital role in monitoring land surface vegetation and its dynamics. Due to the excessive noise (e.g., cloud cover, atmospheric contamination) in daily VI data, temporal compositing methods are commonly used to produce composite data that minimize the negative influence of noise over a given compositing time interval. However, VI time series with high temporal resolution are preferred in many applications, such as vegetation phenology and land change detection. This study presents a novel strategy named the DAVIR-MUTCOP (DAily Vegetation Index Reconstruction based on MUlti-Temporal COmposite Products) method for normalized difference vegetation index (NDVI) time-series reconstruction with high temporal resolution. The core of the DAVIR-MUTCOP method is to combine the advantages of both original daily and temporally composite products, selecting more high-quality daily observations through the temporal variation of temporally corrected composite data. The DAVIR-MUTCOP method was applied to reconstruct high-quality NDVI time series using MODIS multi-temporal products in two study areas in the continental United States (CONUS), i.e., three field experimental sites near Mead, Nebraska from 2001 to 2012 and forty-six AmeriFlux sites evenly distributed across CONUS from 2006 to 2010. In these two study areas, the DAVIR-MUTCOP method was also compared to several commonly used methods, i.e., the Harmonic Analysis of Time-Series (HANTS) method using original daily observations, Savitzky–Golay (SG) filtering using daily observations with cloud mask products as auxiliary data, and SG filtering using temporally corrected composite data. The results showed that the DAVIR-MUTCOP method significantly improved the temporal resolution of the reconstructed NDVI time series. It performed the best in reconstructing NDVI time series across time and space (coefficient of determination R² = 0.93~0.94 between reconstructed NDVI and ground-observed LAI). The DAVIR-MUTCOP method presented the highest robustness and accuracy with changes of the filtering parameter (R² = 0.99~1.00, bias = 0.001, root mean square error (RMSE)
Introduction
Satellite-derived VIs have been widely used in monitoring vegetation conditions and dynamics at regional or global scales [1,2]. The satellite-derived normalized difference vegetation index (NDVI), calculated from the spectral reflectance of the near-infrared (NIR) and visible red bands, is one of the most widely used VIs [2][3][4]. Accurate NDVI data with high temporal (e.g., daily) and spatial resolution are preferred, and sometimes necessary, in many applications such as forest disturbance [5,6] and vegetation phenology detection [7,8].
The Moderate Resolution Imaging Spectroradiometer (MODIS), with high temporal and moderate spatial resolution, has been a key instrument aboard the NASA Terra and Aqua satellites, providing a long time series of global satellite observations since 2000 [9][10][11][12]. MODIS provides near-daily multispectral imagery of the entire earth surface in 36 spectral bands, including the visible red and near-infrared (NIR) bands at moderate spatial resolution (up to 250 m) [13]. Compared to the Advanced Very High Resolution Radiometer (AVHRR), the MODIS instrument has a higher spatial resolution and can better capture global dynamics and processes occurring on the land [13][14][15]. Compared to satellites with higher spatial resolution, such as Landsat and Sentinel, MODIS has a higher temporal resolution [16,17]. As a result, MODIS data play a vital role in a wide range of regional and global monitoring efforts of the earth system, considering their high temporal resolution, moderate spatial resolution, and long time series of continuous daily observations (20 years) [1,18]. However, noise in these products, including cloud cover, other atmospheric contamination, off-nadir viewing effects, and shadow effects, is the primary issue for accurate monitoring of daily vegetation dynamics [19]. Even after screening the data using the quality flag of the MODIS MOD09 product, 40% of the daily NDVI data were still potentially contaminated by residual sub-pixel clouds [19][20][21].
Considering the excessive noise remaining in the daily MODIS NDVI product, temporal compositing methods over 8- or 16-day intervals are commonly used to produce composite products with minimal cloud cover and atmospheric contamination [5,13]. Currently, MODIS provides daily, 8-day and 16-day products, e.g., the MOD09GQ, MOD09Q1 and MOD13Q1 products, respectively (National Aeronautics and Space Administration (NASA), https://modis.gsfc.nasa.gov/data/, accessed on 20 March 2021). Temporally composited MODIS VI products (e.g., MOD09Q1 and MOD13Q1) have been widely used in previous studies, e.g., for crop yield or gross primary production (GPP) estimation [22,23], vegetation phenology detection [9,12], and land surface change detection [6,24]. The maximum value composite (MVC) method is one of the common compositing methods, designed to select the highest VI value from a series of daily observations during a given time interval to represent the VI value for that time period [25]. MODIS VI composites, e.g., the 16-day MOD13Q1 NDVI products, are produced using a refined version of the MVC technique called the constrained view MVC (CV-MVC) method, which reduces angular and sun-target-sensor variations [8]. The CV-MVC method is designed to select observations closest to nadir view zenith angles and can produce more consistent and accurate datasets [8,[26][27][28][29].
However, these composite MODIS products with decreased temporal resolution inevitably introduce uncertainties in two ways. First, only one value is selected in a given time interval, and subtle, shorter-term VI change information within this time interval is lost [7,8,30,31]. Second, the nominal observation date of the selected value is determined as a fixed day within the composite time interval. For MODIS composite products, the nominal observation date is the first day of the time interval. In reality, the actual acquisition date of this selected value can vary from the first to the last day of the composite time interval. Guindin-Garcia et al. [32] found that the temporal intervals (the period between two consecutive observations) reached 15 days for 8-day composites and 30 days for 16-day composites. This introduces uncertainty when comparing the value to ground observations on a certain day [14,30]. In addition, previous research has shown that adopting the nominal date of composites introduced temporal errors that potentially caused appreciable changes in the trajectory of NDVI time series [8,33,34]. As a result, it is expected to be inadequate either to correctly describe phenological patterns [34,35] or to estimate rapidly changing biophysical characteristics [32,34]. For example, MODIS composite data using the first date in the interval were found to result in an earlier estimated start of growth [34][35][36].
To eliminate the temporal uncertainty of composite data, many previous studies suggested that this influence can be mitigated by adopting several temporal pattern strategies [32,34,35,[37][38][39]. For example, Testa et al. [40] used the median date of a composite period as the observation date, based on the consideration that the actual acquisition date was close to the center of the compositing period in most years. While Guindin-Garcia et al. [32] found that the actual acquisition dates within the compositing period change without any predictable pattern, Wang and Zhu [38] found that the actual acquisition date of the NDVI value was usually later than the mean date of a composite period in spring and earlier in fall. Several researchers, e.g., Thayn and Price [35], Guindin-Garcia et al. [32], and Wang and Zhu [38], suggested using the actual acquisition date. Temporal correction with the actual acquisition date can eliminate the temporal shift and some errors of composite data. However, the degradation of temporal resolution during the compositing process inevitably loses some key information, especially when monitoring rapidly changing events. This is the reason why daily VI time-series data, instead of temporally composite data, were still suggested in many previous studies [5][6][7][8]. In reality, however, composite data have been more widely used than daily data, considering the data storage and computing costs, as well as the excessive noise in daily data, which might introduce higher uncertainty than the degradation of temporal resolution [2].
Many smoothing and denoising methods have been developed and applied to reconstruct daily VI data [2,41]. Local smoothing methods, such as the Savitzky–Golay (SG) filtering method, can exhibit strong fidelity, but they struggle to interpret heavily noise-contaminated time-series data and present less smoothness and spatial continuity [42,43]. Some global methods, such as the asymmetric Gaussian and Harmonic Analysis of Time Series (HANTS) methods, have robust smoothness and tend to be less sensitive to noise, but they are unable to describe the more detailed, subtle changes of time-series curves with strong fidelity [2,42,44]. The widely used upper-envelope approach assumes that most types of noise are negatively biased and is designed to address low values that represent some forms of noise, but it can detect neither positively biased noise [21] nor the actual vegetation change trajectory under low to moderate levels of noise (e.g., a thin cirrus cloud) [45]. Cloud contamination and other atmospheric effects generally decrease NDVI values, while solar and viewing geometry variation through time introduces higher NDVI values and even a phase shift in the NDVI series [46][47][48][49]. No single method always outperforms all others under all these different situations in obtaining a daily NDVI time series, because each method has trade-offs in its approach (e.g., removal of noise versus preservation of the details of the NDVI temporal dynamics) [3,4,20,50].
Some studies suggested combining data from different sensors (e.g., data from both the Terra and Aqua satellites) [7,21] or from microwave or geostationary satellite-based sensors [51,52]. However, combinations of these different observations can introduce uncertainties, considering the different atmospheric conditions at their varying observation times, different sensor characteristics, and different product generation algorithms [52,53]. For example, some studies applied cloud mask data in an attempt to develop a cloud-free dataset (e.g., [14]). However, the 250 m cloud mask indicator is based on the visible channel data only and still includes uncertainty. Luo et al. [54] reported that the standard MOD35 cloud mask product is inadequate for masking cloudy pixels, as it can only identify bright and thick clouds while missing a large number of other cloud types. Wilson et al. [55] found that spatial variability in the processing path applied in the MOD35 algorithm affects the likelihood of a cloudy observation by up to 20% in some areas. Wang et al. [56] reported that at least 9.1% of clouds were missed by the MOD35 product. Sun et al. [57] found that the overall accuracy of the MOD35 cloud mask algorithm was 50% for vegetated regions. In addition, noise introduced by other factors, such as solar and viewing geometry variations, is still difficult to eliminate using cloud mask data alone. As a result, it remains a challenge to reconstruct high-quality daily NDVI while accounting for all these factors.
To address the above challenges, this study presents a novel strategy named the DAVIR-MUTCOP (DAily Vegetation Index Reconstruction based on MUlti-Temporal COmposite Products) method to reconstruct high-quality NDVI time-series data by combining the MODIS daily product (MOD09GQ) with a MODIS 8-day or 16-day composite product (MOD09Q1 or MOD13Q1), without any other auxiliary data. The core of the proposed DAVIR-MUTCOP method is to use the NDVI time series of the composite product, corrected to its actual acquisition dates, to cross-check the original daily NDVI observations and select those daily observations that are not significantly contaminated. Most of the selected daily NDVI observations are not included in the temporally composite products, considering that the composite products select only one value in each temporal compositing window. The proposed method is based on the assumption that the composite products, which are less influenced by noise, can generally reflect the temporal trends of the daily NDVI observations [8,[26][27][28][29], and that it is therefore reliable to screen the remaining effective daily NDVI observations in the daily product, because these effective daily observations are constrained by the variation tendency of the composite data. The development of the DAVIR-MUTCOP method is presented in detail in the methodology section, and the associated MATLAB codes and example datasets are provided as attachments. A comparison between DAVIR-MUTCOP and commonly used existing methods for NDVI time-series reconstruction was conducted for two study areas in the United States to demonstrate the capability and advantages of the DAVIR-MUTCOP method.
Study Area
The methods included in this study were applied in two study areas in the United States for validation and comparison purposes. The first study area includes three agricultural field experimental sites (CSP#1, CSP#2 and CSP#3) near Mead, Nebraska (Figure 1). The CSP sites are located in a humid continental climate zone with severe winters and hot summers, but without a dry season. The mean annual temperature at the CSP sites is around 10 °C and the mean annual precipitation is about 790 mm. Given the large field sizes, the 250 m observations from MODIS near the center of these sites do not suffer from the problem of mixed pixels. CSP#1 has been continuously planted to corn since 2001, while CSP#2 was planted in a corn (odd years)-soybean (even years) rotation before 2009 and has been continuously planted to corn since 2009. CSP#3 has been planted in a corn (odd years)-soybean (even years) rotation since 2001. During the 2001 to 2012 study period, ground-based leaf area index (LAI) data of corn were collected at these three sites. The second study area includes forty-six AmeriFlux (AF) sites (AmeriFlux website: https://ameriflux.lbl.gov/, accessed on 20 March 2021) evenly distributed across the continental United States (CONUS) from 19° N to 50° N latitude (Figure 2). The climate of CONUS varies with latitude and a range of geographic features, including mountains and deserts, ranging from the frost-free tropical climate of southernmost Florida to the cold climate zone of northeastern CONUS, and from the rain-drenched northwest coast to the drought-ridden deserts of the southwestern CONUS [58]. These forty-six sites were selected in two steps: (1) dividing CONUS into a 5° × 5° grid, and (2) selecting the first site of each grid cell to ensure that the sites are generally evenly distributed across CONUS. In total, forty-six AmeriFlux sites (Figure 2) were selected, covering various vegetation types, climates, and topographies.
Data Requirement
The MOD09GQ, MOD09Q1 and MOD13Q1 products were downloaded using the Google Earth Engine (GEE) platform (https://code.earthengine.google.com/, accessed on 20 March 2021). NDVI time-series data at three temporal scales (daily, 8-day and 16-day) were obtained from the MODIS MOD09GQ, MOD09Q1 and MOD13Q1 products, respectively. The MOD09GQ product provides the daily surface spectral reflectance (SSR) of the visible red and NIR bands at a spatial resolution of 250 m. The MOD09Q1 and MOD13Q1 products are temporally composited over 8- and 16-day periods, respectively, to minimize cloud cover and atmospheric effects over a given time interval. The daily and 8-day composite NDVI time-series data (NDVIGQ and NDVI09Q1) were calculated from the SSRs of the MOD09GQ and MOD09Q1 products (Equation (1)). The 16-day composite NDVI time-series data (NDVI13Q1) were directly derived from the NDVI layer of the MOD13Q1 product. MODIS products use the first day of the composite window as the nominal observation date of the composite value, instead of the actual acquisition date of the selected NDVI value within the composite period.
NDVI = (ρNIR − ρRed) / (ρNIR + ρRed) (1)

where ρRed and ρNIR refer to the surface reflectance of the visible red and near-infrared bands of MODIS, respectively.
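As a minimal illustration, the following MATLAB sketch computes Equation (1) from the red and NIR bands (assumptions: the reflectance vectors have already been extracted from the product and converted to physical units, e.g., using the MOD09 scale factor of 0.0001; all variable names are illustrative, not from the paper's attached code):

```matlab
% Minimal sketch of Equation (1); red and nir are vectors of surface
% reflectance in [0, 1] (illustrative names).
function ndvi = computeNDVI(red, nir)
    ndvi = (nir - red) ./ (nir + red);   % normalized difference
    ndvi = max(min(ndvi, 1), 0);         % clip to the 0-1 valid range used in this study
end
```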
Cloud Mask Product
The MOD35_L2 product provides a daily cloud mask indicator at 250 m spatial resolution. In order to minimize the influence of cloud cover, the cloud mask product was used to filter cloud-free observations in this study. The MOD35_L2 product was not available on the GEE platform and was therefore downloaded from the NASA website (https://modis.gsfc.nasa.gov/, accessed on 20 March 2021).
Methodology
At the CSP sites, the LAI data were used to analyze the relationship between ground-measured LAI and the daily NDVI data reconstructed by the different methods. Applying the NDVI reconstruction methods at the AF sites allowed them to be tested over more varied surface conditions than at the CSP sites. The comparison of the methods' performances at the AF sites further validates their advantages and disadvantages. Two existing NDVI reconstruction methods, HANTS and the combination of SG filtering and Piecewise Cubic Hermite Interpolating Polynomial (PCHIP) interpolation, were used and compared with the DAVIR-MUTCOP method in this study. The steps to apply these methods are introduced in detail in the following subsections.
HANTS
The HANTS method has been widely used to reconstruct missing and biased satellite-derived NDVI time-series data by decomposing periodic time-dependent data into a sum of sinusoids (see [59]). One of the advantages of the HANTS method is that it can easily be applied. The MATLAB source code of HANTS can be freely downloaded from the website (http://gdsc.nlr.nl/gdsc/en/tools/hants, accessed on 20 March 2021). The application of the HANTS method only needs the original NDVI data calculated from the MOD09GQ product. The HANTS method has five parameters, including the number of periodic terms in the NDVI series (N), the maximum number of observations in a time series (normally one year) (M), the fit error tolerance (FET), the damping factor (Delta), and the degree of overdeterminedness (DOD). These parameters must be carefully set when using the HANTS method. However, according to previous studies [10,59], there is no objective guideline to determine the parameters. By testing the HANTS method at the CSP sites, the parameters were set to the values that minimized the bias between estimated and ground-measured LAI. In the open source code, M and N were set to 365 and 10, respectively; FET was set to 0.05; Delta was set to 0.5; DOD was set to 5. The valid range of NDVI was defined as "0 to 1". The "low" option was selected as the flag indicating the direction of outliers with respect to the current curve.
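For illustration, the following MATLAB sketch shows the harmonic least-squares fit that underlies HANTS, i.e., decomposing a yearly series into a mean term plus a sum of sinusoids. It is not the downloadable NLR implementation, which additionally iterates the fit, rejecting points that deviate from the curve (in the direction given by the outlier flag) by more than the fit error tolerance; all names here are illustrative:

```matlab
% Harmonic least-squares fit (the core idea of HANTS, simplified).
% t: day-of-year vector; y: NDVI values; nf: number of harmonics (e.g., 10);
% nb: length of the base period in days (e.g., 365).
function yfit = harmonicFit(t, y, nf, nb)
    A = ones(numel(t), 1);                                 % zero-frequency (mean) term
    for k = 1:nf
        A = [A, cos(2*pi*k*t(:)/nb), sin(2*pi*k*t(:)/nb)]; %#ok<AGROW>
    end
    coef = A \ y(:);                                       % least-squares coefficients
    yfit = A * coef;                                       % smoothed reconstruction
end
```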
SG Filtering
SG filtering was used to filter the original temporally composite NDVI data. To reconstruct daily NDVI data, the PCHIP method was then used to interpolate the filtered temporally composite NDVI data. According to the source of the original NDVI data, three strategies were built in this study: (1) using the NDVI data calculated from the acquisition-date-corrected MOD09Q1 product only (named SG8A hereafter); (2) using the NDVI data from the acquisition-date-corrected MOD13Q1 product only (named SG16A hereafter); and (3) combining the original daily NDVI data calculated from the MOD09GQ product and the MOD35_L2 cloud mask product by filtering the original daily NDVI data with the cloud mask data (named SG35L2 hereafter).
Before applying the methods, the MOD09GQ, MOD09Q1 and MOD13Q1 data were pre-processed using their quality flag layers as well as the moving-median method [60]. For the SG8A method, outliers of the MOD09Q1 product were identified by the rule that the value of an outlier differs from the median by more than one standard deviation within a 7-point moving window [60]. For the SG16A method, outliers of the MOD13Q1 product were identified using a 5-point moving window [40] and one standard deviation. For the SG35L2 method, outliers of the cloud-free data were identified using a 25-point moving window, following Eklundh and Jönsson [61], and one standard deviation.
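A minimal MATLAB sketch of this moving-median screen, assuming the rule flags any value that differs from the moving median by more than one within-window standard deviation (w = 7 for SG8A, 5 for SG16A, 25 for SG35L2):

```matlab
% Moving-median outlier screen; ndvi is a vector, w the window length.
% Returns a logical mask of values to keep (illustrative sketch).
function keep = medianScreen(ndvi, w)
    med = movmedian(ndvi, w, 'omitnan');   % running median in a w-point window
    sd  = movstd(ndvi, w, 'omitnan');      % running standard deviation
    keep = abs(ndvi - med) <= sd;          % keep values within one std of the median
end
```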
The SG filtering and PCHIP interpolation were implemented in MATLAB. The parameters of SG filtering (i.e., temporal frame size and polynomial order) were set and tested according to the temporal compositing windows of the input data. The values of these parameters were determined by a trial-and-error method using the data from the CSP study sites. The temporal frame sizes (Fsc represents the frame size for composite products, Fse the frame size for selected daily observations) for the SG8A, SG16A and SG35L2 methods were set to 3 (Fsc = 3), 2 (Fsc = 2) and 5 (Fse = 5), respectively. The polynomial order for all these methods was set to 1. The same parameter settings were used when these methods were applied at the AF sites. The sensitivity of the reconstruction methods to the frame sizes Fsc and Fse was then tested and analyzed by varying their values at the CSP and AF sites.
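A minimal MATLAB sketch of the SG-filtering-plus-PCHIP step is given below. Two assumptions are made: the reported frame size Fs is treated as a half-window, so the full window length passed to sgolayfilt (Signal Processing Toolbox) is the odd value 2·Fs + 1 that the function requires; and the filter is applied directly to the ordered sequence of points, even though their dates are not strictly equidistant after date correction:

```matlab
% SG filtering followed by PCHIP interpolation onto a daily grid.
% t: observation dates (day of year, sorted); ndvi: values; Fs: frame size
% as reported above; order: polynomial order (1 in this study).
function [tDaily, ndviDaily] = smoothAndInterp(t, ndvi, Fs, order)
    win = 2*Fs + 1;                           % odd frame length, e.g., Fs = 3 -> 7
    ndviF = sgolayfilt(ndvi(:), order, win);  % Savitzky-Golay smoothing
    tDaily = (1:365)';                        % daily day-of-year grid
    ndviDaily = pchip(t(:), ndviF, tDaily);   % shape-preserving interpolation
end
```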
DAVIR-MUTCOP Method
The DAVIR-MUTCOP method uses the original daily NDVI data calculated from the MOD09GQ product, as well as the original composite NDVI data calculated from the MOD09Q1 or MOD13Q1 product, to reconstruct daily NDVI time series. The DAVIR-MUTCOP method was applied and evaluated in this study using the two standard MODIS compositing products: (1) combining the daily MOD09GQ product and the 8-day MOD09Q1 composite product (named DAVIR-MUTCOP8 hereafter), and (2) combining the daily MOD09GQ product and the 16-day MOD13Q1 composite product (named DAVIR-MUTCOP16 hereafter). The development of the DAVIR-MUTCOP method is based on two assumptions: (1) the temporal variation of the NDVI data calculated from the composite product reflects the general variation of the daily NDVI, and (2) reasonable daily NDVI data derived from the MOD09GQ product fluctuate around the filtered composite NDVI time series with actual acquisition dates, considering that the compositing algorithm selects the value with the nominally highest quality within the composite window and therefore minimizes the noise of the composite data.
The flowchart for implementing the DAVIR-MUTCOP method is shown in Figure 3 and described in detail as follows: Step 1: Calculating the actual acquisition date of the NDVI from the composite product (NDVIC) using Equation (2). Consistent with the previous study by Guindin-Garcia et al. [32], the actual acquisition dates within the compositing period changed without any predictable pattern in this study (not shown). The original composite NDVI time series was adjusted to its actual acquisition dates, yielding a new time series NDVIAD, in which the observation dates are not equidistant. The SG8A and SG16A methods were also implemented using NDVIAD.
tm = {tn | NDVIC,m = NDVIGQ,n and (m − 1)·D < tn ≤ m·D} (2)

where m is the sequence number of the composite NDVIC in a year, n is the sequence number of the original daily NDVIGQ (derived from the MOD09GQ product) in a year, tm is the actual acquisition date of the m-th NDVIC within the year, tn is the date of the n-th NDVIGQ within the year, and D is the time interval of the composite product (D = 8 for MOD09Q1 and D = 16 for MOD13Q1).
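A MATLAB sketch of this date-recovery step, assuming the actual acquisition date is recovered by matching each composite value to the closest daily NDVI value within its compositing window:

```matlab
% Recover actual acquisition dates of composite values (Equation (2)).
% ndviC: composite values; tGQ/ndviGQ: daily dates and values; D: window (8 or 16).
function tAD = acquisitionDates(ndviC, tGQ, ndviGQ, D)
    tAD = nan(numel(ndviC), 1);
    for i = 1:numel(ndviC)
        inWin = tGQ > (i-1)*D & tGQ <= i*D;     % daily dates in the i-th window
        tWin = tGQ(inWin);  vWin = ndviGQ(inWin);
        if ~isempty(vWin)
            [~, j] = min(abs(vWin - ndviC(i))); % daily value matching the composite
            tAD(i) = tWin(j);
        end
    end
end
```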
Step 2: Denoising and filtering the NDVIAD data. It is assumed that a biased NDVIAD value is usually lower than the true value. If the value of the m-th NDVIAD was lower than both of its immediate neighbors (the (m − 1)-th and (m + 1)-th NDVIAD), the m-th NDVIAD was temporarily deleted.
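A minimal MATLAB sketch of this local-minimum screen:

```matlab
% Temporarily drop NDVIAD values lower than both immediate neighbours,
% which are assumed to be negatively biased (illustrative sketch).
function keep = dropLocalMinima(ndviAD)
    keep = true(numel(ndviAD), 1);
    for i = 2:numel(ndviAD)-1
        if ndviAD(i) < ndviAD(i-1) && ndviAD(i) < ndviAD(i+1)
            keep(i) = false;
        end
    end
end
```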
Step 3: The remaining NDVIAD data were filtered using SG filtering with the frame size (Fsc) set to 3 for the MOD09Q1 product (2 for MOD13Q1) and the polynomial order set to 1. The filtered NDVIAD was named NDVIF. The sensitivity of the method to Fsc in reconstructing the daily NDVI time series is discussed in Section 4.4.2.
Step 4: Interpolating a daily NDVI series (named NDVIP) from NDVIF using the PCHIP interpolation algorithm. The normalized NDVIP (named NNDVIP) and the absolute value of the first derivative of NDVIP (named DNDVIP) were also computed. For the original NDVIAD obtained in Step 1, the NDVIAD of the i-th day (day of year) is considered reasonable if it satisfies Equation (3). After the new reasonable NDVIAD time series was obtained, we returned to Step 3. The loop ended when the NDVIAD time series no longer changed.
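A MATLAB sketch of the quantities computed in this step, assuming NNDVIP is a min-max normalization of the interpolated daily series and DNDVIP its absolute day-to-day difference (Equation (3) itself, the acceptance test, is not reproduced here); tAD and ndviF follow from the earlier steps:

```matlab
% Daily interpolation of the filtered composite series and its derived
% quantities (illustrative; the normalization scheme is an assumption).
ndviP  = pchip(tAD, ndviF, (1:365)');                        % daily NDVIP
nndviP = (ndviP - min(ndviP)) ./ (max(ndviP) - min(ndviP));  % NNDVIP
dndviP = [0; abs(diff(ndviP))];                              % DNDVIP, padded to 365
```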
Step 5: Identifying the effective NDVIGQ (named ENDVIGQ). The effective NDVIGQ is expected to fluctuate around the trajectory of the reasonable NDVIAD. Based on the final NDVIP, DNDVIP, and NNDVIP from Step 4, ENDVIGQ was selected using Equation (4).
Step 6: Reconstructing the daily NDVI time series. SG filtering (frame size (Fse) = 5, polynomial order = 1) was used to filter the ENDVIGQ, and the filtered data were named NDVIFE. NDVIFE was then interpolated by the PCHIP algorithm to obtain the final reconstructed daily NDVI time series (named NDVIRD). The sensitivity of the method to Fse in reconstructing the final NDVIRD is discussed in Section 4.4.3.
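A MATLAB sketch of this final step (assumptions: Fse = 5 is again taken as a half-window, giving an 11-point frame for sgolayfilt; tE and endviGQ are illustrative names for the dates and values of the effective daily observations selected in Step 5):

```matlab
% Final reconstruction: filter the effective daily observations and
% interpolate onto a daily grid (illustrative sketch).
ndviFE = sgolayfilt(endviGQ(:), 1, 2*5 + 1);   % SG filter, order 1, 11-point frame
ndviRD = pchip(tE(:), ndviFE, (1:365)');       % reconstructed daily NDVI (NDVIRD)
```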
Evaluation of the Method's Performance
At the CSP sites, the relationship between LAI and reconstructed NDVI, as well as the relationships between the NDVI data reconstructed by different methods, were used to quantitatively validate the accuracy and to analyze the uncertainty and error of the different reconstruction strategies in terms of the bias (Equation (5)), the root mean square error (RMSE) (Equation (6)), the slope, and the coefficient of determination (R²):

bias = (1/K) Σᵢ₌₁ᴷ (NDVIm1,i − NDVIm2,i) (5)

RMSE = √[(1/K) Σᵢ₌₁ᴷ (NDVIm1,i − NDVIm2,i)²] (6)

where K is the number of NDVI data pairs, and NDVIm1,i and NDVIm2,i are the NDVI values reconstructed by method 1 and method 2, respectively.
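These statistics can be computed directly; a MATLAB sketch for two reconstructed series x and y of equal length K (illustrative names):

```matlab
% Bias (Equation (5)), RMSE (Equation (6)) and R^2 between two series.
bias = mean(x - y);                 % mean difference
rmse = sqrt(mean((x - y).^2));      % root mean square error
r    = corrcoef(x, y);              % 2x2 Pearson correlation matrix
R2   = r(1,2)^2;                    % coefficient of determination
```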
The MODIS Products
In order to examine the MODIS products, the NDVI time series of the second CSP site in 2009 was taken as an example for analysis (Figure 4). The original MOD09GQ product shows extensive, noise-contaminated NDVI data. It is difficult to interpret time-series data with excessive noise and to reconstruct them with high smoothness and spatial continuity [42,43]. Daily NDVI observations filtered by the MOD35_L2 cloud mask data still included considerable noise (Figure 4a,d). This is probably explained by two facts: (1) the MOD35_L2 product still includes error and uncertainty, as sub-pixel clouds, thin clouds and cloud shadows are difficult to fully detect or remove [55,56]; thus, the SG35L2 method tends to underestimate NDVI values, especially during or approaching the NDVI peak period (Figure 4); and (2) in addition to cloud cover, other types of noise exist in the dataset, including other atmospheric contamination and off-nadir viewing effects, which can increase or decrease NDVI values [47,48]. The 8- and 16-day composite data had much less noise than the daily NDVI data (Figure 4b,c), because the CV-MVC method generally selects a representative NDVI value with higher accuracy in the given time interval to minimize the influence of noise in the daily time-series data, though some noise still remained in the composite data, especially in the 8-day composite product (Figure 4e,f).
However, obvious temporal shifts were observed in the 8- and 16-day NDVI values compared to the original daily observations, especially during periods when NDVI values changed quickly (Figure 4b,c), as the nominal dates of the composite products MOD09Q1 and MOD13Q1 are defined as the first day of the composite window. During the green-up stage, the CV-MVC method is expected to select a higher NDVI value from late in the composite period than what would be observed on the first day, as the NDVI increases across the 8- or 16-day window. During the senescence phase, when the NDVI is decreasing, the CV-MVC method is expected to select an NDVI value towards the beginning of the composite period, because the NDVI gradually decreases as the days in the composite period progress. In practice, however, a decreased NDVI value observed after the first day is often selected, since factors such as high-frequency noise or large sensor viewing angles might affect the observations of the first day and even nearby days as well. In addition, the "horizontal shift" (temporal shift) can introduce a "vertical shift" (NDVI value shift). Thus, the composite data commonly tend to overestimate the NDVI values before the NDVI peak and underestimate them after the peak. This is also the reason for temporally correcting the composite data using the actual acquisition dates in the DAVIR-MUTCOP method, as shown in the flowchart in Figure 3.
The Temporal Resolution of Different Reconstruction Strategies
Temporal compositing of the NDVI time series reduces the data noise to some extent. However, it also significantly sacrifices the temporal resolution, especially for 16-day composite data. For example, there might be few or even no clear observations in the composite data during the period when NDVI increases rapidly from the trough to the peak (Figure 4c,e,f). The effect of this data gap, combined with the above-discussed temporal shift due to nominal observation dates, might introduce obvious changes and uncertainties in NDVI time-series trajectories and NDVI values when directly using MODIS composite data to reconstruct NDVI time-series curves.
As shown in Figure 4b,c,e,f, the DAVIR-MUTCOP method corrected the temporal shift of the composite data products and also included more valid daily observations (black dots) to increase the temporal resolution, which is significantly helpful for accurate time-series NDVI reconstruction. Figure 5 shows that the DAVIR-MUTCOP method significantly improved the temporal resolution of NDVI observations at the CSP sites, especially during the growing season from April to October. The 8- and 16-day composite data had about 3.5 and 1.5 clear observations per month on average, respectively (i.e., 9-day and 20-day temporal resolution, respectively), while the DAVIR-MUTCOP method obtained at least 10 clear observations per month during the growing season (i.e., <3-day temporal resolution). In addition, Figure 4 shows that the DAVIR-MUTCOP method provided a less biased and smoother daily NDVI time series than the SG35L2 method.
In addition, as shown by the average number of clear observations per month in Figure 6, the temporal resolution of the reconstructed NDVI time series was obviously improved by the DAVIR-MUTCOP method at the six selected AF sites with different landcover types. The improvement in temporal resolution is particularly helpful for monitoring rapid vegetation changes during the onset or end of the growing season. For example, the number of clear original observations included by the DAVIR-MUTCOP method was about 3 and 6 times that in the MODIS 8- and 16-day composite data, respectively, in April, May, October and November at the P17 site, which is covered by deciduous broadleaf forest (Figure 6c). Leaf expansion and leaf fall of the deciduous broadleaf forest usually occur in these four months at the P17 site. It might be difficult to detect these two stages directly from the composite data, especially from 16-day composite data with fewer than 2 clear observations per month, considering that the vegetation changes rapidly and the duration of these periods is short. For example, Nagai et al. [7] reported that the duration of leaf expansion and leaf fall varied from 11 to 16 days during the period from 2004 to 2010 at the Takayama site.
The Comparison of the NDVI Time-Series Curves Reconstructed by Different Strategies
The temporal variations of the NDVI time-series data at the three CSP sites and the forty-six AF sites reconstructed by the SG8A, SG16A, HANTS, SG35L2, DAVIR-MUTCOP8 and DAVIR-MUTCOP16 methods are presented in Figures S1 and S2, respectively. HANTS achieved high accuracy in reconstructing some of the NDVI time series. However, the daily NDVI data reconstructed by the HANTS method in some years suffered from significant fluctuations. In particular, missing data and noisy NDVI values led to the failure of this method. For example, the reconstructed time series of the third CSP site (CSP#3) in 2001 and 2006 presented many local fluctuations and peaks not reflecting the terrestrial biotic dynamics (Figure S1). In addition, the accuracy of the HANTS method was relatively sensitive to its parameter settings. The accuracy might decrease significantly when the method is applied in other areas without parameter optimization, and it is difficult to automatically optimize the parameters for different areas and years [10,59] without ground-based reference data. Therefore, the HANTS method was not discussed further in this study.
The SG35L2 method used the MOD35_L2 cloud mask data to screen cloud-free data. However, the reconstructed time series still included some local fluctuations and noise when compared to the composite data (Figures S1 and S2). The 41st AF site (P41) was selected as an example for further analysis in Figure 7. The SG35L2 method presented many local fluctuations and troughs even when Fse was increased from 5 to 15. The time series reconstructed by the SG8A method were also jagged, exhibiting various temporally localized peaks and troughs due to data noise, while both the DAVIR-MUTCOP8 and DAVIR-MUTCOP16 methods, using the same Fsc as the SG8A/16A methods and the same Fse as SG35L2, provided smoother and more consistent results. SG16A provided smoother results than the SG8A method, seemingly similar to those of the DAVIR-MUTCOP method; 16-day composite data with a longer compositing window commonly include less noise. However, compositing sacrifices the temporal resolution, and the degradation grows as the temporal compositing window increases.
The Relationship between Ground-Observed LAI and Reconstructed NDVI
In order to quantitatively validate the reconstructed NDVI time series, the ground-observed plant biophysical parameter LAI was used to build relationships with the NDVI values reconstructed by the SG8A/16A, DAVIR-MUTCOP8/16 and SG35L2 methods at the three CSP sites. As shown in Figure 8, the DAVIR-MUTCOP methods obtained a performance similar to the SG8A/16A methods in terms of the coefficient of determination (R²) at the CSP sites. This is possibly explained by two facts: (1) the composite products at the CSP sites have high data quality without frequent cloud contamination compared to most other areas across CONUS, and (2) the time series of the CSP sites do not suffer from the mixed-pixel problem; consequently, the regular shape of the time-series curves minimized the uncertainty and error introduced by the degradation of temporal resolution and by data interpolation during the periods between two consecutive observations.
It is interesting, however, that the fitted lines for ground-observed LAI versus the NDVI values reconstructed by the DAVIR-MUTCOP8 and DAVIR-MUTCOP16 methods almost overlaid each other (Figure 8a), whereas an obvious difference was observed between those of SG8A and SG16A, as shown in Figure 8b. Similarly, the scatter plot (Figure 9b) of SG8A-reconstructed NDVI against SG16A-reconstructed NDVI was more spread than that (Figure 9a) of DAVIR-MUTCOP8-reconstructed NDVI against DAVIR-MUTCOP16-reconstructed NDVI (R² = 0.98 versus 0.99, bias = 0.004 versus 0.001, RMSE = 0.034 versus 0.020), especially when NDVI changed rapidly (NDVI = 0.4~0.8). This indicates that the DAVIR-MUTCOP method can obtain a more stable NDVI reconstruction, unaffected by the size of the compositing window, at the CSP sites.
The Sensitivity of the Reconstructed NDVI to Fsc
In order to further validate the robustness of the SG8A, SG16A and DAVIR-MUTCOP methods, the sensitivity of the reconstruction results to Fsc in the different methods was tested. Due to the degradation of the composite data, rapid or unexpected changes might be confused with rapid drops caused by data noise, so the reconstruction results might be sensitive to the filtering parameters. For example, as shown in Figure 10a,b and Figure 11a,b, the SG8A and SG16A methods were sensitive to the adopted Fsc in the SG filter when applied at the 39th and 10th AF sites (P39 and P10). Some data noise might remain when a small Fsc was used, while a larger Fsc might even cause a temporal shift of the reconstructed curves for both the SG8A and SG16A methods. The sensitivity of the reconstruction to the filtering parameters, combined with the data noise in the composite data, significantly increased the uncertainty of the reconstructed NDVI time-series data, especially for 16-day composite data or during periods when NDVI changed rapidly. In addition, the SG16A method, with its longer temporal compositing window, was shown to be more sensitive to the parameters than the SG8A method, presenting less consistent NDVI time-series trajectories (Figures 10a,b and 11a,b) and more scattered NDVI values with the change of Fsc (Figure 10e,f). The DAVIR-MUTCOP method, which includes both original daily observations and composite products, was shown to be insensitive to the adopted Fsc compared to the SG8A and SG16A methods, presenting consistent NDVI time series in the growing season with different compositing windows and different Fsc in the SG filter (Figures 10 and 11).
The Sensitivity of the Reconstructed NDVI to Fse
The sensitivity of the DAVIR-MUTCOP and SG8A/16A methods to Fsc was discussed above; the sensitivity of the DAVIR-MUTCOP and SG35L2 methods to Fse was then further analyzed. As shown in Figure 7, the SG35L2 method was significantly affected by Fse in the SG filter due to the remaining noise. The scatter plot in Figure 12 also shows that the results derived from the DAVIR-MUTCOP method with different Fse were more consistent with each other than those from SG35L2.
Overall, the performance of the methods that only used original daily or composite data varied with the temporal compositing window as well as with the parameters adopted in the data smoothing method. For example, 16-day composite data had superior performance at the P41 site, but inferior performance at the P39 and P10 sites, compared to 8-day composite data. For the SG16A method, a smaller Fsc was preferable at the P39 site, where a larger Fsc might result in a temporal shift of the time-series trajectories (Figure 10b), while a larger Fsc was preferable at the P10 site, where a smaller Fsc might fail to denoise the data (Figure 11b). Therefore, it is usually difficult to find a method or parameter set that is generally applicable in regional applications to simultaneously remove noise, retain short biotic fluctuations, and rebuild accurate growth curves [2]. This is also the reason why many previous studies evaluating different NDVI time-series smoothing and reconstruction methods stated that no single method always outperforms all others under all these different situations [2][3][4], considering the data noise, the variation and complexity of vegetation changes under different landcover types at the regional scale, as well as the limitations of each method. In contrast, the DAVIR-MUTCOP method is more robust and more widely applicable for reconstructing NDVI time series at the regional scale than the other methods.
The Choice of Temporal Compositing Window
Usually, 8-day composite data, with their higher temporal resolution, have been more widely used and preferred for reconstructing NDVI time series in previous studies [9,23]. However, in cloud-prone areas, 8-day composite data might still be noisy when no clear observation exists within the 8-day window. Thus, there is a trade-off between higher temporal resolution and less data noise when choosing the temporal compositing window of the composite data.
For the SG8A and SG16A methods, on the one hand, the NDVI values in the time series with actual acquisition dates were not equidistant, and the period between two consecutive observations varied widely, reaching up to 15 days for 8-day composites and 30 days for 16-day composites [32]. A longer interval between two consecutive observations makes it more difficult to correctly reconstruct a daily NDVI time series with continuous missing values. On the other hand, 16-day composite data commonly include less noise, as the longer compositing window gives a greater possibility of obtaining a clear observation within the window. This indicates that, in practical applications, it is hard to choose between the composite products for the SG8A or SG16A method, which might limit their application at regional to global scales.
However, as shown in Figures 7-12, the DAVIR-MUTCOP16 method had a performance similar to the DAVIR-MUTCOP8 method in terms of accuracy and robustness. In addition, the temporal resolution of the reconstructed data was not sacrificed by choosing 16-day instead of 8-day composite data, as the DAVIR-MUTCOP method uses the original daily data as well. As shown in Figures 5 and 6, the NDVI time series reconstructed by the DAVIR-MUTCOP16 method generally obtained a temporal resolution comparable to that of the DAVIR-MUTCOP8 method. Since the 16-day composite product, with its longer compositing window, is less influenced by data noise, the 16-day MOD13Q1 product rather than the 8-day MOD09Q1 product is suggested when the growth and changes of the studied vegetation are slow or smooth, which might favor applications in cloud-prone areas, while the 8-day MOD09Q1 product is suggested when the changes of the studied vegetation might be rapid and unexpected.
The Choice of the Filtering/Fitting Method and Parameters
Many smoothing and denoising methods have been developed and applied to reconstruct NDVI time-series data [2,41], e.g., the SG filtering, HANTS, and logistic methods. The widely used SG filtering method was applied in the proposed DAVIR-MUTCOP method in this study, both to denoise the composite data and to reconstruct the continuous NDVI time series from the selected daily NDVI observations. However, many other smoothing methods could potentially replace SG filtering in the DAVIR-MUTCOP method.
As for the parameters of the different data smoothing methods, it is usually difficult to optimize a single set of parameters for regional applications, and both local and global methods are usually sensitive to parameter settings [2]. In contrast, the DAVIR-MUTCOP method was shown to be clearly less sensitive to variation of the frame size than the other reconstruction strategies when the frame size varied within a reasonable range. The frame sizes of the SG filter (Fsc and Fse) used in the DAVIR-MUTCOP8 (Fsc = 3~5, Fse = 5~10) and DAVIR-MUTCOP16 (Fsc = 2~3, Fse = 5~10) methods are also suggested for future applications, considering the robust performance of the DAVIR-MUTCOP8/16 methods when applied to different types of vegetation cover.
The Potential Application of the DAVIR-MUTCOP Method in the Future
NDVI time-series data with high temporal resolution play a very important role in many applications, e.g., remote sensing of phenology, agriculture, and forest disturbance. Compared to the existing approaches presented in this study, the DAVIR-MUTCOP method has been proven to be an effective and robust way to reconstruct NDVI time series under various conditions. The DAVIR-MUTCOP method achieved a high level of robustness in reconstructing daily time series without changing the parameters over time and space, as illustrated by the results at the CSP and AF sites. In addition, although the DAVIR-MUTCOP method was only applied and validated for NDVI time-series reconstruction based on MODIS data, it provides a general way to reconstruct daily time series of other VIs (e.g., the Enhanced Vegetation Index (EVI) and the Wide Dynamic Range Vegetation Index (WDRVI)) from MODIS. It is also expected to be applicable to VI time-series datasets from other operational sensors, such as the Advanced Very High Resolution Radiometer (AVHRR) and the Visible Infrared Imaging Radiometer Suite (VIIRS). In particular, VIIRS land science is expected to build and expand on the heritage of land science from the NOAA AVHRR and EOS MODIS, and VIIRS data will be used to expand upon the MODIS applications (NASA, https://earthdata.nasa.gov/, accessed on 20 March 2021). VIIRS provides products similar to those of MODIS, such as daily surface reflectance products (e.g., VNP09GA), 8-day composite surface reflectance products (e.g., VNP09A1) and 16-day vegetation index products (e.g., VNP13A1).
The Limitation of the DAVIR-MUTCOP Method
The DAVIR-MUTCOP method can generally reconstruct MODIS NDVI time series with high accuracy and high temporal resolution. However, it still has limitations when the composite data selected from the compositing window contain continuous noise (e.g., during prolonged cloudy periods); i.e., the accuracy of the temporally composite products affects the accuracy of the reconstruction to some extent. Fortunately, the use of composite data with a longer compositing window, e.g., 16 days, does not affect the application of the DAVIR-MUTCOP method, and a longer compositing window can increase the reliability and availability of noise-free data in the given time interval.
Conclusions
Although the original daily observations derived from satellites include numerous incorrect values, there are still many high-quality observations that are not selected in the temporally composite product. The core of the proposed DAVIR-MUTCOP method is making full use of all effective daily NDVI observations by using the composite NDVI time series with actual acquisition dates to screen the daily observations. The DAVIR-MUTCOP method and the existing HANTS, SG8A, SG16A and SG35L2 methods were applied and evaluated in two study areas in the USA.
The SG8A and SG16A methods, with temporal correction by the actual acquisition date, minimized the temporal shift of observations. However, the unequal intervals between consecutive observations with actual acquisition dates increased the difficulty of reconstructing the daily NDVI time series, considering that the interval can vary from one day to nearly double the compositing window. The HANTS method is sensitive to outliers and continuous contamination, which can result in the failure of NDVI reconstruction; parameter setting and optimization of the HANTS method are also problematic, especially for regional or global applications. Cloud mask data products improve the performance of the SG35L2 method compared to the SG1 method. However, cloud mask data products also include uncertainty, for example, undetected clouds. In addition, factors other than cloud cover can also influence the daily observations.
The DAVIR-MUTCOP method, combining both daily and composite products, presented the best performance across time and space among the methods applied in this study in terms of accuracy, robustness and temporal resolution. The DAVIR-MUTCOP method not only minimizes the excessive noise of the original daily observations and corrects the temporal shift of the composite data by cross-checking the daily and composite data, but also retains more high-quality daily observations to improve the temporal resolution. Note that the DAVIR-MUTCOP method assumes that the filtered composite data are relatively reliable. In cloud-prone areas, the DAVIR-MUTCOP method might still fail to reconstruct the daily VI time series when the composite observations, e.g., the MOD09Q1 or MOD13Q1 products, are continuously noisy and unreliable. In such areas, longer temporal compositing windows are suggested to obtain more reliable composite data.
The DAVIR-MUTCOP method provides a general way to reconstruct high-temporal-resolution time series of other VIs or from other operational sensors, e.g., AVHRR and VIIRS. Our next work will include: (1) further quantitatively validating the DAVIR-MUTCOP method under different frequencies of cloud cover and different change rates of the underlying surface by comparison with near-surface optical sensors; and (2) validating this method in related applications, e.g., phenology detection, forest disturbance, and plant physiological parameter estimation at the regional scale.
"Environmental Science",
"Mathematics"
] |
EFOMP policy statement NO. 19: Dosimetry in nuclear medicine therapy – Molecular radiotherapy
The European Council Directive 2013/59/Euratom (BSS Directive) includes optimisation of treatment with radiotherapeutic procedures based on patient dosimetry and verification of the absorbed doses delivered. The present policy statement summarises aspects of three directives relating to the therapeutic use of radiopharmaceuticals and medical devices, and outlines the steps needed for implementation of patient dosimetry for radioactive drugs. To support the transition from administrations of fixed activities to personalised treatments based on patient-specific dosimetry, EFOMP presents a number of recommendations including: increased networking between centres and disciplines to support data collection and development of codes-of-practice; resourcing to support an infrastructure that permits routine patient dosimetry; research funding to support investigation into individualised treatments; inter-disciplinary training and education programmes; and support for investigator-led clinical trials. Close collaborations between the medical physicist and responsible practitioner are encouraged to develop a pathway similar to that which is routine for external beam radiotherapy and brachytherapy. EFOMP's policy is to promote the roles and responsibilities of medical physics throughout Europe in the development of molecular radiotherapy to ensure patient benefit. As the BSS Directive is adopted throughout Europe, unprecedented opportunities arise to develop informed treatments that will mitigate the risks of under- or over-treatment.
Introduction
Molecular radiotherapy a refers to the use of radioactive drugs or radioactive medical devices for medical treatment. First introduced in the 1930s [1], the field is now rapidly expanding, with many new agents for a growing number of indications [2].
Molecular radiotherapy may be administered orally or intravenously to deliver treatment systemically, as for conventional chemotherapy, or may be given intra-arterially or by loco-regional infusion. Examples include the treatment of benign and malignant thyroid disease with radioiodine, intra-arterial administrations of radioactive microspheres with 90Y or 166Ho for tumours and metastases in the liver, 177Lu PSMA ligands for the treatment of metastatic prostate cancer, and 177Lu or 90Y peptide receptor radionuclide therapy for the treatment of metastatic neuroendocrine tumours. In all cases, the mechanism of treatment is with ionising radiation.
Patient benefit and regulatory compliance require that molecular radiotherapy should be considered as a radiotherapeutic procedure, based on the premise that the absorbed doses delivered to target tissues should be optimised while the absorbed doses delivered to non-target tissues should be minimised and within accepted constraints. Such an approach necessitates individualised treatment planning and verification, based on patient dosimetry.
The motto of EFOMP reads "Applying physics to healthcare for the benefit of patients, staff and public" [3]. Patient benefit and regulatory compliance have direct implications for the work and responsibilities of the Medical Physics Expert (MPE), who has clinical responsibility for measurements and imaging, dosimetry, and radiobiology. The interaction between the MPE and the radiation protection expert, who may be one and the same person in personalised dosimetry settings, also brings the possibility of safer molecular radiotherapy from an occupational, public and environmental perspective [3]. This can result in more cost-effective treatments in, e.g., outpatient settings, which will meet regulatory requirements in a shorter time span.
Regulatory requirements
There are three main European directives that regulate the use of radiopharmaceuticals for therapeutic purposes.
The European Council Directive 2013/59/EURATOM of 5 December 2013
The European Council Directive 2013/59/EURATOM of 5 December 2013 "Laying down basic safety standards for protection against the dangers arising from exposure to ionising radiation" (henceforth referred to in this policy statement as the Basic Safety Standards (BSS) Directive) was brought into force in national legislation and regulations in 2018 [4].
Within Chapter VII (Medical Exposures) it is stated: Article 56 'Optimisation 1. For all medical exposure of patients for radiotherapeutic purposes, exposures of target volumes shall be individually planned and their delivery appropriately verified taking into account that doses to non-target volumes and tissues shall be as low as reasonably achievable and consistent with the intended radiotherapeutic purpose of the exposure.'
Within Chapter II (Definitions) Article 4 (81) the definition is explicitly given that '"radiotherapeutic" means pertaining to radiotherapy, including nuclear medicine for therapeutic purposes.'.
This BSS directive follows recommendations given by the International Commission on Radiological Protection (ICRP), International Commission on Radiation Units and Measurements (ICRU) and the International Atomic Energy Agency (IAEA) relating to the fundamental principles of justification and optimisation for radiological protection including for therapeutic practice with radionuclides [5][6][7][8].
Specifically, ICRP report 140 states that 'Individual absorbed dose estimates should be performed for treatment planning and for post administration verification of doses to tumours and normal tissues.' For radiopharmaceuticals, it is appreciated that toxicity may be associated with a radiation dose. In diagnosis, this is a consequence of the use of radiopharmaceuticals; in therapy, it is the wanted property. 'The evaluation of safety and efficacy of radiopharmaceuticals shall, therefore, address requirements for medicinal products and radiation dosimetry aspects. Organ/tissue exposure to radiation shall be documented. Absorbed radiation dose estimates shall be calculated according to a specified, internationally recognized system by a particular route of administration.'
Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices
Radioactive microspheres are classified as medical devices. Requirements regarding the information supplied with the device are laid down in Annex I, Chapter II of Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices (referred to in this policy statement as the Medical Device Regulation, 'MDR') [10].
According to section 16, entitled 'Protection against radiation': '(a) Devices shall be designed, manufactured and packaged in such a way that exposure of patients, users and other persons to radiation is reduced as far as possible, and in a manner that is compatible with the intended purpose, whilst not restricting the application of appropriate specified levels for therapeutic and diagnostic purposes.'.
According to section 16.4, entitled 'Ionising radiation': '(a) Devices intended to emit ionising radiation shall be designed and manufactured taking into account the requirements of the Directive 2013/59/Euratom laying down basic safety standards for protection against the dangers arising from exposure to ionising radiation.'

a This therapy modality is referred to by different names, including for example: radiopharmaceutical therapy, radionuclide therapy, nuclear medicine therapy, radioligand therapy. Although selective internal radiotherapy is strictly not delivered by molecular pathways, this modality is also intended to be covered by the term within the scope of this policy statement.
b Council Directive 84/466/Euratom has since been superseded by the BSS Directive.
The Pharma directive and the MDR therefore mandate compliance with the BSS Directive that states that exposures of target and non-target volumes shall be individually planned and verified.Whilst the Pharma Directive does not explicitly specify whether dosimetry should be made on an individual or cohort level, or whether this choice depends on whether the radiopharmaceutical is used for diagnostic or therapeutic purposes, these separations are clearly specified in the BSS Directive.
Current status of molecular radiotherapy
Molecular radiotherapy has expanded rapidly in recent years, although there is an absence of detailed records throughout Europe of treatment procedures and outcomes. A review of clinicaltrials.gov has indicated a rapid growth in the number of, and participation in, clinical trials of new therapeutic radiopharmaceuticals [11]. A survey by the Internal Dosimetry User Group found that administrations have increased by a factor of 4 in the UK in the last 10 years [12].
At present, a range of treatment prescriptions are followed, often based on historical practice [13,14]. Recent marketing authorisation has continued a conventional prescription governed by fixed activities, often in multiples of 3700 MBq (100 mCi) [15,16].
The level of activity administered is a poor indicator of the radiation energy absorbed in different tissues and cannot predict effects of treatment.A parallel may be drawn to other radiotherapy modalities, in which the time or rate of radiation exposure have long been abandoned as sole treatment planning parameters.Therefore, the assessment of radiation effects and evaluation of probabilities of effectiveness and risks of toxicity based on the administered activities have a weak scientific foundation.
It is well established that fixed activity administrations to all patients deliver a wide range of absorbed doses to tissues-at-risk and to tumours [17][18][19][20], raising the risk of under- and over-treatment. The benefit of patient-specific dosimetry has been demonstrated in reports on relationships between the absorbed dose and toxicity of normal tissues or disease control [21][22][23][24].
For the development of new agents, cohort escalation studies designed with fixed-activity levels will result in variable absorbed doses within each escalation step.Such development should incorporate investigations of correlations between the absorbed doses delivered and treatment effects to improve risk-versus-benefit evaluation.The approval of new therapeutic radiopharmaceuticals with posology based on fixed activity levels but without inclusion of patient-specific dosimetry presents a major obstacle to patient-specific optimisation according to the BSS Directive.
The feasibility of incorporating dosimetry in molecular radiotherapy procedures, either within clinical routine or in clinical trials, has been clearly demonstrated. For example, in many countries, dosimetry is frequently undertaken as part of the routine work-up for treatments with radioactive microspheres for tumours in the liver and with radioiodine for hyperthyroidism [14,27,29,37]. Several clinical trials have incorporated dosimetry, either for treatment guidance or as a primary area of investigation, for a range of treatments and indications. Examples include 131I-mIBG for neuroblastoma [38], peptide-receptor radionuclide therapy for neuroendocrine tumours [31,[39][40][41], radioiodine treatment of hyperthyroidism [42] and differentiated thyroid cancer [36], and 90Y microspheres [23,24,43,44].
Challenges and opportunities
The implementation of radiation dosimetry into routine clinical practice faces a number of pressing challenges that, if addressed, will introduce unprecedented opportunities for cancer treatment.
I. Collection of evidence to inform treatments
Few patients are treated with molecular radiotherapy in comparison with non-radioactive drug treatments or external beam radiotherapy.An understanding of treatment effectiveness and risks, and their dependence on patient-specific baseline characteristics and prognostic biomarkers, is hampered by limited data regarding the absorbed doses delivered and treatment outcomes.Coherent data collection, harmonisation and metrological standardisation of dosimetry results require close collaboration between the many disciplines involved with molecular radiotherapy [45][46][47].
Recommendations/EFOMP Policy:
1. European molecular radiotherapy networks must be supported and expanded to share experience, expertise and resources.
2. National and European databases are required to collect data on clinical factors, dosimetry and patient outcomes from multiple centres.
3. Codes of practice for the validation and harmonisation of dosimetry results and patient outcomes for different treatments should continue to be developed and put into practice.
II. Service and research infrastructure
Further developments within molecular radiotherapy require resourcing for service and research infrastructure. This is particularly relevant to medical physics, which suffers from wide variations in staffing levels throughout Europe and minimal research funding. The capacity to perform patient imaging and dosimetry also varies widely across centres and countries.
Recommendations/EFOMP Policy:
4. Imaging and patient dosimetry must be reimbursed, as is the case for external beam radiotherapy.
5. Staffing requirements for centres offering molecular radiotherapy must be defined in compliance with the BSS Directive [48][49][50].
6. Research should be supported through national and European programmes to investigate treatment planning strategies for individual therapeutic procedures.
III. Training and education
Training programmes in molecular radiotherapy, including patient imaging, dosimetry and radiobiology, vary widely throughout Europe and between disciplines.Awareness of the regulatory framework governing molecular radiotherapy should be promoted to ensure integration of dosimetry into routine clinical practice.
Recommendations/EFOMP Policy:
7. Professional organisations should continue to provide joint guidelines for performing image-based dosimetry and guidance on resource requirements for each treatment procedure.
8. Initiatives are required to promote engagement and knowledge transfer between the various disciplines, including medical physics and medical specialties, regulatory authorities and industry.
9. MPEs in training should gain experience in the implementation of dosimetry-guided treatments. Where necessary, training may be provided at remote centres.
10. Molecular radiotherapy is a highly multidisciplinary field. Programmes of education are therefore required to train all disciplines in relevant areas.
IV. Investigator-initiated clinical trials
Currently, many industry-developed radiotherapeutic drugs are introduced in the clinic without protocols for patient imaging or dosimetry.Collection of evidence to inform the development of personalised molecular radiotherapy must be complemented by investigator-initiated clinical trials, as is the case for external beam radiotherapy.
Recommendations/EFOMP Policy:
11. Investigator-initiated multi-centre and multi-national clinical trials should be promoted to develop optimised treatments.
12. Networks for dosimetry expertise are required to enable sharing of know-how to support clinical trials. For example, image processing and dosimetry may be performed at remote sites with data collected according to specified protocols.
13. For industry- and investigator-initiated clinical trials, individual-patient dosimetry must be incorporated to enable risk-versus-benefit analyses within drug development. Results and evidence must be presented at the time of submission for drug marketing authorisation.
14. Health economics studies should be incorporated into clinical trials to investigate the costs of patient imaging and dosimetry relative to those of recently introduced commercial therapeutic radiopharmaceuticals and of other forms of radiotherapy.
Future implementation of molecular radiotherapy
Clinical implementation of molecular radiotherapy relies on shared roles and responsibilities between the MPE and the medical practitioner (MD). As for any radiotherapeutic modality, the MPE should be responsible for treatment planning based on individualised patient dosimetry, metrological monitoring, and verification of the absorbed doses delivered, whilst the MD prescribes treatment according to the projected absorbed dose distribution, taking account of patient-specific information that may include baseline characteristics and treatment history (Table 1).
Discussion
The field of molecular radiotherapy is expanding rapidly in terms of new agents either in development or in early phase trials, the number of clinical trials and the range of cancers treated. In recent years, molecular radiotherapy has become increasingly dominated by significant commercial investment. At a time when many alternative treatments are emerging, including targeted therapies, immune- and gene-therapies, the capacity to image the biodistribution and to calculate the radiation absorbed doses delivered on a patient-specific basis, as was pursued when molecular radiotherapy was introduced [51], is unrivalled and offers significant patient benefit.
There is mounting evidence of relationships between the absorbed doses delivered and treatment outcomes. Individualised treatment planning will develop further as more data become available. These data may serve as a foundation for treatment planning and for patient stratification to mitigate the risks of treatments that are unlikely to be beneficial. Verification of the absorbed doses delivered may be performed readily for most treatments and, in cases of multiple fractions, may inform subsequent administrations.
Molecular radiotherapy cannot be regarded as a single treatment but as a range of modalities, dependent on how the treatment is administered and on the indication. Successful treatments are therefore dependent on a wide range of expertise that may include specialists in medical and clinical oncology, nuclear medicine, endocrinology, urology and interventional radiology. The role of the MPE is to advise on matters relating to radiation protection, image acquisition and processing, radiobiology, and patient dosimetry. It is then the role of the responsible practitioner to prescribe treatment, tailored to the individual patient, as informed by these criteria. A summary of the recommendations/EFOMP Policy is provided in Table 2.
Table 1
Schematic, generic example of how roles and responsibilities in dosimetry-guided molecular radiotherapy can be shared.

Step i: The MD declares intention to treat and identifies the target tissues and tissues-at-risk.
Step ii: The MPE presents to the MD a range of activities to administer that are likely to yield a corresponding range of absorbed doses delivered to tissues-at-risk and/or target tissues.
Step iii: The MD decides whether treatment will be given.
Step iv: The MD specifies the maximum permissible absorbed doses to be delivered to tissues-at-risk and/or the aimed absorbed doses to be delivered to target tissues, taking account of relevant patient-specific parameters, clinical risk factors and treatment intent. The MPE gives advice on matters such as relevant tissues-at-risk and tolerance absorbed doses, as well as the absorbed doses that may be effective for treatment.
Step v: The MPE has responsibility for instruments and protocols used for measurement of the prescribed activity, patient dosimetry data (including e.g. quantitative imaging), data analyses and dosimetry calculations.
Step vi: Following administration, the MPE conducts the metrological monitoring of the biodistribution of the radiotherapeutic agent and verifies the absorbed doses delivered to target tissues and tissues-at-risk. The data on absorbed doses are recorded in the patient information system and should be traceable to (signed by) an individual MPE and MD. This information may then inform a further treatment cycle or retreatment.
Table 2
Summary of Recommendations/EFOMP Policy

1. European molecular radiotherapy networks must be supported and expanded to share experience, expertise and resources.
2. National and European databases are required to collect data on clinical factors, dosimetry and patient outcomes from multiple centres.
3. Codes of practice for the validation and harmonisation of dosimetry results and patient outcomes for different treatments should continue to be developed and put into practice.
4. Imaging and patient dosimetry must be reimbursed, as is the case for external beam radiotherapy.
5. Staffing requirements for centres offering molecular radiotherapy must be defined in compliance with the BSS Directive.
6. Research should be supported through national and European programmes to investigate treatment planning strategies for individual therapeutic procedures.
7. Professional organisations should continue to provide joint guidelines for performing image-based dosimetry and guidance on resource requirements for each treatment procedure.
8. Initiatives are required to promote engagement and knowledge transfer between the various disciplines, including medical physics and medical specialties, regulatory authorities and industry.
9. MPEs in training should gain experience in the implementation of dosimetry-guided treatments. Where necessary, training may be provided at remote centres.
10. Molecular radiotherapy is a highly multidisciplinary field. Programmes of education are therefore required to train all disciplines in relevant areas.
11. Investigator-initiated multi-centre and multi-national clinical trials should be promoted to develop optimised treatments.
12. Networks for dosimetry expertise are required to enable sharing of know-how to support clinical trials. For example, image processing and dosimetry may be performed at remote sites with data collected according to specified protocols.
13. For industry- and investigator-initiated clinical trials, individual-patient dosimetry must be incorporated to enable risk-versus-benefit analyses within drug development. Results and evidence must be presented at the time of submission for drug marketing authorisation.
14. Health economics studies should be incorporated into clinical trials to investigate the costs of patient imaging and dosimetry relative to those of recently introduced commercial therapeutic radiopharmaceuticals and of other forms of radiotherapy.
"Medicine",
"Physics"
] |
Classification of Standing and Walking States Using Ground Reaction Forces
The operation of wearable robots, such as gait rehabilitation robots, requires real-time classification of the standing or walking state of the wearer. This report explains a technique that measures the ground reaction force (GRF) using an insole device equipped with force sensing resistors, and detects whether the insole wearer is standing or walking based on the measured results. The technique developed in the present study uses the waveform length, which represents the sum of the changes in the center of pressure within an arbitrary time window, as the determining factor, and applies this factor to a conventional threshold method and an artificial neural network (ANN) model for classification of the standing and walking states. The results showed that applying the newly developed technique could significantly reduce the classification errors due to shuffling movements of the patient that are typically noticed in the conventional threshold method using GRF, and that real-time classification of the standing and walking states is possible with the ANN model. The insole device used in the present study can be applied not only to gait analysis systems used in wearable robot operations, but also as a device for remotely monitoring the activities of daily living of the wearer.
Introduction
Walking is necessary for performing most of our daily activities. However, gait disturbances may occur due to neurological disorders, such as spinal cord disorder, stroke, and Parkinson's disease, or accidents such as a fall. Gait disturbance can cause significant discomfort in performing the activities of daily living (ADL); thus, gait rehabilitation is absolutely necessary to improve the quality of life of patients suffering from gait disturbance [1,2]. Conventional gait rehabilitation methods are based on the concept of simple and repetitive physical therapy assisted by rehabilitation therapists, and consequently, the treatment outcome may vary depending on the skill and experience of the therapist. Accordingly, studies have been conducted to operate gait rehabilitation robots using actuators and to determine motion intention from feedback of the patients' bio-signals, to achieve a more effective and quantifiable gait rehabilitation outcome.
To ensure the effective operation of gait rehabilitation robots, it is essential to apply an algorithm that can detect the gait phase of the wearer [3,4]. Various methods have been developed for gait phase detection, including applying a heuristic threshold algorithm using empirically set thresholds or machine learning for detecting the gait phase after acquiring various bio-signals by using the ground reaction force (GRF), accelerometer (Acc), and gyroscope (Gyro) [5][6][7][8].
Algorithms that detect the gait phases using GRF often use the method of detecting the gait phase transition when the GRF measured at the heel or metatarsal bone exceeds a certain threshold. Mariani et al. [6] set the GRF threshold based on the body weight of the insole wearer, whereas Catalfamo et al. [7] set the threshold using the ratio between the maximum and minimum GRF values. Moreover, Yu et al. [8] classified the gait phase using a GRF threshold method.
Participants
The study population for the gait experiments included 32 participants: 28 healthy adults and 4 patients with stroke-induced hemiplegia, the latter included to evaluate the effect of shuffling on the classification accuracy of standing and walking states (Table 1). The healthy adult group comprised 19 males (age 25 ± 3 years, height 175 ± 5.82 cm, weight 68 ± 7 kg) and 9 females (age 21 ± 1 years, height 161 ± 4.22 cm, weight 51 ± 4 kg), who were capable of normal ambulation and had no trauma or neurological disorder. The hemiplegia group comprised 4 males belonging to functional ambulation category (FAC) level 5, who were capable of independent ambulation of 10 m without an assistive device.
GRF Measurement and Data Acquisition

The GRF measurement device was developed by the authors of the present study. It consisted of an insole equipped with FSRs (FSR 402, Interlink Electronics, Inc., CA 93012, USA), which was inserted into the shoe [9]. Details of the insole device are explained in Appendix A. The GRF and motion capture data were collected using a data acquisition system (NI 6259, National Instruments, TX 78759-3504, USA). Because the GRF measurement device and motion capture system used in the study have different data sampling rates (100 Hz and 30 Hz, respectively), data synchronization was needed. Accordingly, the experiment operator pressed a synchronization signal generation button at the start and end of each experiment to generate constant voltage trigger signals of 5 V and 0 V, respectively. The trigger signals were saved along with the GRF data and the positional data of the motion capture markers to establish the reference points for synchronization. Upon completion of the experiment, the data between the starting and ending points of the experiment, as set by the trigger signals, were interpolated to obtain uniform numbers of data points in the same time interval. The acquired GRF data were used for state classification, and the motion capture marker position data were used as the reference data for the actual state classification. Python was used for post-processing of the data acquired by the data acquisition system (DAS).
Test Method
The participants in the gait experiment remained in the standing position while facing forward, and when prompted to start walking by the operator, they walked a distance of 5 m on flat ground and stopped at the marked position. The participants were allowed to arbitrarily select how long they would stay in the standing state and their walking speed. Each participant repeated the gait experiment a total of 10 times.
The experimental protocol was approved by the Institutional Review Board (IRB) at the Korea Institute of Science and Technology (KIST). All participants provided written informed consent for the study prior to participation.
Candidate Factors to Overcome Errors Caused by Foot Drop
When shuffling occurs due to foot drop, the magnitude of the GRF measured by an individual FSR is greatly affected, but the effect on the COP is relatively less because the COP is calculated using the GRF data measured by multiple FSRs. Accordingly, the present study aimed to use the COP for state classification. As shown in Figure 2, the GRF data were used to calculate the positions of the COP as follows.
$COP_{LX} = \frac{\sum_i GRF_{Li} \cdot SP_{LXi}}{\sum_i GRF_{Li}}$, $COP_{LY} = \frac{\sum_i GRF_{Li} \cdot SP_{LYi}}{\sum_i GRF_{Li}}$, and analogously for the right foot; the combined positions $COP_X$ and $COP_Y$ are the GRF-weighted means of the left- and right-foot COPs.

Here, COP_X and COP_Y denote the positions of the combined COP in the x- and y-axis directions, respectively; COP_LX and COP_RX denote the positions of the COP in the x-axis direction, measured from the left and right foot, respectively; and COP_LY and COP_RY represent the positions of the COP in the y-axis direction, measured from the left and right foot, respectively. Regarding the sensor position (SP), the values defined by the authors in the previous study were used as the positional coordinates of the FSRs [9]. With respect to the coordinate axes, the x- and y-axes were defined as the left/right and forward/backward directions when the body was facing forward, respectively.

As shown in Figure 2, as the participants began walking, the positions of the COPs of the left and right feet, projected on the surface, shifted by repeatedly alternating between the forward and backward directions. The angle (θ) formed by the segment connecting these two points and the frontal plane, COP_gradient, was calculated as

$COP_{gradient} = \theta = \arctan\left(\frac{COP_{LY} - COP_{RY}}{L}\right)$

In this equation, L, which was defined as the distance between the left and right feet, was assumed to be the same as the width of the hips (Figure 2), and its value was set based on the height of the participant [22].
The rate of change of COP_gradient over time (denoted ĊOP) was derived by dividing the change in COP_gradient between successive samples by the sampling interval of the FSRs.
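The original equations for the COP did not survive extraction in this copy, so the following Python sketch illustrates one standard reading of the computation: each foot's COP is taken as the GRF-weighted mean of the FSR coordinates, and θ is obtained from the forward/backward offset of the two COPs over the inter-foot distance L. The function names and this weighting scheme are assumptions for illustration, not the authors' verified code.

```python
import numpy as np

def foot_cop(grf, sp):
    """GRF-weighted mean of sensor positions for one foot.
    grf: (n_sensors,) forces; sp: (n_sensors, 2) x/y coordinates."""
    total = grf.sum()
    if total == 0:
        return np.zeros(2)          # no contact: COP undefined, return origin
    return grf @ sp / total

def cop_gradient(cop_l, cop_r, hip_width):
    """Angle between the segment joining the left/right COPs and the
    frontal plane, from the forward/backward (y) offset of the two COPs."""
    return np.arctan2(cop_l[1] - cop_r[1], hip_width)

def cop_dot(theta, fs=100.0):
    """Rate of change of COP_gradient, sampled at fs Hz (100 Hz here)."""
    return np.diff(theta) * fs
```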
Selection of Factor Based on Approximate Entropy
Entropy is reported to be a measure of the complexity of the deterministic dynamics of a time series [23]. In the field of gait analysis, the entropy of the COP value is used to test the therapeutic effect of rehabilitation or as an index for distinguishing between healthy adults and patients. Schmit et al. [24] compared the COP variations of patients with Parkinson's disease against those of healthy elderly persons and determined that the former have relatively lower complexity in the COP variations. A study by Bar-Haim et al. [20] showed that the entropy of the COP increased when the gait function of the patients improved. Such results indicate that the pattern of change in the COP that appears during gait may differ between healthy adults and patients. Hence, applying the COP data of patients to a gait detection algorithm developed using the COP data of healthy adults may not yield accurate detection results [15]. Therefore, unlike gait complexity assessment, it is necessary to identify factors that do not show a significant difference between healthy adults and patients for state classification. COP_X, COP_Y, COP_LX, COP_RX, COP_LY, COP_RY, COP_gradient, and ĊOP obtained from the GRF data were used to calculate the entropy values for the healthy adult and patient groups; the results are shown in Figure 3. The approximate entropies (ApEn) of the variables listed above were calculated using the following steps (the code sketch after this list illustrates them), where N is the length of the series, d[X(i), X(j)] is the maximum absolute difference between corresponding elements, and r is the tolerance:

1. A set of pattern vectors X(i) of length m (the pattern length) was generated from the sample data.
2. The generated sample data were used to calculate their correlations: $C_i^m(r) = \frac{\text{number of } j \text{ such that } d[X(i), X(j)] \le r}{N - m + 1}$
3. After determining the log of the correlation values, the mean log value was used to derive the cumulative entropy: $\Phi^m(r) = \frac{1}{N - m + 1}\sum_i \ln C_i^m(r)$
4. The cumulative entropy was used to calculate ApEn: $ApEn(m, r) = \Phi^m(r) - \Phi^{m+1}(r)$
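A minimal sketch of the standard Pincus formulation of ApEn, which matches steps 1-4 above; the pattern length m and the tolerance factor are assumed parameter choices, since the paper's values were not recoverable from this copy.

```python
import numpy as np

def apen(x, m=2, r_factor=0.2):
    """Approximate entropy (Pincus). m: pattern length; the tolerance r
    is taken as r_factor * std(x). Both parameter values are assumptions."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def phi(m):
        n = len(x) - m + 1
        # Step 1: embed the series into pattern vectors X(i) of length m.
        patterns = np.array([x[i:i + m] for i in range(n)])
        # Step 2: correlation C_i = fraction of patterns within tolerance r
        # (Chebyshev distance), self-matches included. O(n^2) memory: sketch only.
        dists = np.max(np.abs(patterns[:, None, :] - patterns[None, :, :]), axis=2)
        c = (dists <= r).mean(axis=1)
        # Step 3: cumulative entropy = mean of the log correlations.
        return np.log(c).mean()

    # Step 4: ApEn is the difference of cumulative entropies.
    return phi(m) - phi(m + 1)
```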
In Figure 3, the factor that showed the smallest difference in entropy between the two groups was ĊOP, which represents the amount of change in COP_gradient. In other words, the results indicated that ĊOP is the most appropriate factor for state classification. The application of the entropy analysis to ĊOP is explained in Appendix B.
Waveform Length of
has a value of zero when the two feet alternate during gait or when standing with both feet parallel to each other. Therefore, the standing and walking states cannot be differentiated by calculating and comparing the values. To rectify this issue, the waveform length was derived by combining within an arbitrary time window and using it for state classification, as shown below [25].
= (13) In other words, the gait data from the previous step are used when classifying the current walking state using . 1. m number of sample data X(i), as defined by the pattern length, were generated.
2. The generated sample data were used to calculate their correlations by the following equation.
3. After determining the log of the correlation value, the mean log value was used to derive the cumulative entropy.
4. The cumulative entropy was used to calculate ApEn by the following equation.
In Figure 3, the factor that showed the smallest difference in entropy between the two groups was . COP, which represents the amount of change in COP gradient . In other words, the results indicated that . COP is the most appropriate factor for state classification. The explanation of the application of the entropy for . COP is introduced in Appendix B. COP COP gradient has a value of zero when the two feet alternate during gait or when standing with both feet parallel to each other. Therefore, the standing and walking states cannot be differentiated by calculating and comparing the . COP values. To rectify this issue, the waveform length was derived by combining . COP within an arbitrary time window and using it for state classification, as shown below [25].
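A minimal sketch of Equation (13) as a causal sliding-window sum; the window length corresponds to the paper's "arbitrary time window", and its value in any call is an assumption.

```python
import numpy as np

def waveform_length(cop_dot, window):
    """COP_W at sample k: sum of |COP-dot| over the last `window` samples
    (Equation (13)); earlier samples therefore influence the current label."""
    abs_cd = np.abs(np.asarray(cop_dot, dtype=float))
    csum = np.concatenate(([0.0], np.cumsum(abs_cd)))
    starts = np.maximum(np.arange(len(abs_cd)) - window + 1, 0)
    return csum[np.arange(1, len(abs_cd) + 1)] - csum[starts]
```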
Using GRF Threshold (TAM Method)

The Timing Analysis Module (TAM) method [7,8] used in previous studies was used to differentiate between the standing and walking states. This method determines the contact between the foot and the ground based on the magnitude of the GRF, and the threshold is calculated using the following equation.
Here, GRF_max and GRF_min represent the maximum and minimum values of the sum of GRFs measured by multiple FSRs, respectively. Because the GRF data were applied with the participants segregated into the healthy adult and patient groups, GRF_max and GRF_min were different for each group. The TAM method classifies the state as the standing state when the sum of the GRFs measured is greater than GRF_TH from the above equation; otherwise, the state is classified as the walking state.
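The exact form of Equation (14) was lost in extraction. The sketch below assumes the common convention of placing GRF_TH a fixed fraction of the way between GRF_min and GRF_max; the fraction alpha is an illustrative assumption, not the paper's value.

```python
def tam_classify(grf_sum, grf_max, grf_min, alpha=0.5):
    """TAM-style labeling: standing (0) when the summed GRF exceeds the
    threshold, walking (1) otherwise. alpha is an assumed parameter."""
    grf_th = grf_min + alpha * (grf_max - grf_min)  # assumed form of Eq. (14)
    return [0 if g > grf_th else 1 for g in grf_sum]
```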
Using COP_W Threshold

After setting the COP_W threshold (COP_W.TH), the following criteria were applied to define the standing and walking states as 0 and 1, respectively:

$state = \begin{cases} 0\ (\text{standing}), & COP_W < COP_{W.TH} \\ 1\ (\text{walking}), & COP_W \ge COP_{W.TH} \end{cases} \qquad (15)$

Figure 4 shows the probability of COP_W, which was calculated using the GRF data generated in the standing and walking states. In the histograms shown in Figure 4, COP_W.TH was determined as the COP_W value with the highest probability in the area of overlap of the standing and walking states. Therefore, the healthy adult and patient groups have different threshold values, as shown in Figure 4a,b. The state determined using Equation (15) was compared with the actual state determined by a motion capture system to assess the accuracy of the state classification technique proposed in the present study.
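A sketch of how COP_W.TH could be selected from the class-conditional histograms of Figure 4. Reading "the highest probability in the area of overlap" as the peak of the pointwise minimum of the two histograms is an interpretation, not a detail confirmed by the paper.

```python
import numpy as np

def select_copw_threshold(copw_stand, copw_walk, bins=100):
    """Pick COP_W.TH from the overlap of the standing/walking histograms."""
    copw_stand = np.asarray(copw_stand, dtype=float)
    copw_walk = np.asarray(copw_walk, dtype=float)
    lo = min(copw_stand.min(), copw_walk.min())
    hi = max(copw_stand.max(), copw_walk.max())
    edges = np.linspace(lo, hi, bins + 1)
    p_stand, _ = np.histogram(copw_stand, bins=edges, density=True)
    p_walk, _ = np.histogram(copw_walk, bins=edges, density=True)
    overlap = np.minimum(p_stand, p_walk)       # region where both states occur
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers[np.argmax(overlap)]

# Equation (15) then labels a sample as walking (1) when COP_W >= COP_W.TH:
# state = (copw >= select_copw_threshold(copw_stand, copw_walk)).astype(int)
```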
Artificial Neural Network Model
For comparison with the aforementioned threshold methods, a machine learning-based state classification model was developed, as shown in Figure 5. In the ANN model, GRF, COP_X, COP_Y, COP_LX, COP_RX, COP_LY, COP_RY, COP_gradient, ĊOP, and COP_W were used as the input data, while supervised learning was performed by applying the state classification results from the motion capture system as the learning data. The learning data were normalized by dividing by the maximum value that appeared for each type of input data, to reduce the influence of the size of each data value. The ANN model was developed with a single hidden layer, to allow the use of Garson's algorithm for assessing the relative importance of the factors, and it consisted of 20 nodes. In the model for state classification of the healthy adult group, 200 sets of experimental data randomly selected from a total of 280 sets of gait experiment data were used for model learning, and the remaining 80 sets were used for state classification. In the model for state classification of the patient group, 30 sets of experimental data randomly selected from 40 sets of gait experiment data were used for model learning, and the remaining 10 sets were used for state classification.
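A minimal sketch of the described single-hidden-layer model using scikit-learn. The logistic activation, the iteration budget, and the use of MLPClassifier itself are assumptions, since the paper does not state its implementation; only the 20-node single hidden layer and the per-factor max-normalization follow the text.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# X: (n_samples, 10) factors: GRF, COP_X, COP_Y, COP_LX, COP_RX, COP_LY,
# COP_RY, COP_gradient, COP_dot, COP_W; y: motion-capture labels (0/1).
def train_state_classifier(X, y):
    X = X / np.abs(X).max(axis=0)  # per-factor max-normalization, as in the paper
    model = MLPClassifier(hidden_layer_sizes=(20,),  # single hidden layer, 20 nodes
                          activation="logistic",     # assumed activation
                          max_iter=2000, random_state=0)
    model.fit(X, y)
    return model
```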
Moreover, Garson's algorithm was applied to the factors used in the ANN model to assess the relative importance of each factor [26,27]. The relative importance of each input factor used in the model created after the completion of learning was calculated by partitioning the connection weights of the trained network (Equation (16)). Relative importance is a relative value, and the sum of the relative importance of all input factors used in a single system should be 100%.
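A sketch of one common formulation of Garson's weight-partitioning algorithm for a single-hidden-layer network; whether this exact variant matches the paper's Equation (16) is an assumption. With the scikit-learn model sketched above, it could be called as garson_importance(model.coefs_[0], model.coefs_[1].ravel()).

```python
import numpy as np

def garson_importance(w_ih, w_ho):
    """Garson's algorithm for a single-hidden-layer network.
    w_ih: (n_inputs, n_hidden) input-to-hidden weights;
    w_ho: (n_hidden,) hidden-to-output weights.
    Returns importances that sum to 1 (i.e., 100%)."""
    contrib = np.abs(w_ih) * np.abs(w_ho)   # |w_ij| * |v_j| per connection
    contrib /= np.abs(w_ih).sum(axis=0)     # share of each hidden node's input
    importance = contrib.sum(axis=1)        # sum contributions over hidden nodes
    return importance / importance.sum()    # normalize so the total is 100%
```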
Results
According to a study by Pappas et al. [15], gait analysis algorithms for treatment or rehabilitation that show a classification accuracy of <90% are difficult to use in actual clinical practice. Accordingly, the present study set the goal of achieving a classification accuracy of ≥90% for the state classification algorithm developed in the present study.

Figure 6 shows the results of the state classification accuracy obtained in the present study. The mean state classification accuracies in the gait experiments on the healthy adult group were 98.52% and 95.69% when using the TAM method and the threshold method using COP_W, respectively, showing that both methods exceeded the target classification accuracy of ≥90%. In the healthy adult group, the classification accuracies for the standing and walking states were higher when the TAM method was used, when compared with the threshold method using COP_W. This could be attributed to the fact that the threshold method using COP_W is influenced by the previously collected data when calculating the waveform length within an arbitrary time window. Therefore, such methods may not only incorrectly classify the current walking state, but also show classification delay.
State Classification Accuracy When Using Threshold Methods
The mean state classification accuracies in the gait experiments on the patient group were 91.52% and 95.05% when using the TAM method and the threshold method using COP_W, respectively, showing that both methods exceeded the target classification accuracy of ≥90%. However, the classification accuracy for the walking state (P-walk. in Figure 6) obtained using the TAM method was unsatisfactory at 87.22%. This may be attributed to the misinterpretation of the walking state as the standing state due to shuffling by the patients when the GRF was measured at the swing foot. Figure 7 shows the GRF data collected from gait experiments on the healthy adult and patient groups. The graphs show the sum of the GRFs of both feet in the standing state and the GRF of the swing foot in the walking state. In Figure 7a, which shows the GRF data for the healthy adult group, the mode values of the GRFs for the standing and walking states are separated from each other, whereas in Figure 7b, which shows the GRF data for the patient group, some of the GRF values measured at the swing foot overlap with the mode value region of the GRF in the standing state. This indicates that the GRF was measured when shuffling occurred in the swing foot; consequently, an error occurred where the standing state was detected despite the fact that the patient was actually walking, which caused the state classification accuracy to decrease.

On the other hand, the classification accuracy of the threshold method using COP_W for the walking state of the patient group (P-walk. in Figure 6) was high, reaching up to 95.50%. It is believed that such results were due to a significant reduction in errors caused by shuffling by applying the COP-based factors, as intended by the authors.

In Table 2, the percentiles of the GRF mode values in the walking state, measured by the TAM method, were 96.00% and 89.50% for the healthy adult and patient groups, respectively. Both groups also showed similar state classification accuracies of 98.56% and 87.22%, respectively. Moreover, the percentiles of the GRF mode values in the standing state, measured by the threshold method using COP_W, were 90.00% and 94.50% for the healthy adult and patient groups, respectively; both groups also showed very similar state classification accuracies of 91.26% and 94.52%, respectively. Such results indicate that it is easier to classify between the standing and walking states when there is a greater separation between the datasets generated in those states. Therefore, it is inferred that using COP_W as the factor for classifying the states in the patient group will be more effective than using GRF with the threshold method.

Overall, relatively higher classification accuracies were achieved by the TAM method using GRF for the healthy adult group (98.52%) and the threshold method using COP_W for the patient group (95.05%). Moreover, applying the threshold method using COP_W showed very high mean state classification accuracies of 95.69% and 95.05% for the healthy adult and patient groups, respectively, which suggests its applicability to actual clinical experiments.

Figures 8 and 9 show the GRF data measured in the gait experiments on the healthy adult and patient groups, respectively, and the ĊOP and COP_W values calculated from these data, along with examples of state classification by the two methods: the TAM method and the threshold method using COP_W. Figures 8e and 9e show the actual state classification results (actual states) of the motion capture system together with those of the TAM method and the threshold method using COP_W for comparison. In Figure 8e, which shows the gait experiment results for the healthy adult group, both the TAM method and the threshold method using COP_W showed very similar results to the actual state classified by the motion capture system. The TAM method accurately classified the start and end of walking, with occasional errors in misinterpreting the walking state as the standing state. The threshold method using COP_W showed superior state classification accuracy to the TAM method, but it also showed delay in classifying the transition from the walking state to the standing state. This classification delay was due to using the data from the previous gait step when calculating COP_W, as described earlier, and additional time was required to avoid this influence. In Figure 9e, which shows the gait experiment results for the patient group, several classification errors occurred when the TAM method was used for state classification. This classification error was due to the patient group showing shuffling of the swing foot during walking, which caused the walking state to be misinterpreted as the standing state. On the other hand, the threshold method using COP_W showed a sharp decrease in the frequency of state classification errors and a significant improvement in the classification accuracy, even in the patient group.
State Classification Accuracy by Machine Learning
As shown in Figure 6, the ANN model demonstrated very high mean state classification accuracies of 99.23% and 98.33% for the healthy adult and patient groups, respectively. In comparison with the threshold method using COP_W, the classification accuracy increased by 3.54% and 3.28% for the healthy adult and patient groups, respectively, and there was no noticeable classification delay. Figure 10 shows the relative importance of the input factors used in the ANN model, calculated using Equation (16). As shown in Figure 10, the factor with the highest relative importance in both the healthy adult and patient groups was COP_W. All other factors showed a relative importance within the range of 0.045-0.09 in both groups, whereas COP_W showed a relative importance of 0.11 and 0.30 in the healthy adult and patient groups, respectively, which confirmed that it was the most important factor in both groups. Especially in the state classification model for the patient group, the relative importance of COP_W was much greater than that of all other factors. Based on these results, it was determined that using COP_W as the factor for machine learning-based state classification was the right decision.
Discussion
A recent study by Tang et al. [5] used three different threshold values for distinguishing between the standing and walking states, and proposed a method of tuning the threshold values at each step by using the maximum and minimum GRF values from the previous step, which they referred to as the self-tuning triple threshold algorithm (STTTA). However, STTTA had the inherent limitation of decreased state classification accuracy due to shuffling of the swing foot in the patient group. Djamaa et al. [28] used the GRF to classify the walking state as shuffle walk, toe walk, and normal walk, and demonstrated that such classification could be useful in diagnosing the presence of a disorder. However, this method too could not avoid the classification error caused by shuffling.
On the other hand, the application of COP_W, which was introduced as a new factor in the present study, to the threshold method yielded high mean state classification accuracies of 95.69% and 95.05% in the experiments on the healthy adult and patient groups, respectively, which indicated a significant reduction in classification errors caused by shuffling. Figure 11 shows the state classification obtained by applying the experimental data shown in Figures 8 and 9 to the ANN model; the results showed almost no state classification error. Moreover, the ANN model using COP_W as the factor showed no classification delay, unlike the threshold method using COP_W.

Figure 11. Comparison of the classification results using the test data: (a) healthy adults; (b) patients.

Table 3 shows the delay in the state classification time in all gait experiments. The start and end time points of walking represented the time points when the motion capture marker placed on the solar plexus moved in the direction of walking, and the time delay was defined as the difference between these time points, observed by the motion capture system, and the time points of completion of state classification by the algorithm developed in the present study.

The most prominent gait characteristics of the patient group were (1) the time that the foot stayed in contact with the ground was relatively longer than that in the healthy adult group, owing to shuffling of the affected foot; and (2) when the affected foot was lifted, a significant weight shift occurred to the unaffected foot. Therefore, when the GRF is used for state classification in the patient group, the foot touches down on the ground after a time delay owing to shuffling of the affected foot. Consequently, there is a delay in detecting the start of walking; the mean classification delay was found to be 52.1 ms. At the end of walking, the swing foot touches down on the ground earlier due to shuffling, when compared with the healthy adult group. Consequently, the classification process prematurely assumes that the participant is in the standing state; the mean classification delay was found to be −283.4 ms. The state classification delays in the healthy adult group when classified by the GRF were −11.8 ms and −2.6 ms at the start and end of walking, respectively. Because these classification delays were negligible, the results could be considered as real-time classification.
The threshold method using COP_W showed mean state classification delays at the end of walking of 193.3 ms and 139.3 ms in the healthy adult and patient groups, respectively. There were two causes for this classification delay. The first was that the waveform length was derived by a method that sums the data within an arbitrary time window and was therefore influenced by the data from the previous time window. In such cases, the calculation included the data from the previous walking state, even though the participant had already stopped, and, thus, the results may have continued to show the participant in the walking state. The second was that even when the participant stops walking, changes in the COP may continue to occur owing to the shaking of the body until the participant comes to a complete stop. Such changes may be reflected in COP_W, and the results may continue to show the participant in the walking state.
The mean time delay at the start of walking was −24.8 ms and −155.5 ms in the healthy adult and patient groups, respectively, meaning that state classification occurred before the start of walking in both groups. In the patient group, when the affected foot was lifted, excessive weight shift occurred to the unaffected foot. This caused a sudden increase in the amount of change in the COP before the knee movement, which could have been classified as the walking state. In the healthy adult group, anticipatory postural adjustments (APAs) appear, which are slight movements to maintain balance before the body moves [29]. In other words, when the participants in the healthy adult group began walking from the standing position, the COP that was positioned between the two feet entered an imbalance phase of moving toward the heel of the swing foot before walking began. In the healthy adult group, such changes in the COP influenced COP_W, whereby the start of walking was classified prematurely. When the ANN model was used, the time delay in state classification showed an absolute value of <10 ms regardless of the group and type of walking state. Therefore, classification using the ANN model can be considered as real-time state classification. In Figure 6, the results of state classification using the ANN model showed very high mean classification accuracies of 99.23% and 98.33% in the healthy adult and patient groups, respectively. Therefore, the ANN model developed in the present study can be viewed as a technique capable of real-time state classification with very high state classification accuracy, suggesting its suitability for application in clinical practice.
The lower classification accuracy of the threshold method using COP_W for the standing state in the healthy adult group, when compared with the other cases shown in Figure 6, can be explained based on the occurrence of APAs. In other words, the COP changed as the body wavered in the standing state, which would show a very similar pattern to the change in the COP due to APAs before the start of walking. Therefore, wavering of the body in the standing state may have been erroneously classified as the walking state.
The method using COP_W and the TAM method both require only Boolean-type logical comparisons, whereas the ANN model performs matrix calculations, so its computational load is higher than that of the two threshold methods. However, the matrix computations performed by the ANN model used in this study are small enough to be executed in real time.
Conclusions
The present study proposed a method for using the GRF data for state classification, while also reducing the errors caused by shuffling. An appropriate input factor was selected for state classification, and this factor was applied to the threshold method and machine learning-based method to examine the state classification accuracy.
Consequently, COP_W, the waveform length derived from the sum of all ĊOP values within an arbitrary time window, was selected as the state classification factor. The threshold method using COP_W showed a mean classification accuracy of ≥95% in state classification experiments on the healthy adult and patient groups. This method was found to show a significant improvement in the state classification error caused by shuffling, especially in the experiments on the patient group. However, because the data from the previous step were used in the computation to obtain COP_W, a classification delay was also detected. Moreover, changes in the COP that appear in the normal standing state may influence COP_W, which can lead to classification error.
However, the ANN model that used COP_W as the factor showed excellent state classification accuracy of ≥98% in both the healthy adult and patient groups, without the classification delay that was observed in the threshold method using COP_W.
In conclusion, it was determined that the selection of COP_W as the factor for state classification was an appropriate choice. Moreover, the ANN model using COP_W can not only fundamentally resolve the problem of state classification error caused by shuffling, but is also capable of real-time state classification. Furthermore, the GRF measurement device used in the present study was fabricated as an insole type that can be inserted into a shoe. Thus, it can be worn conveniently and operated for a long time, which enhances its applicability to actual clinical trials.
In future studies, the authors aim to develop a classification method with reduced state classification errors caused by differences in gait characteristics between healthy adults and patients; changes in the direction of walking or turning during walking and differences in walking speed will also be examined, so that the proposed method can be adopted in actual clinical experiments.

Informed Consent Statement:

Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to ethical concerns since they were obtained in a clinical trial.
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A
As illustrated in Figure A1a, the GRF measuring insole device developed in the present study comprised a total of 10 FSRs (FSR 402, Interlink Electronics, USA), of which five were attached to the bottom of the insole for each foot. The locations of the FSR attachments were as illustrated in Figure A1c and as presented in Table A1; the locations were standardized using the height of each participant [22]. The distance between the center points of both heels, depicted in Figure A1c, was assumed to be equivalent to the shoulder width.
The schematic of the gait analysis system is illustrated in Figure A1b. First, the GRF data collected from the left insole were transmitted to the right insole via a Bluetooth device; these data were subsequently combined with right foot GRF data and transmitted to a PC wirelessly by Bluetooth for use in the analysis.
Appendix B
In this study, the phase-randomized Fourier surrogate method was used to determine whether ĊOP was deterministic. In this method, the ĊOP data are Fourier transformed, multiplied by a randomly assigned phase term, and then inverse Fourier transformed to obtain surrogate data [30,31]. ApEn was calculated for each of the original ĊOP data and the surrogate data; if there was a difference between these two ApEn values, the original ĊOP data were interpreted as a deterministic signal rather than linear noise. The ApEn values calculated from the ĊOP data of the healthy adult and patient groups obtained in this study are shown in Figure A2. Both groups show different ApEn values for the original ĊOP data and the surrogate data, and these results indicate that the ĊOP data obtained in this study are a deterministic signal.
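A minimal sketch of the phase-randomized Fourier surrogate described above; pairing it with the apen helper sketched earlier reproduces the comparison. The seed handling and the preservation of the DC/Nyquist phases are implementation choices, assumed here for a well-formed real-valued surrogate.

```python
import numpy as np

def fourier_surrogate(x, seed=0):
    """Phase-randomized surrogate: keep the amplitude spectrum of x,
    randomize the phases, and inverse-transform [30,31]."""
    x = np.asarray(x, dtype=float)
    rng = np.random.default_rng(seed)
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(spec))
    phases[0] = 0.0                      # keep the mean (DC) component real
    if len(x) % 2 == 0:
        phases[-1] = 0.0                 # keep the Nyquist component real
    surrogate = np.abs(spec) * np.exp(1j * phases)
    return np.fft.irfft(surrogate, n=len(x))

# A clear difference between the two ApEn values suggests a deterministic signal:
# delta = apen(cop_dot_series) - apen(fourier_surrogate(cop_dot_series))
```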
"Engineering"
] |
Langmuir–Blodgett Films with Immobilized Glucose Oxidase Enzyme Molecules for Acoustic Glucose Sensor Application
In this work, a sensitive coating based on Langmuir–Blodgett (LB) films containing monolayers of 1,2-dipalmitoyl-sn-glycero-3-phosphoethanolamine (DPPE) with an immobilized glucose oxidase (GOx) enzyme was created. The immobilization of the enzyme in the LB film occurred during the formation of the monolayer. The effect of the immobilization of GOx enzyme molecules on the surface properties of a Langmuir DPPE monolayer was investigated. The sensory properties of the resulting LB DPPE film with an immobilized GOx enzyme in glucose solutions of various concentrations were studied. It was shown that the immobilization of GOx enzyme molecules into the LB DPPE film leads to a rising LB film conductivity with an increasing glucose concentration. This effect made it possible to conclude that acoustic methods can be used to determine the concentration of glucose molecules in an aqueous solution. It was found that for an aqueous glucose solution in the concentration range from 0 to 0.8 mg/mL the phase response of the acoustic mode at a frequency of 42.7 MHz has a linear form, and its maximum change is 55°. The maximum change in the insertion loss for this mode was 18 dB for a glucose concentration in the working solution of 0.4 mg/mL. The range of glucose concentrations measured using this method, from 0 to 0.9 mg/mL, corresponds to the physiological range in blood. The possibility of changing the conductivity range of a glucose solution depending on the concentration of the GOx enzyme in the LB film will make it possible to develop glucose sensors for higher concentrations. Such sensors would be in demand in the food and pharmaceutical industries. The developed technology can become the basis for creating a new generation of acoustoelectronic biosensors in the case of using other enzymatic reactions.
Introduction
The measurement of sugar concentration in liquid solutions is an important task for many fields of science and technology. The control of this parameter is important in the production of food beverages, pharmaceuticals, cosmetics, plastics, and tobacco products [1][2][3][4][5]. In addition, the blood glucose level is an important indicator of human health: by monitoring it, it is possible to detect diabetes as well as many diseases associated with metabolic disorders [6,7]. Almost all modern glucometers operate using the electrochemical principle, which is based on measuring the change in electrophysical parameters resulting from the interaction of a sugar-containing liquid with a specific reagent deposited on the electrode structure of the glucometer [8]. Glucometers based on calorimetric and optical methods are also known [9][10][11]. In recent years, noninvasive glucometers based on the indirect measurement of blood glucose concentration have been actively developed. In this case, the parameters of biological fluids are measured: sweat, urine, saliva, or the blood filling of blood vessels [12][13][14][15][16][17].
It should be noted that, despite the great demand for such devices, there are still problems associated with increasing the reliability, sensitivity, and repeatability of their results. Machine learning technologies can be used to solve these problems [18]. This will make it possible to predict the dynamics of the course of the pathology. This is important in the development of systems for the automatic dosing of drugs. Another solution to these problems is the creation of enzymatic biosensors. These sensors have a high sensitivity and selectivity to target molecules [19,20]. This approach is applicable, among other things, to the creation of wearable multisensor systems [21,22], as well as systems for the continuous monitoring of glucose levels [23].
The immobilization of enzyme molecules on a substrate is one of the main tasks in the development of enzymatic biosensors. There are several ways to create sensor coatings with immobilized enzyme molecules. It is possible to use such methods as linker bonds with epoxy groups, fixation in various hydrogels, biopolymers, polyelectrolytes, etc. [24][25][26][27][28][29][30]. It should be noted that in this case the enzyme is located inside the matrix and this complicates access to it by the test liquid. This leads to a decrease in the sensitivity of the sensor and a decrease in its reliability. The problem of increasing the lifetime and stability of the enzyme is important due to the need to create systems for monitoring changes in glucose levels over a long time [31,32].
Langmuir-Blodgett (LB) technology can be used to solve these problems. This technology makes it possible to form a highly ordered monolayer of surfactant molecules at the water-air interface with a simultaneous immobilization of enzyme molecules in this monolayer. The successive transfer of such monolayers onto solid substrates makes it possible to form sensor coatings whose structures reproduce an element of the microbial cell membrane. In this case, the outer monolayers of the film will perform the function of protecting the enzyme from environmental influences. It should be noted that, as in the case of the immobilization of enzymes in films, there is a possibility of reducing the sensitivity of the created coating due to the presence of a layer of close-packed surfactant molecules covering the enzyme [33,34]. However, to overcome this drawback, it is possible to immobilize the enzyme in an LB film with a heterogeneous morphology, for example, by creating a mixed film based on a monolayer of surfactant molecules of different types or a film with incorporated nanocarbon structures [35]. In this regard, an urgent task is to study the process of the immobilization of enzyme molecules in mixed Langmuir monolayers of surfactant molecules of various types. The process of the immobilization of glucose oxidase enzyme molecules was studied in [33,36]. The influence of the charge of the head group of surfactant molecules, the type of surfactant molecules, and the length of the hydrocarbon radical on the process of adsorption of enzyme molecules were also studied [37][38][39].
It should be noted that LB films can be used as a sensor layer for sensors of various types. For example, an LB film of tyrosine hydroxylase was used to create an electrochemical biosensor for drug compounds [40]. A Langmuir film based on a monolayer of ammonium octadecyltrimethyl and Prussian blue with an immobilized glucose oxidase enzyme was used to create an amperometric glucometer [41]. Mixed Langmuir films of polyaniline and stearic acid with immobilized cholesterol oxidase were proposed to be used to create an electrochemical cholesterol biosensor [42]. An electrochemical sensor for penicillin based on a mixed Langmuir monolayer of penicillinase-DMPA incorporated with carbon nanotubes was proposed in [43]. An LB film based on octadecylamine with an immobilized phenol oxidase enzyme was used to create an electrochemical biosensor for the fruit browning enzyme (phenol) [44]. Along with typical electrochemical, impedance, and optical glucose sensors, LB films containing enzyme molecules can be used to fabricate efficient acoustic sensors.
The principle of their operation is based on recording the characteristics of acoustic waves in piezoelectric structures that are in contact with the sensor film. Due to changes in the electrical and/or acoustic properties of the film as a result of biospecific interaction with the analyzed liquid, the phase and amplitude of the used acoustic wave change [45,46]. These changes can be easily detected by existing devices, and acoustic technology has been used for studying polydisperse liquids of various compositions [47][48][49][50]. One of the advantages of acoustoelectronic technologies is the absence of contact between the substance under study and the electrode structure, in contrast to the electrochemical principle of biosensor implementation. Moreover, the advantages of the acoustoelectronic method include the possibility of cleaning the surface of the sensor coating from nonspecifically bound protein molecules using an acoustic stream [51].
To use LB films as sensor coatings, it is necessary to clarify their influence on the properties of acoustic waves. The influence of LB films on the characteristics of devices based on surface and bulk acoustic waves [52][53][54], Love waves [55], FBARs [56], and HF SAW resonators [57] has been studied. As a result, it was shown that these films have almost no effect on the characteristics of acoustic waves of various types and can be effectively used as sensor coatings. It was also shown that such films and acoustic devices can be used to create various gas sensors [57][58][59][60].
As for biological sensors, it was proposed in [61] to use the immobilization of a specific phage in an LB film deposited on the surface of a quartz microbalance. As a result, the possibility of registering β-galactosidase from Escherichia coli was demonstrated. Works on the study of the possibility of the immobilization of various enzymes in LB films and the creation of biosensors on this basis are currently at the initial stage of development and may lead to the creation of a new generation of biosensors [53,62].
In this article, a technology for immobilizing the enzyme glucose oxidase (GOx) into bilayer membrane-like LB films based on phospholipid molecules of 1,2-dipalmitoyl-sn-glycero-3-phosphoethanolamine (DPPE) was developed. The morphology of the resulting LB films and their sensory properties toward glucose solutions of various concentrations were studied. It is known that GOx molecules are selectively sensitive to glucose [63]. In this work, a solution of the GOx enzyme in distilled water was used as the aqueous subphase. The enzyme concentration was 0.015 mg/mL. As is known, the DPPE phospholipid retains a neutral charge at pH values in the range of 5 to 7 [64]. A shift of the pH value to the acidic region allows one to polarize the DPPE molecule, creating an excess positive charge in the hydrophilic part of the molecule. In this regard, to increase the efficiency of GOx enzyme adsorption by a Langmuir DPPE monolayer, it is necessary to change the surface charge of its molecules from positive to negative. It is known that the GOx molecule has an isoelectric point at a pH of 4.2 [65], so the pH of the aqueous subphase was set to 4. A sodium acetate buffer solution was used to achieve this pH. The buffer was prepared by mixing aqueous solutions of sodium acetate (CH3COONa) and acetic acid (CH3COOH), each with a molar concentration of 0.2 mol/L, at a CH3COONa:CH3COOH volume ratio of 18:82. The resulting buffer solution was mixed with the aqueous subphase to obtain pH = 4. The pH value was controlled using a pH meter, pH-150MI (IzmTeh, Moscow, Russia).
Materials and Methods
The formation of Langmuir monolayers and LB films was carried out on a KSV Nima LB Trough KN2001 setup (Nima KSV, Espoo, Finland) with a working surface area of 243 cm². A solution of DPPE in chloroform was applied to the surface of the aqueous subphase with an aliquot volume of 50 µL. After 120 min from the moment the solution was applied to the water surface, the monolayer was compressed by movable barriers at a constant area loss rate of 0.7 cm²/min. The dependence of the surface pressure in the monolayer on the area occupied in it by one DPPE molecule (π-A isotherm) was recorded automatically using a Wilhelmy balance sensor. Figure 1 shows π-A isotherms of Langmuir DPPE monolayers formed on an aqueous subphase in the absence of dissolved GOx enzyme (1) and in its presence (2).
Figure 1. π-A isotherms of the Langmuir monolayer of DPPE molecules formed in the absence of dissolved GOx enzyme molecules in the aqueous subphase (1) and in their presence (2). In regions of compression isotherms I-II, II-III, and III-IV, the monolayer was in the gas, liquid, and condensed phases, respectively. IIa is an additional phase transition point.
The specific area per DPPE molecule in the condensed phase (A0) and the compression modulus (k) of the Langmuir monolayer were determined using the obtained π-A isotherms [66]. The A0 value was determined using a tangent drawn to section III-IV of the π-A isotherm corresponding to the condensed phase of the monolayer. The compression modulus was calculated by (1):

k = −A (dπ/dA), (1)

where the ratio dπ/dA is numerically equal to the slope of the tangent drawn to section III-IV of the π-A monolayer isotherm (Figure 1). The transfer of monolayers to solid substrates was carried out using the Langmuir-Blodgett method. This method was applied for the production of sensitive films on the surfaces of acoustic devices, as described in detail in [39]. Lithium niobate (LiNbO3) polished on both sides was used as a substrate. The monolayer was compressed by movable barriers until a surface pressure of 40 mN/m was reached. Next, the substrate, oriented perpendicularly to the water-air interface, passed through the monolayer with the immobilized GOx enzyme at a speed of 1 mm/min. After complete immersion, the substrate remained under the water for 30 s. Next, the substrate was pulled out from under the water at a rate of 1 mm/min. Thus, a bilayer membrane-like DPPE film with immobilized GOx enzyme molecules was formed on both surfaces of the LiNbO3 plate. The resulting sensitive coating dried for 4 h under a hood. Schematically, the process of Langmuir-Blodgett film formation is shown in Figure 2.
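As a side note, the quantities defined by Formula (1) and the tangent construction can be computed numerically from digitized isotherm data. The sketch below is a minimal illustration, not the authors' processing code; the arrays `a` and `pi`, and the boolean `mask` selecting the condensed-phase section III-IV, are assumed inputs.

```python
import numpy as np

def compression_modulus(a, pi):
    """k = -A * (dπ/dA), evaluated pointwise along the isotherm.

    a  : area per molecule (Å² per molecule), monotonic along the run
    pi : surface pressure (mN/m) at the same sample points
    """
    dpi_da = np.gradient(pi, a)   # numerical slope dπ/dA
    return -a * dpi_da

def a0_from_tangent(a, pi, mask):
    """Estimate A0 by extrapolating the condensed-phase tangent to π = 0.

    mask selects the points belonging to section III-IV of the isotherm;
    a straight line π = s·A + c is fitted there and its A-intercept taken.
    """
    s, c = np.polyfit(a[mask], pi[mask], 1)
    return -c / s
```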
After the completion of the coating drying process, the side of the substrate with deposited interdigital transducers (IDTs) was cleaned from the DPPE film using chloroform (Sigma-Aldrich, St. Louis, MO, USA, 99%) and ethanol (Sigma-Aldrich, St. Louis, MO, USA, 95%). Thus, the sensor film remained only on the side of the plate free from the IDTs.
Production of an Acoustic Delay Line
An acoustic delay line (DL) was fabricated on a YX LiNbO3 plate polished on both sides, with a thickness of 320 µm and 23.5 mm × 13.5 mm in size. Interdigital transducers (IDTs) were formed on the plate surface by magnetron sputtering and projection photolithography. The wavelength specified by the period of the IDT, the aperture, and the distance between the IDTs were 660 µm, 8 mm, and 10 mm, respectively. The operating frequencies of the formed DL were in the range of 1 to 50 MHz. The frequency dependences of the S12 parameter of the manufactured acoustic delay line without an LB film, with an LB film, and with an LB film with immobilized GOx enzyme are shown in Figure 3.
Setup and Methods for the Study of Glucose-Sensitive Properties of the LB Film
A special experimental setup was designed and manufactured to study the sensory properties of the created LB DPPE film with the immobilized GOx enzyme (Figure 5).
The experimental setup for studying the sensory properties of the created LB DPPE film with an immobilized GOx enzyme deposited on the DL consisted of a PC (1) with an installed control program for a vector network analyzer (2) and an automatic glucose solution supply system (3, 4, 5). A two-port vector network analyzer Obzor TR1300/1 (Planar, Chelyabinsk, Russia) with an operating frequency range from 0.3 to 1300 MHz was used to measure the characteristics of acoustic waves. The created acoustic DL (6) was connected to the vector network analyzer via phase-stable cables. The sensitive film was formed on the side of the DL free from IDTs. A cell with the liquid under study was placed between the two IDTs on the side with the applied sensitive film. The cell was fabricated by DLP (Digital Light Processing) printing on an Anycubic Photon S photopolymer printer (Anycubic, Shenzhen, China). The photopolymer resin Anycubic Basic (Anycubic, Shenzhen, China) was used for printing. After fabrication, the cell was washed in isopropyl alcohol to remove residual unpolymerized photopolymer resin and subjected to UV curing for 30 min. The cell edges were treated with salicylic acid phenyl ester to ensure waterproofing. The automatic glucose solution supply system consisted of a frame with a dosing syringe (3), a stepper motor, and a microcontroller (4) to control the solution supply rate. The solution was supplied through a needle with a diameter of 0.1 mm (5). The speed of the syringe piston was 2.08 mm/h. At this speed, a drop of 10 µL formed at the end of the needle over 5 min and fell into the cell. Thus, there was a controlled change in the concentration of the glucose solution in the cell.
Figure 5. The experimental setup: a PC (1); a two-port vector network analyzer (2); an automated system for supplying a solution containing glucose (3, 4, 5); and an acoustic DL with an installed liquid cell (6).
During measurements, the cell was filled with distilled water (200 µL). With the indicated volume of liquid, a further increase in the height of the water column did not lead to a change in the amplitude-frequency characteristic (AFC) of the acoustic DL. A needle was placed above the hole located in the lid of the cell; through this hole, a glucose solution with a concentration of 2 mg/mL was delivered drop by drop into the cell. Thus, the glucose concentration in the measuring cell varied in the range from 0 to 1 mg/mL. The AFC was measured after each addition of glucose solution to the working mixture in the cell. The phase shift of the acoustic signal was measured for each of the glucose concentrations at frequencies of 27.81 and 42.73 MHz. These frequencies were selected based on the analysis of the frequency dependences of the S12 parameter: the change in this parameter at the chosen frequencies was at a maximum when the glucose concentration changed. Figure 6 shows the time dependence of the change in glucose concentration in the measuring cell.
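For illustration, the dosing schedule described above (10 µL drops of a 2 mg/mL stock added to an initial 200 µL of distilled water, one drop every 5 min) implies a simple mass-balance model of the cell concentration. The sketch below is a reconstruction under the assumptions of ideal mixing and negligible evaporation, not a measured curve; it reproduces the stated 0-1 mg/mL range after 20 drops.

```python
# Mass balance for the measuring cell: each drop adds solute mass and volume.
drop_volume_ml = 0.010    # 10 µL per drop
stock_mg_per_ml = 2.0     # glucose stock concentration
cell_volume_ml = 0.200    # initial volume of distilled water in the cell

for k in range(0, 21):
    mass_mg = k * drop_volume_ml * stock_mg_per_ml   # total glucose added
    volume_ml = cell_volume_ml + k * drop_volume_ml  # total liquid volume
    conc = mass_mg / volume_ml
    print(f"after drop {k:2d} (t = {5 * k:3d} min): {conc:.3f} mg/mL")
# The concentration reaches 1.0 mg/mL after 20 drops (t = 100 min).
```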
Influence of Adsorption of GOx Enzyme Molecules on the Surface Properties of a Langmuir DPPE Monolayer
The compression isotherms of the DPPE monolayer formed on the subphase in the absence of dissolved molecules of the GOx enzyme (1) and in their presence (2) are shown in Figure 1. Three regions, I-II, II-III, and III-IV, were observed on the compression isotherm of the DPPE monolayer. In these regions, the monolayer was in the gas, liquid, and condensed phases, respectively. The surface pressure in the DPPE monolayer varied from 0.5 to 12.5 mN/m and from 12.5 to 61 mN/m in the liquid and condensed phases, respectively. The compression modulus of the DPPE monolayer without the GOx enzyme in the condensed phase was 133 mN/m and the A0 was 32.5 Å². The addition of GOx enzyme molecules to the subphase led to a change in the shape of the compression isotherm. Before the start of the compression of the DPPE monolayer, the molecules of the GOx enzyme were adsorbed at the water-air interface [67]. The adsorption of GOx molecules led to an insignificant increase in the surface pressure in the gas phase, up to 2 mN/m. An increase in the number of molecules located on the water surface led to a shift of the compression isotherm towards larger areas. Thus, the phase transition points in the monolayer also shifted (Figure 1). For example, the phase transition point from the gas phase to the liquid phase for a DPPE monolayer without the enzyme corresponded to A0 = 65 Å². The adsorption of GOx molecules at the water surface led to a shift of this phase transition point (A0 = 140 Å²). In addition, an additional phase transition point (IIa) appeared in section II-III of the compression isotherm. The phase transition point IIa corresponds to a surface pressure of 9 mN/m and an A0 value of 90 Å². In region II-IIa of the compression isotherm of the DPPE monolayer formed on the subphase with dissolved GOx molecules, an increase in surface pressure from 2 mN/m to 9 mN/m was observed. This increase in the surface pressure in the DPPE monolayer is due to the onset of the interaction between the molecules of the GOx enzyme adsorbed on the water surface and the islands of DPPE molecules. In region IIa-III of this compression isotherm, the surface pressure increased from 9 to 10 mN/m. The existence of such regions can be explained by the transition of the monolayer from the liquid-expanded phase to the liquid-condensed phase. Similar phase states were also observed in [68,69]. In those works, the presence of such regions was associated with a change in the structure of the monolayer and the formation of a multilayer film. However, in our case such behavior of the compression isotherm can be associated with the interaction of the hydrophobic parts of the enzyme molecules and the hydrocarbon chains of the lipid molecules [39]. At the same time, there was no tendency to form a multilayer structure on the water surface. Similar results were presented in [69], where the presence of a plateau was explained by an increase in the strength of the electrostatic interaction between the head groups of fatty acid molecules during their deprotonation. In this regard, the existence of a plateau in section IIa-III of the compression isotherm in our case can also be explained by an increase in the contribution of the electrostatic interaction between the GOx molecules and the head parts of the DPPE molecules to the intermolecular interaction in the monolayer.
Further compression of the DPPE monolayer led to an increase in the surface pressure and A0 to 60 mN/m and 49 Å², respectively. The compression modulus of the DPPE monolayer formed on the subphase with dissolved GOx molecules decreased to 87 mN/m. This indicates a decrease in the structural perfection of the monolayer.
Study of the Morphology of a Sensitive Coating Based on an LB DPPE Film with Immobilized GOx Enzyme Molecules
The study of the surface morphology was carried out using atomic force microscopy on an NT-MDT Ntegra setup in semi-contact mode with a line scanning speed of 0.65 Hz. An NT-MDT NSG10 series cantilever with a tip radius of <10 nm was used. The mathematical processing of the images obtained in the study of the film surface morphology was carried out using the Gwyddion 2.61 software (Czech Metrology Institute, Brno, Czech Republic) [70,71]. Formula (2) was used to calculate the average film surface roughness:

Ra = (1/N) Σj rj, (2)

where Ra is the arithmetic mean deviation of the profile from the baseline (middle line of the profile), N is the number of points at which the roughness parameter Ra is measured, and rj is the absolute deviation of the profile height from the midline at each roughness measurement point. The images of the surface morphology of a clean LiNbO3 plate (Figure 7a), a LiNbO3 plate with a DPPE film (Figure 7b), and a LiNbO3 plate with a DPPE film with immobilized GOx enzyme molecules (Figure 7c) are presented in Figure 7. The surface of the LiNbO3 plate had a roughness of about 1 nm. The presence of stripes on the surface of the plate is associated with its polishing. The depth of the bands ranged from 3 to 5 nm. The morphology of the plate surface changed after the deposition of an LB DPPE film on it: the surface roughness increased up to 1.2 nm. This can be associated with the formation of defects (pores) in the film during its transfer and subsequent drying.
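The roughness metric of Formula (2) is straightforward to compute from an extracted height profile. The sketch below is a minimal illustration that uses the profile mean as the midline; it is not the Gwyddion implementation.

```python
import numpy as np

def average_roughness(profile):
    """Ra = (1/N) * sum_j |r_j|: mean absolute deviation of the height
    profile from its midline, as in Formula (2)."""
    profile = np.asarray(profile, dtype=float)
    midline = profile.mean()   # baseline (middle line of the profile)
    return np.mean(np.abs(profile - midline))

# Example: a synthetic 1 nm-scale profile in nanometres.
heights_nm = np.array([0.2, -0.4, 1.1, -0.9, 0.5, -0.3, 0.8, -1.0])
print(f"Ra = {average_roughness(heights_nm):.2f} nm")
```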
Schematically, the process of defect formation in the LB DPPE film is shown in Figure 8. The surface of the lithium niobate substrate is hydrophilic due to the presence of an uncompensated surface charge.
Its interaction with the water molecule dipole leads to better spreading of a water drop over the plate surface. During the formation of the first layer of the LB film, an interaction occurs between the hydrophilic substrate and the DPPE molecules in the Langmuir monolayer, whose hydrophobic parts are oriented toward the substrate.
The presence of scratches and other defects on the surface of the lithium niobate substrate leads to breaks in the transferred monolayer (Figure 8a). In the process of applying the second layer of the film, water molecules are drawn into the gaps formed by capillary forces (Figure 8b). The evaporation of water during the drying of the film leads to the formation of pores in it (Figure 8c,d). With a large number of formed pores, a local change in the film thickness leads to a change in its roughness. The adsorption of GOx enzyme molecules by a Langmuir monolayer leads to a change in the morphology of the LB film formed on its basis. The film surface becomes more developed as the average film roughness increases from 1.2 to 5.5 nm. As is known, the size of the glucose oxidase enzyme molecule is 5.2 × 7.7 × 6.0 nm [72]. In the resulting LB film, aggregates with a height of 5 to 15 nm and an area of 0.03 µm² to 0.1 µm² are visible (Figure 7d). The heights of the aggregates in the LB film are comparable to the size of a single molecule of the GOx enzyme. In this connection, it can be concluded that immobilized molecules of the GOx enzyme are present in the created LB film.
Study of the Sensitivity of LB Film with DPPE and Immobilized GOx Enzyme to Glucose Solution
The frequency dependences of the S12 parameter of the manufactured acoustic DL with a DPPE LB film with an immobilized GOx enzyme were obtained in the absence of liquid in the cell, in the presence of distilled water, and in the presence of a glucose solution with various concentrations in the range from 0 to 1 mg/mL in the cell.
As an example, the frequency dependences of the S12 parameter of the manufactured acoustic DL with a DPPE LB film with an immobilized GOx enzyme in the absence of liquid in the cell (1), in the presence of distilled water (2), and in the presence of a glucose solution with a concentration of 0.3 mg/mL (3) in the cell are presented in Figure 9.
It can be seen that the addition of distilled water or glucose solution to the cell leads to an increase in insertion loss and a change in the view of the AFC. It should be noted that some modes are characterized by strong attenuation in the presence of liquid. At the same time, other modes react insignificantly to the appearance of liquid. This is due to the different polarizations of higher-order waves excited at different frequencies [73]. As a result of the analysis of the obtained results, modes with frequencies of 27.8 and 42.7 MHz were chosen as the working ones. For these modes, the change in the S12 parameter was maximal with a change in the concentration of glucose, and they reacted weakly to the presence of distilled water.
First, the effect of glucose solutions of various concentrations on the insertion loss and phase shift of the selected acoustic modes of the created DL without an LB film was studied. The corresponding concentration dependences of the S12 parameter and phase shift for the selected operating modes are shown in Figure 10. The measurement errors of the S12 parameter and phase shift were 0.1 dB and 0.1°, respectively.
It can be seen that an increase in the concentration of glucose in distilled water does not lead to significant changes in the controlled parameters of the selected modes. The maximal changes in insertion loss and phase shift were 0.6 dB and 8°, respectively, for the mode at a frequency of 42.7 MHz. This can be explained by the insignificant influence of the conductivity of an aqueous glucose solution on piezoactive waves under these conditions. The experiments have shown that a change in the concentration of glucose in water in the range from 0 to 1 mg/mL leads to a change in its conductivity in the range from 0.04 to 0.1 µS/m. This range of conductivities is outside the range of applicability of the acoustoelectronic method [74].
Then, the concentration dependences of the change in the S12 parameter and the phase shift were measured for the same modes in the presence of an LB DPPE film without GOx on the DL surface. The resulting dependencies are shown in Figure 11. In this case, the maximum change in insertion loss for the 42.7 MHz mode increased from 0.6 to 1.6 dB, and the phase shift increased from 8 to 15°.
As shown earlier (Figure 3), the LB DPPE film has little effect on the S12 parameter for the selected modes with frequencies of 27.81 MHz and 42.73 MHz compared to the unloaded surface. Thus, the changes in the S12 parameter and the phase shift with an increase in the concentration of glucose on the surface of the sensor film without the GOx enzyme are associated with the interaction of glucose molecules directly with the LB film, their penetration into the pores, and an increase in the mass load. However, it should be noted that the changes in the measured parameters are still insignificant with increasing glucose concentrations in the solution.
Finally, the concentration dependences of the changes in the S12 parameter and the phase shift were measured for the same modes in the presence of an LB film of DPPE with an immobilized GOx enzyme on the surface of the DL. The resulting dependencies are shown in Figure 12. It can be seen that as the glucose concentration increases, the insertion loss of the recorded modes increases, reaches a maximum, and then decreases. In this case, the phase of these modes decreases almost linearly over a wide range of glucose concentrations in the solution and reaches saturation at 0.9 mg/mL for the mode at a frequency of 42.7 MHz. The largest change in the S12 parameter for the mode at a frequency of 42.7 MHz was 18 dB at a glucose concentration of 0.35 mg/mL. The largest phase shift, in this case, was 55° at a glucose concentration of 0.9 mg/mL for the same mode. Such behavior may be associated with a shift of the conductivity of the analyzed liquid into the range of 0.01-10 S/m [74]. In this case, it is possible to use acoustic methods to create glucose sensors. It should be noted that such a shift in the region of change in conductivity became possible by changing the concentration of the GOx enzyme in the LB DPPE film. Studies performed with other LB films, for example, based on stearic acid (SA), did not allow such an effect to be achieved. This may be due to the peculiarities of the interaction of the GOx enzyme with the molecules of the working solutions used. It was found that LB DPPE films modified with the GOx enzyme had a more developed surface than LB SA films with the same concentration of the enzyme. In addition, DPPE molecules have a higher affinity for GOx than SA molecules. The shift in the conductivity region of the film and the increase in the range of its variation can be explained as follows. It is known that in the presence of the GOx enzyme the glucose molecule is oxidized to form gluconic acid and hydrogen peroxide [75].
During the formation of hydrogen peroxide, the GOx molecule participates in the transfer of an electron to an oxygen molecule. As a result of this process, the conductivity changes in the localization region of the GOx molecule. Since the GOx molecules were immobilized in the LB film located on the surface of the acoustic delay line, the conductivity changed mainly in the near-surface layer of the acoustic DL. In this case, an increase in the number of GOx molecules involved in the catalytic decomposition of glucose led to a greater change in the conductivity of the near-surface layer of the acoustic DL. Exceeding the threshold value of conductivity in the near-surface layer led to the screening of the electric field of the acoustic wave and a decrease in its phase velocity. In turn, this process led to an increase in the insertion loss and a decrease in the amplitude response (S12) of the acoustic wave at glucose concentrations exceeding 0.4 mg/mL.
As mentioned above, the acoustic mode at a frequency of 42.7 MHz has the highest sensitivity to changes in glucose concentration. It should be noted that for the acoustic mode at a frequency of 27.8 MHz, the S12 parameter increases from 1.75 to 7.5 dB. At the same time, the maximum phase shift for this acoustic mode did not change compared to the case of the absence of the enzyme in the LB film and amounted to 10°. Such differences in the responses of these waves are associated with the different coefficients of their electromechanical coupling.
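As an illustration of how such a sensor could be read out, the reported response of the 42.7 MHz mode (an approximately linear 55° phase change over 0-0.8 mg/mL) can be inverted into a concentration estimate. This is a hypothetical calibration sketch based only on the numbers quoted above, not a procedure from the paper.

```python
def glucose_from_phase_shift(dphi_deg,
                             max_shift_deg=55.0,
                             linear_range_mg_ml=0.8):
    """Invert the (approximately linear) phase response of the 42.7 MHz
    mode into a glucose concentration estimate in mg/mL."""
    if not 0.0 <= dphi_deg <= max_shift_deg:
        raise ValueError("phase shift outside the calibrated linear range")
    return dphi_deg / max_shift_deg * linear_range_mg_ml

# Example: a measured phase shift of 27.5 degrees maps to ~0.4 mg/mL.
print(f"{glucose_from_phase_shift(27.5):.2f} mg/mL")
```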
Conclusions
In this work, LB films of DPPE phospholipid molecules with immobilized GOx enzyme molecules were created. The effect of the adsorption of GOx enzyme molecules on the surface properties of a Langmuir DPPE monolayer was studied. A change in the shape of the monolayer compression isotherm was found upon the addition of GOx enzyme molecules to the subphase. This may be due to the interaction of the hydrophobic parts of the enzyme molecules and the hydrocarbon chains of lipid molecules. This effect leads to a decrease in the structural perfection of the monolayer. The adsorption of GOx enzyme molecules by a Langmuir monolayer leads to a change in the morphology of the LB film formed on its basis, and the film surface becomes more developed.
The sensory properties of the obtained LB film of DPPE phospholipid molecules with immobilized GOx enzyme molecules in a glucose solution of various concentrations were studied. It has been shown that the introduction of GOx enzyme molecules into an LB DPPE film leads to a shift in the conduction region of the solution toward higher values with an increasing glucose concentration. Such an effect made it possible to conclude for the first time that acoustic methods can be used to determine the concentration of glucose molecules in an aqueous solution. It was found that for an aqueous solution of glucose in the concentration range from 0 to 0.8 mg/mL, the phase response of the acoustic mode at a frequency of 42.7 MHz has a linear form, and its maximum change was 55°. The maximum change in the S12 parameter for this mode was 18 dB for a glucose concentration in the working solution of 0.4 mg/mL.
Thus, it was concluded that it is possible to create an acoustoelectronic enzymatic glucose sensor based on an LB film of DPPE phospholipid molecules with an immobilized glucose oxidase enzyme. The main advantage of such sensors is their high selectivity and sensitivity to the detected molecules. At the same time, for the direct use of such sensors outside of laboratory conditions, it is necessary to solve several problems. In particular, this is the task of finding optimal storage conditions for a sensor with a film, which will ensure its durability and reusability. In addition, it is necessary to solve the problem of restoring the sensory properties of enzyme films in case of a violation of their storage conditions. The process of forming the sensory coating requires additional research. In particular, the relationship between the thickness of the sensor coating, the amount of enzyme immobilized in it, and the resulting phase and amplitude responses under the influence of glucose has not been sufficiently studied. An important issue is increasing the reproducibility of the sensory properties of such coatings. This issue is relevant due to the dependence of the sensitivity threshold of sensors of this type on the amount of enzyme immobilized in the created sensor coating.
It should also be noted that the measured concentration range of 0-0.9 mg/mL corresponds to the range of blood glucose concentration. The possibility of changing the glucose conductivity range depending on the concentration of the GOx enzyme in the LB film will make it possible to develop technological glucose sensors for its higher concentrations. Such sensors would be in demand in the food and pharmaceutical industries. The developed technology can become the basis for creating a new generation of acoustoelectronic biosensors in the case of using other enzymatic reactions.
Funding: In the frame of the LB film study and acoustic sensor development, the work was partially funded by the RUSSIAN SCIENCE FOUNDATION, grant number 22-29-20317; in the frame of the production of LB films and their characterization, this research was partially funded by the Bulgarian National Science Foundation, contract number KP-06-OPR 03/9.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement:
No new data were created or analyzed in this study. Data sharing is not applicable to this article.
"Chemistry",
"Materials Science",
"Engineering"
] |
Correlation Analysis of Pyrolysis Yield Using a Linear Regression Model
: The paper focuses on the impact of the pyrolysis combination and mixture ratio on the yield of various pyrolysis products, such as tar, water, coke residue, and syngas, in the field of catalytic reaction analysis for pyrolysis product generation. Initially, data preprocessing was carried out: outlier detection and missing-value processing were performed, and a model was established to explain the relationship between mixing ratio and yield. Descriptive statistical analysis was employed to gain an initial understanding of the overall data. Subsequently, statistical indices such as the mean, range, and standard deviation of the pyrolysis products for each combination were quantified to reveal the extent to which different mixing ratios influence yield. Correlation analysis and a linear regression model were then used to establish the relationship model between mixture ratio and yield, with further explanation of the correlation and presentation of the functional expression. Finally, mathematical formulas and graphs were utilized to analyze the linear trend between product and mixing ratio under various pyrolysis combinations. This study offers robust data analysis and modeling support for understanding the impact of different mixing ratios on pyrolysis product yield.
Introduction
With the gradual improvement of human life quality, the search for renewable energy materials has become a worldwide focus. Xinjiang is a region with abundant cotton stalk production, and its cotton planting area accounts for more than 80% of the country's total planting area [1]. As a renewable energy source, the cellulose, lignin, and other biomass contained in cotton stalk have attracted much attention, and biomass as an energy source has gradually become a trend [2]. Sun Zhiao [3] found that cotton stalk had better comprehensive combustion performance and lower ignition and burnout temperatures. Liu Simeng [4] found that hydrothermal oxidation treatment could produce a higher fixed carbon content after pyrolysis of cotton stalk. Zhao Jiaxing [5] found, in research on the quality improvement of desulfurization ash and biomass pyrolysis products, that desulfurization ash promoted the formation of small and medium molecular compounds during pyrolysis, thus increasing the yield of pyrolysis water and pyrolysis gas while significantly decreasing the yield of pyrolysis oil. In order to further investigate the relationship between pyrolysis combinations and pyrolysis products, this paper summarizes the influence of changing the mixture ratio of the pyrolysis combination on the yield of various pyrolysis products such as tar, water, coke residue, and syngas, based on previous research. Using the data provided by https://www.nmmcm.org.cn/index/, descriptive statistical analysis was first carried out on the mean and standard deviation of each pyrolysis combination, and scatter plots were drawn to observe the influence trend of different mixing ratios of pyrolysis combinations on pyrolysis products and whether there was any significant influence. Then, correlation analysis and a linear regression model were established, with the different mixing ratios taken as independent variables and the yields of pyrolysis products taken as dependent variables. The relationship between the mixture ratio and the yield of pyrolysis products, and the yields under different mixture ratios, were quantitatively analyzed, and the corresponding influencing factors and rules were obtained.
Establishment and Solution Based on a Linear Regression Model
Data Pre-Processing
According to the data, the data were first preprocessed to check whether there were abnormal values or missing values. It is clear from the three tables in the data that all values provided are the yields of decomposition products of the pyrolysis combinations, expressed as percentages; therefore, the sum of the products under each mixing ratio should be 100. As can be seen from Table 1, there are no missing values in the sample data.
Based on observation, outliers were identified, that is, data points that differ markedly from the rest of the sample. Two decimal places are retained for all data, while some data are integers or have only one decimal place. Outliers account for a small proportion of the total sample, so, to maintain data consistency, they were replaced; the specific changed values are marked in red in Tables 2 and 3. After the outliers and missing values were handled, descriptive statistics were computed for the three groups of data to understand the general data distribution. The mean, range, and standard deviation of the corresponding pyrolysis products were calculated, and some of the results are shown in Table 4. By calculating the average value of the pyrolysis products of each combination, the average yield level can be roughly understood. The large ranges of some pyrolysis products, such as the tar-yield ranges of 7.33, 10.86, and 9.87 in the table, indicate that tar is significantly affected by the different mixing ratios of the pyrolysis combinations. The standard deviation then reflects the dispersion of each data set. In the yield table of DFA/CE pyrolysis decomposition products, the standard deviations of water yield and tar yield are large, indicating strong dispersion and large yield deviations under different mixing ratios, while in the DA/CS pyrolysis decomposition products table, the standard deviation of coke yield is only 0.246, showing that this product is essentially unaffected by the different mixing ratios.
Model Establishment
Through descriptive statistics of the data, the products susceptible to different mixing ratios can be roughly identified. In order to state their relationships more intuitively, scatter plots of the different pyrolysis combinations were drawn, as shown in FIG. 1, 2a, and 2b. It can be seen from FIG. 1 that desulfurization ash has a significant effect on tar yield, while the other pyrolysis products all show a slight upward trend and remain relatively stable overall, indicating that desulfurization ash plays an insignificant role in the pyrolysis reaction of cotton stalk. Furthermore, desulfurization ash/cellulose pyrolysis plays a relatively significant role in promoting the pyrolysis reaction, and the yields of the decomposition products can be obtained intuitively. The tar yield increased with the increase of the mixing ratio, while the water yield of the desulfurization ash/cellulose combination was only slightly affected by the different mixing ratios.
The pyrolysis of desulfurization ash/lignin showed that the yields of tar, water, coke, and syngas were largely unaffected by the mixing ratio. Having preliminarily assessed whether desulfurization ash promotes the pyrolysis of cotton stalk, cellulose, and lignin, we next establish a correlation analysis, using the Pearson correlation coefficient to quantify the relationship with the mixing ratio. The modeling proceeds as follows. If there exist numbers $\lambda_1, \lambda_2, \cdots, \lambda_k$, not all zero, such that

$$\lambda_1 X_1 + \lambda_2 X_2 + \cdots + \lambda_k X_k + \mu = 0,$$

where $\mu$ is a random error term, then the explanatory variables are collinear; when $\mu = 0$ there is complete collinearity between the explanatory variables, otherwise the collinearity is incomplete. Explanatory variables with no significant effect on the dependent variable can be screened out according to the size of the semi-partial correlation coefficient: comparing a regression model that includes a given variable with one that excludes it, with coefficients of determination $R^2$ and $R_1^2$ respectively, the semi-partial contribution of that variable is $R^2 - R_1^2$. Since the data under study are continuous variables, the Pearson correlation coefficient is used for the correlation test. Because the covariance of $X$ and $Y$ depends on their individual fluctuations, it cannot by itself demonstrate the strength of the correlation; standardizing the covariance yields the Pearson correlation coefficient

$$r = \frac{\mathrm{Cov}(X, Y)}{\sigma_X \, \sigma_Y}.$$

In addition, to obtain the relationship between product yield and mixing ratio more precisely, a linear regression model was constructed to solve for the functional expression of product yield under each pyrolysis combination. The Pearson correlation coefficients obtained are 0.969, 0.979, 0.905, 0.949, 0.968, 0.968, and 0.959; the lowest exceeds 0.9, which justifies the following multiple linear regression model:

$$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_n x_n + \varepsilon,$$

where the independent variables $x_1, x_2, \ldots, x_n$ are the predictors, $y$ is the dependent variable, $\varepsilon$ is a random error term, and $\beta$ is the parameter vector to be estimated. $\beta$ is obtained by the least squares method, i.e., by finding the $\hat{\beta}$ that minimizes

$$Q(\beta) = \sum_{i=1}^{m} \left( y_i - \beta_0 - \beta_1 x_{i1} - \cdots - \beta_n x_{in} \right)^2 .$$

Linear regression equations are usually verified by the model goodness of fit $R^2$, analysis of variance (F test), and T test. $R$ is the correlation coefficient between predicted and observed values, representing the degree to which the model explains the observed data; it is generally required to exceed 0.85:

$$R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}.$$
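A minimal sketch of this pipeline, assuming illustrative data rather than the study's dataset, is given below: it computes the Pearson coefficient and fits the regression by ordinary least squares.

```python
import numpy as np

# Illustrative data: mixing ratio x (independent) vs. product yield y (dependent).
x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
y = np.array([37.5, 39.8, 42.1, 43.9, 44.8])

# Pearson correlation coefficient: standardized covariance.
r = np.corrcoef(x, y)[0, 1]

# Least-squares fit of y = b0 + b1*x via the normal equations.
X = np.column_stack([np.ones_like(x), x])      # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # beta = (X^T X)^-1 X^T y
y_hat = X @ beta
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

print(f"Pearson r={r:.3f}, intercept={beta[0]:.2f}, slope={beta[1]:.2f}, R^2={r2:.3f}")
```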
Model Solving and Analysis
The Pearson correlation coefficient was used to characterize the relationship between the yields of the pyrolysis products (tar, water, coke residue, syngas) and the mixing ratio of the corresponding pyrolysis combination, and correlation diagrams were produced for the desulfurization ash/cotton stalk, desulfurization ash/cellulose, and desulfurization ash/lignin combinations under different mixing ratios. Figure 3 takes the desulfurization ash/cotton stalk combination as an example. It can be seen intuitively that, except for the comparatively weak correlation between tar yield and mixing ratio, the overall correlations are high, and the Pearson coefficient obtained is 0.983. The corresponding functional expressions were then solved with the linear regression model, yielding line plots of product yield against mixing ratio, part of which are shown in Figs. 4a and 4b; the fitted expressions are listed in Table 5. For the DFA/CE (cellulose) pyrolysis products, the regression coefficient relating tar yield to the desulfurization fly ash mixing ratio is 9.38, with an intercept of 37.51, indicating that tar yield tends to increase as the mixing ratio increases.
For the DFA/LG (lignin) pyrolysis products, the regression coefficient relating tar yield to the desulfurization fly ash mixing ratio is -8.36, with an intercept of 15.38, indicating that tar yield tends to decrease as the mixing ratio increases.
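For concreteness, the sketch below evaluates the two fitted lines at a few hypothetical mixing ratios, using the coefficients reported above.

```python
# Evaluating the two fitted lines reported above (coefficients taken from the
# Table 5 discussion); the mixing ratios here are illustrative.
def tar_yield_dfa_ce(ratio):   # DFA/CE: slope 9.38, intercept 37.51
    return 9.38 * ratio + 37.51

def tar_yield_dfa_lg(ratio):   # DFA/LG: slope -8.36, intercept 15.38
    return -8.36 * ratio + 15.38

for ratio in (0.0, 0.5, 1.0):
    print(ratio, round(tar_yield_dfa_ce(ratio), 2), round(tar_yield_dfa_lg(ratio), 2))
```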
Both graphs show the actual data points (blue) and the model's fitted lines (red). For DFA/CE, the model predicts an increase in tar yield as the proportion of desulfurized fly ash increases; for DFA/LG, it predicts a decrease.
These results show that desulfurized fly ash has different catalytic effects on the pyrolysis of cellulose and lignin: it appears to promote tar formation during cellulose pyrolysis while inhibiting it during lignin pyrolysis. The relationship between product yields and the mixing ratio of each combination can be read directly from Table 5, which corroborates the correlation analysis and reduces its uncertainty. Overall, the tar yield in desulfurization ash/cotton stalk, the tar yield in desulfurization ash/cellulose, and the water yield are all significantly affected by the mixing ratio of the pyrolysis combination.
Conclusion
"Carbon peaking" and "carbon neutrality" were first included in the government work report during the 2021 Two Sessions, The vigorous development of new energy and the search for more renewable energy have become important tasks for China to achieve sustainable development strategy In this context, the impact of pyrolysis combination mixing ratio on the yield of various pyrolysis products is proposed, Using descriptive statistics such as mean and standard deviation combined with drawing scatter plots of pyrolysis products, we observed that the combination of desulfurization ash and cotton straw pyrolysis has a significant inhibitory effect on tar yield, The combination of desulfurization ash and cellulose pyrolysis has a significant promoting effect on tar.Next, conduct a deeper correlation analysis to obtain a correlation heatmap, It was found that there is a high correlation between tar production and changes in catalyst mixing ratio, the Pearson coefficient is 0.983.Further establish a linear regression model to calculate the functional expressions of each pyrolysis combination and pyrolysis product yield, Draw the conclusion that Water yield to DFA/CS mix ratio, Coke yield and DFA/CS mixture ratio, Tar production to DFA/CE mix ratio, Water yield mix ratio the mixing ratio of the four pyrolysis combinations shows an upward trend, The tar yield in desulfurization ash/cotton straw, tar yield in desulfurization ash/cellulose, and water yield have a significant impact on the final mixing ratio with pyrolysis.
Figure 1. Yield of decomposition products from DFA/CS pyrolysis.
Figure 2a. Yield of DFA/CE pyrolysis decomposition products.
Figure 2b. Yield of DFA/LG pyrolysis decomposition products.
Figure 4a. Line plot of tar yield against different mixing ratios of desulfurization ash/cellulose.
Figure 4b. Line plot of tar yield against different mixing ratios of desulfurization ash/cotton stalk.
Table 1. Total yield of some products: yield of decomposition products from DFA/CS pyrolysis, wt.% (daf).
Table 4. Mean, range, and standard deviation of the pyrolysis products.
Table 5. Functional expression of each pyrolysis combination and pyrolysis product yield.
"Chemistry",
"Engineering"
] |
Off-axis parabolas super polished under stress: the case of the Roman Space Telescope Coronagraphic instrument mirrors
Direct imaging of exoplanets requires high-contrast imaging techniques that demand tight tolerances on the optical surface error. The Nancy Grace Roman Space Telescope (RST) (previously named WFIRST) aims to perform direct imaging of super-Earth-like exoplanets through its active coronagraphic instrument (CGI). Eight off-axis parabola (OAP) mirrors are utilised within the CGI to create a compact instrument and to ensure access to the pupil and focal planes. The surface form error and surface roughness of these relay optics directly impact the quality of the dark hole, and therefore the observable location for exoplanets. A new fabrication process for OAP manufacture via stressed mirror polishing (SMP) is presented in this paper. First, the design of the mirror substrate is investigated to create an innovative thickness distribution capable of producing the OAP geometry with a simple warping harness composed of two micrometer screws. Second, the novel design is implemented on a 60 mm diameter OAP prototype in Zerodur; a description of the fabrication process chain and the characterisation of the optical surface over all spatial frequencies are presented. Results from this first prototype demonstrate that the surface form error deviates by < 1 nm root mean square (RMS) from the simulations, with a surface roughness of 2.1 Å Ra. © 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
whereas the RST AO will correct the fabrication form errors within the optics and the drifts incurred by either thermal or dynamic effects [7]. A combination of two deformable mirrors (DMs) is utilised to correct both the amplitude and phase of the wavefront errors. A dedicated Low Order Wavefront Sensing and Control (LOWFS/C) system, inspired by ground-based instrumentation, corrects the first 11 Zernike modes, which are mainly produced by long-duration thermal drifts [8]. Higher order modes, called mid-spatial frequencies, are controlled via focal plane wavefront correction with the DMs to create the dark hole and maintain its high contrast [9]. The high spatial frequencies lie outside the correction circle of the AO and are mainly caused by inaccuracies in the coronagraph masks and surface roughness errors on the optics. The Lyot coronagraphic technique used in the CGI is sensitive to overlapping high frequencies, which directly impact the dark hole, hiding the faint Earth-like planet signal within the noise. Compensation can be implemented in the focal plane to limit the impact of these quasi-static aberrations, but it is preferable to limit the high frequency errors at the source: during optical fabrication. It has been demonstrated that the point spread function post coronagraphy is equal to the Power Spectral Density (PSD) of the instrument [10]. Therefore, minimising the PSD of the optics is essential to reduce residual speckles in a coronagraphic instrument. In this study, the optics are characterised by computing the PSD, which is compared to the final image specification.
In the case of the RST CGI, off-axis parabolas (OAPs) have been used within the optical design as relay optics [7]; this creates a compact instrument, which fits within a limited space and ensures access to the different pupil and focal planes. The OAPs are not located in the focal plane, so their surface form errors and roughness cannot be compensated by the DMs; the OAP optical quality is therefore crucial because it is directly transmitted to the Fourier plane, and the error is doubled upon reflection.
OAP fabrication and difficulties
OAPs are challenging to manufacture and measure. Often OAPs are manufactured by creating a large on-axis parabola and cutting out the required optical geometry. This technique requires large machinery for a small work piece, leads to significant waste material, and typically results in low to mid spatial frequency errors [11]. An alternative approach is single point diamond turning (SPDT) with computer numerical control (CNC). This technique requires complex and very accurate positioning of the OAP with respect to the machine, which is possible via adaptive tools and results in freeform optics that can be manufactured with < 5 nm RMS surface roughness [12]. However, the SPDT approach produces tool marks, or ripples, which contribute significantly to the high frequency errors [13]. In a similar approach, computer-controlled optical surfacing (CCOS) can be employed, which uses an automated arm to move a polishing tool slowly upon the substrate surface [14]. CCOS is suited to large OAP workpieces, as demonstrated by R. Jones et al. (1994) [15] and Kim et al. (2016) [16], who reached < 1 nm roughness on a large off-axis aspheric mirror. Coupling CCOS with a non-linear fluid and a predictable tool influence function provides similar roughness < 1 nm [17]; this method has been applied to the Giant Magellan Telescope (GMT) 8.4 m off-axis primary mirror segments.
Likewise, finishing techniques have been developed to improve the surface quality of these complex-shaped mirrors. MagnetoRheological Finishing (MRF) shows impressive results by eliminating damage from polishing and can smooth the RMS roughness to < 1 nm [18] [19]. MRF uses a magnetically-stiffened abrasive fluid to remove damage on the surface; however, this process takes a significant duration to converge to the required surface quality. Ion beam figuring (IBF) is another deterministic method, which involves removing matter from the optical surface with a stable beam of accelerated particles. Experiments show that this technique can remove surface form errors while maintaining a very low roughness [20] [21].
Stress polishing applied to OAP fabrication
An alternative technique for OAP fabrication is Stressed Mirror Polishing (SMP): a process first developed in the 1930s by the German astronomer Bernhard Schmidt [22]. This technique applies a pressure to a mirror substrate during polishing to obtain the desired optical surface figure. Several experiments and prototypes have been undertaken, such as adapting the method to use vacuum to apply the pressure to the mirror to create aspherisation plates, with the single-zone method of E. Everhart et al. (1966) [23] or the double-zone method developed by G. Lemaitre et al. (1972) [24]. Later, the generation of single aberration modes by SMP with a minimum number of actuators was investigated by Lemaitre et al. [25], using a novel thickness distribution at the back of the mirror and new pressure devices, such as actuators.
The investigations undertaken by G. Lemaitre have led to the development of toric mirrors at Laboratoire d'Astrophysique de Marseille (LAM) via the SMP method. In this case, forces are applied through a warping harness using only one actuator. The substrate is then polished by hand via spherical polishing and utilising a tool size with the same dimensions as the substrate to minimise surface form errors. The deformation induced by the warping harness is then imprinted upon the surface during polishing. SMP is ideally suited for high contrast imaging, as demonstrated by the three toric mirrors manufactured by LAM for the SPHERE instrument where surface roughness values between 0.2 nm RMS and 0.9 nm RMS were realised [26]. A further example of a LAM SMP toric mirror was for the Hi-CAT project, where the final roughness was < 1 nm RMS [27].
To generate an OAP via SMP, [28] showed that an OAP geometry can be decomposed using Zernike polynomials and this led to the realisation of the 10 m diameter segmented primary mirror of Keck telescopes. The hexagonal OAP segments, 1.8 m in diameter, were fabricated using SMP by [29]; however, twenty-four radial arms were needed to apply the deformation. In this paper, a solution to adapt the LAM toric mirror SMP technique to fabricate the RST OAPs, whilst ensuring simplicity of manufacture, is presented. This paper will outline the OAP SMP methodology and includes: the RST CGI design requirements (Section 2), the warping harness design study to generate the OAP profile (Section 3), and the prototyping phase to validate the OAP warping harness design (Section 4).
Requirements
The RST CGI is a Lyot coronagraph composed of eight OAPs; the optical design is presented in Fig. 1. The beam arriving from the telescope, located at FSM in Fig. 1, is relayed and magnified to the two DMs by OAP1 and OAP2. Then OAP3 and OAP4 relay the pupil beam to the reflective masks wheel (SPM). OAP5 focuses the light at the focal plane masks (FPM), also mounted on a wheel. The reflected light is then injected into the LOWFS camera, while the transmitted light is collimated on the Lyot stop by OAP6. OAP7 then focuses the beam on the field stop. Finally, OAP8 collimates the light to the pupil plane of the colour filter. The beam finishes its course in the spectrograph camera, IFScam [30]. Table 1 presents the geometrical specifications of the eight OAPs, including physical diameter, off-axis distance, parent radius, and effective focal length.
From the sag of the conic surface, we can compute the asphericity coefficients and thus express the sag in the local coordinates (r, θ) of the off-axis segment [28] [29]. The coefficients lead to the Zernike coefficients and reveal that an OAP is primarily composed of the third-order aberrations astigmatism 3x and coma 3x, as shown in Fig. 2. Trefoil 5x and higher terms are negligible in comparison and are not taken into account within the specifications. Using the parameters in Table 1, the Zernike coefficients have been computed for the eight OAPs on their clear aperture (Table 2). The local radius of curvature of the mirror, which is used during the polishing, is defined as twice the effective focal length and termed the radius of polishing (RoP). In this study, the Zernike polynomials are described using the convention of R.J. Noll et al. [31]. The surface is decomposed using the first 21 polynomials. The surface form error (SFE) requirements are split into two domains. The low spatial frequency (LoF) SFE, defined for frequencies lower than 3 cycles per pupil (c/p), has a requirement of 5 nm RMS. The mid spatial frequency (MidF) SFE, for frequencies above 3 c/p and below the AO correction circle radius at 30 c/p, has a requirement of 2 nm RMS. Errors above 30 c/p are considered as roughness and are limited to 5 Å RMS. The polishing radius of curvature has a tolerance of +/- 1 mm.
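As an illustration of this decomposition, the sketch below fits a synthetic surface map onto a few hand-coded Noll Zernike terms by least squares; the surface and its amplitudes are placeholders, not the RST OAP data.

```python
# A minimal sketch of decomposing an OAP-like surface map onto low-order Noll
# Zernike terms by least squares; the synthetic surface below is illustrative.
import numpy as np

n = 256
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
r, th = np.hypot(x, y), np.arctan2(y, x)
mask = r <= 1.0                                   # unit pupil

def zernike_noll(j, r, th):
    # A few hand-coded Noll terms; enough for astigmatism 3x and coma 3x.
    table = {
        4: np.sqrt(3) * (2 * r**2 - 1),                   # defocus
        5: np.sqrt(6) * r**2 * np.sin(2 * th),            # astigmatism 3x (oblique)
        6: np.sqrt(6) * r**2 * np.cos(2 * th),            # astigmatism 3x (vertical)
        7: np.sqrt(8) * (3 * r**3 - 2 * r) * np.sin(th),  # coma 3x (y)
        8: np.sqrt(8) * (3 * r**3 - 2 * r) * np.cos(th),  # coma 3x (x)
    }
    return table[j]

# Synthetic "OAP" surface: mostly astigmatism plus some coma (arbitrary nm units).
surface = 100 * zernike_noll(6, r, th) + 4.5 * zernike_noll(8, r, th)

modes = [4, 5, 6, 7, 8]
A = np.column_stack([zernike_noll(j, r, th)[mask] for j in modes])
coeffs, *_ = np.linalg.lstsq(A, surface[mask], rcond=None)
print(dict(zip(modes, np.round(coeffs, 2))))      # recovers the input amplitudes
```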
Error budget
The total error budget for OAP SMP is carefully considered along the simulation and experimental chains to respect the OAP and SFE requirements. Denoting the individual contributions $\varepsilon_{sim}$, $\varepsilon_{meas}$, $\varepsilon_{polish}$, and $\varepsilon_{harness}$, the total error budget can be expressed as:

$$\varepsilon_{total} = \sqrt{\varepsilon_{sim}^2 + \varepsilon_{meas}^2 + \varepsilon_{polish}^2 + \varepsilon_{harness}^2} \qquad (1)$$

$\varepsilon_{sim}$ and $\varepsilon_{meas}$ arise from an irregular mesh used in the Finite Element Analysis (FEA) simulation [32] [33] and from the discrepancy in the interferometric phase map measurement, respectively. The simulation errors are avoided by interpolating the surface shape displacement onto a Cartesian grid before the Zernike decomposition. Regarding the measurement errors, the interferometric phase map acquisitions are undertaken by a Möller-Wedel V-100 interferometer in an ISO 5 clean room environment, and the final phase map is an average of 36 consecutive measurements. Following this method, the random component of uncertainty is removed and the remaining systematic errors are < 1 nm.
SMP employs spherical polishing and matches the tool diameter with that of the mirror, which avoids high-frequency errors and offers a high-quality surface. From in-house experience, $\varepsilon_{polish}$ represents < 5 nm in the error budget.
$\varepsilon_{harness}$ is the dominant and critical error, initially producing a 1 μm error. It includes errors due to the warping harness described in Section 4.1, in particular the force positioning, the gluing interface, and the screw precision. To reduce it, the warping harness was designed to absorb part of the deformation, which gives the micrometer screws finer control over the surface errors, reducing $\varepsilon_{harness}$ to < 2 nm. Positioning tools and interfaces have been designed to facilitate the assembly and minimise errors. Finally, by combining the errors as per Equation 1, the total error budget is estimated at ∼5.4 nm. One fundamental condition to respect while using SMP is to not exceed the elastic yield point of the material during the deformation. This condition can be ensured during the simulations by computing the von Mises stress, which should remain below 10 MPa, the Zerodur safe design bending strength [34].
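A minimal numerical check of Equation 1, assuming the per-term upper bounds quoted above, is sketched below; the exact per-term values used in the paper's budget may differ slightly.

```python
# Root-sum-square error budget of Equation (1); the individual contributions
# are the upper bounds quoted in the text and are assumptions.
import math

eps = {"simulation": 1.0, "measurement": 1.0, "polishing": 5.0, "harness": 2.0}  # nm RMS
total = math.sqrt(sum(v**2 for v in eps.values()))
print(f"total error budget ~ {total:.1f} nm RMS")  # ~5.6 nm with these bounds; the paper quotes ~5.4 nm
```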
Methods
Generating the astigmatism 3x deformation via SMP has been demonstrated by the fabrication of toric mirrors for SPHERE [26] and Hi-CAT [27]. The astigmatic form is created using a combination of forces applied on the back of the mirror: two pairs of opposite forces, as shown in Fig. 3 left, which is simple to implement with a one-actuator system or two screws. The back side of the toric mirror design, developed by Hugot et al. (2011) [26], is defined as a ring with a variable thickness distribution to avoid astigmatism 5x generation. Figure 3 middle shows a simplified model with a constant ring, inspired by the toric mirror design, which is used as a reference in this study. The part is made out of a single piece of material and is defined in two sections: the dark grey corresponding to the optical mirror substrate and the light grey corresponding to the ring thickness distribution, in this case a uniform ring. The system generates astigmatism 3x, as shown in the simulated displacement fringes of the optical surface in Fig. 3 right. Simulations of the deformation are performed using the FEA software PATRAN/NASTRAN. We used a tetrahedral mesh with a minimum of 80,000 elements to obtain an accurate simulated deformation. Zerodur is modelled with a Young's modulus of 90.6 GPa, a density of 2.53 g/cm3, and a Poisson ratio of 0.24 [35]. The model is constrained at its centre along the z-axis, in the 3 translational degrees of freedom, and forces are applied on a square area on the back surface external border (Fig. 3 middle).
To generate the additional coma 3x, we need to break the symmetry of the deformation process while keeping the same combination of forces used to generate the astigmatism 3x (Fig. 3 left). The ring structure on the back of the mirror is studied in terms of ring dimensions and thickness distribution to generate both the coma and astigmatism components; this part is represented in light grey in Fig. 3 middle. The ring thickness distribution is characterised by a combination of uneven radial Zernike polynomials, presented in Equation 2. They have been carefully chosen to break the axisymmetry of the existing toric mirror design and so generate coma 3x. With $t_0$ being the initial thickness of the ring and $(r, \theta)$ the local coordinates of the mirror, the thickness distribution of the ring can be expressed as:

$$t(r, \theta) = t_0 + A_{11}\, r \cos\theta + A_{31}\,(3r^3 - 2r)\cos\theta + A_{33}\, r^3 \cos 3\theta + A_{55}\, r^5 \cos 5\theta + \cdots \qquad (2)$$
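The sketch below evaluates Equation 2 along the ring border; the amplitudes and base thickness are illustrative placeholders, not the optimised OAP7 values.

```python
# Evaluating the trefoil-dominated ring thickness distribution of Equation (2);
# the amplitudes A11, A31, A33, A55 below are illustrative placeholders.
import numpy as np

def ring_thickness(r, theta, t0, a11=0.0, a31=0.0, a33=1.0, a55=0.0):
    return (t0
            + a11 * r * np.cos(theta)
            + a31 * (3 * r**3 - 2 * r) * np.cos(theta)
            + a33 * r**3 * np.cos(3 * theta)          # trefoil term
            + a55 * r**5 * np.cos(5 * theta))

theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
t = ring_thickness(r=1.0, theta=theta, t0=7.0, a33=2.0)   # mm, on the ring border
print(f"min/max border thickness: {t.min():.2f} / {t.max():.2f} mm")
```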
To evaluate the suitability of the different models, we defined three ratios, Coma/Ast, Res/Ast, and Tref/Ast, which represent the magnitudes of coma 3x, the residuals, and trefoil 5x, each normalised by astigmatism 3x. Res combines the first 21 Zernike coefficients except piston, tip, tilt, astigmatism 3x, and coma 3x. Trefoil 5x is not desired in the OAP design, but it is readily generated within the simulations, in an amplitude comparable to that of the required coma 3x. In this parametric study, the reference is the diameter of the mirror D; all additional parameters and results are expressed as a fraction of D. The deformation behaviour is linear, assuming that the elastic limit of the Zerodur is not exceeded. The models can be enlarged or shrunk by homothety while maintaining the same deformation behaviour.
Pre-selection of possible shapes
Starting from the reference mirror design in Fig. 3, we incorporated different thickness distributions, based on Equation 2, into the ring. The first uneven Zernike polynomial, tilt, was trialled, but it did not generate an acceptable deformation. In Fig. 4, the three following uneven Zernike polynomial thickness distributions are presented: coma, trefoil, and pentafoil; for each distribution, the thickness amplitude A (Fig. 5) varies from 0 to 3D/30. The geometrical description of the thickness distribution is calculated externally [36] and then imported into the computer aided design (CAD) software. The objective is to maximise Coma/Ast to cover the set of requirements presented in Table 2, while minimising Res/Ast and Tref/Ast. The amplitude A = 0 represents the reference design in Fig. 3 middle, with a uniform ring thickness. The trend of the ratios depends upon the thickness distribution, as shown in Fig. 4. The coma and pentafoil distributions do not generate the required coma 3x, but increase the contribution of the residuals and trefoil 5x. These distributions were chosen to break the symmetry of the warping harness, but they did not generate the desired magnitude of coma 3x.
The trefoil distribution shows promising results. Coma/Ast increases linearly with the amplitude, up to 6.7%, with a steeper gradient than Res/Ast and Tref/Ast. However, the magnitudes of Res/Ast and Tref/Ast are not negligible, and therefore further design iterations are required to optimise the trefoil distribution.
Optimisation of the selected design
To iterate upon the initial trefoil thickness distribution, additional design parameters are investigated to minimise the unwanted aberrations and increase the magnitude of coma 3x in the warping function. Figure 5 is a 3D representation of the design with a trefoil thickness distribution; a cross section in the xz plane highlights the different parameters used in the study. Three parameters describe the ring: the depth h of the central hole, the diameter d of the hole, and the amplitude A of the thickness distribution. t_c is the central thickness of the mirror substrate.
The amplitude of the thickness distribution is fixed at A = 3D/30, as it generates the highest Coma/Ast in the first investigation, presented in Fig. 4 middle. Figure 6 highlights the change in the ratios with hole diameter d (left), depth h (middle), and central thickness t_c (right). The magnitude of astigmatism 3x generated in the deformation is added as an indicator of the force required to reach the desired magnitude: a low magnitude of astigmatism 3x corresponds to a greater force required from the actuators, which is not desired due to the constraint of the elastic limit. Each of the plots demonstrates that Tref/Ast is approximately invariant with increasing diameter, depth, or thickness (Fig. 6 right).
In Fig. 6 middle, the depth variation shows a maximum Coma/Ast when h = A. When h < A, the thickness distribution merges with the central thickness t_c, meaning that the minimum thickness at the border is less than t_c. As a result, the warping function is affected, as seen in the decrease of Coma/Ast when h < A. By increasing the depth (h > A), the residuals are dampened by the thickness of the hole and are less present upon the optical surface; however, the amount of astigmatism 3x also decreases, which means that more force is needed to reach the desired magnitude.
Different trends are observed in the ratios as a function of the central thickness t_c. Both Coma/Ast and astigmatism 3x increase sharply for decreasing t_c; Res/Ast also increases, but at a slower rate. However, a high central thickness dampens the warping function, resulting in a loss of astigmatism 3x amplitude and a Coma/Ast ratio that deviates from the desired value. From these observations, the optimal model should have a low t_c, to maximise Coma/Ast and the amplitude of astigmatism 3x, and a high h, to minimise the residuals.
Design constraint due to stress
However, there are manufacturing considerations that negate the benefit of a low central thickness when considering the deformation required by SMP. For example, in Fig. 7 the von Mises stress fringes are mapped upon the mirror design with t_c = 2D/30, where D = 30 mm, under a realistic force application of 20 N. The maximum stress is located at the internal diameter of the ring, where a force is applied on the minimal-thickness border. In this model, the maximum von Mises stress is 9.21 MPa, which is below the safe design strength of Zerodur (10 MPa). Decreasing the central thickness further would increase the astigmatism amplitude but exceed the safe design strength of Zerodur. Respecting the constraint on the allowable von Mises stress provides a boundary in the design of the mirror; therefore, the minimal central thickness recommended is t_c = 2D/30.
Final designs
From the described design investigations, the solutions to generate the OAP geometry via SMP are adapted to the RST CGI OAP requirements. The CGI OAP diameters are larger than the example model, therefore the designs are scaled appropriately. In the configuration where D = 60 mm, the optimal design has a hole depth, hole diameter, and amplitude of 4 mm, 6 mm, and 6 mm respectively; this model generates Coma/Ast = 4.51% on the clear aperture, corresponding to a deformation between the OAP6 and OAP7 requirements (Table 2). Starting with the OAP7 requirements as a baseline, the design was adjusted to ensure the correct Coma/Ast and amplitude of astigmatism 3x on the 22 mm clear aperture, which gave final depth, diameter, and amplitude values of 7 mm, 5 mm, and 7 mm respectively. The final design is presented in Fig. 8 left, with the thickness distribution in light grey and the mirror substrate in dark grey; the displacement fringes on the optical surface after FEA deformation are presented in Fig. 8 right.
From the baseline design of OAP7 in Fig. 8 left, similar adjustments of the model were implemented to match the specifications of the entire set of OAPs. In many cases, the design kept the hole depth equal to the distribution amplitude (h = A). However, in the case of larger clear apertures, h was increased with respect to A, taking advantage of the reduced residuals while maintaining a constant Coma/Ast. Table 3 summarises the performance reached for each OAP with its optimised parameters. The majority of the OAPs are within the specification for the RST CGI; however, OAP1 and OAP2 show higher residuals due to their larger clear aperture and the border effect of the warping function. Further investigations are on-going, for example: using the shape optimisation method developed by S. Lemared et al. (2020) [37], which aims to improve the thickness distribution; accommodating the RST CGI optical design to incorporate the surface errors; or taking advantage of finishing techniques, such as MRF and IBF, to remove surface errors.
Experimental methodology and polishing
The parameters of OAP7 were selected for the first prototype because it was the most advanced design ready for manufacturing and was considered low risk, exhibiting the minimal value of astigmatism 3x of the CGI OAP set. The Zerodur substrate was made by SCHOTT from the design shown in Fig. 8 left; the thickness distribution was created by milling the shape into the Zerodur blank. A chamfer of ∼1 mm was added and polished onto the edge of the 60 mm physical diameter to prevent micro-cracks during polishing. First, the substrate is polished spherically to obtain the required radius of polishing (Table 2) and to remove the machining defects upon the substrate surface. During this first step, significant material is removed; the central thickness of the OAP7 is reduced to 4.43 mm and a new simulation is performed to verify the deformation function in Table 4: simulation. The material removal at the spherical polishing stage must be considered upstream in the design process. Figure 9 left shows the OAP7 after spherical polishing; the resulting reflective surface is suitable for direct interferometric measurement at λ = 633 nm.
The warping harness is assembled on the back side of the mirror in Fig. 9 right. The interfacing components where force is applied have to be precisely mounted; 3D printed counter-form tools help to locate the gluing zones. The pushing forces are realised by pads glued upon the thickness distribution and topped by two micrometer screws. The pulling forces are achieved by wires glued both on the thickness distribution and the ring, the wires ensure a normal force. To apply the deformation the screws are turned, which deforms the ring and pulls upon the wires.
The substrate and its warping harness are placed in front of an interferometer to start the deformation function. The prototype is mounted so as to allow access to the screws and to prevent constraints upon the mirror during the measurement. The warping harness is adjusted in front of the interferometer to produce the correct fringe pattern, and the substrate is then left to settle under constraint for 12 hours to stabilise the warping function. The deformation obtained is shown in Table 4: deformation.
Fig. 9. OAP7 after the unwarped spherical polishing phase, giving a reflective optical surface (left). OAP7 optical face down, assembled with its warping harness composed of a ring, two micrometer screws for the pushing forces, and two wires for the pulling forces (right).
After validation of the deformation, the substrate is polished under stress to imprint the warping function upon the reflective surface. The warping harness requires adjustments during polishing, as vibrations slightly alter the stability of the warping function. Thus, several iterations of interferometric measurements and polishing phases allow convergence upon a flat interferogram, which confirms the perfect sphere and indicates polishing completion, as shown in Fig. 10 left. In the final stage, the forces are removed and a final interferometric measurement is performed to validate the OAP surface shape, as shown in Fig. 10 right.
Results
A complete characterisation of the prototype surface is performed; the optical quality of the mirror is characterised in the three spatial frequency domains, as described in Section 2.1. The assessment of the LoF SFE and MidF SFE is validated with the phase map acquisition from interferometric measurements performed on the complete polished surface of 58 mm diameter; the number of pixels across this diameter is 870. Figure 11 shows the phase maps at 50 mm and 22 mm diameter, computed with the Intelliwave software from the interferometric fringe image (Fig. 10 right). The measurement on the 50 mm diameter is added to provide insight into how the process is expected to scale for the larger clear aperture OAPs. On the clear aperture of 22 mm, the number of pixels is 330, corresponding to a maximal spatial frequency of 165 c/p, which covers the LoF and MidF domains. The MidF SFE is evaluated with the PSD calculated from the surface phase map obtained by interferometry in Fig. 11. Figure 12 left shows the phase maps computed after removing the first 36 Zernike polynomials (corresponding to the LoF SFE below 3 c/p) in five different zones to characterise the entire mirror surface; the central zone corresponds to the clear aperture of the mirror, and four additional zones on the border of the mirror are termed East, West, North, and South. The PSDs obtained are then compared to the theoretical requirement, defined by the function $31/f^{2.5}$, in Fig. 12 right. All the measured zones show better quality than the theoretical function, particularly at the highest spatial frequencies. Roughness is evaluated using a microscope interferometer (WYKO NT9100). Five zones are measured on the surface: the centre, and at 11 mm and 25 mm from the centre along the x and y axes, as shown in Fig. 13 left. For each location, a Mirau X50 objective is used with two magnifications, giving 62 × 47 μm² and 126 × 94 μm² of measured surface area with spatial resolutions of 100 nm and 200 nm. 25 measurements are averaged, and tip, tilt, and piston are subtracted. Figure 13 right shows the roughness obtained for each zone at the two magnifications. Roughness is computed using the arithmetic average of the profile deviation, Ra; on the 22 mm clear aperture and the 50 mm measurement aperture, the average Ra in both instances is 2.1 Å to 1 decimal place. Beyond the results on the 22 mm diameter, the measurements on the 50 mm diameter were also analysed to explore scaling to larger clear aperture OAPs. On the 50 mm diameter, the LoF SFE demonstrated 21 nm RMS, which includes a 19 nm RMS contribution from pentafoil; this represents the border effect simulated for the large clear aperture OAPs. As discussed in Section 3.4, it is anticipated that the border effect can be removed using additional manufacturing steps. The MidF SFE at 50 mm, calculated from the phase map in Fig. 11 left, already meets the CGI OAP requirements.
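A simplified sketch of this MidF check is given below: it radially averages the 2-D PSD of a phase map and compares it to the $31/f^{2.5}$ requirement. The phase map here is synthetic noise with an arbitrary low amplitude, not a measured map, and the scaling is deliberately simplified.

```python
# Radially averaged PSD of a phase map compared against the 31/f^2.5
# requirement over the MidF domain (3-30 c/p); synthetic stand-in data.
import numpy as np

n = 330                                        # pixels across the 22 mm aperture
rng = np.random.default_rng(0)
phase = rng.normal(0, 0.05, (n, n))            # stand-in for the measured map (nm)

spectrum = np.abs(np.fft.fftshift(np.fft.fft2(phase)))**2 / n**2
fy, fx = np.mgrid[-n//2:n//2, -n//2:n//2]
f = np.hypot(fx, fy)                           # spatial frequency in cycles/pupil

freqs = np.arange(3, 31)                       # the MidF domain
psd_radial = np.array([spectrum[(f >= fi - 0.5) & (f < fi + 0.5)].mean() for fi in freqs])
requirement = 31.0 / freqs**2.5
print(np.all(psd_radial < requirement))        # True: this low-amplitude map meets the spec
```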
The results obtained on the surface quality, in terms of LoF SFE, MidF SFE, and roughness for both diameters, are summarised in Table 5. The results meet the requirements on the 22 mm aperture for OAP7 and correspond to the FEA simulations. The very low roughness confirms the super-polishing of the mirror surface.
Conclusion
In this paper we detail a new manufacturing process for OAP mirror fabrication, which broadens the range of optical geometries manufacturable via the SMP technique. By investigating the design of the mirror substrate, we created an innovative thickness distribution composed of trefoil, which is capable of producing the OAP geometry while maintaining a simple warping harness system.
The key design parameters to adapt the thickness distribution to a wide range of OAP geometries were highlighted, demonstrating the versatility of the approach. A prototype corresponding to the OAP7 model of the Roman Space Telescope CGI was manufactured in Zerodur and tested. The experimental results confirm the suitability of SMP for high contrast imaging: the optical quality of the OAP7 prototype is within the specifications, with a surface form error deviating from the simulation by < 1 nm RMS and an average surface roughness of 2.1 Å Ra. Further investigations to attenuate the border effect in the case of large clear aperture OAPs, and to implement the technique on more complex shapes such as freeforms, are under study.
"Physics"
] |
Towards Virtuous Cloud Data Storage Using Access Policy Hiding in Ciphertext Policy Attribute-Based Encryption
Managing and controlling access to the tremendous amount of data in Cloud storage is very challenging. Due to the various entities engaged in the Cloud environment, there is a high possibility of data tampering. Cloud encryption is employed to control data access while securing Cloud data. The encrypted data are sent to Cloud storage with an access policy defined by the data owner, and only authorized users can decrypt the encrypted data. However, the access policy of the encrypted data is in readable form, which results in privacy leakage. To address this issue, we propose reinforced hiding of the access policy over Cloud storage by enhancing the Ciphertext Policy Attribute-based Encryption (CP-ABE) algorithm. Besides the encryption process, the reinforced CP-ABE uses logical connective operations to hide the attribute values of the data in the access policy. These attributes are converted into scrambled data alongside the ciphertext form, which provides better unreadability. In other words, a two-level concealment tactic is employed to secure data from any unauthorized access during a data transaction. Experimental results revealed that our reinforced CP-ABE has a low computational overhead and consumes low storage costs. Furthermore, a case study on security analysis shows that our approach is secure against a passive attack such as traffic analysis.
Introduction
Cloud computing has become a priority and integral to modernizing the information technology (IT) environment. It spawned a whole new dimension of IT, which utilized a wide range of resources that contributed to many domains, such as business, military, health, and medical. In the healthcare and medical domain, Cloud computing has been adopted to facilitate day-to-day operations. This is because the Cloud provides 'on-the-fly' services, that is, storage, computation, and data sharing, which allow healthcare and medical practitioners to run their business according to their required operations. For example, electronic health records (EHRs) have been widely employed in the healthcare industry to improve the accessibility and sharing of medical data among medical practitioners. Patients' information, laboratory results, medication lists, diagnostic tests, physical examinations, and historical observations are all kept in the EHR, which is stored in Cloud storage. Furthermore, resource management and system administration (infrastructure) can be effectively monitored through Cloud computing, making healthcare services easy to maintain [1]. Besides, Cloud Service Providers (CSPs) facilitate Cloud users by granting resource sharing regardless of geographical boundaries using the pay-per-use model [2][3][4], which can reduce organizations' management and maintenance costs. Moreover, Cloud adoption in organizations is expected to lift performance with the high-speed deployment of services and to improve clients' satisfaction.
Organizations are willing to adapt their business operations to the Cloud environment due to the ultimately easier access to Cloud services and its numerous benefits. Inspired by the literature [23], we therefore present a Policy Hiding using Logical Connective in CP-ABE (PHLC) scheme for Cloud storage, which adopts the XOR operation to modify the access policy information. This proposed work can overcome the encryption challenges outlined by [24], since it was designed to provide data confidentiality using a symmetric encryption scheme and offers fine-grained access control with a data sharing provision. Besides, this scheme preserves users' privacy through an access policy hiding scheme. In addition, PHLC achieves efficient data storage utilization by using a pre-processing step to eliminate redundant data in the raw shared data. This operation helps the scheme to reduce the ciphertext size and decrease the computational overhead of the encryption process. Security analysis shows that the proposed scheme preserves Cloud users' privacy and guarantees secure data sharing in Cloud storage. Furthermore, we conducted an extensive simulation to demonstrate that PHLC CP-ABE is secure against passive attacks.
The rest of the paper is arranged as follows. Section 2 discusses related work and Section 3 provides some preliminaries of CP-ABE, whereas Section 4 presents the proposed scheme's implementation. Section 5 discusses security analysis and Section 6 provides results and discussion. Finally, Section 7 concludes this study.
Related Work
Numerous works of research, such as revocable CP-ABE [25], lightweight CP-ABE [26], multi-authority CP-ABE [4,27], and large universe CP-ABE [28], have been developed in response to upgrading the competency of the Ciphertext Policy Attribute-based Encryption (CP-ABE) scheme. However, most of these schemes expose the access policy in plaintext, which incurs privacy leakage. As a result, other works focus on encrypting the data while hiding the access policy during data sharing. Nishide et al. [29] proposed hiding the access policy by splitting attributes into attribute names and multiple attribute values, and then hiding the attribute values. Meanwhile, Zhou et al. [30] proposed a partially hidden access algorithm, whereby the hidden access policies are implemented with wildcards regardless of the number of attributes. Although this data construction effectively secures the shared information, it fails to offer total data security, because parts of the access policy remain in readable form and might cause privacy leakage. The authors of [23] supported fast decryption while hiding the access policy by sending the access matrix and the defined function along with the ciphertext to the Cloud environment; however, it is unable to fully preserve privacy in the access policy. In [31], the authors eliminated all redundant attributes in the access policy, and their approach significantly reduced the computation overhead. With the same intention, the author of [32] proposed a 'test-decrypt-verify' approach to reduce the computation cost of their CP-ABE scheme. In that scheme, a testing phase is added before the encryption phase, and a new component, the Outsourcing Cloud Server, is adopted as an outsourcing agent to reduce the decryption calculation. However, the proposed scheme only employs a partially hidden access policy. The authors of [33] also provided a partially hidden access policy by removing the attribute name from the access structure of the ciphertext. However, the probability of data exploitation by dishonest entities or unauthorized users remains high because only a part of the access policy is hidden.
Other researchers expressed access policies using Linear Secret Sharing Schemes (LSSS) in the CP-ABE scheme for better data access control. Lai et al. [11] proposed partially hidden access structures with LSSS that can accommodate any access structure; they proved that their scheme was suitable for outsourcing data with the attribute values hidden. Other studies, such as [34][35][36], also constructed LSSS-based access policy schemes with CP-ABE, which hide attribute values to secure the information. Besides the partially hidden access policy, Xiong's scheme [34] supports attribute revocation and verifiable outsourced decryption. However, attribute values carry more intricate information than generic attribute names, which leads to high computing overhead during the decryption process. Meanwhile, the authors in [35] proposed a control access scheme providing privacy protection via partial concealment of the access policy. They utilized CP-ABE in an intelligent healthcare system known as the privacy-aware health access control system (PASH). PASH hides the attribute values of the access policy in the encrypted Smart Health Record and only specifies the attribute names, which can still compromise system-user privacy. Therefore, the existing CP-ABE schemes need improvement to prevent information leakage from access policies and to ensure that data sharing on the Cloud is secure. Although various approaches to policy hiding have been proposed in the literature, the study of privacy preservation with a fully hidden access policy is still inadequate and cannot completely solve the privacy leakage issue; thus, the privacy of data users cannot be guaranteed.
Design of CP-ABE
This section discusses the preliminary work involved in designing the CP-ABE. It also covers the overview of CP-ABE, the PHLC scheme's system model, the security goals, and the security model.
Preliminary Works
This section presents the bilinear map, the Linear Secret Sharing Scheme (LSSS), the XOR-based logical connective, and the notation definitions.
Definition of Notations
This section explains the notation used in this research, as shown in Table 1. In our CP-ABE scheme, bilinear pairing is used to create a public key. The Attribute Authority generates this public key based on a composite order bilinear group with distinct prime factors, which we adopted from [23]. The algorithm takes an input 1^λ, where λ is a security parameter, and produces a tuple (G, G_T, e, p_1, p_2, p_3, p_4), where p_1, p_2, p_3, p_4 are distinct primes. The cyclic groups G and G_T have order N = p_1 p_2 p_3 p_4, and the map e : G × G → G_T has the following properties:
i. Bilinearity: for all g, y ∈ G and d, w ∈ Z_N, e(g^d, y^w) = e(g, y)^{dw}.
ii. Non-degeneracy: there exists g ∈ G such that e(g, g) has order N in G_T.
iii. Computability: e can be computed efficiently.
Linear Secret Sharing Schemes (LSSS)
The LSSS is used to express the access policy in an access structure (A, ρ), where A is a policy matrix and ρ is a mapping from each row A_i of the matrix A to an attribute [37]. In this scheme, the attribute universe, denoted AU, has n categories of attributes.
Definition 1 (LSSS).
Let AU = (Att_1, Att_2, Att_3, ..., Att_n), where each attribute Att_i consists of two parts: an attribute name and its possible attribute values. A ∈ Z_p^{l×n} is a share-generating matrix; each row of A maps to an attribute name index, and that mapping is denoted ρ. The LSSS consists of the two following algorithms:
i. Secret share: for a secret s ∈ Z_p, choose y_2, y_3, ..., y_n ∈_R Z_p and set V = (s, y_2, y_3, ..., y_n) ∈ Z_p^n. The share for each row A_x of A is then computed as λ_x = A_x · V.
ii. Secret reconstruction: this algorithm takes the secret shares {λ_x} and a set P containing the authorized attribute name indices. It sets I = {x | ρ(x) ∈ P} ⊆ {1, 2, ..., l} and computes constants {ω_x}_{x∈I} such that Σ_{x∈I} ω_x A_x = (1, 0, 0, ..., 0). The secret is then reconstructed as s = Σ_{x∈I} ω_x λ_x.
Similar to [23], we construct the LSSS matrices over Z_N. In our proposed scheme, we denote the user's attribute set as S = (B_s, J_s), where B_s ⊆ Z_N is the attribute name index set and J_s is the corresponding attribute value set.
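A toy round trip of the two LSSS algorithms, with a small illustrative matrix rather than one of the scheme's actual matrices, might look as follows.

```python
# A toy LSSS share/reconstruct round trip over Z_p; the policy matrix below is
# an illustrative 3x2 example, not one of the scheme's actual matrices.
import numpy as np

p = 2**31 - 1                                   # a prime modulus (illustrative)
A = np.array([[1, 1], [1, 2], [1, 3]]) % p      # rows map to attribute indices via rho
s = 123456789                                   # the secret to share
v = np.array([s, 987654321]) % p                # V = (s, y2) with y2 random in Z_p

shares = (A @ v) % p                            # lambda_x = A_x . V

# Reconstruction from rows {0, 1}: find omega with omega^T A = (1, 0) mod p.
# For this matrix, omega = (2, -1) works: 2*(1,1) - 1*(1,2) = (1, 0).
omega = np.array([2, -1])
recovered = int(np.dot(omega, shares[:2]) % p)
print(recovered == s)                           # True
```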
XOR-Based Logical Connective Policy Hiding
The proposed policy hiding algorithm uses an XOR-based logical connective to hide the access policy in the CP-ABE scheme. We utilized XOR to modify the entire access policy to form a reliable proposition. According to Rosen [38], a logical connective is defined by a truth table, which declares that a statement can be either true or false, but not both. Rosen describes the exclusive-or as follows: 'Let p and q be propositions. The exclusive-or of p and q, denoted by p XOR q, is the proposition that is true when exactly one of p and q is true and is false otherwise.' According to [39], XOR is a lightweight operation that does not incur much computational overhead. It is therefore very appropriate for adoption in this scheme as a second layer of protection.
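The involution property that makes XOR attractive here (masking twice with the same key restores the value) can be demonstrated in a few lines; the attribute string and mask below are purely illustrative.

```python
# Minimal demonstration of why XOR suits lightweight hiding: masking with a key
# is an involution, so applying the same mask twice restores the original value.
attribute_value = b"diabetes-type-2"            # illustrative sensitive attribute
mask = bytes([0x5A] * len(attribute_value))     # placeholder keyed mask

hidden = bytes(a ^ m for a, m in zip(attribute_value, mask))
restored = bytes(h ^ m for h, m in zip(hidden, mask))

print(hidden)                                   # scrambled, unreadable form
print(restored == attribute_value)              # True
```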
Overview of Ciphertext Policy Attribute-Based Encryption
In this work, we enhanced the functionality of the CP-ABE scheme to embrace a fully hidden access policy. Four significant modules of CP-ABE are involved in constructing our hiding reinforcement approach.
System Setup (1^λ): it takes the security parameter 1^λ as input and produces a public parameter key PK and a master key MSK as outputs. These are used as inputs to the key generation algorithm.
Key Generation (PK, MSK, S): the Attribute Authority is the main component of CP-ABE that executes the key generation algorithm. The Attribute Authority uses inputs from the system setup (i.e., the public parameters PK, the master key MSK, and the user's attributes S) to generate a secret key SK. The secret key is later employed by the data user to decipher the encrypted data.
Encryption (PK, M, A): this module takes the public parameters PK, a message M, and an access structure ((A, ρ), T) as inputs. The encryption algorithm then produces the ciphertext CT. The data owner sends the ciphertext, along with the hashed value, to the Cloud environment.
Decryption (PK, CT, SK): the decryption module takes the public parameters PK, a secret key SK associated with the attribute set (A, ρ), and a ciphertext CT as inputs. These three inputs are used to decrypt the ciphertext CT and recover the message M. The data owner defines the access policy and performs the policy hiding process, while in the pre-processing step the redundant data in the raw message is eliminated before encryption. The data owner then uploads the ciphertext, with the hidden access policy values, to Cloud storage. Hence, only the DUs (recipients) who satisfy the access policy are able to decrypt the data. In this work, the Cloud Service Providers (CSPs) are assumed to be neutral entities that take no interest in their users'/clients' data: they act as a platform that various data and users can reach, passing messages and storing files. The access policy is handled on the user's (DO's) side. The Attribute Authority (AA) is an accountable entity that works as a key generation center. Users' attributes are authenticated by the AA before access privileges are granted to authorized users to interact with the system. The AA may receive authentication privileges from the DO or from any other Cloud security mechanism agreement.
Security Goals
Our reinforced hiding of the access policy is meant to conceal the attributes' details. Hence, it ought to provide the following security features. Confidentiality: achieved when unauthorized users are unable to access the encrypted data and only users who satisfy the access policy can perform the encryption and decryption modules. Data confidentiality is also achieved when other entities, including the CSPs, cannot read or access any information from the encrypted data.
Data privacy: the access policy in our scheme shares the properties of the encrypted data in that it is hidden even within the ciphertext. This is performed with a two-level data concealment strategy that fully hides the attributes and compresses the data. At the destination, decryption is performed with the respective key to disclose the data in readable form, thus preventing the user's privacy from being exposed. Fine-grained access control: Cloud users do not have equal privileges for retrieving data; the privilege depends on the extent to which the user is involved or responsible. In our design, users are assigned dissimilar access privileges defined by the access policy imposed by the Attribute Authority (AA). All the attributes must match the user's access policy structure to retrieve the required information.
Security Model
This section discusses the security model for the proposed PHLC scheme. The scheme was constructed on the basis of Zhang's scheme [23] by re-simulating their work; hence, its security model is based on a security game between an adversary A and a challenger B, as presented in [23]. The game proceeds as follows. Setup: to obtain the public parameters PK and the master key MSK, challenger B runs the setup algorithm. Challenger B holds the master key MSK and sends adversary A the public parameters PK. Phase 1: adversary A adaptively issues secret key queries to the key generation module. For each query on an attribute set S_i, challenger B returns a secret key SK_{S_i} to adversary A.
Challenge: adversary A generates two messages M*_0, M*_1 (with |M*_0| = |M*_1|) and access structures ((A*, ρ*), T*_0), ((A*, ρ*), T*_1), with the restriction that neither of them can be satisfied by any of the attribute sets S_i queried in Phase 1. In response, challenger B selects a bit b ← {0, 1} and chooses Q_0, Q_x ∈ G_{p4} at random. The challenge ciphertext CT* of the message M*_b is then computed under the access structure ((A*, ρ*), T*_b), and CT* is sent to adversary A.
Phase 2: repeat Phase 1, with the restriction that none of the queried attribute sets S_i satisfies the challenge access structure.
Guess: adversary A outputs a guess bit b' ∈ {0, 1} and wins the game if b' = b. The adversary A's advantage in this game is defined as Adv_A = |Pr[b' = b] − 1/2|, where the probability is taken over the random bits used by adversary A and challenger B.
Implementation of Policy Hiding in CP-ABE Using Logical Connective
Note that the attribute values of the access policy contain sensitive user data. For example, in a medical and healthcare Cloud system the attribute values could include information on patients' ailments and family histories of hereditary diseases. Such information needs to be concealed to protect the users' privacy; hiding the attributes in the access policy protects the attribute values and thus preserves Cloud data privacy.
In our encryption solution, the access policy is constructed in CP-ABE using the policy hiding logical connective (PHLC) strategy. Specifically, as mentioned earlier, we integrate policy hiding into the CP-ABE components, that is, Setup, Key Generation, Encryption, and Decryption. We extended the Data Owner (DO) role in CP-ABE by coupling the encryption process with a policy-hiding step. Figure 3 describes our enforcement of hiding through the enhanced CP-ABE in detail.
SETUP (1^λ) → PK, MSK
The Attribute Authority (AA) runs the setup algorithm by taking the security parameter 1^λ as input. The setup algorithm produces a tuple (N = p1 p2 p3 p4, G, G_T, e), where N is the product of four distinct primes p1, p2, p3, p4, and Gp1, Gp2, Gp3, Gp4 are the subgroups of G of order p1, p2, p3, p4, respectively. Here G and G_T are cyclic groups of order N, and e is a bilinear map. The attribute authority uniformly chooses a, α, α1, β ∈_R Z_N and g, g1 ∈ Gp1. H and H1 are set as public hash functions, where H maps an attribute value AV_x to an element of Z_N, and H1 is a pseudo-random function that maps elements of G and M to elements of M. The value Y, the public parameters PK, and the master key MSK are computed as: Y = e(g, g1)^(α α1), PK = {N, g, g^a, g^α1, g^β, Y}, MSK = {a, α, α1, β, g1}.
KEYGEN (PK, MSK, S) → SK
The Attribute Authority (AA) checks users to ensure that only legitimate users can access the file system, and it generates secret keys only for them, so that any illegal access to the file system is aborted. Specifically, the algorithm takes the public parameters PK, the master key MSK, and the user's attribute set S = (B_s, J_s) as input to generate the secret key SK, where B_s is the attribute name index set and J_s is the attribute value set of the user. The AA runs the KeyGen module as follows: it chooses t ∈_R Z_N and R, R_1, R_i ∈_R Gp3 for i ∈ B_s, computes the key components K_1, K_2, and K_i, and then outputs the secret key associated with the attribute set S = (B_s, J_s).
PRE-PROCESSES (rM) → (M)
This new module is constructed to eliminate redundant data in a raw file before it is encrypted in the encryption module. Each word in the input message is assigned an index number. For each re-appearing word, the index number of the existing word is substituted for it, and the new message without redundant data is saved as message M. Algorithm 1 represents this pre-processing process, named FWA (frequent wording appearance) pre-processing.
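As a rough illustration, the following Python sketch implements one plausible reading of the FWA idea: first occurrences stay literal, and re-appearances become back-references to the index of the first occurrence. The `@index` token format and the word-level granularity are our assumptions, not part of Algorithm 1.

```python
def fwa_preprocess(raw_message: str) -> str:
    """Sketch of FWA pre-processing: each first occurrence of a word keeps
    its text; every re-appearance is replaced by a back-reference to the
    index of its first occurrence (assumes plain words without '@')."""
    words = raw_message.split()
    first_seen = {}          # word -> index of its first occurrence
    out = []
    for idx, word in enumerate(words):
        if word in first_seen:
            out.append(f"@{first_seen[word]}")   # back-reference token
        else:
            first_seen[word] = idx
            out.append(word)
    return " ".join(out)

def fwa_restore(message: str) -> str:
    """Inverse of the sketch above: resolve back-references by index."""
    tokens = message.split()
    for i, tok in enumerate(tokens):
        if tok.startswith("@"):
            tokens[i] = tokens[int(tok[1:])]
    return " ".join(tokens)

m = fwa_preprocess("the key protects the data and the policy")
print(m)                 # -> "the key protects @0 data and @0 policy"
print(fwa_restore(m))    # round-trips to the original text
```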
ENCRYPT (PK, M, A) → CT
In this module, we input the public parameters PK, the message M, and the access structure A = ((A, ρ), T) and produce the ciphertext CT. In the access structure, A is an l × n access matrix and ρ maps each row A_x to an index of an attribute name; T is the set of attribute values related to the access policy (A, ρ). The encryption algorithm selects a random vector V = (s, y_2, y_3, ..., y_n), where s, y_2, y_3, ..., y_n are chosen randomly from Z_N and s is the shared value. For x = 1 to l, it computes λ_x = A_x · V, where A_x corresponds to the x-th row of A, and calculates X = E_Enc(k, M) and F = H_1(k ∥ M). Additionally, it randomly takes Q_0, {Q_x}_(1≤x≤l) ∈_R Gp4. Finally, it calculates the entire ciphertext components C_0, C_1, {C_x}_(1≤x≤l). Previously, the access policy ((A, ρ), T) was appended to the ciphertext CT and then outsourced to the Cloud storage. However, this access policy is in a readable format, possibly exposing sensitive information about the users. The researchers in [25] emphasized that the attribute mapping function ρ causes attribute leakage. Hence, we improved the scheme to prevent user privacy leakage by eliminating the attribute mapping function. In this scheme, we replace the attribute values in the access structure A = ((A, ρ), T) with attribute locations in the form (x, y). Nonetheless, this strategy alone is insufficient to preserve Cloud data privacy. Therefore, an XOR-based logical connective is used in the policy hiding strategy to further enhance the privacy of the access policy.
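For illustration, the LSSS share computation λ_x = A_x · V can be sketched in a few lines; the toy access matrix, the modulus, and the function name below are ours and are only meant to show the mechanics:

```python
import numpy as np

def lsss_shares(A: np.ndarray, s: int, q: int, rng=np.random.default_rng()):
    """Sketch of LSSS share generation: pick V = (s, y2, ..., yn) with
    random y_i in Z_q, then each row A_x of the l x n access matrix yields
    the share lambda_x = A_x . V (mod q)."""
    l, n = A.shape
    V = np.concatenate(([s], rng.integers(0, q, size=n - 1)))
    return A @ V % q          # vector of shares (lambda_1, ..., lambda_l)

# Toy 3-row access matrix over Z_q (illustrative values only).
q = 2**31 - 1
A = np.array([[1, 1], [0, 1], [1, 0]])
print(lsss_shares(A, s=42, q=q))
```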
Our policy hiding logical connective (PHLC) strategy is used to convert the attribute-value locations (T) in the access policy into ciphertext. Specifically, we extract the exact location of each attribute value from the access policy ((A, ρ), T). Once the location is obtained, it is converted into ciphertext based on the XOR operation. Figure 4 portrays an example of the access matrix Att(x, y), where the x and y values represent the location of the attribute value, which comprises the attribute name, the attribute value, and the mapping ρ to the index of the attribute name. We remove the mapping ρ and use the attribute location Att(x, y) instead. The post-encryption module is then executed; this is where our main contribution, the CP-ABE extension, takes place. It is described below:
Policy Hiding ((A, ρ), T ) → HV
In this process, we derive the hidden value HV, which is the encrypted location of the attribute values. The policy hiding algorithm takes an access policy ((A, ρ), T) as input. It first extracts the set of attribute values associated with the access policy (T) and then obtains the exact location (x, y) of each attribute value. Each location is converted into ciphertext via the ⊕ (XOR) operation. Finally, our CP-ABE solution produces the ciphertext and hidden value as output, (CT, HV), and outsources them to the Cloud servers. Algorithm 2 depicts the entire policy hiding algorithm.
Algorithm 2 Policy Hiding Algorithm
Input: Access policy ((A, ρ), T)
Output: Hidden access policy (HV)
Begin
foreach attribute value in T, obtain its exact location (x, y)
convert each location to binary and apply the ⊕ operation, obtaining β_x
convert each β_x from binary to hexadecimal and store it as HV_x
End the process
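A minimal sketch of this step is given below; the 8-bit (x, y) packing and the single shared XOR mask are our assumptions, since Algorithm 2 leaves these encoding details open:

```python
def hide_locations(locations, mask: int):
    """Sketch of Algorithm 2: pack each attribute location (x, y) into a
    binary word, XOR it with a mask, and store the result in hexadecimal
    as HV_x. The 8-bit packing and shared mask are assumptions."""
    hv = []
    for (x, y) in locations:
        beta = ((x << 8) | y) ^ mask        # beta_x: obfuscated location word
        hv.append(format(beta, "04x"))      # binary -> hexadecimal string
    return hv

# Toy locations of attribute values inside the access matrix.
print(hide_locations([(0, 1), (1, 2), (2, 0)], mask=0xA5C3))
# -> ['a5c2', 'a4c1', 'a7c3']
```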
Decryption
Data users (recipients) access the encrypted data in the Cloud and download it according to their preferences. Nonetheless, the decryption process is secured via access control, in which only legitimate data users are allowed to decrypt the ciphertext. Hence, when the user attribute set S = (B_s, J_s) satisfies the access policy, the user is eligible to perform the decryption process. Beforehand, the data user (DU) needs to extract the hidden policy after receiving CT = (C_0, C_1, {C_x}_(1≤x≤l), HV) from the Cloud storage, based on the following decryption algorithm.
Ext Hidden Policy (CT, HV) → (T )
In the decryption process, the hidden value HV retrieved from the Cloud storage is used as input. The algorithm converts HV into a binary set, then performs the ⊕ operation and obtains the attribute's original location. This is described further in Algorithm 3.
Algorithm 3 Extracting Hidden Access Policy
Input: Hidden access policy (HV)
Output: Access policy ((A, ρ), T)
Begin
foreach HV_x of the hidden access policy, ∀x = index of HV:
convert the hexadecimal HV_x to binary and apply the ⊕ operation; store the result as (X_x, Y_x)
convert Ω_x into Unicode text
End the process
Upon successful recovery of the hidden access policy, the data user runs the decryption algorithm below.
4.8. Decrypt (PK, SK, CT, S) → M
Similar to [23], after receiving CT, the decrypt algorithm checks whether the hash value H(J_s) equals H(t_ρ(x)). If the values are equal, the system authorizes the DU to decrypt CT as follows: given that the key k of the symmetric encryption scheme has been successfully computed, the decryption algorithm determines the value F' = H_1(k ∥ M). Only if the equation F = F' holds is the message M produced; otherwise, the process is terminated. The plaintext is recovered via the calculation M = E_Dec(k, X). In summary, the reinforcement of hiding in the access policy is employed to maintain Cloud data privacy by concealing the attribute values. The policy hiding step (Algorithm 2) is embedded in the encryption module and runs after the encryption algorithm; Algorithm 3 then extracts the hidden access policy before the decryption algorithm runs.
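The extraction step can be sketched as the exact inverse of the hiding sketch shown earlier, under the same assumed packing and mask conventions:

```python
def extract_locations(hv, mask: int):
    """Sketch of Algorithm 3: convert each hexadecimal HV_x back to binary,
    undo the XOR with the shared mask, and unpack the original (x, y)
    attribute location. Conventions match the hiding sketch above."""
    locations = []
    for h in hv:
        beta = int(h, 16) ^ mask
        locations.append((beta >> 8, beta & 0xFF))
    return locations

# Recovers the toy locations hidden by the earlier sketch.
print(extract_locations(['a5c2', 'a4c1', 'a7c3'], mask=0xA5C3))
# -> [(0, 1), (1, 2), (2, 0)]
```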
Security Analysis
In this section, we extend our investigation to a security analysis. We focus on policy privacy preservation and design the security analysis as discussed below.
Security Proof
This section discusses the security proof of the proposed scheme. Our proposed scheme is constructed by re-simulating the CP-ABE scheme published in [23], which has been proved secure by attaining full security in the standard model, using the dual system encryption approach under static assumptions. Therefore, we provide a security analysis of the enhanced scheme with hidden access policy in Theorem 1, while in Theorem 2 we discuss our scheme against traffic analysis. Theorem 1. PHLC preserves the privacy of the access policy against a polynomial-time adversary in the security parameter λ.
Proof. In the PHLC scheme, the attribute locations in the access policy are converted into a hidden value using the XOR operation, and this hidden access policy is stored in the Cloud together with the ciphertext, CT = (C_0, C_1, {C_x}_(1≤x≤l), HV). A DU whose attribute set S satisfies the access structure A = ((A, ρ), T) can decrypt CT. An adversary A who has no knowledge of the scheme used to compute the hidden access policy cannot launch a brute-force attack to guess the attribute string within polynomial time. Furthermore, the adversary is unable to extract any sensitive data from the modified access policy stored as HV_x. A DU is only permitted to validate its own attributes against the hidden access policy and is forbidden from inspecting any other attributes in the attribute universe unless it colludes with others.
Theorem 2. PHLC is secure against traffic analysis.
Proof. In our scheme, the CT is stored in the Cloud storage together with HV. Assume that an adversary A successfully analyzes the packets during transmission to the Cloud and gains access to the ciphertext and hidden value, CT = (C_0, C_1, {C_x}_(1≤x≤l), HV). The adversary A still cannot read the message, because it is in an unreadable (ciphertext) format; here we assume that the AES-based symmetric encryption key is secure. In addition, the adversary A cannot gather any information from the access policy, because the access policy is itself in ciphertext form. Hence it is intractable to compute the access policy.
Case Scenario Security Analysis Simulation
To demonstrate the strength of our CP-ABE enhancement against traffic analysis, we conducted a case study, as shown in Figure 5. In this experiment, we mounted a passive attack that attempts to learn the system's information without affecting the system's resources. We employed the FileZilla File Transfer Protocol (FTP) client and server tools for transferring data between the owner and the user, as shown in Figure 6. The attacker, acting as the adversary in this experiment, was designed to perform an unauthorized sniff and read information through the access policy. The FileZilla client was installed on the DO's computer, and the file was transferred via FTP upon a successful connection between the DO's computer and the DU. The adversary A exploited the communication between the DO and the DU via the Wireshark packet analysis tool, sniffing the inter-communication and capturing the data in packet form.
We experimented with two scenarios. In the first scenario, the DO sent the ciphertext CT with the access policy ((A, ρ), T) in a readable format to the DU, as shown in Figure 7. Figure 8 shows the second scenario, where the Policy Hiding Algorithm was applied to the packet of the ciphertext before it was sent to the DU. The packets shown in Figure 8 are in an unreadable format (HV).
Based on these results, the study proved that the adversary A was unable to retrieve any information about the data, because it is in an unreadable (ciphertext) format. Additionally, the adversary A could not learn anything from the policy, because the policy is also generated in scrambled form. This experiment shows that our proposed scheme resists malicious attempts to sniff information during data transmission.
Result and Discussion
In this section, we describe the experimental setting and the performance evaluation, and we discuss the results.
Experiment Configuration
We developed our proposed scheme based on the Java Pairing-Based Cryptography (JPBC) library in the Eclipse IDE for Java Developers (2019-03 release). In this experiment, the number of attributes varies from 2 to 14, following the experimental settings of [23]. We use Zhang's work [23] as the benchmark because it is closely related to our policy hiding approach, which focuses on preserving Cloud data privacy. In addition, we compare our scheme with the expressive CP-ABE scheme proposed in [33]. Similar to our scheme, both CP-ABE schemes in [23,33] employ LSSS to express the access structure and exhibit the same Cloud storage system architecture.
Result
In this section, we compare our scheme's performance with previous works [23,33] in terms of ciphertext storage cost and encryption time. Figure 9 shows the ciphertext size for our PHLC-based CP-ABE scheme and the schemes of [23,33]. As illustrated in this figure, our proposed scheme is comparable with Zhang's work [23] and with [33]. This indicates that our data privacy strategy achieves an effective ciphertext size even though the PHLC scheme hides both attribute names and attribute values and attaches them to the ciphertext. Figure 10 illustrates the time taken by the Data Owner (DO) to encrypt the file before outsourcing it to the Cloud storage. The figure shows that the encryption time increases linearly with the number of attributes. Further, the PHLC encryption time is lower than that of Zhang's work [23] by 20% on average. Even though the difference is modest, it has a considerable impact on the processing overhead and complexity of the encryption process. The data privacy approach of [33] showed the maximum encryption time, which could be improved to obtain a more effective encryption time. To reveal the effectiveness of our PHLC approach in terms of ciphertext size, we enhanced the experiment by adding the pre-processing module, which eliminates redundant data in a raw shared file using the new frequent-wording-appearance technique. As shown in Figure 11, the ciphertext size in PHLC after applying the pre-processing module decreased by approximately 17% on average. Reducing the ciphertext size helps reduce the storage space and leads to a more efficient decryption process.
Conclusions
In the ever-increasing era of data breaches, providing integrity and privacy for healthcare Cloud storage is a challenging issue in the Cloud environment, especially as the demand for Cloud services keeps growing. Cloud computing must deliver a security solution that protects sensitive information and data sharing processes, preventing third parties from eavesdropping on or tampering with data while it is being transmitted. Therefore, this work enhances the CP-ABE approach by proposing two new modules: the pre-processing module and the fully hidden access policy module. In addition to performing the encryption algorithm, the data attributes in the access policy are further concealed, so that an unauthorized party cannot learn any details about the access policy or the encrypted data. The simulation results demonstrated that our CP-ABE preserves Cloud data privacy without incurring large ciphertext sizes, while achieving competitive encryption time. As in the case of the passive attack, the security analysis also showed that the enhanced CP-ABE is secure enough to be executed in a real environment. Cloud data privacy has become an essential feature in multi-tenant computing environments, where any form of data disruption is unacceptable. Data Availability Statement: Not applicable; the study does not report any data.
"Computer Science"
] |
Duality for Convex Monoids
Every C*-algebra gives rise to an effect module and a convex space of states, which are connected via Kadison duality. We explore this duality in several examples, where the C*-algebra is equipped with the structure of a finite-dimensional Hopf algebra. When the Hopf algebra is the function algebra or group algebra of a finite group, the resulting state spaces form convex monoids. We prove that each of these convex monoids can be obtained from the other one by taking a coproduct of density matrices on the irreducible representations. We also show that the same holds for the tensor product of a group algebra and a function algebra.
Introduction
States and observables of a physical system are connected via dualities between certain categories. There are several dualities that can be used for this connection. Known examples include the Gelfand duality theorem and the Kadison duality theorem. For a system in classical physics, the state space is modeled by a topological space, and the observables are given by functions on this space. In this way, the algebra of observables forms a commutative C*-algebra. The celebrated Gelfand theorem states that the category of locally compact Hausdorff spaces is dually equivalent to the category of commutative C*-algebras, and thus it provides an intimate connection between states and observables. A useful special case occurs when the C*-algebras under consideration have a unit. Gelfand duality in this setting states that the category of unital C*-algebras is dually equivalent to the category of compact Hausdorff spaces.
Gelfand duality does not apply to quantum mechanical systems, since their algebra of observables is in general a non-commutative C*-algebra. There is no good non-commutative analogue of Gelfand duality, but there is a duality theorem due to Kadison that can be useful for describing quantum systems. Kadison duality is not based on C*-algebras, but on the unit interval within a unital C*-algebra. This unit interval forms a structure called an effect module, and there is a dual equivalence between a certain category of effect modules and a certain category of convex spaces. The state space of a quantum system forms a convex space and the corresponding effect module contains its observables; hence Kadison duality connects states and observables of quantum systems. It does not directly generalize Gelfand duality, since the unit interval of a C*-algebra contains less information than the C*-algebra itself.
When studying physical systems, one often wants to take the symmetry group of the system into account. In the C*-algebraic picture, this leads to quantum groups. For ordinary Gelfand duality, we use locally compact Hausdorff spaces as state spaces. If we take the symmetry of a system into account, the state space becomes a locally compact group. On the dual side, this gives a coalgebra structure on the C*-algebra, making it into a structure called a quantum group. There is an analogue of the Gelfand duality theorem that takes the symmetry into account. This theorem states that the category of compact (Hausdorff) groups is dually equivalent to the category of commutative compact quantum groups.
Summarizing, there are two dualities involving topological spaces and C*-algebras: one for systems without symmetry, and one for systems with symmetry. Furthermore, Kadison duality relates convex spaces and effect modules for systems without symmetry. In this article we shall describe a variant of Kadison duality for systems with symmetry. This will lead to a notion of a quantum group whose underlying algebra is an effect module instead of a C*-algebra. Schematically, we wish to complete the following diagram: The categories and functors occurring in this diagram will be explained in more detail in the next section. We will restrict our attention to finite groups. In the theory of C*-algebraic quantum groups, there is only one way to assign a commutative quantum group or Hopf algebra to any finite group. We show that there are two ways to assign an effect module (and a dual convex space) to a finite group, arising from two different Hopf algebras associated to the group. Both ways to form "effect quantum groups" are related via a version of Pontryagin duality. The outline of this paper is as follows. Section 2 contains preliminary material about convex spaces, effect modules, and quantum groups. In particular we will describe the various dualities that connect these objects. In Section 3 we will determine the effect modules and convex spaces associated to the group algebra and the function algebra of a finite group. The two convex spaces obtained in this way are both convex monoids, that is, monoids in the category of convex spaces. The connection between these two monoids will be established in Section 4. We will prove that both convex monoids determine each other via essentially the same construction: if V_1, ..., V_k are the irreducible linear representations of either of these monoids, then the coproduct DM(V_1) + ... + DM(V_k) is a convex monoid isomorphic to the other one. Finally, in Section 5, we will prove a related result for the tensor product of a group algebra and a function algebra.
Preliminaries
We will present the dualities alluded to in the Introduction in more detail here. The most basic duality that we will use is Gelfand duality. Throughout this paper, we will assume that all C*-algebras we encounter have a unit. Write C* for the category of C*-algebras with *-homomorphisms as maps. The full subcategory of commutative C*-algebras is denoted cC*. Furthermore, write KHaus for the category of compact Hausdorff spaces with continuous maps. If X is a compact Hausdorff space, then the collection C(X) of continuous complex-valued functions on X is a commutative C*-algebra with pointwise operations. This construction gives a contravariant functor C from KHaus to cC* by letting it act on morphisms via precomposition. The Gelfand spectrum provides a functor in the other direction: if A is a commutative C*-algebra, then its spectrum Spec A = Hom_cC*(A, C) is a compact Hausdorff space. The spectrum construction forms a contravariant functor from cC* to KHaus, again using precomposition.
Theorem 1 (Gelfand). The compositions C ∘ Spec and Spec ∘ C are naturally equivalent to the identity functor. Hence the categories KHaus and cC* are dually equivalent.
There is a more general version of Gelfand duality involving non-unital C*-algebras and locally compact spaces, but we will only be concerned with compact spaces in the remainder of this article.
The Gelfand theorem justifies viewing C*-algebras as a non-commutative generalization of spaces. Similarly, it is useful to have a non-commutative generalization of topological groups. This gives the notion of a quantum group. There are several definitions of quantum groups; here we will use the compact quantum groups of Woronowicz [9]. For a general overview of the theory of quantum groups, see [8].
If G is a compact Hausdorff group, then its function algebra C(G) is a commutative C*-algebra. It can be made into a compact quantum group by defining the comultiplication ∆ : C(G) → C(G) ⊗ C(G) by ∆(f)(g, h) = f(gh). This construction provides a group-theoretic analogue of Gelfand duality. Instead of compact spaces, we use compact groups. They constitute a category KGrp with continuous homomorphisms as maps. Morphisms between compact quantum groups are unital *-homomorphisms preserving the comultiplication. They make compact quantum groups into a category KQGrp. As in Gelfand duality, we want to consider the full subcategory CKQGrp of commutative compact quantum groups.
Theorem 3. The functor C : KGrp^op → CKQGrp is a dual equivalence between the category of compact Hausdorff groups and the category of commutative compact quantum groups.
If A is a commutative compact quantum group, then the underlying space of its dual group G is the spectrum of A, considered as a C*-algebra. The multiplication on G arises from the comultiplication on A.
There is another way to assign a compact quantum group to a finite group G, namely the group algebra C[G]. The elements are again functions from G to C, but now the multiplication is given by convolution: (f * h)(g) = Σ_{k∈G} f(k) h(k⁻¹g). The standard basis of C[G] consists of the Dirac functions λ_g for g ∈ G, defined by λ_g(g) = 1 and λ_g(h) = 0 for h ≠ g. The convolution product assumes a particularly easy form on these basis vectors, namely λ_g * λ_h = λ_{gh}. The comultiplication is defined on basis vectors by ∆(λ_g) = λ_g ⊗ λ_g.
Effect algebras and modules.
Another duality that we will use involves the effects in a C*-algebra. Effects represent probabilistic measurements that can be performed on a physical system. Let A be any C*-algebra. An element a in A is said to be positive if it can be written as a = b*b for some b ∈ A. Positivity can be used to define an order on the self-adjoint part of A, called the Löwner order. Let a, b be self-adjoint elements in A; then we say that a ≤ b if and only if b − a is positive. An effect in A is a self-adjoint a ∈ A for which 0 ≤ a ≤ 1.
Effects in a C*-algebra can be organized into an algebraic structure called an effect module. Effect modules were introduced in [4], based on earlier work on effect algebras, which started in [1]. For an overview of the theory of effect algebras, see [2].
Roughly speaking, an effect module looks like a vector space, but the addition is only a partial operation (since the sum of two effects may lie above 1), and we can only multiply by scalars in the unit interval [0, 1]. Instead of complements with respect to 0, we have complements with respect to 1. This means that for every effect a there exists an effect b for which a + b = 1. The precise definition is as follows.
Definition 4. An effect module consists of a set A equipped with a partial binary operation ⊞ called addition, a unary operation (−)^⊥ called orthocomplement, a scalar multiplication • : [0, 1] × A → A, and constants 0, 1 ∈ A, subject to the following axioms:
• The operation ⊞ is commutative, which means that whenever a ⊞ b is defined, then also b ⊞ a is defined, and a ⊞ b = b ⊞ a.
• The operation ⊞ is associative, which means that if a ⊞ b and (a ⊞ b) ⊞ c are defined, then also b ⊞ c and a ⊞ (b ⊞ c) are defined, and (a ⊞ b) ⊞ c = a ⊞ (b ⊞ c).
Effect modules form a category EMod, in which the morphisms are functions preserving addition, orthocomplement, scalar multiplication, and the constants 0 and 1.
The easiest example of an effect module is the unit interval [0, 1]. The partial operation is addition, where a ⊞ b is defined if and only if a + b ≤ 1. The orthocomplement is given by a^⊥ = 1 − a, and the scalar multiplication is simply the multiplication on [0, 1]. Another example is the set of effects in a C*-algebra, with the same operations. If A is a C*-algebra, then its collection of effects is denoted Ef(A). Any Hilbert space H gives rise to a C*-algebra B(H), hence to an effect module Ef(B(H)). We will often abbreviate this to Ef(H).
More generally, every partially ordered vector space V over R gives rise to an effect module. Pick an element u ∈ V for which u > 0; then the interval [0, u] = {v ∈ V | 0 ≤ v ≤ u} is an effect module. Addition serves as the partial binary operation, and the orthocomplement is v^⊥ = u − v. The scalar multiplication is obtained by restricting the scalar multiplication from R to [0, 1]. In fact, every effect module is an interval in some partially ordered R-vector space, as shown in [4, Theorem 3.1].
To work with infinite-dimensional vector spaces, it is often necessary to require that they are complete in a certain metric. The same holds for effect modules. If A is an effect module, then a state on A is a morphism σ : A → [0, 1]. The collection of all states is written as St(A). Define a metric on A via d(a, b) = sup_{σ∈St(A)} |σ(a) − σ(b)|. We call the effect module A a Banach effect module if it is complete in its associated metric. Banach effect modules give a full subcategory of EMod, written BEMod.
2.2. Convex spaces. The state space of an effect module is always a compact convex space. We will make this observation more precise by defining a suitable category of compact convex spaces, following [10]. A topological vector space is said to be locally convex if its topology has a base of convex open sets. Let KConv be the category whose objects are compact convex subspaces of a locally convex vector space. A subspace X ⊆ V is called convex if, for all x, y ∈ X and λ ∈ [0, 1], we have λx + (1 − λ)y ∈ X. The state space of an effect module A is contained in the vector space of functions ϕ : A → R, which is locally convex. Therefore St(A) is an object in the category KConv, and St is a contravariant functor from BEMod to KConv. The functor Hom_KConv(−, [0, 1]) is a contravariant functor in the other direction. The following result is taken from [5, Theorem 6], but see also [10, Section 4].
Theorem 5. The functors St and Hom_KConv(−, [0, 1]) are inverses of each other. Hence the categories KConv and BEMod are dually equivalent.
Examples 6. We give some examples of convex spaces and their dual effect modules.
(1) Let X be a finite set, and let D(X) denote the convex space of probability distributions on X. This can be visualized as the standard simplex whose vertices are the points of X. An element f ∈ D(X) is usually written as a formal convex combination Σ_{x∈X} a_x x, where the coefficients are the function values a_x = f(x); they are subject to the condition Σ_x a_x = 1. This construction gives a functor D : FinSets → KConv, where on a morphism ϕ : X → Y we define D(ϕ)(Σ_x a_x x) = Σ_x a_x ϕ(x). The dual effect module of D(X) is Hom(D(X), [0, 1]), which is isomorphic to [0, 1]^X.
(2) In the above example, D(X) can be thought of as the set of discrete probability measures or distributions on X. There is a continuous analogue of this construction. Let X now be a compact Hausdorff space, and let Σ_X be its Borel σ-algebra. Denote the space of Radon measures on X by R(X).
A Radon measure is a probability measure µ : Σ_X → [0, 1] that is inner regular, i.e., µ(A) = sup{µ(K) | K ⊆ A, K compact} for all A ∈ Σ_X. In [3] it is shown that R forms a monad on the category of compact Hausdorff spaces. Its category of Eilenberg–Moore algebras is equivalent to KConv, so convex spaces of the form R(X) can be thought of as the free convex spaces over a compact Hausdorff space. The dual effect module of R(X) is the collection of continuous functions from X into [0, 1]. This fact is a categorical reformulation of the Riesz–Markov theorem. To see this, observe that there is a map R(X) → Hom(C(X, [0, 1]), [0, 1]) given by integration, i.e., µ → ∫ (−) dµ. The Riesz–Markov theorem states that this map is an isomorphism, so R(X) is the dual of C(X, [0, 1]). This shows that the integration map fits into a commuting diagram connecting Gelfand and Kadison duality.
(3) Let H be a Hilbert space. A density matrix on H is a positive trace-class operator ρ : H → H with trace 1. The collection of all density matrices forms a convex space denoted DM(H). The importance of this example lies in its connection to the effects on H: there is an isomorphism Ef(H) → Hom(DM(H), [0, 1]) that maps an effect a to the function ρ → tr(ρa). Because this map is an isomorphism, Ef(H) is the dual effect module of DM(H).
There are several ways to construct new convex spaces from old ones.In the remainder of this paper we will sometimes use coproducts and tensor products of convex spaces, so we will describe these briefly here.
The category KConv has all coproducts. The coproduct of two convex spaces can be described geometrically, using the embedding in a locally convex vector space. The following description is a slight modification of the construction in [10]. Suppose that X ⊆ V and Y ⊆ W are compact convex subsets of locally convex vector spaces. Then the coproduct X + Y can be embedded in the vector space V ⊕ W ⊕ R. To construct this coproduct, embed X in this larger vector space via the inclusion x → (x, 0, 1), and embed Y via the inclusion y → (0, y, 0). The convex hull of the disjoint union of X and Y is the coproduct of X and Y. This is made precise in the following.
Proposition 7. If X ⊆ V and Y ⊆ W are objects in the category KConv, then their coproduct is X + Y = {(rx, (1 − r)y, r) | x ∈ X, y ∈ Y, r ∈ [0, 1]} ⊆ V ⊕ W ⊕ R.
Proof. Define embeddings i_X : X → X + Y and i_Y : Y → X + Y via i_X(x) = (x, 0, 1) and i_Y(y) = (0, y, 0). Given affine maps f : X → Z and g : Y → Z, define h : X + Y → Z by h(rx, (1 − r)y, r) = rf(x) + (1 − r)g(y).
Then h ∘ i_X = f and h ∘ i_Y = g, so it remains to be shown that h is the unique map with this property. Suppose that h′ : X + Y → Z is an affine map with h′ ∘ i_X = f and h′ ∘ i_Y = g. Since every element of X + Y is a convex combination (rx, (1 − r)y, r) = r·i_X(x) + (1 − r)·i_Y(y) and h′ is affine, we get h′(rx, (1 − r)y, r) = rf(x) + (1 − r)g(y) = h(rx, (1 − r)y, r), which proves uniqueness.
Example 8. Denote the one-point convex space by 1. The coproduct 1 + ... + 1 of n copies of this space is the convex hull of n points, embedded in R^(n−1) in such a way that they are all affinely independent. Therefore this coproduct is the standard simplex D(n).
We continue with a discussion of the tensor product of compact convex spaces. If X, Y, and Z are compact convex spaces, then a map X × Y → Z is called bi-affine if it is affine in both variables separately. A tensor product of X and Y is a compact convex space X ⊗ Y equipped with a bi-affine map ⊗ : X × Y → X ⊗ Y such that for every compact convex space Z and every bi-affine f : X × Y → Z there exists a unique affine map g : X ⊗ Y → Z such that g ∘ ⊗ = f. Semadeni proves in [10] that any two compact convex spaces admit a tensor product, and that it is unique up to isomorphism.
The above tensor product enjoys many good properties. The one-point convex space 1 acts as a unit for the tensor. Furthermore, the tensor product distributes over coproducts. From these two facts, together with the isomorphism D(n) ≅ 1 + ... + 1, it can be deduced that the tensor product of standard simplices is D(n) ⊗ D(m) ≅ D(nm).
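Spelled out, the chain of isomorphisms behind this deduction is short; the following display is a sketch that uses only the unit law and the distributivity just stated:

```latex
\begin{align*}
D(n) \otimes D(m)
  &\cong \big(\underbrace{1 + \cdots + 1}_{n}\big)
          \otimes \big(\underbrace{1 + \cdots + 1}_{m}\big)
      && \text{(Example 8)}\\
  &\cong \underbrace{(1 \otimes 1) + \cdots + (1 \otimes 1)}_{nm}
      && \text{(distributivity)}\\
  &\cong \underbrace{1 + \cdots + 1}_{nm} \;\cong\; D(nm)
      && \text{(unit law, Example 8)}
\end{align*}
```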
Kadison duality for group and function algebras
Let G be a finite group. This gives rise to two Hopf algebras, or compact quantum groups, namely the function algebra C(G) and the group algebra C[G]. Of these two Hopf algebras, the function algebra is commutative but in general not cocommutative, while for the group algebra it is the other way round. Therefore the duality from Theorem 3 only applies to the function algebra C(G). However, Kadison duality also applies to the unit intervals of non-commutative C*-algebras, so we can use it for both the group algebra and the function algebra.
Definition 9. A convex monoid is an object X of the category KConv, together with a continuous multiplication map • : X × X → X and a constant 1 ∈ X, such that
• the operation • is associative with unit 1;
• the operation • is affine in both variables separately, that is, (λx + (1 − λ)y) • z = λ(x • z) + (1 − λ)(y • z), and similarly for convex combinations on the right.
Equivalently, a convex monoid is a convex space X equipped with a map X ⊗ X → X that is associative and has a unit.
A variant of quantum groups in the framework of Kadison duality should give a duality between effect modules with a comultiplication and convex monoids. In this section we describe these objects for the Hopf algebras C(G) and C[G]. We will start with the function algebra C(G). In fact, this algebra can be defined for any compact group G, so we will determine the effect module and convex space associated to C(G) for an arbitrary compact group G. The multiplication on R(G) can also be described directly in terms of the multiplication on G. Applying the functor R to the multiplication map G × G → G provides a convex monoid structure on R(G), which is the dual of Ef(C(G)). This convex monoid has been studied categorically in [6].
The multiplication on the group algebra C[G] is more complicated than the one on the function algebra. Therefore the Löwner order on C[G] and the associated effect module are also more difficult to compute explicitly. The algebra C[G] is simultaneously a C*-algebra and a Hilbert space, and the algebra structure is compatible with the inner product, so C[G] forms a Hilbert algebra. We shall use some general facts about Hilbert algebras to compute the effect module and the state space of C[G].
Proposition 14. Let V be a unitary representation of G. Write the decomposition of V into irreducible representations as V ≅ n_1 V_1 ⊕ ... ⊕ n_k V_k, where n_i V_i denotes the direct sum of n_i copies of the irreducible representation V_i. Then every intertwining effect on V corresponds to a tuple of effects on C^(n_1), ..., C^(n_k).
Proof. An intertwining effect ε : V → V can be written as a matrix of maps ε_ij : n_i V_i → n_j V_j. By Schur's Lemma, ε_ij = 0 for i ≠ j. The effects ε_ii can in turn be decomposed into an n_i × n_i matrix of maps V_i → V_i, and these are all scalar multiples of the identity by Schur's Lemma. Therefore each ε_ii corresponds to an effect on C^(n_i).
Proof. By Proposition 13, the effect module of C[G] is the effect module of intertwining effects described above. This is a Banach effect module, so we can use the duality between compact convex spaces and Banach effect modules to determine the dual space. Dualizing turns products into coproducts, so the dual space is DM(V_1) + ... + DM(V_k). Dualizing ∆ gives the multiplication µ on the state space St(C[G]). The convex monoid structure on the state space of the group algebra satisfies (σ • τ)(λ_g) = σ(λ_g)τ(λ_g) on basis vectors.
Convex Pontryagin duality for group and function algebras
The group algebra and the function algebra associated to a finite group are both finite-dimensional Hopf algebras. These are related via a non-commutative generalization of Pontryagin duality; see e.g. [8] for details. In the previous section, we found two convex monoids that can be obtained from a finite group: the state space D(G) of the function algebra, and the state space DM(V_1) + ... + DM(V_k) of the group algebra, where the V_i are the irreducible representations of G. This section will present a construction to convert these two convex monoids into each other. This construction can be viewed as a convex counterpart of Pontryagin duality.
Definition 16. A linear representation of a convex monoid X consists of a vector space V and a monoid homomorphism ρ : X → Aut(V) that preserves convex combinations.
As usual, a representation can also be written as an action of X on V, that is, a map X × V → V. A linear representation of a convex monoid is then required to be affine in the first variable and linear in the second variable. We will look at the linear representations of the convex monoid D(G).
Lemma 17. There is a one-to-one correspondence between representations of the finite group G and linear representations of D(G).
Proof. Representations of G are monoid homomorphisms G → Aut(V), since all monoid homomorphisms between groups are automatically group homomorphisms. Linear representations of D(G) are monoid homomorphisms D(G) → Aut(V) that are also morphisms of convex spaces. Since D(G) is the free convex space generated by G, there is a one-to-one correspondence between maps of sets G → Aut(V) and maps of convex spaces D(G) → Aut(V). It is easy to check that this correspondence restricts to monoid homomorphisms. This result produces an easy way to construct the state space of C[G] out of the state space of C(G), in the following steps: (1) Let V_1, ..., V_k be the irreducible linear representations of St(C(G)).
(2) Form the convex sets of density matrices DM(V_i) for each i. (3) Take the coproduct (in the category KConv) of all DM(V_i); this coproduct is the state space of C[G].
Since irreducible representations of G are the same as irreducible linear representations of D(G), this construction yields exactly the state space of C[G]. A surprising fact is that it works in both directions: if we apply exactly the same construction to the state space of C[G], we end up with the state space of C(G).
Proposition 18. Let V_1, ..., V_k be the irreducible linear representations of the convex monoid St(C[G]). Then the convex space DM(V_1) + ... + DM(V_k) is isomorphic to St(C(G)) ≅ D(G).
Proof. We will first check that all 1-dimensional linear representations of St(C[G]) are of the form ρ_g for some g ∈ G, where ρ_g(σ) = σ(λ_g). Let ρ : St(C[G]) → C be an arbitrary 1-dimensional representation. Then the map ρ extends to a function Hom(C[G], C) → C in the double dual of C[G]; hence there exists a ∈ C[G] such that σ(a) = ρ(σ) for all states σ on C[G]. We will show that a is actually an element of G ⊆ C[G]. Express a as a = a_1 λ_{g_1} + ... + a_n λ_{g_n}. Then, for any two states σ and τ, we can expand ρ(σ • τ) = (σ • τ)(a) = Σ_i a_i σ(λ_{g_i}) τ(λ_{g_i}) and ρ(σ)ρ(τ) = (Σ_i a_i σ(λ_{g_i}))(Σ_j a_j τ(λ_{g_j})).
The map ρ is a representation, so these two expressions must be equal for all states σ and τ. Comparing coefficients shows that at most one a_i is equal to 1, and all others are 0. The element a cannot be identically 0, since ρ preserves 1. Hence a is equal to λ_g for some g ∈ G, which proves that the maps ρ_g are indeed the only 1-dimensional representations.
There is only one density matrix on a 1-dimensional space. Therefore the space DM(V_1) + ... + DM(V_k) built from the irreducible linear representations of St(C[G]) has one summand for each g ∈ G, and is isomorphic to D(G). We have shown that if we start with the convex monoid St(C(G)) ≅ D(G) and apply the above construction twice, then we get back a convex space that is isomorphic to the underlying space of the original convex monoid. Now we wish to show that the multiplication is also preserved in this construction, so that we obtain an isomorphism of convex monoids, rather than just convex spaces. For this we have to endow the coproduct of density matrices with a multiplication. It is useful to have an explicit isomorphism between DM(V_1) + ... + DM(V_k) and St(C[G]). Define Ψ : C[G] → End(V_1) ⊕ ... ⊕ End(V_k) on basis vectors by λ_g → (ρ_1(g), ..., ρ_k(g)). We claim that this map is injective. Suppose that a, b ∈ C[G] are such that ρ_i(a) = ρ_i(b) for all i. Then a and b act in the same way in all irreducible representations of G. Since any representation of G can be decomposed into irreducibles, a and b act in the same way in all representations of G. In particular, they have the same action on the regular representation C[G]. Thus a = a • e = b • e = b. Since Ψ is injective and its domain has the same dimension as its codomain, it is an isomorphism. Taking states of a C*-algebra provides a contravariant functor St : C* → KConv. Therefore, applying the state functor to Ψ gives a map St(End(V_1) ⊕ ... ⊕ End(V_k)) → St(C[G]). The domain of this map can be identified with DM(V_1) + ... + DM(V_k) via the map α given by α(ρ)(A) = tr(ρA); under this identification we obtain a map Φ : DM(V_1) + ... + DM(V_k) → St(C[G]) with St(Ψ) = Φ. Since Ψ is an isomorphism and St is a functor, Φ is also an isomorphism.
Using this isomorphism, the multiplication on the coproduct of density matrices can be described explicitly. Since we are working in a coproduct, it suffices to describe T • S, where T ∈ DM(V_i) and S ∈ DM(V_j). Applying the isomorphism Φ from above gives the states λ_g → tr(T ρ_i(g)) and λ_g → tr(S ρ_j(g)) on C[G]. Multiplying these states pointwise and using properties of the trace gives the map λ_g → tr((T ⊗ S)(ρ_i ⊗ ρ_j)(g)). Since Φ is an isomorphism, there is a unique convex combination Σ_i λ_i T_i in the coproduct of density matrices whose image under Φ is this state. We define T • S to be this convex combination Σ_i λ_i T_i. With the proposition and lemma above, we have now proven the following result.
Theorem 20. Let G be a finite group, and let V_1, ..., V_k be the irreducible linear representations of the convex monoid St(C(G)). Then the convex monoid DM(V_1) + ... + DM(V_k), with the multiplication described above, is isomorphic to the convex monoid St(C[G]) with pointwise multiplication.
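For reference, the defining property of the multiplication in this theorem can be condensed into a single display (we write l for the summation index to avoid clashing with i and j):

```latex
\[
T \bullet S \;=\; \sum_{l} \lambda_l T_l ,
\qquad\text{where}\qquad
\sum_{l} \lambda_l \,\mathrm{tr}\!\big(T_l\,\rho_l(g)\big)
  \;=\; \mathrm{tr}\!\big((T \otimes S)\,(\rho_i \otimes \rho_j)(g)\big)
\quad\text{for all } g \in G .
\]
```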
Convex Pontryagin duality for a tensor product
The category of finite-dimensional Hopf algebras is self-dual. The dual of a finite-dimensional Hopf algebra A is Â = {f : A → C | f linear}. Its multiplication is derived from the comultiplication on A, and vice versa. The Hopf algebras C(G) and C[G] coming from a finite group G are duals of each other via this construction.
Let A be either C(G) or C[G]. The main result from the previous section states that if V_1, ..., V_k are the irreducible representations of the convex monoid St(A), then DM(V_1) + ... + DM(V_k) is isomorphic to St(Â). This raises the question whether this holds for all Hopf algebras. We do not yet know if this is the case in general, but we will now discuss another example of a Hopf algebra for which it holds, which may be promising for the general case.
Let G be a finite group. Consider the Hopf algebra A = C(G) ⊗ C[G], i.e., the tensor product of the function algebra and the group algebra. This Hopf algebra is neither commutative nor cocommutative. Since dualizing preserves tensor products, the dual of A is isomorphic to A itself. Thus the statement connecting the state space of A to its dual amounts to the claim that DM(V_1) + ... + DM(V_k), for V_i the irreducible linear representations of St(A), is isomorphic to St(A), which is what we wanted to show.
Modified Viterbi Algorithm with Feedback Using a Two-Dimensional 3-Way Generalized Partial Response Target for Bit-Patterned Media Recording Systems
: The ever-increasing demand for data in recent times has led to the emergence of big data and cloud data. The growth in these fields has necessitated that data be centrally stored in data centers. To meet the need for large-scale storage systems at data centers, innovative technology such as bit-patterned media recording (BPMR) has been developed. With BPMR technology, we are able to achieve significant improvements in the areal density (AD) of magnetic data storage systems. However, two-dimensional (2D) interference is a common issue faced at high AD. Intersymbol interference and intertrack interference occur when the distance between the islands is decreased in the down-track and cross-track directions, respectively. 2D interference adversely affects the performance of BPMR. In this paper, we propose an improved modified Viterbi algorithm (MVA) exploiting feedback and a new 2D three-way form of a generalized partial response (GPR) target. The proposed MVA with feedback is superior to the previous MVA because it eliminates intertrack interference (ITI) more effectively. With the three-way GPR target, the proposed algorithm achieves more stable performance against the track misregistration effect compared to previous detection algorithms.
Introduction
With the proliferation of electronic devices, more data are constantly being generated and consumed on a daily basis. Initially, massive amounts of data were mainly produced by industrial applications. However, the advent of handhelds and social networks resulted in more data being generated at a faster rate. Additionally, the Internet of Things, which is a system of interrelated devices, is able to create and transmit data. The scale of data owing to the factors mentioned above is commonly described as big data. The data are usually centrally stored in data centers. Data centers are often required to increase their storage capacity owing to the growth in big data and cloud data services. Cloud data services have become very popular owing to their many advantages such as data synchronization across all devices and improved data security. The transition to centralized storage has also been accelerated by the implementation of 5G technology, which offers significantly higher internet speeds. To build data centers with high storage capacity, two technologies exist, namely solid state and magnetic storage. The choice is a trade-off between access speed and price per bit. The price per bit for magnetic storage is much lower than that for solid-state storage [1]. The areal density (AD) for magnetic storage is limited to approximately 1 terabit per square inch (Tb/in 2 ) (0.155 Tb/cm 2 ) owing to the superparamagnetic phenomenon [2]. To combat this problem, several new technologies have been proposed, such as heat-assisted magnetic recording [3], microwave-assisted magnetic recording [4], two-dimensional (2D) magnetic recording [5], and bit-patterned media recording (BPMR) [6].
BPMR is an extremely promising technology to overcome the AD limitations seen in existing magnetic storage. For BPMR-based systems, data bits are stored on magnetic islands. To increase the AD, the distance between the magnetic islands has to be reduced. This increases the intersymbol interference (ISI) and intertrack interference (ITI) in the down-track and cross-track direction, respectively. Owing to the increase in 2D interference, such as ISI and ITI, the BPMR channel performance in terms of bit error rate (BER) is negatively affected.
Error correction codes and modulation codes are two common signal processing methods to minimize 2D interference. Jeong and Lee [7] proposed the use of modulation code and multilayer-perceptron decoding for BPMR to reduce and correct errors due to ITI, respectively. An error-correcting 5/6 modulation code was introduced by Nguyen and Lee [8] to reduce 2D interference and correct errors. Buajong proposed a combination of a rate-3/4 modulation code and ITI subtraction in [9] to reduce ITI. In addition, an equalizer and a detector with a generalized partial response (GPR) target can be used to minimize 2D interference. Kim proposed a 2D soft-output Viterbi algorithm (2D SOVA) [10] to reduce the effects of 2D interference, which was further developed into an iterative 2D SOVA for bit-patterned media [11]. In addition, the modified Viterbi algorithm (MVA) proposed by Nabavi, Kumar, and Zhu in [12] is widely used [13-16]. However, the MVA [12] was only used as part of the detector in those studies. Therefore, in this paper, we propose a three-way GPR target utilizing a three-way MVA [12] with feedback, which is a substantial improvement over the original standalone MVA [12]. Besides, there are detection schemes based on the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm [17,18]. In [17], an iterative row-column soft decision feedback algorithm (IRCSDFA) is presented; in [18], the complexity of the IRCSDFA is reduced by applying a Gaussian approximation (IRCSDFA-GA). Although IRCSDFA-GA has low complexity compared to IRCSDFA, the main detection is still the BCJR algorithm, which is much more complex than the MVA.
The rest of the paper is organized as follows. Section II discusses the N-way GPR target and N-way detection. Section III explains the proposed model and the details of our algorithm. The simulation results and conclusions are discussed in Sections IV and V, respectively.
N-Way GPR Target and N-Way Detection
In this study, we propose a method utilizing an N-way GPR target and N-way detection, where N is the number of neighboring symbols affecting the current symbol.
N-Way GPR Target
To understand how the N-way GPR target works, we first explain N-way interference. Consider a channel with a 3 × 3 matrix C whose entries are the interference coefficients c_{m,n} for m, n ∈ {−1, 0, 1}, with c_{0,0} corresponding to the current symbol. If we send a serial one-dimensional (1D) signal through the channel C, the interference affecting both sides of the current symbol, also known as two-way interference, is represented by the coefficients c_{0,−1} and c_{0,1}. If we send a 2D signal through the channel, the interference instead affects four sides of the current symbol, represented by the coefficients c_{0,−1}, c_{0,1}, c_{−1,0}, and c_{1,0}. This is referred to as four-way interference. For three-way interference, we look at the data edges of the 2D signal (as shown in Figure 1). At the data edges, the interference coefficient is equal to 0 on one of the four-way interference paths, because there is no signal at that position. For example, for data at the left edge, the interference coefficient c_{0,−1} becomes zero, and at the top edge, the coefficient c_{−1,0} is equal to zero. To estimate the coefficients of the N-way interference, we utilize the associated N-way GPR target. Figure 2 summarizes the N-way GPR targets. In the first example, the interference comes from the left and right sides of the current data; this is referred to as two-way interference, with a two-way GPR target. In the second example, the interference comes from the left, right, and upper (or lower) sides of the data; this is referred to as three-way interference, with a three-way GPR target. In the final example, interference comes from the left, right, upper, and lower sides of the data; this is referred to as four-way interference, with a four-way GPR target.
N-Way Detection
When estimating a GPR target, the output signal will be similar to the original signal passing through the target. Therefore, the target structure has an effect on the value of the output signal. As the channels are usually symmetrical, we can assume that, irrespective of ITI, the 2D target takes the down-track form [r 1 r], where r is the down-track interference coefficient.
The noiseless output values for this target, namely [−2r−1, −1, 2r−1, −2r+1, 1, 2r+1], are listed in the form of a trellis diagram, as shown in Figure 3. The detection scheme described above is known as the conventional Viterbi algorithm (VA). To implement the MVA [12], we add and subtract a small coefficient p to the output value of each branch. The trellis diagram of the MVA is shown in Figure 4. In the MVA, the upper and lower channel interference coefficients are denoted by ε, corresponding to four-way interference. Therefore, the detection for four-way interference is referred to as four-way detection; as interference originates from both above and below the signal, p = 2ε. For three-way interference, the same MVA trellis diagram is used, and the corresponding detection is a three-way detection with p = ε, because the interference originates only from above or below the signal. Finally, in the case of two-way interference, the corresponding detection is a two-way detection, where p = 0; its trellis diagram is the same as that of the conventional Viterbi algorithm (VA). Figure 5 illustrates the branch values for each N-way detection. In [12], the MVA is a combination of the two-way GPR target and four-way detection to minimize ITI. As the two-way GPR target was used, an appropriate ε had to be determined and kept constant for the algorithm. In our proposal, we use a three-way GPR target to help estimate ε; in other words, ε and the GPR target are estimated simultaneously.
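The branch structure just described can be sketched in a few lines of Python; the state convention (pairs of consecutive bits) and the enumeration of the three ITI hypotheses per branch are our plain reading of Figures 3-5, not the authors' code:

```python
import itertools

def branch_outputs(r: float, p: float):
    """Sketch of the MVA trellis for the down-track target [r, 1, r]: a
    branch from state (a_prev, a_cur) to state (a_cur, a_next) has noiseless
    output r*a_prev + a_cur + r*a_next, and the MVA considers the three ITI
    hypotheses base - p, base, base + p on every branch (p = 0, eps, 2*eps
    for two-, three-, and four-way detection, respectively)."""
    bits = (-1, 1)
    branches = {}
    for a_prev, a_cur, a_next in itertools.product(bits, repeat=3):
        base = r * a_prev + a_cur + r * a_next
        branches[((a_prev, a_cur), (a_cur, a_next))] = (base - p, base, base + p)
    return branches

# The eight branches of the modified trellis for r = 0.2, p = 0.1.
for edge, outs in branch_outputs(0.2, 0.1).items():
    print(edge, tuple(round(o, 2) for o in outs))
```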
In the proposed model, we use GPR targets represented in the matrix forms Gu and Gl, as in Equation (3). These matrices contain the three-way interference information relating to the ITI.
However, ITI may be added to the signal when applying the three-way interference. Therefore, to eliminate ITI, we implement a feedback path at the detector. Finally, we use a three-way MVA to match the three-way GPR target. The results show that our proposed model achieves better BER performance than the MVA [12] by itself. Figure 6 illustrates that the output values of the trellis diagram can be aliased if p is large. In the conventional trellis, the output value of each branch is distinct because r is small compared to 1. For the modified trellis, p is added to or subtracted from the output values, so branches can overlap because r is on a similar scale to p. For example, the branch from (−1 −1) to (−1 −1) in the conventional trellis has the value −2r−1. However, in the modified trellis, there are three possible branch values, namely −2r−1−p, −2r−1, and −2r−1+p. If the p value is too large, the value −2r−1+p will be greater than the value −1−p, or potentially greater than −1. Therefore, the branch from (−1 −1) to (−1 −1) may be confused with the branch from (−1 −1) to (1 −1). We refer to the overlapping of these values as the alias phenomenon. This phenomenon decreases the BER performance, i.e., more aliasing results in lower performance.
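A quick numeric check of this condition is easy to write down; the specific margin below (between the two branch values compared in the text) is our own quantification:

```python
def alias_margin(r: float, p: float) -> float:
    """Margin between the highest modified output of the branch with base
    -2r-1 (i.e., -2r-1+p) and the lowest modified output of a neighbouring
    branch with base -1 (i.e., -1-p); the margin equals 2r - 2p, and a
    non-positive value means the two branches alias."""
    return (-1 - p) - (-2 * r - 1 + p)

for r, p in [(0.2, 0.05), (0.2, 0.2), (0.2, 0.35)]:
    m = alias_margin(r, p)
    print(f"r={r}, p={p}: margin={m:+.2f}",
          "(aliased)" if m <= 0 else "(distinct)")
```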
Proposed System Model
In this paper, we propose a 2D three-way GPR target and a three-way MVA with feedback, applied to the BPMR channel. In addition, we use a 2D equalizer instead of the 1D equalizer of the MVA [12]. Our proposed system model is shown in Figure 7. In the proposed model, the original data u[k] are modulated into a[j,k] and stored on the BPMR medium. The BPMR channel, which includes 2D ISI as well as electronic and media noise, is presented in Section 3.1. The output of the BPMR channel, y[j,k], is balanced by the equalizer F, which is explained in Section 3.2. The estimation of the equalizer F is implemented immediately after the estimation of the targets Gu and Gl. After the training process, the relevant ITI parameters are decided by the cross-track coefficients of Gu and Gl, respectively. The equalizer output z[j,k] is detected by the proposed detection algorithm, which is analyzed in Section 3.3. Finally, the detector output â[j,k] ∈ {−1,1} is demodulated into the original signal û[k] ∈ {0,1}.
BPMR Channel
Before the data pass through the channel, the input data u[k] ∈ {0,1} are magnetized into 2D data a[j,k] ∈ {−1,1}. The data a[j,k] are then passed through the BPMR channel. At the receiver, the noise in the received data is modeled as additive white Gaussian noise (AWGN). For the simulations, a 2D Gaussian pulse response, representing the 2D island response of the BPMR channel, is expressed as in [19-21], where x and z are the down- and cross-track directions, respectively; ∆x and ∆z are the down- and cross-track bit location fluctuations, respectively; c represents the relationship between the standard deviation of a Gaussian function and PW50, the pulse width at half the peak amplitude, with c = 1/2.3548; and PWx and PWz are the PW50 components of the down- and cross-track pulses, respectively. The BPMR 2D channel island pulse response is expressed in discrete form, where j and k are the discrete indices in the down- and cross-track directions, respectively; Tx and Tz are the bit period and track pitch, respectively; and ∆ is the read-head offset in the cross-track direction. Track misregistration (TMR) is defined as the head offset size divided by the magnetic-island period, i.e., TMR = ∆/Tz (often quoted as a percentage). The readback signal y[j,k] for BPMR is given by the 2D convolution y[j,k] = Σ_m Σ_n h[m,n] a[j−m, k−n] + n[j,k], where a[j,k], h[j,k], and n[j,k] are the 2D discrete input data, the 2D channel response, and the electronic noise modeled as AWGN with zero mean and variance σ², respectively.
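The channel model of this subsection can be sketched as follows; the unit amplitude, the omitted bit-location fluctuation terms, and the index convention (j across tracks, k along the track, matching the usage in Section 3.3) are our simplifying assumptions:

```python
import numpy as np
from scipy.signal import convolve2d

def island_pulse(Tx=1.0, Tz=1.0, PWx=1.0, PWz=1.0, dz=0.0, span=2):
    """2D Gaussian island response sampled at bit period Tx and track pitch
    Tz, with read-head cross-track offset dz; c = 1/2.3548 relates PW50 to
    the Gaussian standard deviation. Amplitude is normalized to 1 and the
    bit-location fluctuation terms are omitted for simplicity."""
    c = 1.0 / 2.3548
    j = np.arange(-span, span + 1)[:, None]   # cross-track island index
    k = np.arange(-span, span + 1)[None, :]   # down-track island index
    x = k * Tx
    z = j * Tz + dz
    return np.exp(-0.5 * ((x / (c * PWx)) ** 2 + (z / (c * PWz)) ** 2))

rng = np.random.default_rng(0)
a = rng.choice([-1, 1], size=(16, 64))        # magnetized 2D data a[j,k]
h = island_pulse(dz=0.1)                      # TMR offset of 0.1 * Tz
y = convolve2d(a, h, mode="same")             # 2D ISI + ITI readback
y += rng.normal(0.0, 0.1, size=y.shape)       # AWGN with sigma = 0.1
print(y.shape)
```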
Equalizer and GPR Target
In Figure 7, the equalizer is combined with the GPR target [22]. Here, the equalizer and GPR target are represented by matrix F and matrix G, respectively.
The output of equalizer z[j,k] is the 2D convolution of the equalizer F and the channel output y[j,k], and the output of target d[j,k] is the 2D convolution of the target G and the input data a[j,k]:

z[j,k] = Σm Σn f[m,n] y[j−m, k−n] (10)

d[j,k] = Σm Σn g[m,n] a[j−m, k−n] (11)

We define the error signal e[j,k] between the equalizer and GPR target outputs as follows:

e[j,k] = z[j,k] − d[j,k] (16)

Using expressions (10), (11), and (16), and writing f and g for the column-vector forms of F and G, we can express the error as

e[j,k] = f^T y − g^T a (17)

Then, the mean square error (MSE) can be expressed as follows:

E{e²[j,k]} = E{(f^T y − g^T a)(f^T y − g^T a)^T} = f^T R f − 2 f^T T g + g^T A g (18)

where R is the auto-correlation matrix of the channel output data, T is the cross-correlation between the input data and the channel output data, and A is the auto-correlation of the input data. Specifically, R = E{yy^T}, T = E{ya^T}, and A = E{aa^T}, where E denotes the expectation and superscript T is the transpose operator.
To exclude the trivial solution f = g = 0 when minimizing the MSE in (18), a constraint must be imposed on g, and this constraint depends on the N-way GPR target. First, we set the constraint for the two-way GPR target, whose matrix form G is

G = [ 0 0 0 ; g0,−1 g0,0 g0,1 ; 0 0 0 ]

For the three-way GPR target on the upper ITI, the matrix form Gu is

Gu = [ 0 g−1,0 0 ; g0,−1 g0,0 g0,1 ; 0 0 0 ]

For the three-way GPR target on the lower ITI, the matrix form Gl is

Gl = [ 0 0 0 ; g0,−1 g0,0 g0,1 ; 0 g1,0 0 ]

Finally, for the four-way GPR target [20], the matrix form G is

G = [ 0 g−1,0 0 ; g0,−1 g0,0 g0,1 ; 0 g1,0 0 ]

In each case, the constraint fixes the center coefficient of the target, g0,0 = 1, which can be written in vector form as I^T g = c, where I selects the constrained coefficient. With these constraints, we can derive the following Lagrange function:

J = f^T R f − 2 f^T T g + g^T A g + 2 λ^T (I^T g − c)

where λ is a vector containing the Lagrange multipliers. By setting the gradients of J with respect to f, g, and λ to zero vectors, we obtain the optimized target and equalizer coefficient vectors as follows:

λ = (I^T B^−1 I)^−1 c (40)
g = B^−1 I λ (41)
f = R^−1 T g (42)

where B = A − T^T R^−1 T.
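Given sample estimates of R, T, and A from training data, the closed form in (40)-(42) is a few linear solves. The sketch below (a minimal NumPy rendering of the standard constrained-MMSE solution; the single monic constraint is our assumption) returns the optimized target and equalizer vectors:

```python
import numpy as np

def design_gpr(R, T, A, I, c=1.0):
    """Constrained MMSE design: minimize f'Rf - 2f'Tg + g'Ag s.t. I'g = c.

    Implements lam = (I' B^-1 I)^-1 c, g = B^-1 I lam, f = R^-1 T g,
    with B = A - T' R^-1 T, following Eqs. (40)-(42)."""
    RinvT = np.linalg.solve(R, T)       # R^-1 T
    B = A - T.T @ RinvT                 # effective covariance of the error
    BinvI = np.linalg.solve(B, I)
    lam = c / (I @ BinvI)               # scalar Lagrange multiplier here
    g = BinvI * lam                     # optimized GPR target coefficients
    f = RinvT @ g                       # optimized equalizer coefficients
    return g, f
```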
Three-Way GPR Target and Three-Way MVA with Feedback
In this section, we consider a channel represented by the following matrix form H:

H = [ h−1,−1 h−1,0 h−1,1 ; h0,−1 h0,0 h0,1 ; h1,−1 h1,0 h1,1 ]

Therefore, we can rewrite the output of the channel as follows:

y[j,k] = Σm Σn hm,n a[j−m, k−n] + n[j,k], m, n ∈ {−1, 0, 1}

In our proposed system, we use a three-way GPR target with the matrix form in (46) or (47). These targets are achieved by applying (40)-(42) with the appropriate constraints. We did not choose the two-way GPR target because it ignores the ITI information, while the four-way GPR target would create a large alias, which degrades the performance. We explain this based on simulations in Section 4.
If the TMR offset makes the upper ITI dominant, we use the form in (46); if it makes the lower ITI dominant, we use the form in (47). In other words, based on g−1,0 and g1,0, we choose the form in (46) if g−1,0 ≥ g1,0 and the form in (47) if g−1,0 < g1,0. In the following analysis, we assume the upper ITI is dominant; the analysis is similar when the lower ITI dominates.
The output signal du[j,k] of the GPR target is expressed as follows:

du[j,k] = g−1,0 a[j−1,k] + r a[j,k−1] + a[j,k] + r a[j,k+1]

where the g0,k coefficients are [r 1 r].
After performing the minimum mean square error (MMSE) algorithm, the output signal of equalizer z[j,k] will be close to the output signal of the GPR target.
After equalization, z[j,k] ≈ du[j,k], but the equalized signal still contains interference from the neighboring tracks. For the a[j−1,k] term, we design a feedback path with a factor of g−1,0 to eliminate this component from detection, since row j−1 has already been detected. For a[j+1,k], we exploit three-way MVA detection: as a[j+1,k] takes a value in {−1,1}, the coefficient p is added to and subtracted from each branch in the trellis diagram, similar to the three-way MVA. (Finding the coefficient p is presented in Section 4.) We refer to this as the three-way MVA with feedback detection, shown in Figure 8. Our proposed method is summarized as Algorithm 1; for easier visualization, we also present Algorithm 1 in Figure 9.
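The detection flow can be sketched compactly. The Python fragment below is our own illustration of the three-way MVA with feedback (not the authors' implementation): the state layout, the free boundary states, and the per-branch minimum over the three candidate outputs are simplifying assumptions.

```python
import numpy as np

def viterbi_3way(z_row, r, p):
    """Three-way MVA over one row for the cross-track target [r 1 r].

    State = (a[k-1], a[k]); the branch to (a[k], a[k+1]) has the nominal
    output r*a[k-1] + a[k] + r*a[k+1], and each branch allows the three
    candidate outputs out - p, out, out + p for the unknown ITI bit."""
    states = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    cost = {s: 0.0 for s in states}          # free (unknown) boundary
    back = []
    for zk in z_row:
        new_cost, new_back = {}, {}
        for s_new in states:
            best, arg = np.inf, None
            for s_old in states:
                if s_old[1] != s_new[0]:     # enforce path consistency
                    continue
                out = r * s_old[0] + s_old[1] + r * s_new[1]
                m = min((zk - out - d) ** 2 for d in (-p, 0.0, p))
                if cost[s_old] + m < best:
                    best, arg = cost[s_old] + m, s_old
            new_cost[s_new], new_back[s_new] = best, arg
        cost = new_cost
        back.append(new_back)
    s = min(cost, key=cost.get)              # best final state
    bits = []
    for bk in reversed(back):                # trace back: bit a[k] = s[0]
        bits.append(s[0])
        s = bk[s]
    return np.array(bits[::-1])

def detect_page(z, g_up, r, p):
    """Three-way MVA with feedback: rows are detected top-down and the
    already-detected upper row, scaled by g_up, is cancelled first."""
    a_hat = np.zeros(z.shape)
    prev = np.zeros(z.shape[1])              # no track above the first row
    for j in range(z.shape[0]):
        a_hat[j] = viterbi_3way(z[j] - g_up * prev, r, p)
        prev = a_hat[j]
    return a_hat
```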
Simulation Results and Discussion
In this section, we simulated the model shown in Figure 7. The original signal u[k] ∈ {0,1} with dimension 1 × 1,440,000 is converted by modulation into a 2D signal a[j,k] ∈ {−1,1} with dimensions of 1200 × 1200, which is also the size of a data page. Input data a[j,k] pass through the BPMR channel with AWGN. The output of channel y[j,k] becomes the input to a 2D equalizer. We used a 2D equalizer with dimensions 5 × 5; the equalizer coefficients and the GPR target were estimated by calculating the error signal e[j,k] and applying the MMSE algorithm [20]. Then, the output of equalizer z[j,k] is passed through our detection scheme. Finally, the output â[j,k] from the detection scheme is demodulated to obtain û[k]. In this paper, we define the channel signal-to-noise ratio (SNR) as 10 log10(1/σ²), where σ² is the AWGN power. In this experiment, we simulated 10 pages of sample data and an areal density (AD) of 3 Tb/in² (0.465 Tb/cm²) (Tx = Tz = 14.5 nm) [9]. Our program is built in Matlab R2018a. The coefficients used in the channel simulation are as follows:

Figure 10 shows the BER performances for the 2-, 3-, and 4-way GPR targets with their associated 2-, 3-, and 4-way MVAs, respectively. In this experiment, we did not use the feedback path, and the coefficient for the N-way MVA was set from the GPR target such that p = max(g−1,0, g1,0), which is the ITI coefficient from the estimated GPR target. We found that the two-way method, which corresponds to the conventional VA algorithm, achieved the best BER performance. This shows that our signal had additional ITI information when using the three- and four-way GPR targets; when the three- and four-way MVAs were used by themselves, we were not able to take full advantage of the ITI information. For the two-way GPR target, there was no ITI information, whereas for the four-way GPR target, there was too much ITI information, which resulted in a large alias. Therefore, we chose a three-way GPR target with a three-way MVA so that our model would be able to utilize the ITI information in the detection process.

In addition, the accuracy of the detection of the first row is very important owing to the addition of a feedback path in the model. If the first row has many errors, these errors will propagate through the following rows of the algorithm. Therefore, we monitored the accuracy of the detection of the first rows in all data pages by calculating the BER performance for the first rows under each N-way GPR target and N-way MVA method. In other words, we take the first row of data in each page and then apply the 2-, 3-, and 4-way GPR targets with the 2-, 3-, and 4-way MVAs. The results are shown in Figure 11. The three-way GPR target and three-way MVA achieved the best BER performance. This is because the data in the first row are affected by three-way interference; consequently, the three-way GPR target and three-way MVA were well suited to this condition. Unlike the two-way GPR target, we were able to extract all ITI with the three-way GPR target. Meanwhile, the four-way GPR target and four-way MVA did not appear to improve the BER performance, due to the alias effect.

Figure 11. Combined N-way GPR target and N-way MVA for the data in the first row.

Figure 12 shows that the output values of the trellis are separated into two parts: one part is less than zero, whereas the other is greater than zero. However, if r or p is large enough, these components may be swapped.
For instance, when −1+2r+p originates from the negative side, there is a high chance of crossover to the positive side. Tables 1-3 show the r and p values in the simulation, where p = 0 for the two-way GPR target, p equals the ITI factor of the GPR target for the three-way GPR target, and p equals twice the ITI factor for the four-way GPR target, respectively. Based on the results, the four-way GPR target has the largest alias due to the overlap of the lower branches to the negative side and vice versa. In addition, as the data in the first row are affected by three-way interference, the three-way GPR target and three-way MVA was the most suitable scheme.
The Proposed Model
In the next experiment, we compared our proposed model to the MVA. To implement the MVA [12], we had to first determine its interference coefficient. Here, we fixed SNR = 15 dB and simulated the BER performance as a function of this coefficient; based on the results in Figure 13, we chose a value of 0.25 for the MVA in [12]. In our algorithm, the coefficient is calculated in (51). As this coefficient is the channel's ITI factor, it is equal to h1,0 or h−1,0. In practice, however, we are not able to determine the values of h1,0 and h−1,0. As an alternative, we can use the target factors g1,0 or g−1,0 instead of the interference factor. This is possible because g1,0 or g−1,0 in the GPR target is an estimate of h1,0 or h−1,0; in addition, this avoids the simulation otherwise needed to find the optimum coefficient. In Figure 14, the two-way GPR target with the four-way MVA is the MVA in [12], whereas the two-way GPR target with the two-way MVA is the conventional VA in [12]. The results in Figure 14 for the three-way GPR target and three-way MVA with feedback indicate that our proposed algorithm benefits from a more accurate estimation of the ITI coefficient. The results also show that our proposed algorithm achieves an approximately 1 dB gain over the MVA at a BER of 10−5.
TMR Effect in BPMR Channel
In this section, we study the effects of TMR. In practice, TMR degrades performance, and it is often difficult to estimate its magnitude. However, we are able to estimate the ITI under the TMR effect when using the three-way GPR target. In the model, when the TMR offset makes the upper interference dominant, h−1,0 is larger than the lower interference h1,0 (the GPR estimation yields g−1,0 > g1,0); the coefficient of the feedback line is then g−1,0, which reduces the dominant ITI, while g1,0, the estimate of the lower interference h1,0, is used as p. When the TMR offset makes the lower interference dominant, the upper ITI h−1,0 is smaller than the lower one h1,0 (the GPR estimation yields g−1,0 < g1,0); the coefficient of the feedback line is then g1,0, while g−1,0, the estimate of the upper ITI h−1,0, is used as p. In general, the feedback coefficient is equal to max(g−1,0, g1,0) and p is equal to min(g−1,0, g1,0). In our study, we simulated the channel conditions for both 10% and 15% TMR [12].
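This max/min rule is a one-liner in practice; the sketch below (our illustration, with hypothetical function and variable names) makes the selection logic explicit:

```python
def select_target_and_coeffs(g_up, g_low):
    """Choose the three-way target form and detector coefficients from the
    estimated ITI coefficients g_up = g[-1,0] and g_low = g[1,0]:
    the feedback factor takes the dominant ITI, the trellis offset p the
    weaker one, mirroring the max/min rule described above."""
    if g_up >= g_low:
        return "Gu (46)", g_up, g_low   # feedback = g_up, p = g_low
    return "Gl (47)", g_low, g_up       # feedback = g_low, p = g_up
```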
The results in Figure 15 show that our algorithm was minimally impacted by the variation in TMR, as the TMR is reflected in the ITI information and our GPR target estimation is based on the ITI.

Figure 15. BER versus SNR for 10% and 15% track misregistration (TMR).
In Figure 16, we varied the TMR value from 10 to 30% at SNR = 15 dB. This confirms that our algorithm is not significantly affected by TMR. TMR occurs when the read-head deviates from the main track, which changes the ITI on the receiving signal. The change in ITI is not quantifiable with the conventional scheme. However, with a three-way GPR target, the model is able to estimate the ITI information using g−1,0 or g1,0 when TMR occurs. This allows the algorithm to offset the amount of ITI during detection. The proposed scheme is therefore quite resistant to the TMR effect.
Media Noise in BPMR Channel
To simulate actual conditions, we performed the simulations in the BPMR channel with media noise. First, we tested with 6% and 8% position fluctuation in the simulations [23,24].
In Figure 17, the proposed algorithm was able to achieve a performance gain in the BPMR channel even with 6% and 8% position fluctuation. Then, we increased the position fluctuation further at SNR = 15 dB. Figure 18 shows that when the position fluctuation reached 18%, all tested methods had similar BER performance. This means that our proposed algorithm can only withstand a position fluctuation of less than 18%. With position fluctuation, the interference changes randomly; the interference coefficient estimator nevertheless provides the detector with the randomly changing ITI and ISI information, which makes it possible to improve performance in the BPMR channel even with media noise.
Conclusions
In this paper, we presented a detection method with improved BER performance compared to the MVA [12]. The proposed model is able to achieve an approximately 2 dB gain at a BER of 10−5. With a three-way GPR target, the proposed detection scheme can also improve BER performance when TMR and position fluctuation occur. By employing the feedback line and the target, the MVA can estimate the ITI and predict TMR; it can therefore choose the solution suitable for the given ITI and TMR cases, respectively. With a three-way GPR target, the proposed model also improved the performance against the TMR effect and withstood position fluctuations of nearly 18%. The proposed model requires training data to estimate the target before starting the detection of user data. In addition, the proposed detector is based on the MVA, which is fed the ITI information from the GPR target and the feedback line; therefore, the computational complexity is almost the same as that of conventional Viterbi methods. Finally, we expect that the BER performance of the proposed scheme could be improved further if the MVA detection block were replaced by a BCJR detector, at the cost of complexity.
Conflicts of Interest:
The authors declare no conflict of interest.
Computational inertial microfluidics: a review.
Since the discovery of inertial focusing in 1961, numerous theories have been put forward to explain the migration of particles in inertial flows, but a complete understanding is still lacking. Recently, computational approaches have been utilized to obtain better insights into the underlying physics. In particular, fundamental aspects of particle focusing inside straight and curved microchannels have been explored in detail to determine the dependence of focusing behavior on particle size, channel shape, and flow Reynolds number. In this review, we differentiate between the models developed for inertial particle motion on the basis of whether they are semi-analytical, Navier-Stokes-based, or built on the lattice Boltzmann method. This review provides a blueprint for the consideration of numerical solutions for modeling of inertial particle motion, whether deformable or rigid, spherical or non-spherical, and whether suspended in Newtonian or non-Newtonian fluids. In each section, we provide the general equations used to solve particle motion, followed by a tutorial appendix and specified sections to engage the reader with details of the numerical studies. Finally, we address the challenges ahead in the modeling of inertial particle microfluidics for future investigators.
Introduction
Microfluidics, a technology characterized by the engineered manipulation of fluids at the microscale, has shown considerable promise in point-of-care diagnostics and clinical studies [1]. Since its birth in the 1990s, this technology has matured into a complex field impacting many commercial applications [2]. Isolation, fractionation, and purification of cells using microfluidic platforms have been a flourishing area of development in recent years, with many successful translations into commercial products. Several review articles are available on particle/cell separation using microfluidic systems, some of which are presented in [3][4][5].
Microfluidic systems normally leverage the disparities in the intrinsic properties of different particle/cell populations (i.e., size, deformability, surface charge, and density) to achieve separations. These systems can be broadly classified into active and passive separation techniques. Active techniques rely on external force fields, such as acoustic or magnetic fields, for operation, while passive techniques (e.g., pinched flow fractionation, deterministic lateral displacement (DLD), and inertial microfluidics) rely only on the channel geometry and inherent hydrodynamic forces for functionality [6]. Among all existing microfluidic systems, inertial microfluidics, which takes advantage of size-dependent hydrodynamic effects in microchannels, has become a promising approach for particle and cell separation due to its capacity for high-volume and high-throughput sample processing [7][8][9][10][11]. Inertial microfluidics has been employed for various applications, influencing a wide range of industries such as microbiology, biochemistry, and biotechnology, most of which can be found in [9,10,12]. The most prevalent structures used in inertial microfluidics are demonstrated in Fig. 1A.
Inertial microfluidics is defined as the migration of randomly dispersed particles toward specific equilibrium positions inside a microchannel. This phenomenon was first reported by Segre and Silberberg in 1961 [13]. Later, with the advent of microfluidics, it was extensively explored, resulting in numerous articles evaluating the underlying physics of this phenomenon.
To date, numerous research groups have attempted to provide a numerical solution to better understand fundamental aspects of particle focusing inside inertial microfluidic systems. These numerical solutions can be divided into three distinct classifications. First, asymptotic analysis by simplification of fluid equations predicts inertial forces acting on a particle within a channel.
It has been reported that the phenomena pictured by this solution can be far from the real scenario, due to its oversimplifying assumptions and limitations. In the second classification, scientists utilize Navier-Stokes-based solutions for inertial microfluidics, where the issues of finite particle size and particle effects on the fluid streamlines that limit asymptotic solutions are addressed.

However, tracking the solid-fluid interface and the demanding calculation time for solving the equations are challenges for Navier-Stokes-based approaches. As an alternative, researchers use the lattice Boltzmann method (LBM) for inertial flow modeling. The relative simplicity of algorithm parallelization and of adding new physics, together with the strength of LBM in the intermediate Re regime, explain its rapidly increasing use for inertial microfluidics over the last decade.
Although numerous review papers exist for inertial microfluidics, there is a lack of a comprehensive review of computational inertial microfluidics. Accordingly, the primary aim of this review paper is to provide researchers with the latest updates regarding computational solutions of inertial particle motion within a microfluidic device. Here, we have reviewed all computational attempts for inertial particle motion, covering asymptotic calculations, Navier-Stokes-based approaches, and LBM.
Asymptotic solutions
In 1961, Segre and Silberberg demonstrated that a suspension of neutrally buoyant particles flowing in a tube with radius R migrates to positions 0.6R from the centerline when inertial effects are non-negligible [13]. The asymptotic solutions are among the early attempts made to predict particle migration in confined and non-confined flows. Despite their weakness in capturing specific details, such as the particle-fluid interaction and the disturbed fluid velocity profile around the particle, and their limited applicability to complex 3D domains, these solutions are convincing enough to explain the phenomenon in the most time-efficient manner.
In 1961, Brenner investigated the motion of a particle toward a semi-infinite plane surface [16].
The effect of the plane surface is considered using two distinct cases: a rigid plane surface and a free plane surface. It was found that the resistance against a particle's path increases in a wall-bounded case (e.g., particle motion within a confined channel) compared to an unbounded case (e.g., particle movement in an infinite fluid domain) when the particle moves under otherwise identical conditions; nowadays, we know that a particle experiences more resistance along its way near the walls. The results obtained for a solid plane surface and a free plane surface yield a drag force (FD) on the sphere of the form FD = 6πμbUλ, shown by Eqs. (1) and (2), where U is the fluid velocity, μ is the fluid viscosity, and λ is a correction factor determined by the ratio of the sphere radius (b) to the distance of its surface from the plane (h), as shown by Eq. (3).
In another work, the force exerted on a spinning particle moving in a viscous fluid was investigated using a combination of Stokes and Oseen expansions [17]. Since both expansions are for the same solution and each is valid over its own range, the idea was that these two expansions should be matched at their interface. Through the mathematical derivations, the lift and drag forces on the particle were extracted as Eqs. (4) and (5), respectively.
where ρ is the fluid density, Ω is the angular velocity of the particle, d is the particle diameter, and V is the relative velocity between the fluid and the particle. These two formulas are acceptable for a moving particle in an initially quiescent flow. The transverse force on a particle in a 3D Poiseuille flow was calculated based on Eq. (6), where R0 is the radius of the tube. Equation (6) describes a force directed toward the central axis, on which the force vanishes; however, since the Segre and Silberberg observations [13,18] show that particles tend to concentrate on an annulus of the tube, this formula required further development and modification.
These studies were further continued by Saffman [19], who showed that if a spherical particle travels through a highly viscous liquid with a velocity (V) relative to a uniform simple shear flow, it will encounter a lift force perpendicular to both the rotational and translational velocity vectors. If the particle lags behind the fluid, the fluid pushes the particle toward the center, and vice versa. This force can be calculated by Eq. (7):

FL = 6.46 μ a² V (κ/ν)^(1/2) (7)

here, κ is the magnitude of the velocity gradient, a is the particle radius, μ is the dynamic viscosity, and ν is the kinematic viscosity of the fluid. This model was valid only when there was a difference between the fluid and particle velocities. Besides, the effect of the wall on the velocity profile around the particle was omitted by the assumptions of this model. As a result, this model was unable to resolve the motion of particles near the wall.
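For a sense of scale, Eq. (7) can be evaluated directly; the values below (water-like fluid, 10 µm particle) are illustrative assumptions, not data from any cited study:

```python
def saffman_lift(mu, nu, a, V, kappa):
    """Saffman lift (Eq. (7)): F_L = 6.46 * mu * a^2 * V * sqrt(kappa / nu).

    mu: dynamic viscosity [Pa s], nu: kinematic viscosity [m^2/s],
    a: particle radius [m], V: slip velocity [m/s],
    kappa: magnitude of the velocity gradient [1/s]."""
    return 6.46 * mu * a**2 * V * (kappa / nu) ** 0.5

# Water-like fluid, 10 um particle, assumed slip velocity and shear rate:
F = saffman_lift(mu=1e-3, nu=1e-6, a=5e-6, V=1e-3, kappa=1e4)
print(f"Saffman lift ~ {F:.1e} N")   # on the order of 1e-11 N (tens of pN)
```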
The lateral force on a deformable sphere in an inertia-less steady shear flow was studied by Tam and Hyman [20]. A rigid particle does not experience any lift force in a creeping flow; however, a deformable particle is affected even in the Stokes flow regime. To characterize the deformation of the particle, it was assumed that the elastic displacement was sufficiently small; hence, linear elastic theory was used, and the total transverse force was obtained as Eq. (8).

The constant parameters of Eq. (8) are available in [20], together with the Lamé constants of the particle material. Through particle tracking analysis, it was shown that particles reach an equilibrium position for both simple shear and Poiseuille flow, independent of the initial particle position: particles reside on the centerline for simple shear flow and at 0.3 times half the channel height for Poiseuille flow. Moreover, a general lateral force (Eq. (9)) was proposed for κ ≪ 1, in which κ is the ratio of the particle diameter to the channel height, γ̇ is the shear rate, and G1 as well as G2 are numerically predefined functions of s, the normalized lateral location of the particle, as illustrated in Fig. 2 [21].

Fig. 2 The schematic view of a cylindrical particle inside the confined channel flow

Also, it has been illustrated that rigid particles in a second-order fluid (a well-known viscoelastic fluid model) migrate toward the centerline in Poiseuille flow, due to the force induced by the normal stress differences, and toward the outer cylinder in the case of circular Couette flow [22]. When the ratio of the Weissenberg number (Wi) to Re is much greater than unity (Wi/Re ≫ 1), the effect of the inertial terms in the Navier-Stokes equations is negligible; here β is the blockage ratio, N1 and N2 are the first and second normal stress differences, respectively, and s is the normalized vertical location of the particle inside the domain (Fig. 2).
In 1989, Schonberg et al. showed that in a 2D Poiseuille flow, by increasing Re, particles tend to migrate toward the wall [23]. However, the particle was assumed to be small enough that the velocity profile inside the channel remains undisturbed (κ ≪ 1). The singular perturbation technique was employed to solve the coupled flow and particle equations in this article. It was demonstrated that the particle equilibrium position becomes closer to the wall as Re increases, which was validated with the experimental data of Segre and Silberberg [13].
The majority of studies so far have dealt with particle migration in the small and intermediate Re flow regime (Re < 3000). However, in 1999, Asmolov investigated the lateral migration of buoyant and non-buoyant particles in 2D confined flow over a wide range of Re and presented some results for unbounded and semi wall-bounded cases [24]. It was shown that two equilibrium positions exist for neutrally buoyant particles. The net inertial lift force exerted on a neutrally buoyant particle in 2D Poiseuille flow can be determined using Eq. (11):

FL = cL ρ γ̇² a⁴ (11)

where cL is the lift coefficient, which is a function of the particle lateral position and Re, ρ is the fluid density, γ̇ is the local shear rate, and a is the particle diameter.
While many studies have addressed the migration of particles in planar fluid flow, it is crucial to investigate the effect of a realistic 3D fluid flow on particle migration in confined flow. To address this issue, Matas and his colleagues were one of the pioneers who analyzed 3D particle inertial migration over a wide range of Re in a pipe flow [25]. Accordingly, the matched asymptotic expansion was used for cylindrical pipe flow, which was previously used for the planar case [24]. Based on their results, lift force in a pipe flow was qualitatively similar but quantitatively different from the one in a planar channel flow. Furthermore, the equilibrium position of particles in the planar flow was closer to the wall, and the magnitude of the lift force in planar flow was considerably larger than that of pipe flow. For Re>700 and in the case of pipe flow, one additional equilibrium position exists [26].
Previous matched asymptotic solutions and point particle methods assumed that the particle is small enough not to disturb the fluid flow. However, particle size has a vital role in particle migration; therefore, these methods cannot fully capture the particle trajectory. To meet this demand, Hood et al. developed a new theory for the force exerted on finite-size particles within a 3D microchannel flow with a square-shaped cross-section [27]. They employed a numerical method to analyze the dominant balancing forces within the Navier-Stokes equations and then derived an asymptotic model to calculate the lift force on spherical particles as a function of their particle size. Their results were valid up to Re of 80, with the maximum particle size being limited only by the proximity of the particles to the walls. Furthermore, they illustrated that the lift force tended to zero at eight symmetrically placed points around the microchannel section; four of these points were stable, while the others were unstable. This observation was in good agreement with the experimental data [27]. Earlier, in 2018, Asmolov et al. developed a new theory for the lift force on finite-size particles in a confined microchannel flow for Re < 20 [28]. They showed that this theory might be applicable beyond Re of 20 and to other confined flows such as pipe flow. The predicted results obtained from their new model were verified by LBM. The proposed lift force is defined by Eq. (12):

Fl = ρ a⁴ γ̇² (c0(z) + c1(z) Vs + c2(z) Vs²) (12)

where a is the particle radius, γ̇ is the maximum shear rate at the wall, Vs is the slip velocity, and the coefficient functions c0, c1, and c2 are tabulated in [28].
This study proposed one of the first models for inertial migration of finite-sized particles in confined microchannel fluid flow.
In particular, this model shows a remarkable increase in the lift force near the walls.
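In practice, Eq. (12) is evaluated with the tabulated coefficient functions of [28]. The sketch below shows only the evaluation pattern; the numerical tables used for c0, c1, and c2 are hypothetical placeholders, not the published values:

```python
import numpy as np

def lift_finite_size(rho, a, gamma_w, Vs, z, c0, c1, c2):
    """Evaluate Eq. (12): F_l = rho * a^4 * gamma_w^2 *
    (c0(z) + c1(z)*Vs + c2(z)*Vs^2), with c0, c1, c2 callables that
    return the tabulated coefficients at lateral position z."""
    return rho * a**4 * gamma_w**2 * (c0(z) + c1(z) * Vs + c2(z) * Vs**2)

# Hypothetical coefficient tables, interpolated over the channel half-height:
z_tab = np.linspace(0.0, 1.0, 5)
c0 = lambda z: np.interp(z, z_tab, [0.5, 0.2, 0.0, -0.2, -0.5])      # placeholder
c1 = lambda z: np.interp(z, z_tab, [0.1, 0.04, 0.0, -0.04, -0.1])    # placeholder
c2 = lambda z: np.interp(z, z_tab, [0.02, 0.01, 0.0, -0.01, -0.02])  # placeholder
```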
Navier-Stokes-Based Solutions
Semi-analytical and asymptotic solutions based on perturbation theories can explain the physics of particle motion since they provide an explicit formula for the forces acting on the particle [29][30][31]. However, these methods have some restrictions that limit their application to a variety of scenarios, including restrictions on particle size, Re, and the distance between the particle and the wall [32][33][34][35]. There are many cases of particle motion that do not comply with these restrictions [36][37][38]. For these cases, numerical methods are powerful alternatives that can determine the motion of particles for a wide range of sizes and Re [39]. Most existing commercial computational fluid dynamics (CFD) packages employ the point particle assumption when simulating particle motion; since there is no exact formula for calculating the inertial lift forces acting on a particle, these packages cannot precisely predict the particle trajectory in inertial flows. Direct numerical simulation (DNS) can be considered an alternative and robust method, in which the forces acting on a particle are calculated from the interaction between the fluid and the particle. The most frequently used methods that adopt DNS to simulate particulate motion are briefly explained in the following.
Flow at Specific Particle Position (FSPP)
In some studies, the steady-state flow around the particle and the inertial forces acting on the particle at a specific point have been studied, obviating the need for calculating the whole particle trajectory. Accordingly, in 2009, Di Carlo [40] proposed a DNS method for calculating the steady-state flow fields around a single particle at specific positions of the channel cross-section to obtain the net inertial force acting on the particle. This method is highly efficient as it avoids problems encountered in the calculation of transient particle motion, such as remeshing at each time step. To calculate the steady-state flow field around a single particle at position (y,z) of the channel cross-section (Fig. 3A), a part of the channel with sufficient length should be considered (usually 20a is adequate for neglecting the effect of the particle-particle interaction, where a is the diameter of the particle). Then, the particle is located at the center of this part at position (y,z). The boundary conditions shown in Table 1 are implemented in the channel. In the beginning, the particle is considered to be at rest. Using the continuity and Navier-Stokes equations (Eqs. (16) and (17)), the pressure and velocity fields around the particle are calculated. Next, the values of the forces and torques exerted on the particle are obtained. Linear and angular velocities of the particle are calculated via Newton's second law (Eqs. (18) and (19)).
In these equations, ∂S and n are the particle surface and the outward unit normal vector, respectively. At the next time step, the values of the boundary conditions are updated with the velocities obtained from the previous time step, and the flow field is solved again. This iterative algorithm is terminated when the momenta in the y and z directions and the force per surface area of the particle become smaller than a specified value. Finally, the components of the inertial lift force acting on the particle in the y-z plane, i.e., Fy and Fz at point (y1, z1), are calculated. By repeating this process for different positions of the channel cross-section, the distribution of inertial forces in the cross-section of the channel can be obtained.
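The FSPP iteration can be summarized as the fixed-point loop below. The solve_flow call is a hypothetical stand-in for whichever steady CFD solver is used; only the update logic follows the algorithm described above:

```python
def fspp_lift(y, z, solve_flow, mass, inertia, dt=1e-6, tol=1e-10, max_iter=500):
    """Iterate the particle's axial velocity and spin until it is force- and
    torque-free in those degrees of freedom; the remaining lateral force
    components are the inertial lift at cross-section position (y, z).

    solve_flow(y, z, U, Omega) -> (F, T): hypothetical steady-state solver
    returning the hydrodynamic force and torque on a particle held at (y, z)
    while translating with U and rotating with Omega."""
    U = [0.0, 0.0, 0.0]                       # particle starts at rest
    Omega = [0.0, 0.0, 0.0]
    for _ in range(max_iter):
        F, T = solve_flow(y, z, U, Omega)
        U[0] += dt * F[0] / mass              # relax axial velocity (Eq. (18))
        for i in range(3):
            Omega[i] += dt * T[i] / inertia   # relax angular velocity (Eq. (19))
        if abs(F[0]) < tol and max(abs(t) for t in T) < tol:
            break
    return F[1], F[2]                         # inertial lift components (F_y, F_z)
```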
Arbitrary Lagrangian-Eulerian (ALE) method
In the ALE method proposed by Hughes et al. [41], a system of suspended rigid particles in an incompressible fluid flow is considered. The surface velocity of each particle is us = U + Ω × (x − X), where U, Ω, and X are the particle's center velocity, angular velocity, and center position, respectively. The fluid flow is solved using the incompressible continuity and Navier-Stokes equations; in most studies, a finite element scheme is used for solving the flow. At the next time step, particle positions, orientations, and their velocities are updated by integrating the total stress on the surface of each particle and applying Newton's second law (Eqs. (18) and (19)). A new mesh should be generated at this time step to recalculate the flow field. Using this iterative solution, one can track the particle trajectories at each time step. The main difficulty of this method is updating the mesh at each time step, which is a time-consuming process. In addition, if the density of a particle is less than that of the fluid, the above explicit time-stepping method leads to an unstable solution. This instability can be overcome by separating the part of the drag force that contributes to the acceleration of fluid around the particles from the total drag force [42]. For more information about the ALE method and methods for stabilizing its solution, readers can refer to [43].
Fictitious Domain Methods
As stated in the ALE method, the computational domain is re-meshed at each time step when particles move along the domain. In fictitious domain methods, a fixed mesh is used for the simulation of particulate flows. Unlike the ALE method, this mesh includes the domain inside the particles, and the flow field is solved for the entire computational domain using the Navier-Stokes equations. Moreover, to ensure that the fluid inside each particle obeys the rigid body motion, a force is applied to its domain. Fictitious methods are classified according to this applied force. If this force is applied to the surface of the particle, it is called the immersed boundary method (IBM), while if it is applied to the body of the particle, it is called the immersed body method. Here, we only describe the Distributed Lagrange Multiplier (DLM) method as a subset of the immersed body method that is frequently used for simulation of particle motion in inertial microfluidics.
Distributed Lagrange Multiplier (DLM)
The DLM method is an immersed body type of fictitious domain method introduced by Glowinski et al. [44] in the framework of the finite element method. In summary, to solve the flow, one mesh for the pressure field and one finer mesh for the fluid velocity field are required. Particle domains are discretized with additional meshes, which move with their corresponding particles. The size of the fluid mesh should be smaller than that of the particle mesh to avoid over-constraining the system. The flow field is solved over the whole computational domain, including the particles' interior. A distribution of Lagrange multipliers as body force densities is used to force the flow inside the particles to exhibit a rigid body motion. The value of the Lagrange multipliers is calculated using the rigid-body constraint u = U + Ω × (x − X) at each point of the mesh inside the particle. For more details about this method, please refer to [43].
Immersed Boundary Method (IBM)
In IBM, there are two computational meshes [45][46][47][48][49]. The first one, for the fluid, is a fixed staggered Cartesian grid referred to as the Eulerian grid. The second mesh is represented by a set of markers Xi, 1 ≤ i ≤ N, distributed evenly over the surface of the particle; this set is referred to as the Lagrangian mesh (Fig. 3B). The Lagrangian mesh has an element size of ∆S, and the markers are located at the center of each element. Since in the IBM the no-slip boundary condition is not imposed explicitly at the surface of the particle, a force distribution should be defined on the particle's surface and added to the Navier-Stokes equations to guarantee the constrained rigid motion of the particle. This constraint requires the fluid velocity to equal the local particle velocity at the Lagrangian markers (Eq. (20)). Moreover, because there are two different meshes, the values of force and velocity between the Eulerian and Lagrangian meshes should be transferred using interpolation based on the regularized Dirac delta function δh(x) [45]. To calculate the force distribution, first, the velocity predicted on the Eulerian mesh is interpolated at the Lagrangian markers (Eq. (20)). Second, the forces on the Lagrangian mesh are computed based on the difference between the interpolated velocity and the particle velocity (Eq. (21)). Finally, the force is added to the Navier-Stokes equations to calculate the flow field, and the motion of the particles follows from Newton's second law [45,46].
Fig. 3
A) Schematic illustration of particle motion in a part of the channel. The surface of the particle rotates with an angular velocity of Ω, and the walls move backward with the particle's axial velocity. Reprinted from [50] with the permission of AIP Publishing. B) Representation of the Eulerian and Lagrangian grids. The Eulerian grid is shown as a fixed staggered Cartesian grid in gray. The Lagrangian grid is represented by a set of green markers, which are distributed evenly over the surface of the particle with an element size of ∆S.
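The transfer between the two meshes is the heart of IBM. The fragment below (a minimal 1D illustration with Peskin's standard 4-point kernel; real implementations work in 2D/3D via tensor products) shows how a grid field is interpolated to a Lagrangian marker:

```python
import numpy as np

def peskin_phi(r):
    """Peskin's 4-point kernel; delta_h(x) = phi(x/h) / h in one direction."""
    r = abs(r)
    if r <= 1.0:
        return (3 - 2*r + np.sqrt(1 + 4*r - 4*r*r)) / 8
    if r <= 2.0:
        return (5 - 2*r - np.sqrt(-7 + 12*r - 4*r*r)) / 8
    return 0.0

def interpolate_to_marker(u, h, X):
    """Interpolate the grid field u (nodes at x_i = i*h) to marker position X:
    U(X) = sum_i u_i * delta_h(x_i - X) * h = sum_i u_i * phi((x_i - X) / h).
    Spreading a marker force back to the grid uses the same kernel.
    Assumes X lies away from the ends of the domain."""
    i0 = int(np.floor(X / h))
    return sum(u[i] * peskin_phi((i * h - X) / h) for i in range(i0 - 2, i0 + 3))
```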
Straight channels

Particle motion in a tube and Couette flow
The first attempts to numerically investigate particle motion in a shear flow were made by Feng et al. [36]. They considered neutrally and non-neutrally buoyant particle motion in 2D horizontal Couette and Poiseuille flows, which was followed by similar studies for 3D cases [51]. These studies used the ALE method for the simulation of particle motion and illustrated that the density difference between particle and fluid plays a crucial role in the lateral equilibrium position of particles.
Similarities in the particle motion between confined and unconfined flows motivated several groups to propose lift formulas analogous to the lift force formula of classical aerodynamics, using the ALE [39,52] and DLM methods [39]. In these studies, since the motion of a single particle was investigated, the ALE method was the better choice to obtain more precise results than DLM.
Shao et al. [37] investigated particle motion in a tube at different Re by introducing some modifications to the DLM method. They found that by an increase in Re, the equilibrium position shifts from the Segre and Silberberg equilibrium position to the inner radius equilibrium position (Fig. 4A). The critical Re in which the inner equilibrium position becomes stable is a function of the particle size and the distance between each particle in the flow direction.
Particle motion in channels with a rectangular cross-section
The physics of inertial microfluidics in channels with rectangular cross-sections is more complicated than in tubular flows. Liu et al. [38] and Mashhadian and Shamloo [53] investigated the migration of particles in a straight channel with a rectangular cross-section using the FSPP method. Particle migration in this channel can be divided into two steps: first, particles enter the channel at random positions (Fig. 4BI) and quickly focus into a band near the walls; second, they migrate more slowly along the walls toward discrete equilibrium positions. Experimental results of different studies illustrate various focusing patterns at the outlet of the channel [54,55]. To address these diversities, several researchers numerically investigated the migration of particles in straight rectangular channels [38,40,53,56,57]. Hence, the value of the inertial lift force (FL) at different positions within the cross-section should be investigated.
Here, FL acting on the particle in a rectangular channel is a function of the particle size (a), the aspect ratio of the cross-section, Re, and the position of the particle in the channel cross-section. Di Carlo and his colleagues [40] were pioneers in investigating the distribution of the inertial lift force in the cross-section of square channels using the FSPP method (Fig. 5AI). Fig. 5C shows the lift force calculated using the FSPP method for three different particle sizes at a constant Re. By increasing the particle size, the equilibrium position approaches the channel center [57], while by increasing Re, this equilibrium position moves toward the wall [40]. Also, in rectangular channels, depending on Re, there are several equilibrium positions near the channel walls [38,53]. At low Re, the equilibrium positions at the centers of the long walls are stable (Fig. 5DI). By increasing Re, the equilibrium positions near the centers of the short walls also become stable (Fig. 5DII). Moreover, if Re increases to high values, new equilibrium positions emerge near the long walls (Fig. 5DIII and Fig. 5BII).

Fig. 5 B) The focusing pattern of particles in a channel with a square cross-section at low (I) and high (II) Re [56]. C) By increasing the particle size (blockage ratio), the equilibrium position of the particle moves toward the center [57]. D) Distributions of the lift force acting on the particle in a rectangular channel with an aspect ratio of two under various Re and particle sizes; reproduced from Ref. [38] with permission from The Royal Society of Chemistry. I) Only the centers of the long walls are stable equilibrium positions (blockage ratio 0.3 and Re = 100). II) Stable equilibrium particle positions at Re = 200 and a blockage ratio of 0.3. III) By increasing Re to high values (Re = 200) for a blockage ratio of 0.1, new equilibrium positions emerge near the centers of the long walls.
Effect of the particle on fluid flow
In the majority of studies on particulate flows, the effect of the fluid on the particle has been investigated to trace the particle migration. However, there are limited works investigating the effect of particles on the carrier fluid, which is of utmost importance to fully understand the exact mechanism of particle focusing within inertial microfluidic systems [58][59][60]. In the Stokes regime, a closed streamline area forms around the particle [58]. Outside the closed streamline area, the open streamline area can be observed, where streamlines return to their previous directions after passing the obstacle. Moving to an inertial regime causes the closed streamlines around the particle to collapse, creating two reversed streamline areas (Fig. 6B). The spiraling streamline and reversed streamline areas meet each other at a stagnation point [61]. Fig. 6A and B were obtained via numerical simulations using the front-tracking finite difference method [58].
Unlike in open-boundary shear flows, reversed streamlines are observed in confined particulate flows even in the Stokes regime (Fig. 6C) [59]. The distribution of streamlines around a rotating particle in confined flow is shown for both inertia-less and inertial regimes in Fig. 6C and D. A secondary flow in the channel cross-section is generated once there is a particle in the channel, even if the particle has no rotation and is located at the channel center [60]. Fig. 6E, obtained through the FSPP method, represents the magnitude and direction of the secondary flows created by a particle located at different distances from the center of the channel cross-section. When the particle is closest to the channel center, the weakest secondary flow is observed.
The pressure distribution across the surface of a particle plays a significant role in the lift force acting on the particle. Numerical studies show that there are four distinct regions on the surface: two minimum and two maximum pressure areas (Fig. 6F) [38,50]. Since the magnitude of the pressure distribution across the areas near the channel wall is larger, the particle is repelled from the wall toward the center of the channel.

Fig. 6 A) Closed streamline areas near the particle in Stokes flow and B) reversed streamline areas around the particle in inertial flow [58]. Distribution of streamlines around a rotating particle in confined flow: reversed streamlines are observed in both C) inertia-less and D) inertial flow regimes [59]. E) The lateral position of the particle in the cross-section of the channel has a significant impact on the strength and direction of the induced secondary flows [60]. F) I. Isometric view of the velocity distribution. II. Top view of the pressure distribution on the surface of the particle near the wall. Reprinted from [50] with the permission of AIP Publishing.
Deformable and non-spherical particles
Rigid, spherical particles are commonly used in numerical and analytical solutions of particulate flows. However, the non-spherical shape and elasticity of a particle play a vital role in the inertial particle motion. Using FSPP, Masaeli and her co-workers [62] studied the motion of rigid rod-like particles with different aspect ratios in rectangular channels. They showed that the rotation of particles is a combination of in-plane and out-of-plane rotations (Fig. 7A). By increasing Re, particles mainly exhibit in-plane rotation. Also, by increasing the aspect ratio of a particle, its equilibrium position moves toward the center of cross-section (Fig. 7B). This phenomenon has been used to separate particles with different rotational diameters [63] (Fig. 7C). Also, Lashgari et al. [64] investigated the migration of a single oblate particle in a duct using the IBM method from moderate to high Re (Fig. 7D). To reduce the numerical calculation time, particles are released from an initial position far from the center as the particle migration velocity at the center vicinity is relatively low.
Deformability is another property of real particles (e.g., cells and microgels). Deformable particle models can be divided into three categories: 1) droplets [65,66], 2) deformable capsules [67,68], and 3) elastic particles [69]. Deformability moves the particle equilibrium position toward the channel center (Fig. 7E). When deformable particles are released in the channel, they reach a steady-state shape after a while and migrate toward their equilibrium position. This feature of deformable particles is due to the fact that they exhibit tank-treading motion (for the periodic deformation of different points of a deformable particle, see Fig. 7H) while rotating, rather than the tumbling motion (Fig. 7I) observed in rigid non-spherical particles. Hadikhani et al. [66], using the ALE method, observed that by increasing the rectangular channel aspect ratio, bubbles focus at the center of the short walls, which is not similar to the behavior of rigid particles of the same size (Fig. 7G).
There are several constitutive laws for the simulation of the deformation of capsule membranes [70,71]. However, in most cases, the neo-Hookean law is used because of its compatibility with experimental results [67]. The front-tracking method [72], as an accurate method to track the interface of a deformable object, has a formulation similar to that of IBM. However, it requires a dynamic remeshing in each iteration and thus is more time-consuming than IBM.
Using a front-tracking method, Doddi and Bagchi [67] investigated the migration of deformable capsules in Poiseuille flow (Fig. 7H). Near the wall, the deformation rate is high, and after a quick deformation, the particle shape becomes approximately steady. In another study, Raffiee et al. [73] investigated the equilibrium position of cells with a blockage ratio of 0.2 in a square channel using the front-tracking method. According to their results, unlike rigid particles, which reach equilibrium positions located at the centers of the channel walls, the focusing position of a cell is on the channel diagonal over the same range of Re. Villone et al. [69] investigated the motion of elastic spheroid particles under unbounded and confined shear flows using the ALE simulation method; the results show that the wall-induced force causes more particle deformation than in an unbounded shear flow.

Fig. 7 (H) Migration of a deformable capsule toward the center of the channel; reprinted from [67], with permission from Elsevier. I) Schematic definitions of tumbling and tank-treading motions.
Inertial microfluidics in non-straight and non-planar microchannels
In the previous section, all reported studies were conducted in straight channels to investigate the physics underpinning inertial microfluidics. However, most channels used for particle manipulation are non-straight channels. In the existing numerical studies on particle motion in these channels, the distribution of the inertial lift force in the cross-section of the channel is calculated through the FSPP method. Then, the particle trajectories in the channel can be obtained through the point particle model by combining this force with the other forces that have an explicit formula, such as the drag force; the resulting trajectory equation [38] is shown in Eq. (22):

mp dup/dt = FL + 3πμa(uf − up) (22)

where mp and up are the particle mass and velocity, uf is the local fluid velocity, and 3πμa(uf − up) is the Stokes drag for a particle of diameter a. Fig. 8A presents the focusing mechanism in serpentine channels [50]. In this channel, drag and inertial lift forces play a vital role in particle focusing: if the velocity of a particle exceeds a threshold, it will be focused at the center of the long walls of the serpentine channel [74].
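Given a lift-force map from the FSPP method, Eq. (22) can be integrated directly. The sketch below (forward-Euler; the u_fluid and lift_map interpolators are hypothetical stand-ins for the background flow field and the tabulated FSPP forces) illustrates the point-particle tracking:

```python
import numpy as np

def track_particle(pos0, u_fluid, lift_map, a, mu, rho_p, dt, n_steps):
    """Integrate m_p du_p/dt = F_L(y, z) + 3*pi*mu*a*(u_f - u_p) (Eq. (22)).

    u_fluid(pos) -> local fluid velocity as an ndarray of shape (3,);
    lift_map(y, z) -> (F_y, F_z) interpolated from the FSPP force
    distribution. Both are hypothetical interfaces; a is the diameter."""
    m = rho_p * np.pi * a**3 / 6.0            # particle mass
    pos = np.array(pos0, dtype=float)
    v = np.array(u_fluid(pos), dtype=float)   # release at the fluid velocity
    path = [pos.copy()]
    for _ in range(n_steps):
        F_lift = np.array([0.0, *lift_map(pos[1], pos[2])])
        F_drag = 3.0 * np.pi * mu * a * (u_fluid(pos) - v)
        v += dt * (F_lift + F_drag) / m       # forward-Euler velocity update
        pos += dt * v
        path.append(pos.copy())
    return np.array(path)
```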
Similar to serpentine channels, spiral microchannels have been frequently used for the separation of bio-particles and cells [75][76][77][78][79][80]. If the size of particles in spiral channels is less than a particular threshold (a/Dh < 0.07, where Dh is the hydraulic diameter), the drag forces overcome the inertial forces and trap the particles in their vortex streamlines (Fig. 8B).
By changing the shape of the channel cross-section, we can control the focusing positions of the particles in a straight channel (Fig. 8C) [53,81,82]. The equilibrium positions of particles can be obtained through the vector plot of the total force in any arbitrary channel cross-section by combining the inertial forces calculated from the FSPP method with the drag forces (Fig. 8D) [83]. Therefore, it can be stated that particle focusing is highly related to the channel shape and cross-section, and a deep understanding of the underlying physics helps the investigator better manipulate the particles.

Fig. 8 B) IV. Particle migration along the length of the channel [77]. C) Particle focusing in a hybrid straight channel with rectangular, triangular, and semicircular cross-sections; reprinted from [53], with permission from Elsevier. D) I. Force maps for a triangular microchannel over a wide range of Re. The magnitude of the lift forces for II. Re = 20 and III. Re = 150 [83].
Inertial microfluidics in non-Newtonian fluids
Particles in a non-Newtonian fluid experience several lateral forces in addition to the inertial forces. The magnitude and direction of these lateral forces depend on the rheological properties of the non-Newtonian fluid; therefore, the analysis of particle motion in a non-Newtonian fluid is more complicated than in a Newtonian fluid. As this field is the subject of ongoing investigation, only recent major studies on particle motion in viscoelastic fluids are reviewed here.
Similar to Newtonian fluids, to obtain the flow field in a viscoelastic fluid, the continuity and momentum equations need to be solved (Eqs. (23) and (24)). However, an extra stress tensor (τ) must be added to the total stress tensor (σ) in order to consider the effect of viscoelasticity (Eq. (25)) [84]:

σ = −pI + 2ηs D(u) + τ (25)

where ηs and D(u) are the Newtonian solvent viscosity and the strain rate tensor, respectively. To solve this set of equations, the relation between τ and other parameters, such as the fluid velocity, should be obtained from a constitutive equation [85,86]. Two constitutive laws that are frequently used are the Oldroyd-B and Giesekus equations [87]. Equation (26) shows the Giesekus constitutive law [84]:

τ + λ τ∇ + (α λ / ηp) (τ · τ) = 2 ηp D(u) (26)

where τ∇ denotes the upper-convected time derivative of τ, and α, ηp, and λ are the mobility factor, polymeric viscosity, and relaxation time, respectively. When the mobility factor (α) is equal to zero, the Giesekus equation reduces to the Oldroyd-B equation.
The migration and focusing pattern of the particles in the channel cross-section mainly depend on the competition between the elastic and inertial forces acting on the particles (Fig. 9A) [88][89][90][91]. Li and his colleagues [92] investigated the effects of fluid elasticity, fluid inertia, and shear-thinning viscosity using the Oldroyd-B (Fig. 9BI) and Giesekus equations (Fig. 9BII). They used the DLM method in the framework of the finite volume method on a staggered grid. Particles in an Oldroyd-B fluid focus at the center of the channel, while in a Giesekus fluid particles reach their equilibrium at a position away from the center (Fig. 9B).
More recently, Raffiee and colleagues using the DLM method [89] studied the focusing pattern of particles in the cross-section of a square channel with Giesekus fluids for different values of Wi and Re (Fig. 9C). In another study, with a modified version of the DLM method, Yu et al. [90] investigated focusing patterns of particles in Oldroyd-B fluids. Fig. 9D shows the result of their study for a particle with a blockage ratio of 0.15 in a square channel under various conditions. Using FSPP, Raoufi et al. [88] investigated the effect of channel cross-section on focusing efficiency of elasto-inertial flows in straight channels. They have shown that by increasing the channel corner angle, the elastic force pushes the particles toward the center of the cross-section more efficiently.
To simulate the motion of cells in a square microchannel in Newtonian and viscoelastic fluids (Fig. 9E), Raffiee et al. [93] used the front-tracking method. The results revealed that an increase in the solution concentration led to a significant enhancement in the volumetric flow rate, which can help to boost the total throughput of a microfluidic device.

Fig. 9 A) Reprinted from [88], with the permission of AIP Publishing. B) Investigation of fluid behavior and particle equilibrium position in I. Oldroyd-B and II. Giesekus fluids [92]. C) The distance of the off-center lateral position of particles from the center of the channel along the I. main axis and II. diagonal of the channel for all ranges of Wi and Re; stable equilibrium positions are identified by filled symbols, whereas unstable ones are represented by unfilled symbols [89]. D) Focusing pattern of a particle with a blockage ratio of 0.15 under different conditions of El and Re [90]. E) An increase in Wi leads to relocation of the cells toward the centerline [93].
Lattice Boltzmann Method for Inertial Particle Migration
The LBM, as an explicit alternative method for fluid dynamics problems, was introduced in 1988 by McNamara and Zanetti [94]. In this method, the fluid is approximated by discrete particle distributions moving and colliding on a regular lattice. LBM can be easily adapted for parallel processing. The numerical investigation of particle dynamics in fluids requires tracking of a solid-fluid interface where LBM is particularly powerful [95][96][97][98]. As providing the details of LBM for inertial particle microfluidics is beyond the scope of this review, please refer to textbooks, such as [99,100].
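For orientation, a complete (if bare-bones) LBM solver fits in a page. The sketch below (our own minimal D2Q9 BGK example of body-force-driven channel flow; the velocity-shift forcing and full-way bounce-back are the simplest textbook choices, not those of any specific study cited here) conveys the collide-stream structure onto which particle coupling such as IBM is grafted:

```python
import numpy as np

# D2Q9 lattice velocities and weights
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
opp = [0, 3, 4, 1, 2, 7, 8, 5, 6]            # opposite direction of each c_i

def equilibrium(rho, u):
    """Second-order equilibrium distribution f_i^eq(rho, u)."""
    cu = np.einsum('id,xyd->xyi', c, u)
    usq = np.sum(u * u, axis=-1)[..., None]
    return rho[..., None] * w * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_channel(nx=64, ny=34, tau=0.8, g=1e-6, steps=4000):
    """BGK D2Q9 Poiseuille flow: periodic in x, solid walls at y = 0 and
    y = ny-1 (full-way bounce-back), driven by a body force g along x."""
    f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny, 2)))
    for _ in range(steps):
        rho = f.sum(axis=-1)
        u = np.einsum('xyi,id->xyd', f, c) / rho[..., None]
        u[..., 0] += tau * g                  # velocity-shift (Shan-Chen) forcing
        f += (equilibrium(rho, u) - f) / tau  # BGK collision
        for i in range(9):                    # streaming along each c_i
            f[..., i] = np.roll(np.roll(f[..., i], c[i,0], axis=0), c[i,1], axis=1)
        f[:, 0, :] = f[:, 0, opp]             # bounce-back at the walls
        f[:, -1, :] = f[:, -1, opp]
    rho = f.sum(axis=-1)
    return np.einsum('xyi,id->xyd', f, c) / rho[..., None]

u = lbm_channel()
print(u[0, 1:-1, 0].max())                    # peak of the parabolic profile
```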
Inertial particle motion in 2D/3D straight channels
Ladd and his team were among the first to investigate inertial phenomena in microfluidic devices with LBM [101][102][103]; they investigated the equilibrium positions of particles at Re of 100 to 1000 in a square duct [104]. At low Re, the numerical results were consistent with experimental ones, while at high Re, some discrepancies arose (Fig. 10A-D). Afterward, Nakagawa and his colleagues [105] addressed this problem in their numerical study. They illustrated that for Re less than 260, the equilibrium positions were at the channel faces, while for higher Re, the channel corners were added to the equilibrium positions. Also, they showed an outward shift of the equilibrium positions at the channel faces when Re was increased; for larger Re, the positions moved back inward (Fig. 10E-H). By combining IBM and LBM, the physical mechanism for this behavior was identified: the two vortices generated next to the particle grow with increasing Re, which pushes the particle away from the wall [106,107].

Fig. 10 A-D) In [104], at Re = 100, particles migrated to the eight equilibrium positions. For Re = 500, particles migrated to one of four stable locations near the corners of the square duct, while for Re = 1000, the particle equilibrium configuration changed again. Reprinted from [104], with the permission of AIP Publishing. Maps of lateral forces for Re of E) 260 and F) 514 calculated by Nakagawa and co-workers [105], who showed that the particle behavior and equilibrium positions change with increasing Re. Equilibrium positions at the G) channel face and H) channel corner. Circles are the results of their study, while experimental results are indicated by other shapes.
In a simple-shear flow profile, Mao and Alexeev evaluated particle inertia, fluid inertia, and their combination for spheroid particles with different aspect ratios at low and moderate Re [108]. The authors pointed out the possibility of a superposition principle for the effects of fluid inertia and particle inertia at sufficiently low Re. They showed that the Stokes number, aspect ratio, Re, and initial orientation of a single spheroid affect its trajectory. The Stokes number (Eq. (27)) is defined as the ratio of the particle response time to the fluid response time:

St = tp / tf (27)

This number is essential for the separation of non-biological particles that are denser than water. At low Stokes numbers, particle behavior is similar to that of a neutrally buoyant particle [109], while an increase in Stokes number leads to oscillatory behavior of the particles [110]. Mao and Alexeev proposed a microchannel decorated with diagonally aligned symmetric ridges, resulting in the generation of secondary flows, for the hydrodynamic sorting of microbeads (Fig. 11A) [111]. The induced secondary flows and inertial migration led to the separation of microparticles based on their sizes (Fig. 11B). However, the values of Re investigated are relatively small, and it is unclear how relevant the inertial forces are. Besides ridges, wall roughness also affects the particle trajectory [112].
With the combination of IBM, LBM, and FEM, the hydrodynamic focusing of rigid particles (radius = 3, 6, and 12 µm) in a straight channel was investigated [113]. The particle trajectories (Fig. 11C) for two particles with radii of 6 and 12 µm at Re = 83 reveal that, while these two different particles show small oscillations in the interacting stage, they follow two different paths in the separation stage. Besides, the equilibrium position of a particle (radius = 6 µm) demonstrated that, for high enough Re, inertial effects become dominant and push particles closer to the channel wall; however, an even larger Re might result in unfavorable focusing and unstable fluid dynamics (Fig. 11D). In another study, Krüger and co-workers investigated the interplay of inertia and deformability over a wide range of Re (3-417) and Ca (0.003-0.3) for a volume fraction of 0.1 [114]. They showed that, for a fixed Re, softer particles focused near the centerline of the microchannel (Fig. 11EI). According to their results, the higher the rigidity of the particles, the thinner the depletion layer (d), which is defined as the minimum distance of the particle surface to the wall (Fig. 11EII). For Re > 45, as identified by the dotted line in Fig. 11EII, the growth of the depletion layer increases significantly.
Fig. 11 A) Schematic illustration of a microfluidic device decorated with diagonal ridges developing secondary flows. B) Trajectories of various particles starting from y/w = 0.5; the smaller particles migrated toward positive y while larger ones migrated in the opposite direction (different lines belong to different particles) [111]. C) Particle trajectories for two particles with radii of 6 and 12 µm; the whole process can be divided into three separate stages of initial, interacting, and separation, with particles of different size migrating apart in the separation zone. D) By increasing Re, equilibrium positions relocate from the center of the longer wall to the center of the shorter wall, and particles are pushed closer to the walls. Reprinted from [113], with permission from Elsevier. E) I. For a fixed value of Re, softer particles are closer to the centerline. II. An increase in stiffness leads to a decrease in the depletion layer; Re > 45 is identified by a dotted line [114].
The entropy of particle focusing in microfluidic devices (Eq. (28)) quantifies the connection between each particle and the cumulative performance of several particles, where the probability function is denoted p(i) and the total number of states in the system (either fluid or solid state) is denoted N [115]. A higher degree of ordering means a lower focusing entropy, indicating better focusing behavior.
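A possible numerical realization of this idea, in the spirit of Eq. (28), is sketched below: lateral particle positions are binned into N states and a Shannon-type entropy is computed from the occupation probabilities p(i). The binning and the test distributions are our own illustrative choices, not details taken from [115].

```python
import numpy as np

# Sketch of a focusing-entropy estimate (assumed binning, see lead-in).
def focusing_entropy(y_positions, n_states=50, y_min=-1.0, y_max=1.0):
    counts, _ = np.histogram(y_positions, bins=n_states, range=(y_min, y_max))
    p = counts / counts.sum()          # occupation probabilities p(i)
    p = p[p > 0]                       # avoid log(0)
    return -np.sum(p * np.log(p))      # lower entropy = tighter focusing

well_focused = np.random.normal(0.0, 0.02, 10_000)   # tightly focused stream
unfocused = np.random.uniform(-1.0, 1.0, 10_000)     # randomly dispersed
print(focusing_entropy(well_focused), focusing_entropy(unfocused))
```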
The focusing entropy of rigid particles was lower than that of soft ones (Fig. 12A), signifying better ordering behavior of rigid particles, and the rectangular cross-section showed more viability regarding hydrodynamic focusing than the square and circular ones. By coupling LBM and LSM, Kilimnik et al. investigated the migration of a deformable capsule [116]. The larger and the softer the particles, the closer to the center they migrate, indicating that deformability leads to a center-directed lift force, which is a well-known effect. In addition, increasing the viscosity of the encapsulated fluid resulted in equilibrium positions closer to the wall. It has also been illustrated that in a Poiseuille flow soft particles move away from the walls, whereas the position of hard ones is independent of Re. Besides, the migration velocity in Poiseuille flow was 3 to 4 times higher than in simple shear flow, due to the higher variation of velocity across the particle [117]. Prohm and Stark showed that the lateral position of particles can be manipulated by axial control forces (Fig. 12BI and BII) [118]. In addition, they used a feedback-control approach to apply a non-constant axial force in order to control the position of particles efficiently and increase the particle throughput (Fig. 12BIII). Building on this method, the same group [119] extended the theory by investigating deformable capsules in a microchannel and evaluating the Laplace number (Tutorial Box 1). While deformability depends entirely on Re, the equilibrium position is approximately independent of Re and collapses onto a master curve (one for each particle diameter) when plotted against the Laplace number. Moreover, they used external forces to control the equilibrium position of particles. Although rigid particles move away from the centerline at high Re, very soft capsules behave oppositely. Moreover, it was shown that, by choosing the proper flow rate, separation of deformable capsules with different aspect ratios and membrane shear elasticities through a bifurcation is achievable [120].
The interparticle spacing in inertial microfluidics is of significant importance for applications such as imaging or flow cytometry [121]. The favored spacing at low particle Re was measured to be 5D, while at high particle Re it decreased to 2.5D, where D is the particle diameter. However, an increase in concentration led to a decrease in particle spacing, to the point that a single train of particles could no longer be identified [122]. To shed more light on this matter, Liu and Wu [123], in a 2D lattice domain, proposed a dimensionless focusing number F and showed that F > F+ results in complete inertial migration, F < F- leaves particles unfocused, and F- < F < F+ leads to partial particle focusing (with F- = 0.36 and F+ = 2.33 the lower and upper limits of particle focusing, respectively). Besides trains of particles, Haddadi and Morris studied the dynamics and trajectories of isolated and suspended (solid volume fraction less than 0.3) pairs of particles [124]. They showed that pair trajectories and the streamlines around an isolated particle have similarities, including reversing, in-lane spiraling, off-plane spiraling (Fig. 12C), and open but fore-aft asymmetric streamlines. More recently, with more emphasis on inertial microfluidics, Schaaf and his colleagues investigated a flowing pair of particles [125]. They found that the inertial lift profile depends strongly on the position of the particle in the fluid, whether it is leading or lagging (Fig. 12D). Nonetheless, by increasing the axial distance or Re, the profiles become similar to each other, up to a constant shift.
Fig. 12 A) Focusing entropy of particles in a rectangular microchannel; the average values of entropy after the initial decay are highlighted by dashed lines. Reprinted from [115], with permission from Elsevier. B) Schematic illustration of the feedback control. I. When the particle leaves the desired interval, the axial force is turned on, and it is turned off when the particle returns to the centerline. II. The velocity of a particle with and without axial force control in the desired interval. III. An example of a particle trajectory in the desired interval. Reproduced from Ref. [118] with permission from The Royal Society of Chemistry. C) Off-plane spiraling streamlines for an isolated pair of particles for I. Re = 0.05 and II. Re = 0.6; an increase in Re leads to a decrease in the off-plane spiraling zone [124]. D) Representation of the color-coded lift profile for a pair of leading and lagging particles [125]. Published by The Royal Society of Chemistry.
Inertial particle motion in non-straight microchannels
Generally, straight microchannels enjoy a simple fabrication process and easy operation, and the inertial focusing mechanism is reasonably well understood for certain specific cross-sections. However, these channels suffer from a long footprint, which impedes further applications and limits their commercialization. As an alternative, researchers have sought to induce secondary flows within the channel by means of obstacles, changes in channel geometry, or curved channels to assist inertial particle motion. Accordingly, the principal mechanism of particle migration becomes more intricate, requiring a robust, solid description. So far, the number of studies using LBM for the investigation of inertial particle migration in non-straight channels has been small. Due to the intricate nature of flow in curved geometries, it is challenging to extract the governing equations of the fluid together with the corresponding dominant forces acting on particles. Moreover, most results come from experimental studies, leaving particle-particle and particle-fluid interactions during the focusing process, as well as particle migration across the channel cross-section, unclear. Hence, it is anticipated that in the near future more numerical studies will focus on inertial particle migration within non-straight microchannels. In the following, we review studies conducted using LBM in non-straight microchannels.
Serpentine microchannels
Serpentine microchannels have a smaller footprint than straight channels and lend themselves to massive parallelization. Exploiting the efficiency of IB-LBM, Jiang and colleagues investigated, numerically and experimentally, the focusing of particles with diameters of 5 and 10 µm inside a symmetric serpentine microchannel [126]. Calculating the fluid flow by LBM and the particle structure by FEM, the authors integrated these two parts using IBM. At low Re, the inertial lift force increased and particles were pushed toward the sidewalls, with smaller particles closer to the walls. At high enough Re, the drag force became dominant: Dean flow forced particles to swing out of the inertial-lift trap, and particles focused in the vicinity of the center of the Dean-flow vortex (Fig. 13AI and Fig. 13AII). This focusing pattern is expressed by Eq. (29), where δ is the curvature ratio (δ = D_h/2R, with D_h the hydraulic diameter and R the channel radius) and a is the particle diameter [127,128].
Based on Eq. (29), an increase in Re makes the Dean drag force dominant.
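The sketch below illustrates this competition using the scaling most commonly quoted in the literature for the ratio of inertial lift to Dean drag, R_f ~ (1/δ)(a/D_h)^3; since Eq. (29) is not reproduced explicitly here, we assume this standard form matches its intent.

```python
# Sketch: curvature ratio and the commonly quoted lift-to-Dean-drag scaling
# (assumed standard form, cf. refs [127,128]; see lead-in).
def curvature_ratio(d_h, r_channel):
    return d_h / (2.0 * r_channel)          # delta = D_h / (2R)

def lift_to_dean_drag(a, d_h, r_channel):
    delta = curvature_ratio(d_h, r_channel)
    return (1.0 / delta) * (a / d_h) ** 3   # R_f >> 1: inertial lift dominates

# Hypothetical example: 10 um particle, 80 um channel, 2 mm radius of curvature.
print(lift_to_dean_drag(a=10e-6, d_h=80e-6, r_channel=2e-3))
```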
Cavities and contraction-expansion arrays
Cavities, contraction-expansion arrays, and constrictions provide additional secondary flows that assist inertial particle migration [129]. Wu et al. proposed a contraction-expansion microchannel to induce additional secondary flows for particle separation (Fig. 13BI and BII) [130]. They observed that particles with a diameter of 9.9 µm moved closer to the centerline than those with a diameter of 5.5 µm. However, since the concentration of the microparticles used in the numerical simulation was not high, particle-particle, particle-wall, and particle-fluid interactions were not thoroughly investigated. Also, although contraction-expansion channels with a higher expansion-to-contraction ratio require higher flow rates for size-based particle focusing, their focusing performance is better [131].
In another study, the vortex entrapment of particles inside a microchannel with flat and curved edges was evaluated [132]. The authors suggested that the combination of repulsive forces and inertial effects could potentially lead to liberating particles trapped in a vortex zone at high Re.
It was shown later that entrapment inside a microcavity is related to both particle dynamics and flow morphology [133], and the feasibility of these microcavities was showcased by cancer cell separation [134]. Using IB-LBM, it was revealed that the dynamics of a particle within a cavity is governed by two competing forces: an outward centrifugal force and an inward inertial force [135]. There are three particle-entrapment phases: no trapping (Re < 50, due to the lack of sufficient inertial forces), stable trapping (50 ≤ Re ≤ 200), and unstable trapping (Re > 200, owing to strong fluid inertia) (Fig. 13C). Also, four trapping modes occur within a microcavity: outer to inner (50 ≤ Re < 100), invariable (100 ≤ Re < 150), inner to outer (150 ≤ Re ≤ 200), and inner to escape (Re > 200). Within a cavity, the rotational and orbital velocities of a particle are not constant, while both motions are counterclockwise.
Fig. 13 A) Comparison of the lateral positions of particles of different sizes in the channel; the data shown were obtained experimentally and numerically. Reproduced from Ref. [126] with permission from The Royal Society of Chemistry. B) I. Schematic illustration and design principle of the contraction-expansion microchannel proposed by Wu and co-workers. II. Randomly dispersed particles were first focused due to the inertial effect; then, in the presence of the contraction-expansion arrays, the induced secondary flows drove small particles toward the channel sidewalls while keeping large particles near the centerline, and each stream was collected from a separate outlet. Reproduced from Ref. [130] with permission from The Royal Society of Chemistry. C) Three particle-entrapment phases: I. no trapping, II. stable trapping, and III. unstable trapping.
Concluding remarks and outlook
In this review, we have summarized computational techniques for inertial microfluidic modeling and categorized them into three subsections: semi-analytical solutions, direct numerical simulation, and the lattice Boltzmann method. In the first, all relevant articles utilizing semi-analytical methods to simulate inertial particle focusing were reviewed. In these methods, the main parameters are Re and the confinement ratio, allowing an analytic treatment of inertia-induced migration for the calculation of lift force profiles. As these methods require the particle radius to be much smaller than the channel diameter, they are hardly applicable to microfluidic particle flow, where this assumption is often violated. Also, they cannot be applied to coupled inertial-viscoelastic problems, where complex constitutive equations exist. There are some crucial issues that need to be addressed in order to achieve more accurate and realistic solutions. The first is overcoming the inherent complexity of solving 3D partial differential equations (PDEs) using proper analytical or semi-analytical approaches.
Analytically solving a set of PDEs for the fluid (three momentum equations) coupled with a set of PDEs for the particles (three linear momentum plus three angular momentum equations), together with two coupling equations at the interface of the fluid and solid domains under non-homogeneous boundary conditions, is nearly an impossible task. Therefore, all covered articles tried to simplify this complex problem. The second issue stems from the convective terms in the momentum equations. These nonlinear terms make the governing equations considerably more complex, implying more difficulty in solving them by analytical methods. Last but not least, the effect of particles on the fluid flow through disturbances near the particles is another intricacy of inertial microfluidics which should be considered.
In the second section, numerical studies on inertial particulate flows based on the Navier-Stokes equations were reviewed. These include methods such as ALE, DLM, IBM, and FSPP, which are mainly used for assessing the effects of different parameters, such as particle shape, particle deformability, channel geometry, and the type of fluid, on particle migration within a microchannel.
Although there has been significant progress in the modeling of inertial particle motion, various cases have not yet been thoroughly investigated. As can be seen in Table S1, the least investigated cases are non-straight channels, non-spherical/deformable particles, and viscoelastic fluids. A further difficulty lies in the smoothed interpolation functions used to couple the particle and fluid domains [136]. Although there are some efforts (e.g., representing the steep boundary condition using a logarithmic representation) to address this problem to some extent, this field is still being actively investigated. The need for further investigation is even more evident when one considers a combination of the above-mentioned conditions, namely deformable and non-spherical particles in non-straight channels in viscoelastic fluids.
It would be valuable to assess the capability of existing CFD packages for the simulation of inertial microfluidics and to give insight to those who want to use them in their studies. There exist various widely used CFD packages, commercial or open-source, such as COMSOL Multiphysics, ANSYS Fluent, OpenFOAM, and Flow-3D. Using these software packages dramatically reduces the effort required to write bespoke CFD codes. Nevertheless, using these packages for the simulation of particle motion in inertial and viscoelastic flows brings about several problems.
Most of these packages can deal with particle motion, such as the particle tracing module in COMSOL. As mentioned earlier, these modules treat particles as point particles, meaning that the forces acting on particles are calculated through a series of closed-form expressions (e.g., Stokes drag). The problem with this approach is that the exact formula for the inertial forces acting on a particle, in terms of flow parameters and particle properties, is unknown. Although COMSOL, as a pioneer in this field, provides an approximate formula for inertial forces, the numerical model in some cases gives incorrect and inaccurate results for 3D flows. Therefore, it is necessary to calculate the interaction between finite-size particles and the fluid. This requires fluid-structure interaction (FSI), which is in principle available in most commercial software packages. However, since current FSI modules do not support periodic boundary conditions for moving particles at the inlet and outlet of a channel unit cell, they cannot be applied to this kind of particle-motion simulation. For simulating viscoelastic fluids, it is appropriate to use constitutive equations, such as Giesekus or Oldroyd-B, that are consistent with experimental results; but these models are not available by default in current software packages. Altogether, it is not possible or practical to accurately simulate particle motion in inertial and viscoelastic flows using commercial software packages in their default mode, and additional scripts must be written. Using such scripts, it is possible to use periodic boundary conditions in part of the domain or to add constitutive equations for viscoelastic fluids to commercial packages.
In the third section, all numerical studies based on LBM were reviewed. Initially, numerical results obtained using LBM were not entirely in line with those obtained from experiments.
More recent results, however, illustrate that LBM can now accurately predict the particle behavior in inertial microfluidics. LBM has a strong potential to be applied to inertial microfluidics and can be considered as a robust, efficient, and powerful alternative to conventional Navier-Stokes solvers in many fluid flow problems. LBM is mostly limited to viscous fluids (both Newtonian and non-Newtonian), and more research is needed to extend the method to viscoelastic fluids. Since the LBM in its original form depends on a cubic lattice, it is challenging to apply the method to geometries such as spirals due to the large "dead" volume. This explains why the LBM has mostly been used to study inertial flows in straight channels. Sparse geometry LBM codes can be applied to geometries such as spirals more easily, and more development work in this field is needed.
Due to the large number of different lattice-Boltzmann boundary condition schemes available for moving and stationary objects, it is yet to be decided which methods are most suitable and reliable for inertial particle microfluidic applications. Since the LBM is a weakly compressible scheme, it is not very accurate for the pressure field, which may negatively affect the lift and drag forces acting on moving particles. Although recent LBM studies have shown considerable progress in enhanced computational performance, non-spherical particles (using DSP-LBM method), or non-Newtonian fluids [137][138][139][140], more research is needed to investigate the accuracy and reliability of the LBM for inertial particle microfluidics.
In conclusion, computational inertial microfluidics is a nascent field, requiring more devoted studies to develop and describe the underlying physics. Please refer to Table S1 in electronic supplementary information for the current contribution of computational methods in inertial microfluidics. Most of the studies published so far are not comprehensive since the developed codes and models are limited to specific channel geometries or particle properties, mostly rectangular straight microchannels and rigid spherical particles. New computational packages, commercial and scientific, need to be developed that can predict the migration of realistic particles in complex flow geometries, depending on initial conditions, flow parameters, and particle concentration.
Dimensionless parameters in inertial microfluidics
Reynolds number (Re):
Re is one of the most important dimensionless numbers in fluid mechanics, representing fluid behavior in various situations. Re is the ratio of inertial to viscous effects, as shown in Eq. (S1): Re = ρU_mD_h/μ, where ρ is the fluid density, μ is the fluid dynamic viscosity, U_m is the mean velocity of the fluid, and D_h is the hydraulic diameter of the channel. In this text, we refer to the channel Reynolds number as Re. The particle Reynolds number is Re_p = Re(a/H)^2 (Eq. (S2)), where a is the particle diameter and H is the channel dimension.
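A small numerical sketch of the two Reynolds numbers as reconstructed above (Eqs. (S1)-(S2)); the example values are hypothetical.

```python
# Sketch: channel and particle Reynolds numbers (Eqs. (S1)-(S2) as
# reconstructed above; example values are hypothetical).
def reynolds(rho, u_mean, d_h, mu):
    return rho * u_mean * d_h / mu          # Re = rho * U_m * D_h / mu

def particle_reynolds(re, a, h):
    return re * (a / h) ** 2                # Re_p = Re * (a/H)^2

re = reynolds(rho=1000.0, u_mean=0.1, d_h=80e-6, mu=1e-3)
print(re, particle_reynolds(re, a=10e-6, h=80e-6))
```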
Blockage ratio (β):
The blockage ratio is defined as the ratio of the particle diameter to the characteristic length of the channel (the height of the channel cross-section), β = a/H. If the blockage ratio exceeds 1, the channel is blocked.
Capillary number (Ca):
Ca compares viscous to elastic effects and is built from μ, the dynamic viscosity of the continuous-phase fluid, u_max, the maximum velocity, and w, the channel width. For a deformable capsule or elastic particle, Ca is considered as the ratio between the viscous force and the elastic force, while the elastic modulus takes different forms depending on whether capsules or bulk elastic particles are considered [67,142].
Laplace number (La):
Ca for a deformable capsule depends on the flow characteristics (i.e., the flow speed). La is a dimensionless number that characterizes the rigidity of a capsule based on the elastic shear force scale (κ_s r, where κ_s is the shear modulus) and the intrinsic inertial force scale (μ²/ρ), without depending on the explicit flow speed [119].
Using Re and Ca, La can be written as Eq. (S6).
| 14,529.2 | 2020-02-18T00:00:00.000 | [ "Engineering", "Physics", "Computer Science" ] |
SOPHOCLES’ AIAX: HYBRIS, FOOLISHNESS AND GOOD SENSE. A COMPARISON WITH ANTIGONE
Disobedience to civic authority, a shift of perspective within friendship -that is, who was or should be a friend is then regarded as an enemy-, and the burial issue make the tragedy Aiax an appropriate candidate for a comparison with Antigone. Indeed, a comparison between the two tragedies has already been proposed, and parallels have usually been established between Antigone and Aiax, on the one hand, and Creon and the Atridae, on the other. Along the lines of a previous study of mine on Sophocles' Antigone, the present paper aims at comparing Aiax and Antigone with reference to a specific theme and terminology, i.e. those pertaining to foolishness and wisdom. Antigone and Aiax are usually associated with each other in their foolish rebellion against those who are in authority (respectively Creon and the Atridae). As argued in the previous paper, however, while that of Antigone is foolishness only in appearance, the foolishness of Aiax is a real, factual one bordering on hybris, which makes him a counterpart of Creon rather than of Antigone. On the other hand, the Atridae differ from Creon -rather than being his counterpart- in that they avoid acting foolishly and falling into a mistake of judgment, i.e. into a lack of good sense, while dealing with the burial issue. Indeed, the Atridae avoid Creon's hybris by finally respecting the «unshakable and unwritten laws of Zeus», which makes them, in some way, a counterpart of Antigone. A closer lexical analysis of the occurrences of words pertaining to foolishness and wisdom -such as ἄνοια, ἄφρων/ἀφροσύνη - σωφροσύνη, φρονεῖν - μὴ φρονεῖν, μωρία, ἀβουλία, δυσβουλία etc.- has led to these results.
I. Introduction
In a previous paper on Sophocles' Antigone 1, I discussed the presence of a dichotomous motif underlying the entire tragedy, namely that concerning the dialectic between wisdom/good sense and foolishness. Through a lexical analysis I pointed out the occurrences of a specific terminology throughout the tragedy, a terminology that connotes the two main characters, Antigone and Creon, as being, the first, «apparently foolish» -despite the general impression that the occurrence of terms of foolishness related to Antigone provokes-, and, the second, «really foolish» -despite the acknowledgment that is often reserved for his wisdom and good sense-. While Antigone's foolishness consists of her disobedience to a man-made law and to civic authority, that of Creon consists of obstinacy in believing in his own thoughts, and thus in refusing to listen to those who are able to provide him with appropriate advice, an obstinacy that borders on an act of hybris by violating the gods' law in the name of his own conviction of being always right 2. As a matter of fact, eventually Creon must yield and recognize his own foolishness, when he admits that the best way to end one's own life is by «preserving the established laws» (S., Ant. 1113-1114). These «established laws» are the same as the ones Antigone claimed in defense of her action (S., Ant. 902-913, 921-928). Her obstinate disobedience, i.e. the essence of her lack of good sense, cannot thus be regarded as real foolishness, since it is a «reverent/holy» obstinacy in obedience to the gods. As argued in my previous paper, the difference between Antigone's foolishness -which is only apparent- and Creon's foolishness -mistaken as good sense- is significantly expressed through different terms denoting foolishness per se, terms which appear to be almost exclusive either to Antigone or to Creon. While ἀφροσύνη and ἄνοια are typical and almost exclusive of Antigone, μὴ φρονεῖν, μωρία, ἀβουλία - δυσβουλία are typical and almost exclusive of Creon 3. When it happens that terms typical of Creon's foolishness (e.g. μωρία, δυσβουλία) refer to Antigone, too, they reflect the view that others have of the heroine, which -in the end- is proved to be a mistaken view.
The motif of disobedience to the civic authority, together with both the switch from friend to enemy status -i.e., who was or should be a friend is then regarded as an enemy- and the burial issue, makes Aiax an appropriate candidate for a comparison with Antigone. A comparison between exactly these two tragedies has already been proposed 4, and parallels have usually been established between Antigone and Aiax, on the one hand, and Creon and the Atridae, on the other: «it is Antigone who finds herself compared to Ajax, while Creon finds his counterpart in the Atridae» 5. The present study aims at comparing Aiax and Antigone specifically with reference to the theme and terminology of foolishness and wisdom at which I have hinted above. By applying a lexical analysis I shall argue that the occurrences, in Aiax, of terms that turned out to refer exclusively to Creon's foolishness in Antigone prove that: a) the foolishness by which Aiax is affected resembles that of Creon 6 rather than that of Antigone, as is usually argued; b) what the Atridae -especially Agamemnon- eventually tend to do, while dealing with the burial of Aiax, is exactly to avoid acting foolishly, and thus falling into a mistake of judgment, i.e. into a lack of good sense, as, on the contrary, Creon did while dealing with the burial of Polyneices 7. In this light a parallel can finally be established between the «lessons» implied in both tragedies, that is -to paraphrase Sophocles- «bodies grown too great and stupid (ἀνόνητα) fall through grievous afflictions at the hands of the gods, whenever a man is born with a human nature, but does not think in accordance with his human nature» (S., Aiax 758-761) 8. This is exactly what happened both to Aiax and to Creon.
II. Aiax's Foolishness: Lexical and Conceptual Comparison with Antigone and Creon
The essence of Aiax's foolishness, i.e. his lack of good sense/wisdom, is well described by the hero himself in his last speech (Ai. 646-692), when he realizes that one must yield to, obey and respect the gods and those who are in authority -in his case, the Atridae. To behave in this way means 'to be minded-sensible/to have good sense' (σωφρονεῖν: Ai. 677), which Aiax has proved not to be or have, and which he learns only after the deeds of his foolishness.
Ἄνους and Ἄφρων
According to the words of the hero in the lines mentioned above, the refusal to bend to the rulers' demand for acquiescence and obedience only partially connotes Aiax's foolishness. This refusal is precisely that lack of good sense which is commonly ascribed to Antigone, who -as is known- refused to obey Creon, the ruler.
In Antigone, from a lexical point of view this kind of foolishness is described through two categories of words: ἄνους/ἄνοια, ἄφρων/ἀφροσύνη 9. Antigone is, indeed, said to be ἄνους (Ant. 66-68, 99) or ἄφρων (Ant. 383). But Antigone's disobedience is a holy one (Ant. 74), justifiable in the name of her respect for the gods («It was not Zeus who made this proclamation...», Ant. 450). Only those who really lack good sense can mistake it for foolishness. Therefore -as previously argued- she is ἄνους and ἄφρων in the others' eyes, that is, only in appearance. As a matter of fact, significantly, both kinds of terms (ἄνους, ἄφρων) are truly addressed to those who are really foolish in that they do not care about the gods' rules. With reference to ἄνους and, more generally, to the νοῦς-words in Antigone, though the occurrences are quite exclusive of Antigone herself, still reflecting only the others' viewpoint on her deeds, there is a significant single case that can, by irony, be referred to Creon's real and specific ἄνοια. In Ant. 281 Creon calls ἄνους the chorus, which has just attempted to explain the burial of Polynices as a sort of miracle performed by the gods. The excessive reaction of Creon speaks in favor of his blind exclusion of the gods from all matters, which by irony makes him the real ἄνους. So ἄνους is Aiax, too.
In Sophocles' Aiax, both categories of words (ἄνους/ἄνοια, ἄφρων/ἀφροσύνη) are used to describe the foolish behavior of Aiax, namely a foolishness which consists both of not listening to those who give good advice (Ai. 763) and of being too confident in one's own thought (Ai. 766-770), rather than realizing how much better it is to respect and yield to the gods (Ai. 666-667). As a matter of fact, the foolishness of Aiax consists first of all, and foremost, of an act of irreverence and hybris toward the gods, as is well proved by Athena's speech at the beginning of the tragedy (Ai. 127-133) 10. And obstinacy in refusing to listen to those who are able to give good advice, and to yield to and respect the gods' laws, is exactly what affects Creon and determines his foolishness. As Aiax dares to neglect his father's advice and the respect due to the gods when he refuses Athena's help in his ἄνοια - ἀφροσύνη (Ai. 762-777), so does Creon when he confirms his intention to kill Antigone and refuses to listen to Haimon, no matter what Zeus of blood-kinship would think: «Let her keep invoking the Zeus of blood-kinship» (Ant. 658-659), which is to say, «let us not care about Zeus' laws» 11.
In Aiax this kind of foolishness, which borders on hybris, is also described by φρήν-words (μὴ κατ ̓ ἄνθρωπον φρονεῖν, Ai. 761, 777), i.e., by the same category of words that, in Antigone, connote Creon's lack of good sense bordering on hybris.
Μωρία
As to the μωρία-words in Antigone, I showed how, despite one reference to Antigone, they are peculiar to Creon and to his specific and real foolishness, which mostly consists of negligence of the gods' laws 14. In Aiax, except for one case, the word occurs to describe a similar kind of foolishness, which may belong not only to Aiax but also to those who show no respect for the gods' laws, or for the interpreters of the gods' will. Either way, the lack of good sense results in an act of hybris. Μωρία is used by Aiax himself when, realizing what he has done, he considers himself as one devoted to the pursuit of foolishness (Ai. 406-407): his μωρία is closely linked to his hybris toward Athena (Ai. 401-403). Moreover, the skeptical words with which the chorus replies to the messenger's announcement of Calchas' prophecy are said to be full of μωρία (Ai. 743-745): not to believe what a seer suggests on the basis of his divine knowledge is a form of hybris, in that it means not to care, in a way, about the gods' minds. And this is the same form of hybris that the μῶρος Creon performs when he denies any credibility to Teiresias' interpretation of the omen and to his advice (Ant. 998-1045).
More importantly, in Aiax the word occurs twice with reference to what in Antigone is the explicit mark of Creon's hybris, and thus of his foolishness, i.e. the denying of the burial, despite the gods' laws.
Let us analyze these two occurrences:
12 As to Aiax's ἀφροσύνη, see A. Rademaker, Sophrosyne and the Rhetoric of Self-restraint. Polysemy & Persuasive Use of an Ancient Greek Value Term, Leiden-Boston, 2005, pp. 125-133, who defines it in terms of both insubordination to those in power and «arrogance on account of his martial prowess» (p. 133). Rademaker mostly bases her analysis on the occurrences of σωφροσύνη-words, neglecting the usage of the other φρήν-words which are under discussion here and in the previous paper on Antigone. As to other occurrences of φρήν-words which are not mentioned above, they usually describe the status of mental sanity first lost and then regained by Aiax (see, e.g., S., Ai. 46, 83, 182, 306, 344, 355), or more generally the mental sanity commonly possessed by men (Ai. 272). A more specific connotation characterizes the occurrences regarding Agamemnon and Teucer, as I shall discuss below.
Ai. 1150 15: through a sort of riddle, Teucer calls Menelaus μῶρος for his intention to persecute the dead, i.e., to refuse Aiax an appropriate burial and thus to dishonor the gods and their laws (Ai. 1129-1131). This is exactly the same as Creon's μωρία; Ai. 1375: the chorus defines as μῶρος the one who is not able to recognize the wisdom/good sense of Odysseus, who has just persuaded Agamemnon of the right necessity to give Aiax a burial for the sake of the gods' laws (Ai. 1343-1344). Again, Creon has proved to be such a μῶρος when refusing the wise advice of both Haemon and Teiresias 16.
Μανία
As to the μανία-words, except for one case 17, all occurrences are related to Aiax (Ai. 59, 81, 216, 611, 726) 18. It might not be surprising that these are the more common terms by which everybody refers to Aiax's foolishness, due to their meaning of madness sent/provoked by a god (Ai. 59, 611). And, as implied by both the messenger (Ai. 776-777) and Athena (Ai. 59-67, 118-133), Aiax's hybris toward the goddess has provoked his madness. As in Antigone, μανία mostly represents the status into which a man falls because of his lack of good sense. With regard to this status of μανία, not only does the resemblance between Aiax and Creon depend on the occurrence of the same vocabulary, but it is also confirmed by Creon's eventual admission of the gods' intervention in driving him to a foolish downfall (Ant. 1271-1275) 19.
First Possible Conclusions
In light of the analysis proposed above, Aiax's foolishness fully resembles that of Creon. As a matter of fact, despite the disobedience issue, which would make him comparable with Antigone, Aiax's foolishness does not include that ascribed to Antigone, as it seems at first glance. This conclusion is based not simply on the fact that Antigone's foolishness is a false one, as argued above, but also on the fact that Aiax's obedience to those in authority is a questionable matter, as shown by Teucer. More than once Atreus' sons evoke Aiax's disrespect of the demand of obedience to the rulers as the reason for their denying the burial (Ai. 1066-1076, 1231-1234), the same reason that Creon evokes to justify his punishment of Antigone (Ant. 449, 473-489). But, as Teucer observes, Aiax was not subject to Menelaus' rule (Ai. 1098-1108); he went to fight at Troy as an ally, worthy of being considered on the same level as Menelaus himself. More importantly, he went to Troy because of an oath that bound him (Ai. 1113-1114), as it bound all of Helen's suitors 20. We may thus conclude that, as the disobedience of Antigone cannot be regarded as a real act of foolishness, so too that of Aiax: the first is done in obedience to the gods' superior laws; the second seems not even to be a form of disobedience. Therefore, Aiax's lack of good sense seems exclusively to resemble that of Creon. With regard to this, it might be worth noting how, in her last speech, Tecmessa points out the meaning of Aiax's death: θεοῖς τέθνηκεν οὗτος, οὐ κείνοισιν [Atreus' sons and Odysseus]... (Ai. 970); that is to say, Aiax's death eventually satisfies the gods, since in this way he pays for his foolishness, which seems not to have anything to do with acts of disobedience to the rulers.
III. The Good Sense of Agamemnon
If Aiax's foolishness, consisting ultimately of arrogant irreverence towards the gods, might be regarded as a paradigm of what Creon's real τὸ μὴ φρονεῖν in Antigone ends up being, Menelaus' and, far more, Agamemnon's way of handling the issue of Aiax's burial contributes to further define what τὸ φρονεῖν means and how a man can come into possession of it, which is exactly where, in Antigone, Creon fails. As discussed in my previous work and implied in the above discussion 21, obstinacy in terms of lack of flexibility can be regarded as a specific trait of Creon's foolishness, a trait that is evident in his refusal to listen to those who are able to εὖ λέγειν and to give εὐβουλία, due to the bold confidence in his own δόξα. And it is this obstinacy and lack of flexibility that ultimately provokes Creon's downfall. He has the chance to see where the real foolishness lies, and thus the chance to respect the gods' laws, but he insists on μὴ κατ ̓ ἄνθρωπον φρονεῖν, ironically accusing the really wise, i.e., Antigone and Haemon, of a similar kind of foolish pride. In Aiax, Creon's foolishness in terms of disrespect towards the gods' laws is potentially embodied by Menelaus and Agamemnon as well, and it is successfully counteracted by Teucer and Odysseus, who might be regarded respectively as the equivalents of Antigone and of Haemon-Teiresias.
20 See Hes., Fr. 196-204; Apollod. 3.131; Hyg., Fab. 81.
Menelaus is the first who shows up to forbid Aiax's burial (Ai. 1047-1048): he has the power to deliver such a prohibition because of the authority he has as ruler of the army (Ai. 1050). He states that he has the right to decide such a thing since Aiax, brought as φίλος ('friend': 1053), has been found to be more than an enemy (Ai. 1054). Enemies and traitors do not deserve an appropriate burial 22. The friend-enemy motif, namely in relation to the burial issue, clearly reminds us of Antigone's plot 23. Moreover, Menelaus' reference to the city's laws that are able to guarantee safety and good order -and this is the duty of a ruler (Ai. 1073-1076)- shows some similarities with the ruling philosophy of Creon, i.e., the speech that he delivers for two specific purposes: first, to justify his decree against the burial of Polynices, who, like Aiax, has been found an enemy; second, to justify his irrevocable intention to punish Antigone (Ant. 639-678) 24.
Teucer, like Antigone, defends the rights of the dead, and precisely of a dead relative/friend, by evoking «the gods' laws» (Ai. 1129-1131), for which Menelaus seems not to care, to such a point that, like Creon by Antigone (Ant. 469-470), so he, too, is called μῶρος (Ai. 1150). Menelaus' μωρία is also explicitly pointed out by the chorus when, though granting him the ability to lay down wise judgments, it advises him not to commit hybris against the dead, and thus -one can add- against the related gods' laws (Ai. 1091-1092). The wise judgments that the chorus ascribes to Menelaus concern the ruling philosophy he has just illustrated. It is a partial acknowledgment of the ruler's wisdom which, in Antigone too, the chorus grants to Creon (Ant. 683) 25. Moreover, as Antigone is foolish and characterized by τὸ μέγα φρονεῖν in Creon's eyes, so is Teucer in Menelaus' eyes (Ai. 1120, 1142). Like Antigone's, Teucer's is a holy pride in obedience to the gods' laws. Therefore it is not a real form of insolence toward those who are in authority, since -as Teucer more explicitly declares- ξὺν τῷ δικαίῳ γὰρ μέγ ̓ ἔξεστιν φρονεῖν (Ai. 1125).
The motif of justice in terms of respect for the gods characterizes Haemon's arguments, too, in his struggle with his father (Ant. 727, 743). Like Antigone, Haemon, too, accuses Creon, in a way, of dishonoring the gods (Ant. 745, 749); like Antigone, Haemon, too, is foolish and marked by τὸ μέγα φρονεῖν in Creon's eyes. Teucer thus shares all of these characteristics in his confrontation with Menelaus, just as Menelaus potentially shares Creon's traits.
The confrontation between Agamemnon and Teucer, with the essential intervention of Odysseus, accomplishes what remains unsolved in the confrontation with Menelaus, both in terms of making a definite decision with regard to the burial and in terms of serving as a moral paradigm. Agamemnon, too, looks at Teucer as an insolent man who dares utter strong words against the rulers, thus showing a lack of good sense and of self-restraint (Ai. 1226-1228; 1251-1259). In Agamemnon's speech to Teucer, the occurrence of terms involving φρήν, νοῦς, σωφροσύνη resembles those we found apt to indicate foolishness in terms of disobedience to those who are in authority, both in Antigone and in Aiax. With regard to Teucer, it is possible to speak of a false, apparent foolishness, too, since Teucer is not arbitrarily disobeying, or arbitrarily acting insolently toward Agamemnon. He is so in Agamemnon's eyes, but his pride is a just one (Ai. 1125), since he is defending superior laws.
The focus, as a matter of fact, quickly and significantly shifts to an indirect definition of what τὸ φρονεῖν, and thus σωφρονεῖν and τὸν νοῦν ἔχειν, really mean, in spite of the personal, human thoughts of a ruler, and in favor of «the gods' laws». When Odysseus intervenes, Agamemnon proves to possess the ability to εὖ φρονεῖν by agreeing to listen to a person who is able to give good advice. «I should be foolish (εἴην οὐκ ἂν εὖ φρονῶν) not to let you...», replies Agamemnon to Odysseus' request to speak (Ai. 1330). More significantly, Agamemnon demonstrates good sense by accepting the advice to bury Aiax in obedience to the gods' laws, although this means dismissing his own laws: to dishonor a dead man is to destroy the laws of the gods (Ai. 1342-1344). This is Odysseus' warning, similar to those given, directly or allusively, by Antigone, Haemon and Teiresias to Creon. Though it is hard, Agamemnon eventually decides to honor his friend, Odysseus, who gives the good advice (Ai. 1351) 26, which consists of making him avoid disrespect towards the gods by allowing Aiax's burial in honor and obedience of οἱ θεῶν νόμοι (Ai. 1343). This is to be «just» and «wise» (Ai. 1363, 1374). Agamemnon thus avoids yielding to the foolishness which characterizes Creon: he does listen to good advice, he does yield to the gods, no matter how much he hates Aiax -just as Creon hated Polynices-. Where Antigone, Haemon and Teiresias fail, Teucer and Odysseus succeed; and where Creon fails, Agamemnon succeeds. This is, in the end, the lesson that the tragic story of Aiax must give, a lesson which is illustrated throughout the play not simply by the fate of Aiax but also by the behavior of Agamemnon, a character who has little space and yet plays an important role. This feature, and the lexical and thematic similarities that, as we saw, Aiax and Antigone present, allow us to look at these two specific tragedies as «supporting» each other's ultimate meaning.
Let us further examine the results of the analysis we have carried out, in order to draw a conclusion.
As in the second part of Antigone little space is reserved for the heroine, and her deeds and disappearance are almost completely forgotten, to the point that some scholars think of Creon as the real main character 27, so it seems to happen to Aiax's deeds in the second part of the homonymous play. And, while in the second part of Antigone the focus is on the foolish obstinacy of Creon, the ruler, in relying on his own laws and thoughts without caring for the gods' mind in relation to the burial issue, in the second part of Aiax the focus is on the potentially same kind of foolishness of Menelaus and, more importantly, on the wise flexibility that Agamemnon, the ruler, eventually shows by acting as the one who really has τὸ φρονεῖν, since he respects the gods' laws with reference to the same issue.
In light of the final lines of Antigone (1347-1352) 28, which contain a lesson comparable to the one implied by Athena's words in Aiax (127-133), it is also worth reconsidering the following specifics: (a) both Aiax and Agamemnon -as they are qualified and presented by Sophocles in Aiax- contribute to define Creon's foolishness in terms of hybris toward the gods: Aiax by analogy, because of his pride and confidence in his own power and thought; Agamemnon by contrast, for his eventual flexibility and lack of obstinacy; (b) both tragedies are characterized by the foolishness-wisdom motif with reference to the burial issue, which is developed -to a different degree- in terms of a contrast between those who seem to be foolish but are revealed to be so only in the others' eyes (Antigone, Haemon, Teucer) 29, and those who seem to be wise but are eventually revealed to be foolish (Creon, partly Menelaus, and whoever does not recognize the wisdom of Odysseus' good advice); (c) both tragedies hint, to a different degree and with different outcomes, at the fact that it is good and wise to learn from others, to listen to those who εὖ λέγουσιν 30, i.e., to yield when necessary rather than persist in one's own mistakes because of pride.
| 5,878.6 | 2008-12-30T00:00:00.000 | [ "Philosophy", "History" ] |
Efficient Basis Change for Sparse-Grid Interpolating Polynomials with Application to T-Cell Sensitivity Analysis
Sparse-grid interpolation provides good approximations to smooth functions in high dimensions based on relatively few function evaluations, but in standard form it is expressed in Lagrange polynomials. Here, we give a block-diagonal factorization of the basis-change matrix, yielding an efficient conversion of a sparse-grid interpolant to a tensored orthogonal polynomial (or gPC) representation. We describe how to use this representation to give an efficient method for estimating Sobol’ sensitivity coefficients and apply this method to analyze and efficiently approximate a complex model of T-cell signaling events.
Introduction
A common problem in many areas of computational mathematics is to approximate a given function based on a small number of functional evaluations or observations. This problem arises in numerical methods for PDE [1,2], sensitivity analysis [3][4][5], uncertainty quantification [2], many areas of modeling [6,7], and other settings. As a result, there are a large number of approaches to this problem, and the literature is large and growing quickly. In settings in which the points of evaluation may be chosen at will, two common approaches are generalized polynomial chaos (gPC) using cubature and sparse grid collocation [8]. In other settings in which the points of evaluation are given, common approaches include RS-HDMR, cut-HDMR, ANOVA decomposition, kriging, and moving least squares; see, for example, [9, Chapter 5] for a discussion of such methods and further references.
Sparse grid collocation has been used widely in recent years as a means of providing a reasonable approximation to a smooth function, f, defined on a hypercube in R^d, based on relatively few function evaluations [2]. This method produces a polynomial interpolant using Lagrange interpolating polynomials based on function values at points in a union of product grids of small dimension [10,11]. Using barycentric interpolation to evaluate the resulting polynomial [12], this method is a viable alternative to an expansion of f in terms of a sum of products of one-dimensional orthogonal polynomials. This latter approach is known as generalized polynomial chaos (gPC) or spectral decomposition and is obtained via standard weighted L2 techniques. However, the orthogonality implicit in the gPC representation often provides many advantages over the Lagrange representation, particularly in applications to differential equations, in which the gPC representation is closely related to spectral methods. Other advantages of the gPC representation include the ability to estimate convergence as more points are added to the sparse grid, and the ability to estimate variance-based sensitivity coefficients quickly and accurately [4]. A common approach to obtain the gPC coefficients is to use numerical integration by applying a cubature rule. However, cubature rules that integrate the product of two polynomials up to the degree in the sparse grid interpolant typically require either more or different points than found in the sparse grid itself.
In this paper, we provide an efficient algorithm for converting from the Lagrange interpolating polynomial to an equivalent gPC polynomial using only the function values at the sparse grid points. The foundation of this algorithm is a matrix factorization based on the fact that the sparse grid is a union of small product grids. More precisely, let Φ be the matrix obtained by evaluating each of the gPC basis functions (one per column) at each of the sparse grid points (one per row). This matrix produces the gPC coefficients, c, from the function values, f, by solving Φc = f for c. We show below that Φ^{-1} factors into a product of block-diagonal matrices in which each block corresponds to one of the small product grids composing the full sparse grid. We also demonstrate the utility of this method for calculating variance-based sensitivity coefficients on a model of T-cell signaling.
Methods for changing basis between sets of orthogonal polynomials are described in many places, including [13][14][15]. Most of these results focus on the case of polynomials of one variable, and none focuses on the case of polynomial bases for sparse grid interpolants. For further background, Boyd [16, Chapter 5] discusses the matrix multiplication transform for converting between Lagrange and spectral representations, while Chapter 10 discusses the special case of the Chebyshev polynomial expansion using the FFT.
Organization. In Section 2, we provide some background into Smolyak's algorithm for sparse grid interpolation and the calculation of Sobol' sensitivity coefficients. In Section 3.1, we give the factorization of Φ −1 and in Section 3.2 we describe the algorithm to produce the gPC representation based on function values at the sparse grid points and give an upper bound on the complexity of the algorithm. In Section 4.1, we give numerical results on running time and accuracy of the algorithm for a simple test function. In Section 4.2, we describe the application of this algorithm to compute Sobol' sensitivity coefficients for a model of T-cell signaling.
Sparse Grid Interpolation.
In this section, we provide some background on Smolyak's algorithm for sparse grid interpolation. The discussion here is based on [10]. The foundation for sparse grid interpolation is interpolation in one dimension using Lagrange interpolation as follows:

U^i(f)(x) = sum_{j=1}^{m_i} f(x^i_j) L^i_j(x),   (1)

where i ∈ N, x ∈ [−1, 1], and L^i_j is the Lagrange polynomial satisfying L^i_j(x^i_k) = δ_{jk}. A common choice for sparse grid interpolation is to use the Chebyshev-Gauss-Lobatto (CGL) points, in which case m_i = 2^{i−1} + 1 and x^i_j = −cos(π(j − 1)/2^{i−1}) for i > 1, since this choice provides a nesting property that is vital for efficient interpolation in higher dimensions. Moreover, m_1 = 1 and x^1_1 = 0 for this choice. Other choices, including Gauss-Patterson nodes and nodes on a (semi-)infinite interval as in [17], are also possible; the methods below apply for these choices as well. These one-dimensional formulas may be tensored in dimension d > 1 to yield

(U^{i_1} ⊗ · · · ⊗ U^{i_d})(f)(x) = sum_{1 ≤ j_k ≤ m_{i_k}} f(x^i_j) L^i_j(x),   (2)

where i and j are multi-indices with the componentwise partial order, 1 is the multi-index of all 1s, x^i_j = (x^{i_1}_{j_1}, . . . , x^{i_d}_{j_d}), and L^i_j(x) = L^{i_1}_{j_1}(x_1) · · · L^{i_d}_{j_d}(x_d). This formula requires m_{i_1} · · · m_{i_d} function values sampled on a product grid. Note, however, that when some of the i_k are 1, then this grid is of dimension less than d (since m_1 = 1).
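For concreteness, the nested CGL nodes defined above can be generated as follows; the nesting of successive levels is what makes the Smolyak construction efficient.

```python
import numpy as np

# Nested Chebyshev-Gauss-Lobatto points as defined in the text:
# m_1 = 1 with x = 0; for i > 1, m_i = 2^(i-1) + 1 and
# x_j = -cos(pi*(j-1)/2^(i-1)), j = 1, ..., m_i.
def cgl_points(i):
    if i == 1:
        return np.array([0.0])
    m = 2 ** (i - 1) + 1
    return -np.cos(np.pi * np.arange(m) / (m - 1))

for i in range(1, 4):
    print(i, cgl_points(i))   # each level contains the previous one (nesting)
```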
Linear combinations of these formulas produce the Smolyak formulas. Let U^0 = 0 and Δ^i = U^i − U^{i−1} for i ∈ N, and define |i| = i_1 + · · · + i_d. Then for q ≥ d, we have

A(q, d) = sum_{|i| ≤ q} (Δ^{i_1} ⊗ · · · ⊗ Δ^{i_d}),   (3)

where Δ^i denotes the tensor product of the Δ^{i_k}. A multinomial expansion and q = d + k produce

A(q, d) = sum_{q−d+1 ≤ |i| ≤ q} (−1)^{q−|i|} C(d−1, q−|i|) (U^{i_1} ⊗ · · · ⊗ U^{i_d}).   (4)

This formula is known as the combination technique, which first appeared in connection with sparse grids in [18]. When d > k, as is common for large d, we can replace q − d + 1 ≤ |i| (that is, k + 1 ≤ |i|) by d ≤ |i|. An anisotropic version of this formula is described in [19]. The formula for this version is largely the same as (3), with the index set d ≤ |i| ≤ q replaced by a more general index set I, characterized by the property that if i ∈ I and i_k > 1, then i − e_k ∈ I, where e_k has 1 in the k-th position and zeros elsewhere. Important results from [10] include that A(d + k, d)(p) = p for all polynomials p of degree at most k, and that if the derivatives D^α f are continuous for all multi-indices α with α_j ≤ k for all j, then

||f − A(q, d)(f)||_∞ = O(N^{−k} (log N)^{(k+1)(d−1)}),

where N = n(q, d) is the number of points in the sparse grid, H(q, d), for A(q, d). There is a similar error estimate for a weighted L2 norm of the Chebyshev expansion of f − A(q, d)(f). They note also that n(d + k, d) is asymptotic to 2^k d^k / k! as d tends to ∞, in the sense that the ratio tends to 1. More precise estimates on convergence may be found in [20].
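The combination-technique weights in Eq. (4) are easy to tabulate directly; the short sketch below enumerates the admissible multi-indices and their weights.

```python
from itertools import product
from math import comb

# Combination-technique weights for A(q, d) as in Eq. (4): the tensor rule
# U^{i_1} x ... x U^{i_d} enters with weight (-1)^(q-|i|) * C(d-1, q-|i|)
# for multi-indices i >= 1 with q-d+1 <= |i| <= q.
def combination_weights(q, d):
    weights = {}
    for i in product(range(1, q - d + 2), repeat=d):
        s = sum(i)
        if q - d + 1 <= s <= q:
            weights[i] = (-1) ** (q - s) * comb(d - 1, q - s)
    return weights

print(combination_weights(q=5, d=3))
# e.g. for d = 3: |i| = 5 gets weight +1, |i| = 4 gets -2, |i| = 3 gets +1
```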
Global Sensitivity Analysis.
A common problem in analyzing complex biological models with unknown parameters is identifying which parameters play an important role in modulating the response of the model. The variance-based sensitivity coefficients of Sobol' [21] can provide a great deal of insight into this problem.
The calculation of these coefficients for a function f(x) = f(x_1, . . . , x_d), defined on a hypercube in R^d, starts by writing f as a normalized sum of functions that each depend on a specified subset of the variables:

f(x) = f_0 + sum_i f_i(x_i) + sum_{i<j} f_{ij}(x_i, x_j) + · · · + f_{1···d}(x_1, . . . , x_d),

with a weighted orthogonality condition imposed on pairs of functions in this decomposition, and a zero-mean condition in each variable separately imposed on each function individually. Depending on the context, this decomposition is known as the Sobol' or ANOVA decomposition of f or the HDMR representation of f. One method for obtaining this decomposition is by calculating the gPC expansion of f [4,7]. In fact, the gPC expansion is essentially a refinement of the above decomposition in which each component function in the sum above is written as a sum of products of one-dimensional orthogonal polynomials.
The Sobol' or variance-based sensitivity coefficients for $f$ are then defined in terms of the variances of the functions in the Sobol' decomposition. More precisely, let $\Gamma$ denote the hypercube $[-1,1]^d$ and assume that the decomposition above takes place on this set. Let $V = 2^d$ be the volume of $\Gamma$, and for a function $g$ on this set, define the variance of $g$ to be

$$D(g) = \frac{1}{V} \int_\Gamma g(x)^2\, dx - \left( \frac{1}{V} \int_\Gamma g(x)\, dx \right)^{\!2}.$$

In terms of the Sobol' decomposition given above, the main effect sensitivity coefficient for $x_i$ is $S_i = D(f_i)/D(f)$. This is generalized to interaction coefficients, $S_u$, by replacing $f_i$ by $f_u$, where $u$ is any subset of $\{1, \ldots, d\}$. Also, the total effect sensitivity coefficient $S_i^T$ is the sum of $S_u$ over all $u$ containing $i$.
As shown in [4], the gPC expansion of $f$ leads to an efficient method for calculating these sensitivity coefficients: $D(f_u)$ is simply the sum of squares of the coefficients of the gPC polynomials in the expansion of $f_u$. Hence, the change of basis algorithm described above provides an efficient way to estimate Sobol' sensitivity coefficients based on function values at sparse grid nodes. We simply evaluate the function at sparse grid nodes, change to a gPC basis using Legendre polynomials, and find the appropriate sums of squares of coefficients to calculate the $S_i$ and $S_i^T$.
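As a worked illustration of this sum-of-squares rule (our own sketch, assuming an orthonormal Legendre product basis and coefficients stored in a dict keyed by multi-degree):

```python
def sobol_from_gpc(coeffs):
    """Main and total effect Sobol' indices from the coefficients of an
    expansion in orthonormal (Legendre) polynomial products.

    coeffs maps a multi-degree tuple (n_1, ..., n_d) to its coefficient.
    For an orthonormal basis, D(f) is the sum of squared coefficients over
    all nonconstant terms, and D(f_u) sums those supported exactly on u.
    """
    d = len(next(iter(coeffs)))
    total_var = sum(c * c for n, c in coeffs.items() if any(n))
    main = [0.0] * d
    total = [0.0] * d
    for n, c in coeffs.items():
        support = [j for j, nj in enumerate(n) if nj > 0]
        if not support:
            continue  # the constant term carries no variance
        if len(support) == 1:
            main[support[0]] += c * c  # contributes to S_i
        for j in support:
            total[j] += c * c          # contributes to S_i^T
    return ([m / total_var for m in main],
            [t / total_var for t in total])

# f(x1, x2) = x1 + x1*x2 written in orthonormal Legendre terms (phi_1 = sqrt(3) x)
coeffs = {(1, 0): 1 / 3 ** 0.5, (1, 1): 1 / 3.0}
S, ST = sobol_from_gpc(coeffs)  # S = [0.75, 0.0], ST = [1.0, 0.25]
```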
Change of Basis Matrix and Factorization.
In this section, we describe the matrix factorization that is central to the efficient change to a gPC basis.
Let $p_n$, $n \ge 0$, be a set of polynomials, each of degree $n$, orthogonal on the interval $[-1,1]$ with respect to the inner product $\int_{-1}^{1} f(x)\, g(x)\, w(x)\, dx$, where $w$ is a nonnegative weight function that is positive in the interior of the interval (straightforward modifications allow for unbounded intervals, but for clarity we restrict to the finite interval).
The expansion in (4) together with (1) gives a decomposition of $A(q,d)$ into a sum of products of Lagrange polynomials, with each $\mathbf{i}$ corresponding to a full product grid, $X_\mathbf{i}$ (although usually $X_\mathbf{i}$ is less than $d$-dimensional, since each entry of $\mathbf{i}$ that is 1 produces only a single point for the corresponding dimension). On this grid, each nontrivial coordinate direction $k$ has $i_k > 1$, and the number of points in this direction is $m_{i_k} = 2^{i_k - 1} + 1$. Hence, the degree of each $\ell_j^{i_k}$ is $m_{i_k} - 1$, so we write each as a linear combination of $p_0, \ldots, p_{m_{i_k}-1}$. For a fixed value of $i_k$, we have

$$\ell_j^{i_k} = \sum_{n=0}^{m_{i_k}-1} c_{jn}\, p_n. \quad (8)$$

Taking tensor products as in (2), we obtain a basis in terms of tensor products of the $p_n$ for interpolation on the product grid $X_\mathbf{i}$. Taking a union over the $\mathbf{i}$ that appear implicitly in (4) via $S$, we obtain a basis for interpolation on the entire sparse grid. The problem then is to determine the (gPC) coefficients in this basis for specified function values, $\mathbf{f}$, at the sparse grid nodes.
Conceptually, the simplest approach to finding the gPC coefficients starts by evaluating each basis function at each point in $X = X(q,d)$. These values form a matrix, $\Phi$, with each basis function corresponding to one column and each point in $X$ corresponding to one row. With this definition, and a gPC coefficient vector, $a$, the product $\Phi a$ gives the value at each point in $X$ of the polynomial determined by the coefficients in $a$. Hence, one can solve the interpolation problem by solving $\Phi a = \mathbf{f}$ for $a$. Of course, simply constructing this matrix takes $O(N^2)$ operations, where $N$ is the number of sparse grid points, while solving the system in this form using elimination takes $O(N^3)$ operations.
The alternative presented here is to give a factorization of $\Phi^{-1}$ based on (4).
Theorem 2. The change-of-basis matrix $\Phi^{-1}$ has a factorization

$$\Phi^{-1} = P^T \hat{D}\, \tilde{\Phi}^{-1} P,$$

where $\tilde{\Phi}$ is block-diagonal, with each block corresponding to a subgrid, $X_\mathbf{i}$; $\hat{D}$ is diagonal; and $P$ is a matrix that includes $\mathbb{R}^N$ into $\mathbb{R}^M$ for some $M > N$.
Proof. To start, we order the columns of $\Phi$ so that for each multi-index, $\mathbf{i}$, the set of column indices of basis functions for interpolation on $X_\mathbf{i}$ from (8) is the same as the set of row indices for points in $X_\mathbf{i}$. Let $N$ be the number of points in $X$ and $N_\mathbf{i}$ the number in $X_\mathbf{i}$. Let $P_\mathbf{i}$ be the projection from $\mathbb{R}^N$ to $\mathbb{R}^{N_\mathbf{i}}$ obtained by restricting to the indices corresponding to $X_\mathbf{i}$. In this case, $\Phi_\mathbf{i} = P_\mathbf{i} \Phi P_\mathbf{i}^T$ is the matrix obtained by evaluating basis functions at points, with the basis functions and points restricted to those associated with $X_\mathbf{i}$. Hence, given $\mathbf{f} = f(X)$, we can interpolate on $X_\mathbf{i}$ by solving

$$\Phi_\mathbf{i}\, a_\mathbf{i} = P_\mathbf{i} \mathbf{f}. \quad (10)$$

Since the gPC basis functions associated with $X_\mathbf{i}$ were obtained by changing basis from the Lagrange representation on $X_\mathbf{i}$, we see that the tensor-product interpolant and the polynomial with gPC coefficients $a_\mathbf{i} = \Phi_\mathbf{i}^{-1} P_\mathbf{i} \mathbf{f}$ are the same polynomial. Hence, we may extend the equality in (10) to all of $X$ by replacing $\Phi_\mathbf{i}$ by $\Phi$ on the left and dropping $P_\mathbf{i}$ on the right to obtain

$$\left(U^{i_1} \otimes \cdots \otimes U^{i_d}\right)(f)(X) = \Phi P_\mathbf{i}^T \Phi_\mathbf{i}^{-1} P_\mathbf{i}\, \mathbf{f}. \quad (11)$$

Let $\mu_\mathbf{i}$ denote the weight for $X_\mathbf{i}$ in (4). Using (11) with (4), we have

$$\mathbf{f} = A_S(d)(f)(X) = \Phi \sum_{\mathbf{i} \in S} \mu_\mathbf{i}\, P_\mathbf{i}^T \Phi_\mathbf{i}^{-1} P_\mathbf{i}\, \mathbf{f}.$$

Recalling that this holds for every $\mathbf{f}$, we obtain

$$\Phi^{-1} = \sum_{\mathbf{i} \in S} \mu_\mathbf{i}\, P_\mathbf{i}^T \Phi_\mathbf{i}^{-1} P_\mathbf{i}. \quad (13)$$

Let $\mathbf{i}_1, \ldots, \mathbf{i}_r$ be an enumeration of the indices in $S$, let $P$ be the matrix obtained by vertical concatenation of $P_{\mathbf{i}_1}, \ldots, P_{\mathbf{i}_r}$, let $\tilde{\Phi}$ be the block diagonal matrix with $\Phi_{\mathbf{i}_1}, \ldots, \Phi_{\mathbf{i}_r}$ as blocks, and let $\hat{D}$ be the diagonal matrix with diagonal entries $\mu_{\mathbf{i}_1}, \ldots, \mu_{\mathbf{i}_r}$, each repeated according to the size of the corresponding block. Then (13) gives the factorization

$$\Phi^{-1} = P^T \hat{D}\, \tilde{\Phi}^{-1} P. \quad (14)$$

3.2. Algorithm and Complexity.
Based on the previous section, the algorithm to find the coefficient vector, $a$, representing the sparse-grid interpolating polynomial in the gPC basis is simply to use (14) to find $a = \Phi^{-1} \mathbf{f}$. We avoid the matrix inverse by defining $\hat{\mathbf{f}} = P \mathbf{f}$, solving $\tilde{\Phi} \hat{y} = \hat{\mathbf{f}}$ for $\hat{y}$, and then using $a = P^T \hat{D} \hat{y}$.
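A compact sketch of this three-step solve, exploiting the column ordering used in the proof so that each subgrid's basis indices coincide with its point indices, might look as follows (our own illustration; the construction of the blocks themselves is omitted):

```python
import numpy as np

def gpc_coefficients(f_vals, subgrids):
    """Blockwise application of a = P^T D_hat Phi_tilde^{-1} P f.

    f_vals: float array of length N with function values at the sparse grid.
    subgrids: list of (weight, idx, Phi_block), one per multi-index i, where
    weight is mu_i, idx is an integer array selecting the points of X_i, and
    Phi_block is the N_i x N_i matrix of that subgrid's gPC basis evaluated
    at its own points.
    """
    a = np.zeros_like(f_vals)
    for weight, idx, Phi_block in subgrids:
        f_hat = f_vals[idx]                        # P_i f
        y_hat = np.linalg.solve(Phi_block, f_hat)  # Phi_i^{-1} P_i f
        a[idx] += weight * y_hat                   # accumulate mu_i P_i^T (...)
    return a
```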
In this section, we show that for fixed $k$ in (4) and large dimension, $d$, the complexity of this algorithm is linear in the number of points of evaluation. We frame this result as a corollary to the following proposition, which gives a bound on the total number of operations needed to compute on each subgrid, $X_\mathbf{i}$, in turn, as needed to solve $\tilde{\Phi} \hat{y} = \hat{\mathbf{f}}$ for $\hat{y}$.
In both of the following results, $n(d+k,d)$ is the number of points in the sparse grid associated with $A(d+k,d)$.
Proposition 3. For fixed $k$, there is a constant $c_k > 0$ so that for all $d > k$,

$$\sum_{d \le |\mathbf{i}| \le d+k} N_\mathbf{i} \le c_k\, n(d+k,d),$$

where $N_\mathbf{i}$ denotes the number of points in the product grid $X_\mathbf{i}$. In this proposition, we use the fact mentioned above that when $d > k$, the sparse grid is a union of $X_\mathbf{i}$ over the $\mathbf{i}$ indicated in the sum given here. Given this result, the following theorem is nearly immediate.
Theorem 4. For fixed $k$, there is $c > 0$ so that for $d > k$, the coefficients in the gPC expansion of $A(d+k,d)(f)$ may be found using (14) with computation time bounded by $c\, n(d+k,d)$; that is, for fixed $k$, the running time is linear in the number of grid points.
We have not attempted to calculate the best possible constant $c$ in the proof of this result. In fact, numerical results given below show that in some cases the coefficient actually decreases as $d$ increases, due to the spread of fixed overhead costs over a small number of points when $d$ and $k$ are small. Additionally, the numerical results show that the time per point evaluated is roughly constant through $k = 8$.
Note that this algorithm computes the gPC coefficients of the interpolating polynomial rather than of the original function itself. However, for a function, $f$, with $k \ge 1$, the error estimate from (5) implies that the interpolating function converges uniformly to $f$ as the depth, $k$, increases. Since the gPC coefficients are obtained by weighted integration of $f$ against the orthogonal polynomials, the gPC coefficients for the interpolating polynomials converge to the gPC coefficients for $f$, and (5) provides a means to estimate the error in these coefficients based on the set of orthogonal polynomials and their corresponding weights.
Proof of Proposition 3. Given $|\mathbf{i}| = d + m$ with $m$ between $0$ and $k$, consider the number of points in the corresponding grid $X_\mathbf{i}$. Let $s = s(\mathbf{i})$ be the number of nontrivial entries in $\mathbf{i}$ (i.e., entries larger than 1). Suppose without loss of generality that the first $s$ entries of $\mathbf{i}$ are nontrivial, with entries $i_j = e_j + 1 \ge 2$. Then, the number of points in this grid is $\prod_{j=1}^{s} (2^{e_j} + 1)$. If $e_1$ is the only entry larger than 1, then $e_1 = m - (s-1)$, and so the number of points is exactly $3^{s-1}(2^{m-s+1}+1)$. A simple counting argument implies that any other distribution of nontrivial entries produces no more grid points, so

$$N_\mathbf{i} \le 3^{s-1}\left(2^{m-s+1}+1\right).$$

To construct $\mathbf{i}$ with $|\mathbf{i}| = d + m$ and $s$ nontrivial entries, we select $s$ out of $d$ nontrivial entries, then distribute $m$ elements to these entries so that each entry gets at least one element. There are $\binom{d}{s}\binom{m-1}{s-1}$ ways to do this. Summing from $s = 1$ to $m$, and using $\binom{d}{s} \le d^s$, we have

$$\sum_{|\mathbf{i}| = d+m} N_\mathbf{i} \le \sum_{s=1}^{m} d^s \binom{m-1}{s-1}\, 3^{s-1}\left(2^{m-s+1}+1\right) \le 4d \sum_{s=1}^{m} \binom{m-1}{s-1} (3d)^{s-1}\, 2^{m-s}.$$
The summation on the right-hand side is the binomial expansion of a power of degree $m-1$, which converts the right-hand side to $4d(3d+2)^{m-1}$. Using this in place of the sum over $\mathbf{i}$ and summing over $m$ gives

$$\sum_{d \le |\mathbf{i}| \le d+k} N_\mathbf{i} \le \sum_{m=0}^{k} 4d\,(3d+2)^{m-1} \le 2 \cdot 4\,(2 \cdot 3)^{k-1} d^k.$$

As noted above, $n(d+k,d)$ is asymptotically equal to $2^k d^k / k!$ for large $d$, in the sense that for fixed $k$, $\lim_{d \to \infty} n(d+k,d)/(2^k d^k / k!) = 1$. In particular, there is $c > 0$ so that $d^k \le c\, n(d+k,d)$ for all $d \ge 1$. Hence, the results just given imply

$$\sum_{d \le |\mathbf{i}| \le d+k} N_\mathbf{i} \le 2c \cdot 4\,(2 \cdot 3)^{k-1}\, n(d+k,d).$$

Hence, taking $c_k = 2c \cdot 4\,(2 \cdot 3)^{k-1}$, we obtain the desired result.
Proof of Theorem 4. Given the sparse grid, $X$, associated with $A(d+k,d)$ and the vector of values, $\mathbf{f}$, we need first to form the matrices $P$, $\hat{D}$, and $\tilde{\Phi}$ as in (13). From the construction of $P$, it has exactly one nonzero entry per row and is $M \times N$, where $N = n(d+k,d)$ is asymptotic to $2^k d^k / k!$ as $d$ increases [10], and $M = \sum_{d \le |\mathbf{i}| \le d+k} N_\mathbf{i}$. Hence, $P$ and the diagonal matrix, $\hat{D}$, may be constructed and applied in time $O(M)$. A given block of $\tilde{\Phi}$ is obtained by evaluating products of one-dimensional polynomials at the points of a product grid, $X_\mathbf{i}$. For a nontrivial entry $i_k = 1 + e_k$, the polynomials in (8) have degree at most $2^{e_k}$. Using a three-term recurrence, these polynomials may be evaluated at a given point in $O(2^{e_k})$. Since the projection of $X_\mathbf{i}$ to the $k$th coordinate has $2^{e_k} + 1$ points, these polynomials are evaluated in time $O(2^{2e_k})$. This is repeated for each nontrivial $k$ for a total of $O(\sum_k 2^{2e_k})$, which is $O(N_\mathbf{i}^2)$. Then for each point in $X_\mathbf{i}$, we multiply at most $k$ polynomials, which is $O(k\, N_\mathbf{i})$. Summing over $\mathbf{i}$ and applying Proposition 3, we see that $\tilde{\Phi}$ may be constructed in time $O(n(d+k,d))$.
As noted above, to find the gPC coefficients, $a$, we define $\hat{\mathbf{f}} = P\mathbf{f}$, solve $\tilde{\Phi}\hat{y} = \hat{\mathbf{f}}$ for $\hat{y}$, and then use $a = P^T \hat{D} \hat{y}$. Since $\tilde{\Phi}$ has blocks of size $N_\mathbf{i} \times N_\mathbf{i}$, we may solve each block in time at most $O(N_\mathbf{i}^3)$. Combining this with Proposition 3 and the estimates $O(M)$ and $O(n(d+k,d))$ above, we obtain the theorem.
A practical point is that many of the blocks in $\tilde{\Phi}$ are identical up to a permutation of the rows. Hence, for each family of such blocks, we may use a single LU decomposition and appropriate permutations of the entries in $\mathbf{f}$ to reduce the total running time (although not, of course, the linear dependence described in the theorem).
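For instance, one could cache one factorization per distinct block along the following lines (a sketch using SciPy; the grouping key and the handling of row permutations are left as assumptions):

```python
from scipy.linalg import lu_factor, lu_solve

def solve_blocks(blocks, rhs_list, keys):
    """Solve Phi_i y_i = f_i for many blocks, factoring once per distinct key.

    keys identify blocks that are equal (e.g., the sorted pattern of the
    nontrivial entries of the multi-index); permuted variants would also need
    the matching permutation applied to the right-hand side."""
    cache = {}
    out = []
    for block, rhs, key in zip(blocks, rhs_list, keys):
        if key not in cache:
            cache[key] = lu_factor(block)
        out.append(lu_solve(cache[key], rhs))
    return out
```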
We note also that the algorithm given here is compatible with the anisotropic adaptive sparse grids of [19]. The analysis given here shows that the time per point depends only on the maximum size of a block in the factorization of $\Phi^{-1}$, and this size is bounded by the maximum depth, $k$, in both the isotropic and the anisotropic cases.
Finally, the analysis for fixed $k$ and increasing $d$ is relevant in that as models become more complex, $d$ increases, while often the important behavior of a model can be captured by a relatively low-order approximation. An example of this is seen in the T-cell signaling model in Section 4.2.
4.1. Efficiency of Conversion.
In this section, we provide results of numerical experiments using the algorithm described in the previous sections. All computations were performed in MATLAB 7.7.0 on a Dell Precision PWS690 with an Intel Xeon running at 3 GHz with 3 GB of RAM.
To evaluate the running time of the conversion algorithm, we used the test function labeled oscillatory in [10],

$$f(x) = \cos\!\left(2\pi w_1 + \sum_{j=1}^{d} c_j x_j\right),$$

where the $c_j$ and $w_1$ are chosen at random as indicated in [22]. The domain of definition is $[0,1]^d$. The sparse grid for a given dimension and depth was created using the MATLAB package spinterp, version 5.1.1 [23]. The resulting function values were then used as input to the revised conversion algorithm using the Legendre polynomials as basis.
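For reference, the oscillatory family can be generated in a few lines; this is our own Python sketch of the test function only (the experiments above used MATLAB and spinterp), and the normalization of the random coefficients is an assumption rather than the exact convention of [22]:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_oscillatory(d):
    """Genz 'oscillatory' test family on [0, 1]^d with random c_j and w_1."""
    c = rng.random(d)
    c *= d / c.sum()  # assumed normalization of the difficulty of the function
    w1 = rng.random()
    return lambda x: np.cos(2 * np.pi * w1 + c @ np.asarray(x))

f = make_oscillatory(10)
print(f(np.full(10, 0.5)))  # one sample evaluation at the center of the cube
```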
In Figure 1, we plot the running time of the conversion algorithm for fixed order of accuracy, $k$, and increasing dimension. Figure 1(a) clearly shows the linear dependence on the number of points evaluated. Figure 1(b) shows the same data as a function of the number of dimensions. Here, the nonlinear increase for $k$ bigger than 1 is due to the nonlinear dependence of the number of points of evaluation on dimension. Nevertheless, for $k = 2$, the algorithm is reasonably fast even up to 100 dimensions.
In Figure 2, we plot the running time of the conversion algorithm for fixed dimension and increasing order of accuracy, $k$. Here, the bound in (18) implies that the running time is bounded by $c_1 c_2^{\,k}$ per point for some constants $c_1$ and $c_2$. However, Figure 2(a) shows that the deviation from linear is relatively small for small $k$. This is made more precise in Figure 2(b), which shows the time per point as a function of $k$. Perfect linear dependence would imply that the traces for different dimensions coincide. While this is essentially true for $d \ge 5$ in the data presented, there are significant deviations for small $d$ and $k$. This is due to fixed overhead time that must be averaged over fewer points in these cases, which gives rise to the decrease in time per point as dimension increases when $k \le 4$. Somewhat more surprisingly, the time per point (after discounting the fixed overhead) is essentially constant up to $k = 8$.
4.2. Sensitivity Analysis for T-Cell Signaling Model.
To illustrate the calculation of global sensitivity coefficients as described in Section 2.2, we consider a mathematical model introduced in [24] and examined further in [25]. This model describes the effect on T-cell activation of agonist and antagonist binding of major histocompatibility complex molecules, with both positive and negative feedback present in the signaling network. We use a form of this model that is a standard ODE system with 19 parameters, 37 state variables, and fixed initial conditions. We focus on the dynamics of phosphorylated ZAP, which plays an important role in positive feedback through the MEK/ERK kinase cascade. We take the uncertain parameter space to be a hypercube with sides given by $[\theta_i/5,\, 5\theta_i]$, where $\theta_i$ is the nominal $i$th parameter value. For purposes of constructing the gPC expansion and calculating sensitivity coefficients, we transform the parameters to log space. Hence, the function that we evaluate has the form $g(t; x)$, where $x$ gives the vector of parameters via $\theta_i\, 5^{x_i}$, with each $x_i \in [-1, 1]$, and $t \in [0, 2000]$ is time.
More precisely, for this example, the nodes of a sparse grid are vectors in $[-1,1]^{19}$, and each such node gives a corresponding parameter vector using the transformation just described. For each such parameter vector, we evaluate the model at 200 evenly spaced time points between 0 and 2000. For one of these fixed time points, say $t_j$, the model evaluations $g(t_j; x)$ over all the $x$ given by nodes of a sparse grid allow us to construct an interpolating polynomial, which can then be evaluated at any $x$ in the parameter space, even those that do not arise from nodes of the sparse grid. Of course the resulting interpolant is only an approximation to the true model, but the evaluation of the interpolating polynomial is in this case on the order of 20 times faster than evaluation by integrating the system of ODEs directly, and after the change of basis to gPC form, the interpolating polynomial can be used to compute sensitivity coefficients very quickly. Moreover, the remarks in Section 2 quantify the relationship between the number of nodes in the sparse grid and the accuracy of the approximation.
Over the selected parameter space, this model has dynamics that can be difficult to capture using interpolation methods. Depending on parameters, the dynamics can exhibit dramatically different time scales, and, moreover, there is a division of parameter space into regions that produce trajectories with relatively high terminal concentrations of pZAP and other regions with relatively low terminal concentrations; very few trajectories have terminal concentrations in an intermediate range. These features can be difficult to capture accurately using polynomial interpolation in the same way that step functions can be difficult to approximate with Fourier series. Figure 3 shows a set of sample trajectories of the full model along with trajectories obtained by interpolation using an increasing number of model evaluations. The parameter vectors for these trajectories were chosen randomly from the full parameter space and thus are not sparse grid nodes. These plots show both the difficulty in capturing the dynamics accurately (e.g., the overshoot in the trajectory that rises most quickly) and the fact that interpolation captures the main features of the dynamics even in this difficult case and with accuracy increasing with the number of points.
After evaluating the function on a sparse grid, we next computed the Sobol' sensitivity coefficients for each of the 19 parameters as a function of time. As described in Section 2.2, this is accomplished by first using the function values $g(t; x)$ for fixed $t$ to change to a gPC basis and thereby obtain an interpolating polynomial for $g(t; \cdot)$; the sensitivity coefficients for this time point are then obtained by summing the squares of the coefficients of appropriate terms in this polynomial. This is done for each $t$ to obtain the sensitivity coefficients as a function of time. The resulting curves are shown in Figure 4, using interpolating polynomials based on 100 function values (a) and 10000 function values (b). As seen there, the estimates of the sensitivity coefficients change in magnitude between the two panels, but the ranking of sensitive versus insensitive parameters is quite consistent between the two. Moreover, this ranking is nearly identical to that given by an independent estimation using a quasi-Monte Carlo method.
To demonstrate the utility of this sensitivity analysis, we note that all but 4 of the parameters have sensitivity coefficients that are below 0.1 for the entire time course. The 4 remaining parameters have 3 distinct patterns of influence on the sensitivity. The first pattern is due to the ZAP dephosphorylation rate, which gives rise to a sharp spike in sensitivity near the beginning of the time course (in black in Figure 4). This sensitivity at the start of the time course is related to the way in which the model departs from the fixed initial conditions; a larger dephosphorylation rate leads to a steeper initial decline in the level of pZAP. However, examination of trajectories arising from a range of values for this parameter shows that there is less than 2% variation in these trajectories over the part of the curve corresponding to the peak of sensitivity. Hence, although the sensitivity to this parameter is high in this region, the reason for this sensitivity is not that the trajectories vary greatly in response to changes in this parameter, but rather that changes to the other parameters lead to essentially no change to the output in the early part of the trajectory.
The next sensitivity pattern is a moderate hump in the time period of about 50 to about 500 (shown in green and red in Figure 4). These are due to the T-cell receptor (TCR) phosphorylation rate (the upper, green curve) and the ZAP phosphorylation rate (the lower, red curve). Not surprisingly, with the remaining parameters held fixed, increased values for either of these rates generally lead to significantly increased levels of pZAP from about time 50 onward.
The final sensitivity pattern is shown in Figure 4 in the blue curve that rises around time 500, which is due to the SHP phosphorylation rate. SHP plays a role in a negative feedback loop, and this nonlinear dependence gives rise to complex and often long-lasting influences on the trajectories.
To indicate the utility of the sensitivity analysis in capturing the dynamics of this model, we took the parameter values used in Figure 3, changed all but the 4 parameters discussed as most sensitive back to the nominal values of the original model, and then resimulated. The resulting curves in Figure 5(a) indicate that this simple procedure captures the majority of the dynamical features of the original trajectories of Figure 3(a). We then used sparse grid interpolation on the 4-dimensional grid corresponding to these 4 parameters and produced interpolated curves as in Figure 3. The smaller dimension here means that far fewer points are needed to accurately capture the trajectories relative to the 19-dimensional case examined earlier.
Conclusions
The algorithm described here provides an efficient method to convert function values into a gPC approximation of a given function and thereby to estimate global Sobol' sensitivity coefficients. The method relies on a block-diagonal factorization of the matrix for changing basis from Lagrange polynomials to orthogonal polynomials based on function values at a Smolyak sparse grid. For a fixed degree of accuracy and increasing dimension, this algorithm is linear in the number of points of evaluation. Moreover, for fixed dimension, the time per point of evaluation is nearly constant as the degree of accuracy increases up to about $k = 8$.
We applied this method to a time-varying model of T-cell signaling and showed that the majority of the dynamics of this model could be explained by variation in only 4 parameters: the ZAP dephosphorylation rate, and the ZAP, TCR, and SHP phosphorylation rates. Taken together, the combination of efficient sensitivity analysis and interpolation provides an easily applied and efficient method for understanding the main dynamics of a complex biological model.
"Computer Science"
] |
Mitochondrial Drp1 recognizes and induces excessive mPTP opening after hypoxia through BAX-PiC and LRRK2-HK2
Mitochondrial mass imbalance is one of the key causes of cardiovascular dysfunction after hypoxia. The activation of dynamin-related protein 1 (Drp1), as well as its mitochondrial translocation, play important roles in the changes of both mitochondrial morphology and mitochondrial functions after hypoxia. However, in addition to mediating mitochondrial fission, whether Drp1 has other regulatory roles in mitochondrial homeostasis after its mitochondrial translocation is unknown. In this study, we performed a series of interaction and colocalization assays and found that, after mitochondrial translocation, Drp1 may promote the excessive opening of the mitochondrial permeability transition pore (mPTP) after hypoxia. Firstly, mitochondrial Drp1 maximally recognizes mPTP channels by binding Bcl-2-associated X protein (BAX) and the phosphate carrier protein (PiC) in the mPTP. Then, leucine-rich repeat serine/threonine-protein kinase 2 (LRRK2) is recruited, whose kinase activity is inhibited by direct binding with mitochondrial Drp1 after hypoxia. Subsequently, the mPTP-related protein hexokinase 2 (HK2) is inactivated at Thr-473 and dissociates from the mitochondrial membrane, ultimately causing structural disruption and overopening of the mPTP, which aggravates mitochondrial and cellular dysfunction after hypoxia. Thus, our study demonstrates the dual, direct regulation by mitochondrial Drp1 of mitochondrial morphology and functions after hypoxia and proposes a new mitochondrial fission-independent mechanism for the role of Drp1 after its translocation in hypoxic injury.
INTRODUCTION
Hypoxic injury refers to tissue cell injury caused by insufficient blood perfusion and blood oxygen supply. It is a common pathway for the initiation and development of various critical illnesses, such as hemorrhagic shock, sepsis, and cardiopulmonary failure [1]. Severe hypoxic injury can cause organ dysfunction and may be life-threatening. The mitochondrion, as the main site of aerobic respiration in cells, is one of the organelles that undergo damage immediately after ischemia and hypoxia [2]. In our previous studies, we found significant mitochondrial damage in tissues and organs, such as blood vessels [3], the intestines [4], and the heart [5], after hypoxic injury. This damage mainly manifested as abnormal mitochondrial morphology (such as excessive fission, impaired fusion, etc.) and mitochondrial dysfunction (such as decreased ATP production, excessive ROS accumulation, etc.), which promoted organ dysfunction after hypoxia. However, the relationship between morphological and functional changes in the mitochondria following hypoxia has not been fully elucidated.
Dynamin-related protein 1 (Drp1), a mitochondrial fission-associated GTPase, is a classical protein affecting mitochondrial morphological changes. Under normal conditions, free-state Drp1 molecules are located in the cytoplasm. After hypoxic injury, activated Drp1 translocates from the cytoplasm to the mitochondrial surface and cleaves the mitochondrial phospholipid bilayer through its GTPase activity to facilitate mitochondrial fission, which then results in mitochondrial fragmentation [6]. However, whether Drp1 that has undergone mitochondrial translocation has roles in mitochondrial homeostasis other than regulating mitochondrial morphology after hypoxia is unknown.
The mitochondrial permeability transition pore (mPTP) is a nonselective, highly conductive composite channel that spans the inner and outer mitochondrial membranes and allows the passage of any molecule with a relative molecular mass lower than 1.5 kDa [7]. Excessive opening of the mPTP results in mitochondrial swelling due to high osmotic pressure in the matrix, which leads to mitochondrial outer membrane rupture and mitochondrial morphology disruption [8]. Excessive opening of the mPTP also causes reduced mitochondrial membrane potential and massive cytochrome C (CytC) release due to loss of ion flow selectivity, which then aggravates the accumulation of mitochondrial reactive oxygen species (ROS), ultimately leading to cell death [9]. Thus, excessive mPTP opening is considered a prerequisite for mitochondrial morphological and functional changes, but the specific regulatory mechanism is unknown. We have previously shown that activated Drp1 can regulate mitochondrial functions, such as ROS production and mitochondrial metabolism, through a variety of mitochondrial fission-independent pathways after hypoxic injury [3,4]; however, whether mitochondrial Drp1 directly regulates the opening of mPTP channels has not been reported.
In this study, we investigated the direct regulatory mechanisms of mitochondrial Drp1 on mPTP opening after hypoxic injury and explored its links with the classical Drp1 mitochondrial fission-dependent pathway. Our study clarifies the close relationship between mitochondrial morphology and mitochondrial functions after hypoxia and establishes the central position of the Drp1-mPTP pathway in the dual regulation of mitochondrial morphology and mitochondrial functions after hypoxia.
mPTP opening detection
The opening of the mPTP was determined by the Calcein-CoCl 2 staining method using confocal microscopy according to a previously reported protocol [11]. In brief, the treated cells were incubated with 2 μM Calcein and 100 nM MitoTracker Deep Red for 30 min in the absence of light. The cells were subsequently washed twice with PBS and then exposed to 2 mM CoCl 2 for 15 min. The Calcein fluorescence is compartmentalized within the mitochondria until mPTP opening permits the distribution of cobalt inside the mitochondria, which results in the quenching of Calcein fluorescence in the mitochondrial matrix. After three washes with PBS, fluorescence imaging of cells was performed with excitation at 488 nm (Calcein) or 633 nm (MitoTracker) and emission at 510-550 nm (Calcein) or 560-617 nm (MitoTracker) using the Leica TCS software (Leica Microsystems, Wetzlar, Germany). Digital images were analyzed using the Image J software (National Institutes of Health, Bethesda, MD, USA), and the degree of mPTP opening was reflected by the red (MitoTracker)/green (Calcein) fluorescence ratio.
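As a rough illustration of this read-out, the ratio can be computed from the two channel images along these lines (our own sketch; the background subtraction and masking threshold are assumptions, not the authors' exact ImageJ workflow):

```python
import numpy as np

def mptp_opening_ratio(mitotracker_img, calcein_img, bg_percentile=10):
    """Red (MitoTracker) / green (Calcein) intensity ratio within mitochondria.

    Both inputs are 2-D arrays from the 633 nm and 488 nm channels.
    A higher ratio reflects more Calcein quenching, i.e., more mPTP opening.
    """
    red = mitotracker_img.astype(float)
    green = calcein_img.astype(float)
    # crude per-channel background subtraction
    red -= np.percentile(red, bg_percentile)
    green -= np.percentile(green, bg_percentile)
    # restrict to a mitochondrial mask derived from the MitoTracker channel
    mask = red > red.max() * 0.2
    return red[mask].sum() / max(green[mask].sum(), 1e-9)
```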
Subcellular fractionation
Isolated SMAs or VSMCs were collected in filter cartridges. The cytosol fractions were isolated using a Minute TM Cytoplasmic Extraction Kit (Invent Biotechnologies, Inc. SC-003), and the mitochondria fractions were isolated using the Minute TM Mitochondria Isolation Kit (Invent Biotechnologies, Inc. MP-007) [12]. Fractioned proteins were used for immunoblotting analyses with the indicated antibodies.
Co-immunoprecipitation (Co-IP)
Co-IP was performed using the Protein A/G Magnetic Beads IP Kit according to the manufacturer's instructions. 10 μg primary antibody was diluted with 200 μl PBST in a tube. 50 μl Protein A/G Magnetic Beads was added into the mixture and then incubated for 10 min at room temperature, and the antibody-conjugated immunomagnetic beads were prepared after removing the supernatant on the magnetic separator. After the samples were harvested, lysed and centrifuged, the supernatants were gently mixed with antibody-conjugated immunomagnetic beads to prepare an immunomagnetic beads-antibody-antigen complex. After washing the beads with PBS three times, the above complex was resuspended in 100 μl PBS and was used to detect endogenous interaction between proteins.
Western blotting
Cell pellets were solubilized with the RIPA buffer (Beyotime Institute of Biotechnology, China) with the addition of cOmplete Protease Inhibitors (Roche, Switzerland) and PhosSTOP Phosphatase Inhibitors (Roche, Switzerland), electrophoresed, and blotted onto polyvinylidene fluoride membranes. The membranes were incubated with the indicated primary antibodies, followed by incubation with horseradish peroxidase-conjugated secondary antibodies (Jackson ImmunoResearch, UK). The protein concentration was determined with the BCA Protein assay kit (Thermo Scientific Pierce, UK). Blotted proteins were visualized using an enhanced chemiluminescence detection kit (Tiangen Biotech, China). The intensity of the bands was analyzed by Quantity One V 4.62 software (Bio-Rad, Life Science, CA, USA) [3].
Immunofluorescence staining
VSMCs were plated in the confocal chamber and incubated with MitoTracker Deep Red (100 nM for 30 min, 37°C). After washing twice in 1× phosphate-buffered saline (PBS), VSMCs were fixed in a 4% paraformaldehyde solution for 20 min at room temperature. Cells were permeabilized with 0.1% Triton X-100 in 1× PBS for 5 min at room temperature. Cells were then blocked in a 5% BSA solution for 1 h at room temperature, washed, and incubated overnight at 4°C with primary antibodies. Cells were then washed in PBS plus 0.1% Tween-20 and incubated with corresponding fluorophore-conjugated mouse or rabbit secondary antibodies (Invitrogen, Carlsbad, CA, USA) for 1 h at room temperature. Cells were washed as before with a final wash in 1× PBS alone and incubated with DAPI (BD Biosciences, Franklin Lakes, NJ, USA) (1:50) for 5 min at room temperature. Immunofluorescence was visualized using confocal laser-scanning microscopy (Leica SP5, Germany) [10].
Proteome microarray detection
Biotin-tagged Drp1 was synthesized by KMD Bioscience (Tianjin, China). Arrayit HuProt™ v2.0 19K Human Proteome Microarrays (CDI Laboratories, Baltimore, MD, USA) were used to identify Drp1-interacting proteins. Microarrays were blocked with blocking buffer (1% BSA in 0.1% Tween 20; TBST) for 1 h at room temperature with gentle agitation. 50 µM biotin-tagged Drp1 or free biotin was added and incubated on the proteome microarray at room temperature for 1 h. The microarrays were washed with TBST three times for 5 min each wash and were incubated with Cy3-Streptavidin (1:1000, Sigma, St Louis, MO, USA) for 1 h at room temperature, followed by three 5-min washes in TBST. The microarrays were spun dry at 250 × g for 3 min and were scanned with a GenePix 4200A microarray scanner (Axon Instruments) to visualize and record the results [13]. The signal-to-noise ratio (SNR) was defined as the ratio of the median of the foreground signal to the median of the background signal and was calculated for each protein. The SNR of a protein was averaged from the two duplicated spots on each microarray, and the mean SNR was used to represent the signal of the protein. To call the candidates, the cutoff of SNR was set as a ratio >3.0. Data were analyzed by GenePix Pro 6.0.
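The candidate-calling step amounts to a few lines of array arithmetic (our own sketch of the computation described above; the array layout is an assumption):

```python
import numpy as np

def call_candidates(foreground, background, cutoff=3.0):
    """foreground, background: arrays of shape (n_proteins, 2) holding the
    median foreground/background signal of the two duplicated spots.
    Returns a boolean mask of proteins whose mean SNR exceeds the cutoff."""
    snr = foreground / background   # per-spot signal-to-noise ratio
    mean_snr = snr.mean(axis=1)     # average the two duplicated spots
    return mean_snr > cutoff
```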
Homology modeling and protein docking
The three-dimensional structures of the LRRK2 and DRP1 proteins were constructed with the SWISS-MODEL web server (https://swissmodel.expasy.org/) using the homology modeling approach. A truncated sequence of the LRRK2 protein from G1900 to G2090 was used due to the great length of this protein. The GRAMM-X Protein-Protein Docking web server was utilized to predict the complex structure of the LRRK2 and DRP1 proteins. LRRK2 was defined as a ligand due to its smaller size, whereas DRP1 was defined as a receptor, and at most 10 models were output. Among the output models, the first approximated the experimental result best and was thus used as the final model. Since the GRAMM-X models were coarse, and unfavorable atomic clashes were found between receptor and ligand, the model was further minimized in UCSF Chimera with additional hydrogens and an AMBER ff14SB force field. Finally, PyMOL was used for visual analysis.
Statistical analysis
Data are expressed as means and standard deviations. For each set of experiments, the sample size was chosen to ensure adequate power to detect variations. An independent sample t-test was used for experiments with two groups. One-way analysis of variance (ANOVA) was used for experiments with more than two groups and followed by Tukey's post hoc analysis and SNK or LSD comparison using the SPSS 17.0 software (SPSS Inc., Chicago, IL, USA). P < 0.05 was considered to indicate a statistically significant difference in all analyses.
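For reference, the two basic tests map directly onto standard SciPy calls (illustrative only; the analyses reported here were run in SPSS, and the data below are made up):

```python
from scipy import stats

group_a = [0.82, 0.91, 0.88, 0.79]  # hypothetical measurements
group_b = [0.55, 0.62, 0.58, 0.60]
group_c = [0.70, 0.74, 0.69, 0.72]

t, p_t = stats.ttest_ind(group_a, group_b)          # two groups
f, p_f = stats.f_oneway(group_a, group_b, group_c)  # more than two groups
print(p_t < 0.05, p_f < 0.05)                       # significance at P < 0.05
```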
Mitochondrial Drp1 affects mPTP opening after hypoxia
In our previous study, we have shown that in vascular tissue injured by ischemia and hypoxia, Drp1 largely translocated from the cytoplasm to mitochondria where it mediated mitochondrial fission and affected mitochondrial morphology [3]. Here, we aimed to determine whether mitochondrial Drp1 has other biological roles in addition to mediating mitochondrial fission after hypoxia. We performed MitoTracker and Calcein fluorescence labeling to monitor mitochondrial morphological changes and mPTP opening in hypoxia-induced VSMC. The confocal images showed that both excessive mPTP opening (as indicated by an increase in mitochondrial Calcein fluorescence quenching) and excessive mitochondrial fission (as indicated by a decrease in mitochondrial skeleton length) were observed after hypoxia (Fig. 1A).
To determine whether mPTP overopening is closely associated with Drp1 mitochondrial translocation after hypoxia, we reduced mitochondrial Drp1 levels using Mdivi-1 (50 μM), an inhibitor of Drp1 translocation [3], and found that the mitochondrial skeleton length and Calcein fluorescence intensity were respectively 2.19-fold and 7.28-fold higher than in the Hypoxia group (p < 0.05), suggesting that reducing mitochondrial Drp1 levels may improve both mitochondrial morphology and mPTP opening after hypoxia (Fig. 1B). However, after treatment with CsA (10 μM) [14], a specific inhibitor of mPTP opening, only hypoxia-induced mPTP overopening was significantly improved (p < 0.05); little effect was found on mitochondrial morphology after CsA treatment (Fig. 1B), which may be due to the inability of CsA to affect mitochondrial Drp1 levels (Fig. 1C). These results suggested that simply inhibiting mPTP opening without reducing Drp1 mitochondrial translocation may not affect the mitochondrial fission process after hypoxia. In addition, we also examined the profile of mitochondrial CytC release to reflect mPTP opening. Our results showed that reducing mitochondrial Drp1 levels with Mdivi-1 could significantly reduce mitochondrial CytC release (p < 0.05), indicating that mPTP overopening may be downstream of Drp1 mitochondrial translocation after hypoxia.
To investigate whether cytoplasmic Drp1 has a similar regulatory effect on mPTP opening, we overexpressed Drp1 in normal VSMC (Fig. S1) (Drp1 OE + Normal group) and found that the mitochondrial skeleton length was significantly shortened (p < 0.05), whereas the mPTP opening was not significantly changed (p > 0.05), when compared with the Vector + Normal group (Fig. 1E). These results suggested that cytoplasmic Drp1 may only influence the mitochondrial fission process and has no effect on mPTP opening regulation. We further accelerated Drp1 mitochondrial translocation by hypoxia in Drp1 OE-treated VSMC (Drp1 OE + Hypoxia group) and found that the opening of mPTP was significantly increased when compared with the Drp1 OE + Normal group (p < 0.05), and this effect could be reversed by supplementary CsA treatment (p < 0.05) (Fig. 1E). These results indicate that both cytoplasmic and mitochondrial Drp1 could affect mitochondrial morphology to some extent, but only mitochondrial Drp1 was involved in the regulation of mPTP opening after hypoxia.
Mitochondrial Drp1 interacts with mPTP through BAX and PiC
To understand the mechanism underlying the effects of mitochondrial Drp1 on mPTP opening after hypoxia, we first explored whether mitochondrial Drp1 interacts with the mPTP after hypoxia. Previous studies [15][16][17] have shown that the mPTP consists of various proteins spanning the inner and outer mitochondrial membranes. It consists of the voltage-dependent anion channel (VDAC), Bcl-2-associated X protein (BAX), and hexokinase 2 (HK2) in the outer mitochondrial membrane; adenine nucleotide translocase (ANT), the mitochondrial phosphate carrier (PiC), adenosine triphosphatase (ATPase) [18], and cyclophilin D (CypD) in the inner mitochondrial membrane; and creatine kinase (CK) between the inner and outer mitochondrial membranes [19] (Fig. 2A). We performed co-IP to screen the mPTP channel proteins for binding with Drp1 after hypoxia and found that more BAX and PiC bound to Drp1 in the hypoxia group than in the normal group (p < 0.05), and that this binding involved only mitochondrial Drp1 (Fig. 2B, C). More VDAC also appeared to bind Drp1 after hypoxia, but the amount of VDAC bound to Drp1 in the hypoxia group did not significantly differ from that in the normal group (p > 0.05) (Fig. 2D). These results suggested that mitochondrial Drp1 may target and interact with the mPTP after hypoxia by binding BAX and PiC.
To test our hypothesis, we performed Calcein fluorescence staining on hypoxia-induced VSMC treated with BAX activation inhibitor 1 (BAI1, a BAX inhibitor, 5 μM) or N-Ethylmaleimide (NEM, a PiC inhibitor, 5 μM) [20]. We found that mitochondrial Calcein fluorescence intensity increased to varying degrees after BAX or PiC inhibition in hypoxia-induced VSMC (p < 0.05) (Fig. 2E), suggesting that the binding of mitochondrial Drp1 to BAX and PiC is important for mPTP overopening after hypoxia. Additionally, the BAX agonist BTSA1 (15 μM) and the PiC agonist SEW2871 (15 nM) [21] were unable to significantly affect mitochondrial Calcein fluorescence intensity in hypoxia-induced VSMC when the levels of mitochondrial Drp1 were reduced using Mdivi-1 (p > 0.05) (Fig. 2E), which further suggested that the interaction of mitochondrial Drp1 with BAX and PiC is indispensable to the mechanism underlying mPTP overopening in response to hypoxia.
Mitochondrial Drp1 leads to mPTP overopening after hypoxia by promoting dissociation of HK2 from the mitochondrial membrane
To determine whether the mechanism by which mitochondrial Drp1 regulates mPTP overopening is also related to an altered mPTP structure, we first examined the expression of all mPTP-related proteins in different cellular components after hypoxia. Western blotting results showed that cytoplasmic HK2 expression was significantly elevated and mitochondrial HK2 expression was significantly reduced after hypoxia (p < 0.05) (Fig. 3A, B), suggesting that HK2 may have dissociated from the outer mitochondrial membrane after hypoxia. We next performed co-IP to examine the binding of HK2 to VDAC, an adjacent mPTP-related protein, after hypoxia. We found that the amount of VDAC-bound HK2 was significantly reduced after hypoxia. Meanwhile, inhibition of Drp1 mitochondrial translocation using Mdivi-1 raised the amount of VDAC-bound HK2, suggesting that the dissociation of HK2 from the mitochondrial membrane after hypoxia is related to the increase in mitochondrial Drp1 levels (Fig. 3C). Confocal images showed significantly elevated Drp1 and significantly reduced HK2 colocalization with the mitochondria after hypoxia (p < 0.05). After inhibition of Drp1 mitochondrial translocation using Mdivi-1, Drp1 colocalization with the mitochondria was significantly reduced, and HK2 colocalization with the mitochondria was significantly elevated (p < 0.05) (Fig. 3D, E). These results reflect the effect of mitochondrial Drp1 on HK2 dissociation from the mitochondria after hypoxia.
Further analysis revealed that the dissociation of HK2 from the mitochondrial membrane after hypoxia may result from HK2 inactivation. Western blotting results showed that HK2 Thr473 phosphorylation levels were significantly reduced after hypoxia (Fig. 3F). Further, an HK2 T473D substitution significantly attenuated the hypoxia-induced loss of HK2-VDAC binding (Fig. 3G) and mPTP overopening (Fig. 3H). These results suggest that the mechanism through which mitochondrial Drp1 leads to mPTP overopening is mediated by the dissociation of HK2 from the mitochondrial membrane, which may be due to reduced HK2 phosphorylation.
Mitochondrial Drp1 leads to HK2 dephosphorylation and mitochondrial dissociation after hypoxia by occluding the LRRK2 active site
To investigate how Drp1, which exhibits neither kinase nor phosphatase activity, affects HK2 activity after hypoxia, we performed high-throughput proteome microarray screening of biotin-labeled Drp1 to identify kinases or phosphatases that potentially interact with Drp1 after hypoxia. The microarray results suggested an interaction between Drp1 and leucine-rich repeat serine/threonine-protein kinase 2 (LRRK2) after hypoxia (Fig. 4A). Co-IP results confirmed that the amount of LRRK2-bound mitochondrial Drp1 significantly increased after hypoxia (p < 0.05), and that free cytoplasmic Drp1 hardly bound to LRRK2 (Fig. 4B). Similarly, confocal images suggested that mitochondrial Drp1 recruited large amounts of LRRK2 to the mitochondrial fission site after hypoxia, whereas there was little colocalization between cytoplasmic Drp1 and LRRK2 (Fig. 4C).
The GRAMM-X Protein-Protein Docking algorithm predicted hydrogen bonding between mitochondrial Drp1 and LRRK2 through a number of amino acid residues. The top three potential binding sites were Drp1 Thr595 with LRRK2 Gly2019, Drp1 Lys594 with LRRK2 Glu2033, and Drp1 Ala598 with LRRK2 Lys1906. Among these, the predicted hydrogen bond between Drp1 Thr595 and LRRK2 Gly2019 had the highest ZDOCK score of 1330.963 (Fig. 4D). To verify the specific binding site of mitochondrial Drp1 to LRRK2 after hypoxia and the effects of the Drp1-LRRK2 interaction on mPTP overopening, we introduced a T595A substitution into Drp1. We found that post-hypoxia binding of Drp1 to LRRK2 was significantly attenuated by the T595A substitution in Drp1 (p < 0.05) (Fig. 4E). The Drp1 T595A substitution also led to significantly elevated HK2 Thr473 phosphorylation (p < 0.05), significantly reduced HK2 mitochondrial dissociation (p < 0.05) (Fig. 4F), and significantly reduced mPTP channel opening (p < 0.05) (Fig. 4G). These results suggest that, following hypoxia, mitochondrial Drp1 leads to reduced HK2 Thr473 phosphorylation and mitochondrial HK2 dissociation by recruiting LRRK2 and blocking the LRRK2 G2019 active site, ultimately leading to mPTP overopening.
DISCUSSION
In this study, we found that, in addition to regulating mitochondrial morphology through the classical mitochondrial fission-dependent pathway, activated Drp1 undergoing mitochondrial translocation further recognizes and induces excessive mPTP opening after hypoxia. Firstly, mitochondrial Drp1 maximally recognizes mPTP channels by binding the mPTP structural proteins BAX and PiC. Then, mitochondrial Drp1 recruits LRRK2 to mitochondrial contraction or fission sites and blocks the LRRK2 kinase active site. This leads to HK2 dephosphorylation and its dissociation from the mitochondrial membrane, ultimately causing structural disruption and overopening of the mPTP, which aggravates mitochondrial and cellular dysfunction after hypoxia (Fig. 5).
The connection between mitochondrial morphology and mitochondrial function has been the focus of several studies in the field of mitochondrial regulation. The study of Han et al. [22] showed that hypoxia-induced ROS accumulation in cervical cancer cells promoted mitochondrial fission through the downregulation of Drp1 Ser637 phosphorylation. The study of Zhang et al. [23] showed that hypoxia-induced Drp1 Ser616 activation in pancreatic β-cells triggered the release of CytC and the activation of caspases, ultimately leading to pancreatic β-cell apoptosis. These studies suggest that Drp1 is a mechanochemical enzyme that is likely to play a role in the dual regulation of mitochondrial morphology and function after hypoxia. Consistent with our findings, a study on myocardial ischemia by Dhingra et al. [24] showed that Drp1 activation was accompanied by mPTP channel opening and reduced mitochondrial membrane potential after hypoxia. However, Dhingra et al. did not elucidate the direct effects and potential regulatory role of Drp1 on mPTP opening.
Here, we have shown that, following hypoxia, mitochondrial Drp1 promotes mPTP overopening through interactions with BAX-PiC and LRRK2-HK2, which aggravate the mitochondrial injury and cell death. Our study highlights the role of Drp1 in the regulation of mitochondrial morphology and function and suggests a novel mechanism by which Drp1 participates in the regulation of mitochondrial homeostasis after hypoxia.
Previous studies [25,26] have shown that Drp1 can interact with a variety of proteins, such as mitochondrial dynamics proteins (MiD49 and MiD51), the mitochondrial fission factor (MFF), and the mitochondrial fission 1 protein (Fis1). Most of these proteins are distributed on the outer mitochondrial membrane and act as receptors for Drp1 after its translocation to the mitochondria. These proteins are considered important for Drp1-mediated mitochondrial fission. Using a shock model, we have previously demonstrated that Drp1 binds BAX after ischemia and hypoxia, causing increased CytC release and caspase 3/9 activation, which promotes cell death after hypoxia [3]. In the present study, we showed that the Drp1-induced cell death was primarily related to the overopening of mPTP and that this was mediated by the interaction of Drp1 with BAX in the outer mitochondrial membrane and PiC in the inner mitochondrial membrane following hypoxia. Studies have shown that PiC plays an important role in ATP synthesis and can mediate the transport of inorganic phosphate from the mitochondrial inner membrane to the matrix [27,28], and overexpression of PiC protein can induce apoptosis [29]. The study of Kwong et al. [30] showed that cardiac-specific PiC mutant mice had reduced sensitivity to mPTP opening and tolerated ischemic-hypoxic injury to some extent. However, a PiC mutation did not prevent mPTP opening. Here, we have elucidated a potential reason underlying these results. Although PiC is an mPTP related protein, it is only involved in the binding of mPTP to Drp1 and is not directly involved in the subsequent regulation of mPTP opening. Low levels of PiC may only reduce the interaction between mPTP and mitochondrial Drp1 and may only have a weak influence on mPTP opening after hypoxia.
Several factors, such as calcium overload [31], oxidative stress [32], abnormal pH, and increased inorganic phosphate, affect mPTP opening. In addition, mPTP exhibits the capacity to regulate its own structure [33]. ANT in the inner mitochondrial membrane portion of mPTP can control ATP to ADP conversion, and it can influence mPTP opening by altering mitochondrial energy metabolism [34]. Overopening of mPTP is induced when ATP and ADP bind the ANT substrate binding site on the cytoplasmic side, and overopening of mPTP is inhibited when ATP and ADP bind to the ANT substrate binding site on the matrix side [35]. VDAC in the outer mitochondrial membrane portion of mPTP can affect the sensitivity of mPTP to calcium [36]. The study of Lee et al. [37] showed that Drp1 binding to VDAC was significantly elevated in prostate cancer, which resulted in increased pyruvate transport to the mitochondria and affected mitochondrial metabolic processes such as oxidative phosphorylation and lipogenesis. In this study, we similarly found that Drp1 binds VDAC, but the Drp1-VDAC binding was not significantly altered after hypoxic injury. This may be related to the fact that Drp1 mainly undergoes changes in activation and translocation in acute diseases such as ischemic-hypoxic injury [4,10], whereas Drp1 mainly shows increased expression in chronic diseases such as tumors and diabetes [37,38], and these differences may have varying effects on Drp1 binding to target proteins. CypD is an mPTP structural protein located within the mitochondrial matrix; oxidative stress can increase the binding of CypD to ANT and PiC and promote mPTP overopening [39]. The study of Xiao et al. [40] showed that CypD promoted the phosphorylation and mitochondrial translocation of Drp1 in neurodegenerative diseases. Our results suggest that the effects of CypD on Drp1 may be related to the interaction between mPTP and Drp1.
HK2 has recently been proposed as an mPTP structural protein [41]. HK2 was previously thought to only catalyze the conversion of glucose to glucose 6-phosphate [42]. Recent studies have found that glucose 6-phosphate accumulation and decreased pH after prolonged ischemia can promote mitochondrial HK2 dissociation [43], and that enhancing the binding of HK2 to the outer mitochondrial membrane alleviates myocardial injury caused by ischemia and hypoxia [41]. In tumor cells, Akt activation can induce phosphorylation of HK2 Thr473 and enhance HK2 binding to the outer mitochondrial membrane [44]. In cardiomyopathy, glycogen synthase kinase 3 beta (GSK3β)-induced HK2 dephosphorylation can cause mitochondrial HK2 liberation [45], indicating that altered HK2 activity is a key factor for mitochondrial HK2 expression and translocation. The influence of Drp1 on HK2 mitochondrial translocation has been previously proposed in myocardial ischemia-reperfusion injury [41], but its mechanism was not elucidated. In this study, we found that the effect of mitochondrial Drp1 on HK2 activity and translocation after hypoxia was mainly related to the recruitment of LRRK2 by Drp1 and the occlusion of its kinase domain; the key interaction occurred between Drp1 Thr595 and LRRK2 G2019. Thus, our findings provide a new target for intervention in studies of mPTP channel overopening. An interaction between Drp1 and LRRK2 has also been previously observed in other models [46]. Abnormalities in LRRK2 cause F-actin hyperstabilization and Drp1 mislocalization, which induce neurotoxicity of the microtubule-binding protein Tau in Parkinson's disease [47]. In addition, previous studies have shown that LRRK2 is important in the regulation of mitochondrial dynamics in hypoxia [48]. Recent studies [49] have shown that LRRK2 itself can regulate mitochondrial homeostasis, and that LRRK2 can disrupt the interaction between Drp1 and Parkin on the mitochondrial membrane and inhibit mitophagy. Our study further indicates the regulatory role of LRRK2 in mitochondrial dynamics and functions, and that this regulatory process is closely related to Drp1.
This study has some limitations. Because there is no effective detection method for mPTP opening in vivo, we could not verify our results in animal models. Next, our proteome microarray screening suggested several other potential Drp1-interacting proteins; the interaction of Drp1 with these proteins and how they affect mitochondrial homeostasis after ischemic-hypoxic injury will have to be explored further. Lastly, this study continues our previous work on the mechanism of Drp1 in mitochondrial mass imbalance in vascular tissues after ischemic-hypoxic injury. Whether the mechanism by which mitochondrial Drp1 recognizes and induces the excessive opening of the mPTP in VSMC after hypoxia through BAX-PiC and LRRK2-HK2 also applies to other tissue cell types, such as cardiomyocytes and intestinal epithelial cells, will have to be explored in future studies.
CONCLUSION
Here, we have shown the mechanisms underlying the effects of elevated mitochondrial Drp1 on mPTP opening after hypoxia. In particular, we report that the direct interaction of Drp1 with mPTP-related proteins promotes mPTP destabilization through the LRRK2-mediated dissociation of HK2 from the mitochondrial membrane. Our results provide insights into the role of Drp1 in hypoxia-induced mitochondrial dysfunction and cell death and propose that the Drp1-mPTP pathway may be an effective potential target for the dual regulation of mitochondrial morphology and functions in acute ischemic/hypoxic injury.
Fig. 5 Schematic of the mechanism by which mitochondrial Drp1 recognizes and induces excessive mPTP opening after hypoxia: hypoxia-activated mitochondrial Drp1 first recognizes mPTP channels by binding their structural proteins, BAX and PiC, then recruits LRRK2 to mitochondrial contraction or fission sites and blocks its kinase activity site (LRRK2 G2019), leading to the dephosphorylation and mitochondrial dissociation of the mPTP-related protein HK2 and, in turn, structural damage and excessive opening of the mPTP channel after hypoxia.
DATA AVAILABILITY
All data in this study are available from the first or corresponding author upon reasonable request.
"Medicine",
"Biology",
"Environmental Science"
] |
A Long Short-Term Memory Framework for Predicting Humor in Dialogues
We propose a first-ever attempt to employ a Long Short-Term Memory-based framework to predict humor in dialogues. We analyze data from a popular TV sitcom, whose canned laughter gives an indication of when the audience would react. We model the setup-punchline relation of conversational humor with a Long Short-Term Memory, with utterance encodings obtained from a Convolutional Neural Network. Our neural network framework is able to improve the F-score by 8% over a Conditional Random Field baseline. We show how the LSTM effectively models the setup-punchline relation, reducing the number of false positives and increasing the recall. We aim to employ our humor prediction model to build effective empathetic machines able to understand jokes.
Introduction
There have been many recent attempts to detect and understand humor, irony and sarcasm from sentences, usually taken from Twitter (Reyes et al., 2013; Barbieri and Saggion, 2014; Riloff et al., 2013; Joshi et al., 2015), customer reviews (Reyes and Rosso, 2012) or generic canned jokes. Bamman and Smith (2015) and Karoui et al. (2015) included the surrounding context.
Our work has a different focus from the above. We analyze transcripts of funny dialogues, a genre somewhat neglected but important for human-robot interaction. Laughter is the natural reaction of people to a verbal or textual humorous stimulus. We want to predict when the audience would laugh.
Compared to a typical canned joke or a sarcastic Tweet, a dialog utterance is perceived as funny only in relation to the dialog context and the past history. In a spontaneous setting a funny dialog is usually built through a setup, which prepares the audience to receive the humorous discourse stimuli, followed by a punchline, which releases the tension and triggers the laughter reaction (Attardo, 1997; Taylor and Mazlack, 2005). Automatic understanding of a humorous dialog is a first step toward building an effective empathetic machine fully able to react to the user's humor and to other discourse stimuli. We are ultimately interested in developing robots that can bond with humans better (Devillers et al., 2015).
As a source of situational humor we study a popular TV sitcom: "The Big Bang Theory". The domain of sitcoms is of interest as it provides a full dialog setting, together with an indication of when the audience is expected to laugh, given by the background canned laughter. An example of dialog from this sitcom, as well as of the setup-punchline schema, is shown below (punchlines in bold):

LAUGH
He started America on a path to the metric system but then just gave up. LAUGH

The utterances before the punchline are the setup. Without them, the punchlines may not be perceived as humorous (the last utterance, out of context, may be a political complaint); only with a proper setup would laughter be triggered. The humorous intent is also strengthened by the fact that the dialog takes place in a bar (evident from the previous and following utterances), where a request for 40 ml of "Ethyl Alcohol" is unusual and weird.
Our previous attempts on the same corpus (Bertero and Fung, 2016b; Bertero and Fung, 2016a) showed that using a bag-of-ngrams representation over a sliding window or a simple RNN to capture the contextual information of the setup was not ideal. For this reason we propose a method based on a Long Short-Term Memory network (Hochreiter and Schmidhuber, 1997), where we encode each sentence through a Convolutional Neural Network (Collobert et al., 2011). LSTMs are successfully used in context-dependent sequential classification tasks such as speech recognition (Graves et al., 2013), dependency parsing, and conversation modelling (Shang et al., 2015). To our knowledge, this is also the first time an LSTM has been applied to humor response prediction or to general humor detection tasks.
Methodology
We employ a supervised classification method to detect when punchlines occur. The bulk of our classifier is made of a concatenation of a Convolutional Neural Network (Collobert et al., 2011) to encode each utterance, followed by a Long Short-Term Memory (Hochreiter and Schmidhuber, 1997) to model the sequential context of the dialog. Before the output softmax layer we add a vector of higher level syntactic, structural and sentiment features. A framework diagram is shown in Figure 1.
Convolutional Neural Network for each utterance
The first stage of our classifier is represented by a Convolutional Neural Network (Collobert et al., 2011). Low-level, high-dimensional input feature vectors are fed into a first embedding layer to obtain a low-dimensional dense vector. A sliding window is then moved over these vectors and another layer is applied to each group of token vectors, in order to capture the local context of each token. A max-pooling operation is then applied to extract the most salient features of all the tokens into a single vector for the whole utterance. An additional layer is used to generalize and distribute each feature to its full range before obtaining the final utterance vector. (In Figure 1, l_t are the high-level feature vectors, and y_t the outputs for each utterance.) In our task we use three input features:

1. Word tokens: each utterance token is represented as a one-hot vector. This feature models how likely each word is to trigger humor in the specific corpus.
2. Character trigrams: each token is represented as a bag-of-character-trigrams vector. The feature models the role of the word signifier and removes the influence of the word stems.
3. Word2Vec: we extract for each token a word vector from word2vec (Mikolov et al., 2013), trained on the text9 Wikipedia corpus 1 . This representation models the general semantic meanings, and matches words that do not appear to others similar in meaning.
The convolution and max-pooling operation is applied individually to each feature, and the three vectors obtained are then concatenated together and fed to the final sentence encoding layer, which combines all the contributions.
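As an illustration of this encoding stage, here is a minimal sketch in a modern framework (our own illustration, not the authors' Theano implementation; only the word-token branch is shown, and all names and dimensions other than the paper's hidden size of 100 and window of 5 are assumptions):

```python
import torch
import torch.nn as nn

class UtteranceCNN(nn.Module):
    """CNN utterance encoder: embed tokens, convolve a local window,
    max-pool over the utterance, then a final encoding layer."""
    def __init__(self, vocab_size, emb_dim=100, hidden=100, window=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)   # one-hot ids -> dense vectors
        self.conv = nn.Conv1d(emb_dim, hidden, kernel_size=window,
                              padding=window // 2)       # local context of each token
        self.out = nn.Linear(hidden, hidden)             # final utterance encoding layer

    def forward(self, tokens):                  # tokens: (batch, seq_len) int ids
        e = self.embed(tokens).transpose(1, 2)  # (batch, emb_dim, seq_len)
        h = torch.relu(self.conv(e))            # (batch, hidden, seq_len)
        pooled, _ = h.max(dim=2)                # max-pooling over tokens
        return torch.relu(self.out(pooled))     # (batch, hidden) utterance vector
```

The character-trigram and word2vec branches would be convolved and max-pooled in the same way, with the three pooled vectors concatenated before the final encoding layer.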
Long Short-Term Memory for the utterance sequence
The LSTM is an improvement over the Recurrent Neural Network aimed at improving its memory capabilities. In a standard RNN the hidden memory layer is updated through a function of the input and the hidden layer at the previous time instant:
$$h_t = \sigma(W_{xh} x_t + W_{hh} h_{t-1} + b),$$
where $x$ is the network input and $b$ the bias term. This kind of connection is not very effective at maintaining information over long time spans, nor does it allow unneeded information to be forgotten between two time steps. The LSTM enhances the RNN with a series of three multiplicative gates. The structure is the following:
$$s_t = \tanh(W_{xs} x_t + W_{hs} h_{t-1} + b_s), \qquad i_t = \sigma(W_{xi} x_t + W_{hi} h_{t-1} + b_i),$$
$$f_t = \sigma(W_{xf} x_t + W_{hf} h_{t-1} + b_f), \qquad o_t = \sigma(W_{xo} x_t + W_{ho} h_{t-1} + b_o),$$
$$c_t = f_t \odot c_{t-1} + i_t \odot s_t, \qquad h_t = o_t \odot \tanh(c_t),$$
where $\odot$ is the element-wise product. Each gate factor is able to let through or suppress a specific update contribution, thus allowing selective information retention. The input gate $i$ is applied to the cell input $s$, the forget gate $f$ to the cell value at the previous time step $c_{t-1}$, and the output gate $o$ to the cell output for the current time instant $h_t$. In this way a cell value can be retained for multiple time steps when $i = 0$, ignored in the output when $o = 0$, and forgotten when $f = 0$.
As dialog utterances are sequential, we feed all utterance vectors of a sitcom scene in sequence into a Long Short-Term Memory block to incorporate contextual information. The memory unit of the LSTM keeps track of the context in each scene, and mimics human memory to accumulate the setup that may trigger a punchline.
Before the output we incorporate a set of high-level features from our previous work (Bertero and Fung, 2016b) and past literature (Reyes et al., 2013; Barbieri and Saggion, 2014). They include:

• Structural features: average word length, sentence length, difference in sentence length with the five previous utterances.
• Part of speech proportion: noun, verbs, adjectives and adverbs.
• Speaker and turn: speaker character identity and utterance position in the turn (beginning, middle, end, isolated).
• Speaking rate: time duration of the utterance from the subtitle files, divided by the sentence length.
All these features are concatenated to the LSTM output, and a softmax layer is applied to get the final output probabilities.
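Putting the pieces together, a hedged sketch of the whole pipeline (again our own illustration, assuming the UtteranceCNN above; the paper's actual implementation was in Theano) is:

```python
import torch
import torch.nn as nn

class HumorLSTM(nn.Module):
    """CNN utterance encodings -> LSTM over the scene -> concatenation
    with the high-level feature vector -> softmax output layer."""
    def __init__(self, encoder, n_extra_feats, hidden=100, n_classes=2):
        super().__init__()
        self.encoder = encoder                      # e.g., an UtteranceCNN
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.softmax = nn.Linear(hidden + n_extra_feats, n_classes)

    def forward(self, scene_tokens, extra_feats):
        # scene_tokens: (n_utts, seq_len); extra_feats: (n_utts, n_extra_feats)
        utt_vecs = self.encoder(scene_tokens)       # (n_utts, hidden)
        ctx, _ = self.lstm(utt_vecs.unsqueeze(0))   # context accumulated across the scene
        combined = torch.cat([ctx.squeeze(0), extra_feats], dim=1)
        return self.softmax(combined)               # per-utterance punchline logits
```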
Corpus
We built a corpus from the popular TV-sitcom "The Big Bang Theory", seasons 1 to 6. We downloaded the subtitle files (annotated with the timestamps of each utterance) and the scripts 2, used to segment all the episodes into scenes and to get the speaker identity of each utterance. We extracted the audio track of each episode in order to retrieve the canned laughter timestamps, with a vocal removal tool followed by a silence/sound detector. We then annotated each utterance as a punchline in case it was followed by a laughter within 1 s, assuming that utterances not followed by a laughter would be the setup for the punchline; a sketch of this labeling rule is given below.
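A minimal sketch of the annotation rule (the function and variable names are ours, and the timestamp extraction from audio is assumed to have been done already):

```python
def label_punchlines(utterances, laughter_times, window=1.0):
    """utterances: list of (start, end, text) tuples in seconds;
    laughter_times: list of laughter onset times (seconds).
    Returns a list of 0/1 labels, 1 = punchline."""
    labels = []
    for _, end, _ in utterances:
        # punchline if a canned laughter starts within `window` seconds
        is_punchline = any(end <= t <= end + window for t in laughter_times)
        labels.append(1 if is_punchline else 0)
    return labels
```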
We obtained a total of 135 episodes, 1589 overall scenes, 42.8% of punchlines, and an average interval between two punchlines of 2.2 utterances. We built a training set of 80% of the overall episodes, and a development and test set of 10% each. The episodes were drawn from all the seasons with the same proportion. The total number of utterances is 35865 for the training set, 3904 for the development set and 3903 for the test set.
Experimental setup and baseline
In the neural network we set the size of all the hidden layers of the CNN and the LSTM to 100, and the convolutional window to 5. We applied a dropout regularization layer (Srivastava et al., 2014) after the output of the LSTM, and L2 regularization on the softmax output layer. The network was trained with standard backpropagation, using each scene as a training unit. The development set was used to tune the hyperparameters and to determine the early stopping condition. When the error on the development set began to increase for the first time, we kept training only the final softmax layer; this improved the overall results. The neural network was implemented with the THEANO toolkit (Bergstra et al., 2010). We ran experiments with and without the extra high-level feature vector.
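The early-stopping variant described above can be sketched as follows (a minimal illustration with assumed placeholder functions `train_epoch` and `dev_error`, and assuming the final layer is named `softmax` as in the sketch above):

```python
def train(model, train_epoch, dev_error, max_epochs=50):
    best, frozen = float("inf"), False
    for epoch in range(max_epochs):
        train_epoch(model)
        err = dev_error(model)
        if err >= best and not frozen:
            # dev error increased for the first time:
            # freeze everything except the final softmax layer
            for name, p in model.named_parameters():
                p.requires_grad = name.startswith("softmax")
            frozen = True
        best = min(best, err)
```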
As a baseline for comparison we used an implementation of the Conditional Random Field (Lafferty et al., 2001) from CRFSuite (Okazaki, 2007), with L2 regularization. We ran experiments using the same high-level feature vector added at the end of the neural network, 1-2-3-gram features of a window made of the utterance and the four previous ones, and the two feature sets combined. We also compared the overall system against a variant where we replaced the CNN with an LSTM sentence encoder, keeping the same input features.
Results and discussion
Results of our system and our baselines are shown in table 1. The LSTM with the aid of the high-level feature vector generally outperformed all the CRF baselines, with the highest accuracy of 70.0% and the highest F-score of 62.9%. The biggest gain from the LSTM is the improvement in recall without affecting the precision too much. Lexical features given by n-grams from a context window are very useful to recognize more punchlines in our baseline experiment, but they also yield many false positives when the same n-gram is used in different contexts. The CNN-LSTM network seems to overcome this issue: the CNN stage is better at modeling the lexical and semantic content of the utterance, while the LSTM puts each utterance in relation with the past context, filtering out many false positives from wrong contexts.
The choice of the CNN is further justified by the results obtained from the comparison between the CNN and the LSTM sentence encoding input, shown in table 2. The CNN is more effective, obtaining a recall 10% higher and an F-score 6% higher. The CNN is a simpler model that may benefit more from a small-sized corpus. It also required a much shorter training time compared to the LSTM. In the future we may consider using more data, and trying other sentence input encoders, including deeper or bi-directional LSTMs, to find the most effective one.
Predicting humor response from canned laughters is a particularly challenging task. In some cases canned laughters are inserted by the show producers with the purpose of soliciting a response to weak jokes where people would otherwise not laugh. The audience must also be kept constantly amused, so extra canned laughters may help in scenes where fewer jokes are used.
Conclusion and future work
We proposed a Long Short-Term Memory based framework to predict punchlines in a humorous dialog. We showed that our neural network is particularly effective, increasing the F-score to 62.9% over a Conditional Random Field baseline of 58.1%. We furthermore showed that the LSTM is more effective in obtaining a higher recall with fewer false positives compared to simple n-gram shifting context window features.
As future work we plan to use a virtual agent system to collect a set of human-robot humorous interactions, and adapt our model to predict humor from them.
Stochastic Recursive Zero-Sum Differential Game and Mixed Zero-Sum Differential Game Problem
Under the notable Isaacs condition on the Hamiltonian, existence results for a saddle point are obtained for the stochastic recursive zero-sum differential game and for the mixed differential game problem, in which the agents can also decide the optimal stopping time. The main tools are backward stochastic differential equations (BSDEs) and double-barrier reflected BSDEs. As motivation and application background, when the loan interest rate is higher than the deposit one, the American game option pricing problem can be formulated as a stochastic recursive mixed zero-sum differential game problem. One example with an explicit optimal solution of the saddle point is also given to illustrate the theoretical results.
Introduction
Nonlinear backward stochastic differential equations (BSDEs for short) were introduced by Pardoux and Peng [1], who proved the existence and uniqueness of adapted solutions under suitable assumptions. Independently, Duffie and Epstein [2] introduced BSDEs from an economic background. In [2], they presented a stochastic differential recursive utility, which is an extension of the standard additive utility with the instantaneous utility depending not only on the instantaneous consumption rate but also on the future utility. Actually, it corresponds to the solution of a particular BSDE whose generator does not depend on the variable Z. From a mathematical point of view, the result in [1] is more general. Then, El Karoui et al. [3] and Cvitanic and Karatzas [4] generalized, respectively, the results to BSDEs with reflection at one barrier and at two barriers (upper and lower).
BSDEs play an important role in the theory of stochastic differential games. Under the notable Isaacs condition, Hamadène and Lepeltier [5] obtained the existence result of a saddle point for a zero-sum stochastic differential game with payoff
$$J(u, v) = E^{u,v}\left[ \int_t^T f(s, x_s, u_s, v_s)\, ds + g(x_T) \right]. \tag{1.1}$$
Using a maximum principle approach, Wang and Yu [6, 7] proved the existence and uniqueness of an equilibrium point. We note that the cost function in [5] is not recursive, and the game system in [6, 7] is a BSDE. In [8], El Karoui et al. gave the formulation of recursive utilities and their properties from the BSDE point of view. The problem where the cost function (payoff) of the game system is described by the solution of a BSDE becomes the recursive differential game problem. In Section 2 below, we prove the existence of a saddle point for the stochastic recursive zero-sum differential game problem and also obtain the optimal payoff function via the solution of one specific BSDE. Here, the generator of the BSDE contains the main variable of the solution, y_t, and we extend the result in [5] to the recursive case, which has much more significance in economic theory. Then, in Section 3 we study the stochastic recursive mixed zero-sum differential game problem, in which the two agents have two types of actions, one of control and one of stopping, used strategically to maximize and minimize their payoffs. This kind of game problem without the recursive variable, and the American game option as an instance of such a mixed game problem, can be found in Hamadène [9]. Using the result on reflected BSDEs with two barriers, we obtain the saddle point and optimal stopping strategy for the recursive mixed game problem, which has more general significance than that in [9].
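Throughout, a saddle point is understood in the standard sense: a pair of admissible strategies (u*, v*) such that, for all admissible u and v,
$$J(u^*, v) \le J(u^*, v^*) \le J(u, v^*).$$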
In fact, the recursive mixed zero-sum game problem has a wide application background in practice. When the loan interest rate is higher than the deposit one, the American game option pricing problem can be formulated as the stochastic recursive mixed game problem of Section 3. To show the application of this kind of problem and our motivation to study the recursive mixed game problem, we analyze the American game option pricing problem as an example in Section 4. We notice that in [5, 9] the authors did not give an explicit saddle point for the game, and this is very difficult in the general case. However, in Section 4 we also give another example of the recursive mixed zero-sum game problem, for which we provide the explicit saddle point and optimal payoff function to illustrate the theoretical results.
Stochastic Recursive Zero-Sum Differential Game
In this section, we will study the existence of a saddle point for the stochastic recursive zero-sum differential game, using results on BSDEs.
Let {B_t, 0 ≤ t ≤ T} be an m-dimensional standard Brownian motion defined on a probability space (Ω, F, P). Let (F_t)_{t≥0} be the completed natural filtration of B_t. Moreover:

(i) C is the space of continuous functions from [0, T] to R^m;
(ii) P is the σ-algebra on [0, T] × Ω of F_t-progressively measurable sets;
(iii) for any stopping time ν, T_ν is the set of F_t-measurable stopping times τ such that P-a.s. ν ≤ τ ≤ T; T_0 will simply be denoted by T;
(iv) H^{2,k} is the set of P-measurable processes ω = (ω_t)_{t≤T}, R^k-valued and square integrable with respect to dt ⊗ dP;
(v) S^2 is the set of P-measurable and continuous processes ω = (ω_t)_{t≤T} such that E[sup_{t≤T} |ω_t|^2] < +∞.

The m × m matrix σ = (σ_{ij}) satisfies the following: (i) for any 1 ≤ i, j ≤ m, σ_{ij} is progressively measurable; (ii) for any (t, x) ∈ [0, T] × C, the matrix σ(t, x) is invertible; (iii) there exists a constant c such that σ is Lipschitz continuous and of linear growth in x. Then the driving equation has a unique solution x_t. Now, we consider a compact metric space A (resp. B), and U (resp. V) is the space of P-measurable processes u = (u_t)_{t≤T} (resp. v = (v_t)_{t≤T}) with values in A (resp. B). The function Φ satisfies, among other standing measurability assumptions: (iii) there exists a constant K such that |Φ(t, x, u, v)| ≤ K(1 + ||x||_t) for any t, x, u, and v; (iv) there exists a constant M such that |σ^{-1}(t, x)Φ(t, x, u, v)| ≤ M for any t, x, u, and v.
For (u, v) ∈ U × V, we define the measure P^{u,v} by
$$\frac{dP^{u,v}}{dP} = \exp\left\{ \int_0^T \sigma^{-1}(s, x)\,\Phi(s, x, u_s, v_s)\, dB_s - \frac{1}{2}\int_0^T \big|\sigma^{-1}(s, x)\,\Phi(s, x, u_s, v_s)\big|^2\, ds \right\}. \tag{2.2}$$
Thanks to Girsanov's theorem, under the probability P^{u,v} the process
$$B^{u,v}_t = B_t - \int_0^t \sigma^{-1}(s, x)\,\Phi(s, x, u_s, v_s)\, ds$$
is a Brownian motion, and (x_t)_{t≤T} is a weak solution of the corresponding stochastic differential equation.
Suppose that we have a system whose evolution is described by the process (x_t)_{t≤T}. On that system, two agents c_1 and c_2 intervene. A control action for c_1 (resp. c_2) is a process u = (u_t)_{t≤T} (resp. v = (v_t)_{t≤T}) belonging to U (resp. V). Thereby U (resp. V) is called the set of admissible controls for c_1 (resp. c_2). When c_1 and c_2 act with, respectively, u and v, the law of the dynamics of the system is the same as that of x under P^{u,v}. The two agents have no influence on the system, and they act to protect their advantages by means of u ∈ U and v ∈ V via the probability P^{u,v}.
In order to define the payoff, we introduce two functions C(t, x, y, u, v) and g(x) satisfying the following assumption: there exists L > 0 such that, for all x, x' ∈ H^{2,m} and Y, Y' ∈ S^2, C is Lipschitz continuous in (x, Y) with constant L, and g(x) is a measurable, Lipschitz continuous function with respect to x. The payoff J(x_0, u, v) is given by J(x_0, u, v) = Y_0, where (Y, Z) satisfies the following BSDE:
$$Y_t = g(x_T) + \int_t^T C(s, x_s, Y_s, u_s, v_s)\,ds - \int_t^T Z_s\, dB^{u,v}_s, \quad 0 \le t \le T. \tag{2.6}$$
From the result in [10], there exists a unique solution (Y, Z) for each (u, v). The agent c_1 wishes to minimize this payoff, and the agent c_2 wishes to maximize the same payoff. We investigate the existence of a saddle point for the game, more precisely a pair (u*, v*) of strategies such that J(x_0, u*, v) ≤ J(x_0, u*, v*) ≤ J(x_0, u, v*) for all admissible (u, v). We say that Isaacs' condition holds if, for all (t, x, Y, Z) ∈ [0, T] × C × R × R^m, the infimum over u and the supremum over v of the associated Hamiltonian can be interchanged. We suppose now that Isaacs' condition is satisfied. By a selection theorem (see Benes [11]), there exist measurable functions u*(t, x, Y, Z) and v*(t, x, Y, Z) attaining the saddle point of the Hamiltonian (2.9). Thanks to the assumptions on σ, Φ, and C, the function H(t, x, Y, Z, u*, v*) is Lipschitz in Z and monotone in Y. Now we give the main result of this section.
Let (Y*, Z*) be the solution of the BSDE (2.10) associated with (u*, v*). Then Y*_0 is the optimal payoff J(x_0, u*, v*), and the pair (u*, v*) is the saddle point for this recursive game.

Proof. We consider the BSDE (2.11). Thanks to Theorem 2.1 in [10], the equation has a unique solution (2.12), from which we obtain (2.13). By the comparison theorem for BSDEs and the inequality (2.9), we can compare the solutions of (2.11) and (2.13), obtain Y_0, and conclude that (u*, v*) is the saddle point.
Stochastic Recursive Mixed Zero-Sum Differential Game
Now, we study the stochastic recursive mixed zero-sum differential game problem. First, let us briefly describe the problem. Suppose that we have a system, whose evolution is also described by (x_t)_{0≤t≤T}, which has an effect on the wealth of two controllers C_1 and C_2. On the other hand, the controllers have no influence on the system, and they act so as to protect their advantages, which are antagonistic, by means of u ∈ U for C_1 and v ∈ V for C_2 via the probability P^{u,v} in (2.2). The couple (u, v) ∈ U × V is called an admissible control for the game. Both controllers also have the possibility to stop controlling, at τ for C_1 and θ for C_2; τ and θ are elements of T, the class of all F_t-stopping times. In such a case, the game stops. The controlling action is not free, and it corresponds to the actions of C_1 and C_2. The payoff is described by a BSDE whose initial value gives J(u, τ; v, θ) (a concrete formulation is sketched after the list below), where (U_t)_{t≤T}, (L_t)_{t≤T}, and (Q_t)_{t≤T} are processes of S^2 such that L_t ≤ Q_t ≤ U_t. The action of C_1 is to minimize the payoff, and the action of C_2 is to maximize it. The terms can be understood as follows:

(i) C(s, x, Y, u, v) is the instantaneous reward for C_1 and cost for C_2;
(ii) U_τ is the cost for C_1 and the reward for C_2 if C_1 decides to stop the game first;
(iii) L_θ is the reward for C_2 and the cost for C_1 if C_2 decides to stop the game first.
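For concreteness, one way to write the payoff just described, consistent with the description above and with the reflected-BSDE setting of [9, 12] (a sketch under those conventions rather than a quotation of the original display), is
$$J(u, \tau; v, \theta) = Y_0, \qquad Y_t = \xi + \int_t^{\tau \wedge \theta} C(s, x_s, Y_s, u_s, v_s)\,ds - \int_t^{\tau \wedge \theta} Z_s\, dB^{u,v}_s,$$
with terminal reward
$$\xi = U_\tau \mathbf{1}_{\{\tau < \theta\}} + L_\theta \mathbf{1}_{\{\theta \le \tau < T\}} + Q_T \mathbf{1}_{\{\tau \wedge \theta = T\}}.$$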
The problem is to find a saddle-point strategy (one should say a fair strategy) for the controllers, that is, a strategy (u*, τ*; v*, θ*) such that
$$J(u^*, \tau^*; v, \theta) \le J(u^*, \tau^*; v^*, \theta^*) \le J(u, \tau; v^*, \theta^*)$$
for all admissible (u, τ; v, θ). As in Section 2, we also define the Hamiltonian associated with this mixed stochastic game problem by H(t, x, Y, Z, u, v), and thanks to Benes's selection theorem [11], there exist measurable functions u*(t, x, Y, Z) and v*(t, x, Y, Z) realizing its saddle point (3.4). It is easy to check that H(t, x, Y, Z, u*, v*) is Lipschitz in Z and monotone in Y.
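For reference, the Hamiltonian in this setting takes the standard form consistent with [5] and with the Girsanov transformation of Section 2:
$$H(t, x, Y, Z, u, v) = Z\,\sigma^{-1}(t, x)\,\Phi(t, x, u, v) + C(t, x, Y, u, v),$$
and the Isaacs condition reads
$$\inf_{u \in A}\sup_{v \in B} H(t, x, Y, Z, u, v) = \sup_{v \in B}\inf_{u \in A} H(t, x, Y, Z, u, v).$$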
From the result in [12], the stochastic mixed zero-sum differential game problem is naturally connected with BSDEs with two reflecting barriers. Now, we give the main result of this section.

Theorem 3.1. Let (Y*, Z*, K*+, K*−) be the solution of the reflected BSDE (3.5). Then Y*_0 is the value of the game and (u*, τ*; v*, θ*) is a saddle point.

Proof. It is easy to see that the reflected BSDE (3.5) has a unique solution (Y*, Z*, K*+, K*−); then we have (3.6). Since K*+ and K*− increase only when Y* reaches L and U, respectively, the relevant stopped process is an (F_t, P^{u*,v*})-martingale, and we get (3.7). We also know that Y* stays between the barriers L and U. Finally, let us show that the value of the game is Y*_0: we have proved (3.16), and the proof is now completed.
Application
In this section, we present two examples to show the applications of Section 3.
The first example is about the American game option pricing problem. We formulate it as a stochastic recursive mixed game problem. This can be regarded as the application background of our stochastic game problem.
Example 4.1 (American game option when the loan interest rate is higher than the deposit interest rate).
In El Karoui et al. [13], it was proved that the price of an American option corresponds to the solution of a reflected BSDE, and Hamadène [9] proved that the price of an American game option corresponds to the solution of a reflected BSDE with two barriers. Now, we will show that under some constraints in the financial market, such as when the loan interest rate is higher than the deposit interest rate, the price of an American game option corresponds to the value function of a stochastic recursive mixed zero-sum differential game problem.
We suppose that the investor is allowed to borrow money at time t at an interest rate R_t > r_t, where r_t is the bond rate. Then the wealth of the investor satisfies a nonlinear equation in which Z_t := σ_t π_t and θ_t := σ_t^{-1}(b_t − r_t); here b_t represents the instantaneous expected return rate of the stock, σ_t (which is invertible) represents the instantaneous volatility of the stock, and C_t is interpreted as a cumulative consumption process. b_t, r_t, R_t, and σ_t are all deterministic bounded functions, and σ_t^{-1} is also bounded. An American game option is a contract between a broker c_1 and a trader c_2, who are, respectively, the seller and the buyer of the option. The trader pays an initial amount (the price of the option) which guarantees him a payment of L_t, t ≤ T. The trader can exercise whenever he decides before the maturity T of the option; thus, if the trader decides to exercise at θ, he gets the amount L_θ. On the other hand, the broker is allowed to cancel the contract. Therefore, if he chooses τ as the contract cancellation time, he pays the amount U_τ, with U_τ ≥ L_τ. The difference U_τ − L_τ is the premium that the broker pays for his decision to cancel the contract. If c_1 and c_2 decide together to stop the contract at the time τ, then c_2 gets a reward equal to Q_τ; L_t, U_t, and Q_t are stochastic processes related to the stock price in the market.
We consider the problem of pricing an American game contingent claim at each time t, which consists of the selection of stopping times τ and θ and a payoff U_τ or L_θ on exercise, if τ < θ < T or θ < τ < T respectively, and ξ if τ ∧ θ = T. We formulate the pricing problem of the American game option as the stochastic recursive mixed zero-sum differential game problem studied in Section 3, so this example provides the practical background for our problem. This is also one of our motivations to study the recursive mixed game problem in this paper.
In the following, we give another example, where we obtain the explicit saddle point strategy and optimal value of the stochastic recursive game.The purpose of this example is to illustrate the application of our theoretical results.
Example 4.2. Let the dynamics of the system (x_t)_{t≤T} satisfy dx_t = x_t dB_t, t ≤ 1, with initial value x_0. The control action for c_1 (resp. c_2) is u (resp. v), which belongs to U (resp. V). Here U = [0, 1] and V = [0, 1], while the function Φ = x_t u_t v_t. Then, by Girsanov's theorem, we can define the probability P^{u,v} by (2.2). Under the probability P^{u,v}, the process B^{u,v}_t = B_t − ∫_0^t u_s v_s ds is a Brownian motion.
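As a quick numerical illustration of these dynamics (our own sketch; the constant controls below are arbitrary choices, not the saddle point derived in the paper), one can simulate dx_t = x_t dB_t directly under P^{u,v} by adding the Girsanov drift u_s v_s:

```python
import numpy as np

def simulate_x(x0, u=0.5, v=0.5, n_steps=1000, n_paths=10000, seed=0):
    """Euler-Maruyama paths of dx = x dB on [0, 1], simulated under
    P^{u,v}, where dB_t = dB^{u,v}_t + u*v dt."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n_steps
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)  # increments of B^{u,v}
        x += x * (dW + u * v * dt)                  # Girsanov drift appears here
    return x

paths = simulate_x(x0=1.0)
print("mean of x_1 under P^{u,v}:", paths.mean())
```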
First, we consider the corresponding stochastic recursive zero-sum differential game. The Isaacs condition is obviously satisfied, and the optimal controls (u*, v*) can be written down explicitly. From this explicit representation we can also conclude that the optimal game value Y*_0 = J(x_0, u*, v*) is an increasing function of the initial value x_0 of the dynamical system. Now, we give the numerical simulation and draw Figure 1.
For the mixed game, we similarly obtain that the optimal game value Y*_0 = J(x_0, u*, τ*; v*, θ*) is an increasing function of the initial value x_0 of the dynamical system.
Figure 1: Y_0 stands for the optimal game value, and x_0 stands for the initial value of the dynamical system.
On the Bivariate Spectral Homotopy Analysis Method Approach for Solving Nonlinear Evolution Partial Differential Equations
Introduction
The study of nonlinear evolution partial differential equations (PDEs) is a vast area of research with well-developed and documented theories and applications in almost all areas of science and engineering. The PDEs are used to describe many complex nonlinear settings in applications such as vibration and wave propagation, fluid mechanics, plasma physics, quantum mechanics, nonlinear optics, solid state physics, chemical kinetics, physical chemistry, population dynamics, and many other areas of mathematical modelling. The development of both analytical and numerical methods for solving complicated, highly nonlinear PDEs continues to be a fertile area of research geared towards enriching and deepening our understanding of these intriguing nonlinear problems.
The homotopy analysis method (HAM) has been widely discussed in the literature for solving both nonlinear ordinary and partial differential equations. A comprehensive exposition of the underlying concepts and applications of the HAM can be found in recently published books [1][2][3][4]. A unique feature of the HAM, which sets it apart from all perturbative and non-perturbative methods reported in the literature, is the flexibility to vary its embedded convergence-controlling auxiliary parameters and functions. Previous studies have illustrated that a carefully selected choice of auxiliary linear operators results in significant improvement of the convergence and accuracy of the HAM [5][6][7]. Baxter et al. [5] examined multiple auxiliary linear operators to find the operator that yields the best accuracy for the solution of the Cahn-Hilliard equation, a nonlinear partial differential equation. The linear operators in the study of Baxter et al. [5], as well as in numerous other HAM-based studies for solving nonlinear PDEs, are conveniently chosen to guarantee that the HAM algorithm yields analytical series solutions. The limitation of these HAM-based approaches that seek to obtain completely analytical results is that generating incremental terms of the HAM series solution becomes progressively cumbersome, and the problem-solving exercise eventually becomes intractable. This is particularly true when a nontrivial linear operator is used or required for optimal accuracy. Accordingly, an approach that can admit any form of linear operator, no matter how complex, is required in the HAM algorithm. However, complicated linear operators preclude the possibility of resolving the HAM series solutions analytically. Motsa et al. [8,9] proposed a discrete version of the HAM that is based on a Chebyshev spectral collocation approach for implementing the HAM algorithm in cases which were otherwise impossible to solve analytically. This discrete variant of the HAM was called the spectral homotopy analysis method (SHAM) in [8,9]. The SHAM has recently been extended to solve a nonlinear partial differential equation based problem of unsteady boundary layer flow caused by an impulsively stretching plate in [10]. Motsa [10] used a partial differential equation based auxiliary operator to improve on the ordinary derivative based linear operator approach used previously by Liao [11] to solve the same problem. Motsa [10] concluded that when solving nonlinear PDEs, the use of PDE-based linear operators leads to better results than the use of ODE-based linear operators. In implementing the method, the decomposed equations were solved by applying the spectral method in the space variable and a monomial series expansion in the time variable. This approach was found to work well for the unsteady boundary layer problem considered in [10] because the dimensionless time variable was defined in the range [0, 1]. However, series approaches of this kind are well known to have the capacity to resolve accurate solutions only on [0, 1). Consequently, there is a need to develop variants of the SHAM that give solutions that are uniformly valid, including in regions where the time variable is much greater than 1. More robust SHAM variations are needed for the solution of complex nonlinear PDEs that model important problems with wide applications in science, engineering, and other areas of applied mathematics.
The main objective of this work is to introduce a new variant of the spectral homotopy analysis method for solving nonlinear partial differential equations. The proposed method is developed by defining a rule of solution expression based on bivariate Lagrange interpolation. The homotopy analysis method algorithm is then applied to decompose the governing nonlinear PDEs into a sequence of linear PDEs. The resulting linear sequence of PDEs contains variable coefficients and is impossible to solve exactly. Consequently, the Chebyshev spectral collocation method is applied independently in the space and time variables. In view of the application of the combination of bivariate interpolation and spectral collocation differentiation, the new method is called the bivariate interpolated spectral homotopy analysis method (BI-SHAM). The study presents a general BI-SHAM algorithm that can be used to solve second-order nonlinear evolution equations. The applicability, accuracy, and reliability of the proposed BI-SHAM are confirmed by solving the Fisher, Burgers-Fisher, Burgers-Huxley, and FitzHugh-Nagumo equations. The BI-SHAM results are compared against known exact solutions that have been reported in the scientific literature.
The remainder of the paper is organized as follows. In Section 2, we introduce the algorithm of the BI-SHAM for a general nonlinear evolution PDE. Section 3 describes the application of the BI-SHAM to the problems that are selected for numerical experimentation. The numerical simulations and results are presented in Section 4. Finally, we conclude and describe future work in Section 5.
Bivariate Interpolated Spectral Homotopy Analysis Method (BI-SHAM)
In this section we introduce the Bivariate Interpolated Spectral Homotopy Analysis Method (BI-SHAM) used to solve the governing nonlinear evolution PDEs. Without loss of generality, we consider nonlinear PDEs whose solution is u(x, t), in which a nonlinear operator contains all the spatial derivatives of u. The solution procedure is based on the initial assumption that the solution can be approximated by a bivariate Lagrange interpolation polynomial of the form
$$u(x, t) \approx \sum_{i=0}^{N}\sum_{j=0}^{M} u(x_i, \tau_j)\, L_i(x)\, L_j(\tau),$$
which interpolates u(x, t) independently at selected points in both the x and τ directions, defined as
$$x_i = \cos\frac{\pi i}{N}, \qquad \tau_j = \cos\frac{\pi j}{M}. \tag{3}$$
The choice of grid points (3), which are called Chebyshev-Gauss-Lobatto points, ensures that there is a simple conversion of the continuous derivatives, in both space and time, to discrete derivatives at the grid points, as will be discussed later.

The functions L_i(x) are the characteristic Lagrange cardinal polynomials
$$L_i(x) = \prod_{\substack{k=0 \\ k \ne i}}^{N} \frac{x - x_k}{x_i - x_k},$$
which obey the Kronecker delta property L_i(x_k) = δ_{ik}. The functions L_j(τ) are defined in a similar manner.
To derive the HAM equations corresponding to the nonlinear equation (1), it is convenient to rewrite the governing equation in the form
$$\dot u = L[u, u', u''] + N[u, u', u''],$$
where the dot and primes denote the time and space derivatives, respectively, L is a linear operator, and N is a nonlinear operator. A crucial step in the implementation of the solution procedure is the evaluation of the time derivative at the grid points τ_j (j = 0, 1, ..., M). Before the derivative is applied, the given physical region, say t ∈ [0, T], is converted to the region τ ∈ [−1, 1] using the linear transformation t = T(τ + 1)/2. The values of the derivatives at the Chebyshev-Gauss-Lobatto points are computed as
$$\dot u\,\big|_{\tau = \tau_j} = \frac{2}{T}\sum_{k=0}^{M} d_{j,k}\, u_k,$$
where u_j = u(x, τ_j) and d_{j,k} (j, k = 0, 1, ..., M) are entries of the standard Chebyshev differentiation matrix d = [d_{j,k}] of size (M + 1) × (M + 1) (see, e.g., [12,13]). By evaluating (6) at the grid points τ_i, we obtain the semi-discrete system (8). If the initial condition for (1) is given at t = 0 (corresponding to τ = −1), we write (8) so that f(x) = u(x, 0) enters as the known initial condition. Equation (9) then forms a system of coupled nonlinear ordinary differential equations with unknowns u_i(x), i = 0, 1, 2, ..., M − 1. Below, we describe the spectral homotopy analysis method used to solve (9).
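For illustration, here is a minimal construction of the Chebyshev-Gauss-Lobatto grid and the differentiation matrix d (the classical recipe found in standard spectral-methods references; the code is our own sketch, not the authors'):

```python
import numpy as np

def cheb(M):
    """Chebyshev-Gauss-Lobatto points on [-1, 1] and the (M+1)x(M+1)
    differentiation matrix D such that (D @ f)(tau_j) ~ f'(tau_j)."""
    if M == 0:
        return np.zeros((1, 1)), np.array([1.0])
    tau = np.cos(np.pi * np.arange(M + 1) / M)
    c = np.hstack([2.0, np.ones(M - 1), 2.0]) * (-1.0) ** np.arange(M + 1)
    T = np.tile(tau, (M + 1, 1)).T          # T[i, j] = tau[i]
    dT = T - T.T + np.eye(M + 1)            # avoid division by zero on diagonal
    D = np.outer(c, 1.0 / c) / dT
    D -= np.diag(D.sum(axis=1))             # diagonal via negative row sums
    return D, tau

D, tau = cheb(8)
# differentiate f(tau) = tau**3 exactly (degree <= M): spectral accuracy
print(np.max(np.abs(D @ tau**3 - 3 * tau**2)))   # ~1e-13
```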
The algorithm of the HAM begins with the construction of the homotopy, for a given linear operator L, defined as
$$(1 - q)\,\mathcal{L}\big[\phi_i(x; q) - u_{i,0}(x)\big] = q\,\hbar\, \mathcal{N}_i\big[\phi(x; q)\big], \tag{10}$$
where q ∈ [0, 1] is an embedding parameter, ℏ denotes a nonzero convergence-controlling auxiliary parameter, and u_{i,0}(x) is the initial approximation of the solution u_i for i = 0, 1, 2, ..., M − 1. It should be emphasised that, in the specialized language of the HAM, the homotopy equation (10) is referred to as the zeroth-order deformation equation.
From (10) it can be noted that, as q increases from 0 to 1, φ_i(x; q) varies from the initial approximation u_{i,0}(x) to the solution u_i(x) of the nonlinear equation (9). Expanding φ_i(x; q) in a Taylor series about q = 0 gives
$$\phi_i(x; q) = u_{i,0}(x) + \sum_{m=1}^{\infty} u_{i,m}(x)\, q^m, \qquad u_{i,m}(x) = \frac{1}{m!}\,\frac{\partial^m \phi_i(x; q)}{\partial q^m}\bigg|_{q=0}.$$
Thus, since φ_i(x; 1) = u_i(x) and φ_i(x; 0) = u_{i,0}(x), we obtain
$$u_i(x) = u_{i,0}(x) + \sum_{m=1}^{\infty} u_{i,m}(x). \tag{15}$$
The series (15) converges when the auxiliary parameter ℏ is carefully chosen. The functions u_{i,m} appearing in the series (15) are obtained as solutions of the so-called higher-order deformation equations (16), which are obtained by differentiating the zeroth-order deformation equation (10) m times with respect to q, dividing by m!, and finally setting q = 0. The initial approximations u_{i,0}(x) are chosen in such a way that they satisfy the linear part of the problem, which, using the definition of L in (11), can be written as (20). Equation (20), to be solved for the initial approximations u_{i,0}, together with the higher-order deformation equations giving u_{i,m}, constitutes a sequence of linear ordinary differential equations, which are solved using the Chebyshev spectral collocation method applied independently in the x direction (where x ∈ [a, b]) using N + 1 Chebyshev-Gauss-Lobatto points. The derivatives with respect to x are defined in terms of the Chebyshev differentiation matrix as
$$\frac{d^p u_{i,m}}{dx^p} = \mathbf{D}^p\, \mathbf{U}_{i,m}, \qquad \mathbf{D} = \frac{2}{b - a}\,[D_{r,s}] \quad (r, s = 0, 1, \ldots, N),$$
where p is the order of the derivative, [D_{r,s}] is an (N + 1) × (N + 1) Chebyshev derivative matrix, and the vector U_{i,m} collects the values of u_{i,m} at the collocation points. Thus, substituting (23) into the equations that give the initial approximations (20), we obtain an (N + 1) × (N + 1) matrix system (25), where I is the identity matrix of size (N + 1) × (N + 1), the superscript T denotes transpose, and the coefficient matrix is obtained by applying the spectral method to the linear operator acting on u_{i,0}. Solving (25) gives the initial approximation u_{i,0}. To obtain the approximate solutions u_{i,m} (for m ≥ 1), the spectral collocation method, with discretisation in the x direction, is applied in a similar manner to the higher-order deformation equations (16). This gives an (N + 1) × (N + 1) matrix system (27), in which the right-hand side R_{i,m−1} is obtained by converting the continuous derivatives in the (m−1)th-order terms to Chebyshev spectral derivatives.
Numerical Experiments
To demonstrate the applicability of the proposed BI-SHAM algorithm as an appropriate tool for solving nonlinear partial differential equations, we apply it to well-known nonlinear PDEs of the form (1) with known exact solutions. In order to determine the level of accuracy of the BI-SHAM approximate solution at a particular time level, in comparison with the exact solution, we report the maximum error, defined by
$$E = \max_i\, \big|\tilde u(x_i, t) - u(x_i, t)\big|, \tag{30}$$
where ũ(x_i, t) is the solution obtained by (28) and u(x_i, t) is the exact solution at the time level t.
Example 1. We consider Fisher's equation
$$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} + \rho\, u(1 - u),$$
subject to an initial condition consistent with the exact solution [14]
$$u(x, t) = \left(1 + \exp\!\left[\sqrt{\rho/6}\;x - \frac{5\rho}{6}\,t\right]\right)^{-2},$$
where ρ is a constant. Fisher's equation represents a reactive-diffusive system and is encountered in chemical kinetics and population dynamics applications. For this example, the linear and nonlinear operators to be used in the BI-SHAM algorithm are chosen from the linear (diffusion) and nonlinear (reaction) parts of the equation, respectively. Thus, using (17) we obtain the corresponding higher-order deformation equations, and the approximate solution at the grid points (x_i, τ_j) is obtained by solving (28).
Example 2. We consider the generalized Burgers-Fisher equation [15], with the initial condition and exact solution given in [15], where α, δ, and β are parameters which, for illustration purposes, are chosen to be one in this paper. In this example, with α = δ = β = 1, the linear and nonlinear operators to be used in the BI-SHAM algorithm are again chosen from the linear and nonlinear parts of the equation; using (17) we obtain the corresponding higher-order deformation equations.

Example 3. We consider the FitzHugh-Nagumo equation, with the initial condition and exact solution given in [16], which involve a single constant parameter. The linear and nonlinear operators are chosen analogously, and using (17) we obtain the corresponding higher-order deformation equations.

Example 4. We consider the Burgers-Huxley equation, where α, β ≥ 0 are constant parameters, δ is a positive integer (set to δ = 1 in this study), and γ ∈ (0, 1). In this example the linear and nonlinear operators to be used in the BI-SHAM algorithm are chosen in the same manner; using (17) we obtain the corresponding higher-order deformation equations.
Results and Discussion
In this section we present the numerical solutions of the implementation of the BI-SHAM algorithm on the nonlinear evolution equations as described in the previous section.
The number of collocation points in the space variable used to generate the results presented here was 10 in all cases. Furthermore, unless otherwise specified, the order (total number of terms) of the HAM series was set to 10. It was found that sufficient accuracy was achieved using these values in all numerical computations of the examples considered in this paper. Using finite terms of the SHAM series, we define the mth-order approximation of the solution at the collocation points x_i and τ_j (for i = 0, 1, 2, ..., N and j = 0, 1, 2, ..., M). Assuming that ũ(x_i, τ_j) is the BI-SHAM approximate solution at the collocation (grid) points, the residual error is defined as the amount by which ũ fails to satisfy the governing nonlinear equation at those points. The residual error is used in establishing a suitable convergence-controlling parameter ℏ: a carefully selected ℏ is paramount in obtaining accurate and converging SHAM series solutions. The infinity norm of the residual error at a particular time level was used to identify the optimal value of ℏ that gives the best accuracy.
In Figures 1, 2, 3, and 4 we give illustrations of typical residual error curves that can be used to calculate the optimal value of ℏ for the Fisher, Burgers-Fisher, FitzHugh-Nagumo, and Burgers-Huxley equations, respectively, when t = 1. The residual ℏ-curves are plotted using different orders of the BI-SHAM series. The optimal values of ℏ are chosen at the clearly defined minimum of the residual curve. It can be seen from the figures that the optimal ℏ value lies in the range −1 < ℏ < −0.9 for Fisher's equation, the FitzHugh-Nagumo equation, and the Burgers-Fisher equation. In the case of the Burgers-Huxley equation, it can be observed from Figure 4 that the optimal ℏ value is near −1.1. We also note that the residual error decreases with an increase in the order of the BI-SHAM series, which indicates convergence of the proposed method. It was also observed that convergence improves with an increase in the number of collocation points used in the t-variable. This observation is in accord with an earlier observation made in a related study [21], where an interpolation-based spectral homotopy analysis method was used to solve PDE-based unsteady boundary layer flows.
In Tables 1, 2, 3, and 4 we give the maximum errors between the exact and BI-SHAM results (defined using (30)) for the Fisher, Burgers-Fisher, FitzHugh-Nagumo, and Burgers-Huxley equations, respectively, at selected values of t for different numbers of collocation points in the t-variable. It is worth mentioning here that the results in all tables were computed on the space domain x ∈ [0, 1]. To give a sense of the computational efficiency of the proposed method, the computational time taken to generate the results is also displayed in the tables. The results displayed in Tables 1, 2, 3, and 4 clearly show the accuracy of the proposed method. The accuracy is seen to improve with an increase in the number of collocation points. It is remarkable that accurate results with errors of order up to 10^{-10} are obtained using very few collocation points in both the x and t variables (at most 10 in each). This is a clear indication that the BI-SHAM is a powerful method that is very appropriate for solving nonlinear PDEs of the type discussed in this investigation. We remark, also, that the BI-SHAM is computationally fast, as the desired accurate results are generated in a fraction of a second in all the examples considered in this work.
Conclusion
This paper has presented a new variant of the spectral homotopy analysis method for solving general nonlinear evolution partial differential equations. The new method, called the bivariate interpolated spectral homotopy analysis method (BI-SHAM), was developed from a combination of the homotopy analysis method algorithm with bivariate Lagrange interpolation and spectral collocation differentiation. The main goal of the current study was to assess the accuracy, applicability, and effectiveness of the proposed method in solving nonlinear partial differential equations. Numerical simulations were conducted on the Fisher, Burgers-Fisher, FitzHugh-Nagumo, and Burgers-Huxley equations. This study has shown that the BI-SHAM gives very accurate results in a computationally efficient manner. Further evidence from this study is that the BI-SHAM gives solutions that are uniformly accurate and valid in large intervals of the governing space and time domains. The apparent success of

Table 4: Maximum errors for the Burgers-Huxley equation when α = β = δ = 1 and γ = 0.1, using ℏ = −1.
Lipid Driven Nanodomains in Giant Lipid Vesicles are Fluid and Disordered
It is a fundamental question in cell biology and biophysics whether sphingomyelin (SM)- and cholesterol (Chol)-driven nanodomains exist in living cells and in model membranes. Biophysical studies on model membranes revealed SM- and Chol-driven micrometer-sized liquid-ordered domains. Although the existence of such microdomains has not been proven for the plasma membrane, such lipid mixtures have often been used as a model system for 'rafts'. On the other hand, recent super-resolution and single-molecule results indicate that the plasma membrane might organize into nanocompartments. However, due to the limited resolution of those techniques, their unambiguous characterization is still missing. In this work, a novel combination of Förster resonance energy transfer and Monte Carlo simulations (MC-FRET) directly identifies 10 nm sized nanodomains in liquid-disordered model membranes composed of lipid mixtures containing SM and Chol. Combining MC-FRET with solid-state wide-line and high-resolution magic angle spinning NMR as well as with fluorescence correlation spectroscopy, we demonstrate that these nanodomains, containing hundreds of lipid molecules, are fluid and disordered. In terms of their size, fluidity, order, and lifetime these nanodomains may represent a relevant model system for cellular membranes and are closely related to nanocompartments suggested to exist in cellular membranes.
These observations concerning properties of biological membranes question the biological relevance of the L_o phase microdomains found in model membranes. The assumption that experiments on model membranes can reveal biologically relevant information leads us to two central questions: firstly, can lipid-driven domains in model membranes be smaller than 40 nm, and secondly, if so, do such nanodomains have an L_o character?
In this work we used MC-FRET in combination with novel monosialoganglioside GM_1 fluorescent probes to uncover the existence of nanodomains in lipid bilayers that should be in a homogeneous liquid-disordered (L_d) phase according to published phase diagrams 14,15. FRET has been frequently used in the past to reveal micro- to nano-scale heterogeneities in lipid membranes [16][17][18], but mostly on a qualitative level. The combination of FRET with MC simulations enabled us to quantify the sizes of domains down to a few nanometers, as well as the fractional area occupied by these domains. To assess the fluidity and phase of the nanodomains we employed solid-state wide-line and high-resolution MAS (magic angle spinning) NMR spectroscopy, two-color z-scan fluorescence correlation spectroscopy (FCS) 19, and FRET.
Determination of nanodomain size and fractional bilayer area by MC-FRET.
Description of the MC-FRET approach. FRET between a single donor and a single acceptor occurs at distances between 1 and 10 nm and can be used as a molecular ruler within this accessible range. The situation is different when FRET occurs in a lipid bilayer that contains nanodomains and an ensemble of heterogeneously distributed donors and acceptors. Here, the formation of nanodomain structures forces a homogeneous distribution of donors and acceptors (Fig. 1, case 1) into a heterogeneous one (Fig. 1, cases 2 and 3) when using appropriate fluorescent probes that possess either an increased or decreased affinity for such nanodomains. This causes a change in FRET efficiency that can be seen in the recorded fluorescence decays (Fig. 1, bottom right corner). In these cases, the range of accessible distances (domain radii) that can be determined is significantly broader (2-50 nm) 20 and lies exactly in the region where other techniques become less efficient. This remarkably broad range of accessible distances is a consequence of FRET occurring at the boundary of the nanodomains and of the fact that the length of that boundary depends on the nanodomain radius R_D. The entire process of energy transfer can be modeled using MC simulations under certain assumptions (see Materials and Methods for details). The simulated decay curves were fitted to the experimental data by varying the radius of the nanodomains R_D, the fractional bilayer area occupied by the nanodomains Ar (which is related to the nanodomain concentration c_D through c_D = Ar/(πR_D²)), and the distribution constants of donors K_D(D) and acceptors K_D(A). MC-FRET can detect various kinds of membrane heterogeneities, such as domains or pores 21,22. However, its resolution significantly depends on both K_D(D) and K_D(A). If the probes possess equal affinity for the nanodomains and the remaining bilayer, then the formation of heterogeneities in a bilayer will not induce a heterogeneous probe distribution and therefore no change in FRET efficiency will occur. Consequently, such heterogeneities will not be 'seen' by FRET and the selected probes. Thus, Donor/Acceptor (D/A) pairs with suitable K_D have to be chosen. Based on the literature 23 and our previous work 24,25 we used two different D/A pairs for the detection of lipid-driven nanodomains. The first one consisted of ganglioside GM_1 molecules labeled at the headgroup with either FL-BODIPY (g-GM_1) or 564/570-BODIPY (r-GM_1). Both g-GM_1 and r-GM_1 show increased affinity for the L_o microdomains but also for less ordered fluid nanodomains (refs 24 and 25, and as shown in Table 1 and Table 2). Importantly, these GM_1 probes do not intrinsically self-aggregate at the concentrations used in the FRET experiments (see SI and ref. 26). The second D/A pair consisted of 1,2-distearoyl-sn-glycero-3-phosphoethanolamine-N-[amino(Polyethyleneglycol)2000] labeled at the end of the Polyethyleneglycol chain with either carboxyfluorescein (CF-PEG-DSPE) or Rhodamine101 (Rh-PEG-DSPE). PEG-DSPE lipids were shown to have increased affinity for the L_o phase 23, which is confirmed by this work (Table 2). In addition, here we demonstrate that these probes preferentially partition into the nanodomains that are rich in Chol and SM but still maintain their liquid-disordered character.
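To make the simulation idea concrete, the following minimal sketch (our illustration, not the authors' code; all parameter values, including the donor lifetime and Förster radius, are assumptions) places circular nanodomains in a bilayer patch, partitions donors and acceptors according to K_D, and computes an ensemble donor decay from the summed pairwise transfer rates:

```python
import numpy as np

rng = np.random.default_rng(1)
box, R_D, Ar = 200.0, 10.0, 0.45                 # patch size (nm), domain radius, area fraction
n_dom = int(Ar * box**2 / (np.pi * R_D**2))      # c_D = Ar / (pi R_D^2)
centers = rng.uniform(0, box, (n_dom, 2))

def place(n, K_D):
    """Scatter n probes; K_D biases the probability of landing in a domain."""
    frac_in = K_D * Ar / (K_D * Ar + (1 - Ar))   # fraction of probes inside domains
    pts = rng.uniform(0, box, (n, 2))            # outside probes: uniform (approximation)
    inside = rng.random(n) < frac_in
    m = inside.sum()
    idx = rng.integers(0, n_dom, m)
    r, phi = R_D * np.sqrt(rng.random(m)), 2 * np.pi * rng.random(m)
    pts[inside] = centers[idx] + np.c_[r * np.cos(phi), r * np.sin(phi)]
    return pts

donors, acceptors = place(200, K_D=20.0), place(600, K_D=20.0)

tau_D, R0 = 4.0, 5.0                             # donor lifetime (ns), Foerster radius (nm)
d = np.linalg.norm(donors[:, None, :] - acceptors[None, :, :], axis=2)
k_tot = ((R0 / np.maximum(d, 0.5))**6 / tau_D).sum(axis=1)   # transfer rate per donor

t = np.linspace(0.0, 20.0, 200)                  # ns
decay = np.exp(-t[:, None] * (1.0 / tau_D + k_tot[None, :])).mean(axis=1)
print(decay[:5])
```

Fitting such simulated decays to the measured ones, while varying R_D, Ar, and the K_Ds, is what yields the domain parameters reported below.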
Nanodomains outside the L_d/gel phase coexistence region in DOPC/SM. These mixtures phase-separate at 23 mol% of SM (L_d + gel phase coexistence) and are fully converted into the gel phase at 81 mol% of SM at room temperature (Fig. 2) 27,28. (Footnotes to the tables: ** as compared to FRET obtained in a homogeneous bilayer; *** this conclusion is drawn based on the K_Ds given in Table 1.)

To detect and characterize nanodomains in this binary system we performed MC-FRET experiments in the range 0-15 mol% of SM with the g-GM_1/r-GM_1 and CF-PEG-DSPE/Rh-PEG-DSPE D/A pairs. All bilayers appeared homogeneous in confocal images (see an example in Fig. 2). MC-FRET results obtained using the g-GM_1/r-GM_1 D/A pair indicated that the bilayers were homogeneous at DOPC/SM (100-92/0-8), while at (90-85/10-15) we were able to detect nanodomains (Table 1). The presence of nanodomains is reflected in an enhanced relative FRET efficiency (see Table 1 for results and Materials and Methods for the definition) and a faster fluorescence decay of the donor g-GM_1 in the presence of r-GM_1 (compare the two decays in the top left panel of Fig. 3). Determination of the average domain radius yielded two global minima, at R_D = (8 ± 1) nm, Ar = (37 ± 10)% and at R_D = (12 ± 3) nm, Ar = (55 ± 10)%. The best fit was obtained for K_D = 10, showing a high affinity of the GM_1 probes for the domains. In contrast, the distribution of the CF-PEG-DSPE/Rh-PEG-DSPE D/A pair in the bilayer was not affected by the presence of nanodomains (K_D(D,A) = 1), which did not allow for the detection of nanodomains by means of this D/A pair (see Table 1 and the overlapping decays in the bottom left panel of Fig. 3). It is worth noting that binary DOPC/Chol mixtures exhibit different behavior. We showed previously that lipid mixtures of DOPC/Chol (65/35) were homogeneous as determined by FRET 24. Transient nanodomains were found for this binary mixture only close to the phase separation boundary by other methods 29,30, where the miscibility of Chol with DOPC is low 28 and Chol starts to phase-separate into anhydrous and monohydrate crystals 31.
Nanodomains outside the L_d/L_o phase coexistence region in DOPC/Chol/SM. Addition of 25 mol% of Chol to the DOPC/SM bilayers promoted the formation of nanodomains. Here nanodomains were detected at DOPC/Chol/SM (70-65/25/5-10) by the g-GM_1/r-GM_1 D/A pair. The enhanced relative FRET efficiency, as compared to the homogeneous DOPC/Chol/SM (75/25/0) bilayer, and the time-resolved fluorescence decays of the donor g-GM_1 can be seen in Table 1 and Fig. 3, respectively. Determination of domain sizes by MC-FRET yielded an average R_D = (9 ± 1) nm and Ar = (45 ± 5)%. Deep chi-squared minima were only reached when K_D(D) and K_D(A) were at least 20, demonstrating that the GM_1 probes were highly localized in the nanodomains (see Fig. SI4). Moreover, the nanodomains were also detected by the CF-PEG-DSPE/Rh-PEG-DSPE D/A pair at DOPC/Chol/SM (65/25/10) and (63/25/12) (Table 1 and Fig. 3). The affinity of the PEG-DSPE probes for the domains was lower (K_D(D) and K_D(A) ≈ 5), but sufficient to cause a change in the relative FRET efficiency and enable the determination of domain sizes at the higher SM amounts. The determined average R_D = (8 ± 1) nm and Ar = (55 ± 5)% are in good agreement with the parameters determined using the g-GM_1/r-GM_1 pair.
Supportive evidence for nanodomain existence by z-scan FCS. Our FRET measurements indicate that the nanodomains occupy up to 55% of the entire bilayer area in binary DOPC/SM as well as ternary DOPC/Chol/SM lipid mixtures and exhibit an average radius of approximately 10 nm. According to our previous work focusing on MC simulations of molecular probe diffusion in a lipid bilayer 32, the presence of stable (ca. >10 ms) nanodomains at such high domain concentrations slows down the diffusion of fluorescently labeled lipids (i.e., probes; Fig. 4). The extent to which the diffusion of the probes is slowed down depends in particular on their K_D, the size of the nanodomains, the diffusion coefficient of the nanodomains themselves, D(nanodomain), and the diffusion coefficient of the probes within those nanodomains, D(probe). The strongest impact on probe diffusion occurs when the nanodomains are immobile and the probes have a high affinity for them. When using classical FCS (where the focal waist is much larger than the nanodomains, about 300 nm vs. 10 nm), the presence of nanodomains is reflected in slower diffusion of the probe, which can still be described by the free diffusion model (for example, at K_D(probe) = 25, a domain radius of 50 nm, and a nanodomain diffusion coefficient of 0.8 μm²/s, probe diffusion is 5 times slower) 32. However, considering the small size of the nanodomains described in this manuscript, it can be expected that they are mobile. In such a case their impact on probe diffusion is less pronounced but still significant in most cases (for details see ref. 32). When probes avoid entering the nanodomains (K_D < 1, panel A of Fig. 4), their diffusion is slowed down as well and does not exhibit any deviations from free diffusion as seen by classical FCS. In general, their sensitivity to the presence of nanodomains is smaller compared to probes that partition mostly into the nanodomains.
Considering the high affinity of g-GM 1 for nanodomains (Table 1), we used it as a probe to detect nanodomains by FCS. For comparison, we also used the DiD probe. It can be inferred from FRET experiments using g-GM 1 and DiD (Fig. SI3), that DiD is homogeneously distributed between the nanodomains and the remaining bilayer. Therefore, the expected impact of nanodomains on its diffusion will be smaller.
In DOPC/SM lipid mixtures (panel B of Fig. 4), the dependence of the diffusion of g-GM_1 on the SM content of the lipid bilayer could be divided into two regimes. In the first regime, at DOPC/SM (100-92/0-8), the diffusion coefficients were constant within the error of the FCS measurement; in this regime, our FRET experiments showed a homogeneous bilayer. At DOPC/SM (90-85/10-15), where FRET detected nanodomains, the g-GM_1 diffusion coefficient decreased. A similar trend, but with slightly less distinct differences between the two regimes, was obtained for DiD. According to panel B of Fig. 4, the diffusion of g-GM_1 was on average about 5% slower in bilayers with nanodomains than in homogeneous bilayers, whereas the diffusion of DiD slowed down on average by about 3%. In order to judge the significance of the decrease in the diffusion coefficient D, a t-test was performed (see Tables SI2 and SI3 in the SI); p-values lower than 0.1 indicate a significant difference between two sets of data. In the case of g-GM_1, the change in D with respect to the composition DOPC/SM (100/0), which contains no nanodomains, was significant for the compositions (90/10) and (85/15) and insignificant for (88/12). In the case of DiD, the drop was significant only for (85/15).
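The significance test described above can be reproduced with a standard two-sample t-test; the following sketch uses made-up placeholder values rather than the measured diffusion coefficients:

```python
from scipy import stats

# placeholder diffusion coefficients (um^2/s), NOT the measured data
D_ref = [4.10, 4.05, 4.12, 4.08, 4.11]   # e.g., DOPC/SM (100/0), no nanodomains
D_dom = [3.90, 3.85, 3.95, 3.88, 3.92]   # e.g., DOPC/SM (85/15), with nanodomains

t_stat, p_value = stats.ttest_ind(D_ref, D_dom, equal_var=False)  # Welch's t-test
print(f"p = {p_value:.3g}; significant at the 0.1 level: {p_value < 0.1}")
```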
The impact of nanodomains on the diffusion of g-GM_1 was much stronger in DOPC/Chol/SM mixtures (panel C of Fig. 4), where g-GM_1 partitioned into the nanodomains more efficiently (see Table 1 for the K_Ds), compared to the DOPC/SM bilayers. A significant drop in the diffusion of g-GM_1 (see Table SI3 in the SI) occurred already at (70/25/5), where nanodomains were detected by FRET. The diffusion slowed down further as more SM accumulated in the nanodomains (note that the average domain radius and area coverage remain the same (Table 1) as the SM content is increased). Based on the similarity of the hydrophobic molecular regions of SM and g-GM_1, and the results of the MC simulation of probe diffusion in the presence of nanodomains 32, this can be explained by more efficient entrapment (longer dwell-time) of g-GM_1 in the SM-rich nanodomains. The abrupt increase in the diffusion coefficient at (63/25/12) occurred due to the formation of microscopically phase-separated domains and the concentration of Chol and SM in such domains. The bilayer was very close to the L_d/L_o phase separation boundary at this composition. Here, there is an increased risk of measurements being unintentionally performed on phase-separated GUVs, where the diffusion in the L_d phase is fast 33. This is also reflected in the large standard deviation error bar associated with this data point (panel C of Fig. 4). A similar pattern of diffusion coefficients was obtained for DiD. For comparison, the diffusion coefficient of g-GM_1 decreased on average by about 14% in bilayers containing nanodomains, whereas DiD diffusion was slowed down on average by about 5%. Of note, the decrease in D of DiD in the (67/25/8) and (65/25/10) bilayers was determined to be significant by the t-test (Table SI3).

Nanodomain fluidity. According to the published phase diagrams 27,28, the investigated bilayers (Table 1) should be homogeneous and in a neat liquid-disordered state. However, we observed nanodomains under these conditions; thus, we questioned to what extent these nanodomains were fluid and disordered. As shown in
To obtain further insight at a molecular level into the organization and dynamics of the lipid bilayers studied here, additional solid-state wide-line and high-resolution MAS 31P NMR experiments were performed 34,35. Wide-line NMR spectra obtained for pure DOPC bilayers (Fig. 5) exhibited a "powder-like" lineshape at 298 K, which is typical for a lamellar PC bilayer in its liquid-crystalline Ld phase 36,37. Under these conditions, the individual lipid molecules in the bilayer undergo fast rotational dynamics, which causes the typical shape and reduced width of the obtained NMR spectra. Analysis of the lineshapes revealed a chemical shift anisotropy, Δσ = σ∥ − σ⊥ (the width of the NMR spectrum), of approximately 45.3 ppm, which is typical for this phase. Addition of 25 mol% of cholesterol to the DOPC bilayers generated a spectrum representative of a lamellar bilayer system at 298 K (Fig. 5B), with a hint of a second subspectrum, i.e. a homogeneous bilayer with perhaps a small fraction of a second, slightly more ordered subdomain. In contrast, the sample composed of DOPC/SM (90/10) showed significant changes in the corresponding NMR lineshape (Fig. 5A). The NMR spectrum is clearly composed of two sub-spectra, the main component with an intense 90° edge at −14.6 ppm and a second at −18.3 ppm. As SM is a minor component (10%) of the lipid bilayer, we attribute these two sub-spectra to DOPC in two different dynamic environments. Fitting the lineshape to two axially symmetric powder patterns reveals that the first sub-spectrum, characterized by a chemical shift anisotropy of 38.4 ppm, contributes approximately 47% of the total intensity. This component reflects a lipid environment with slightly increased disorder in the headgroup region of the DOPC lipids. The second sub-spectrum, with its intense 90° edge at −18.3 ppm, is characterized by a larger chemical shift anisotropy of approximately 49.0 ppm. This increase indicates a different membrane environment in which the lipid headgroup regions (and presumably the whole lipid molecules) undergo reduced dynamics (with less motional averaging of the chemical shift anisotropy), presumably due to the close proximity of stiff SM molecules. In summary, the DOPC/SM (90/10) membranes appear to consist of two different environments: 47% of the bilayer was presumably SM-free, fluid and disordered, whereas the rest of the bilayer was richer in SM, with DOPC lipids in direct contact with SM molecules. This estimate is in agreement with our MC-FRET results, according to which the SM-rich domains occupied 37% or 55% (two global chi-squared minima) of the entire bilayer area (Table 1). Also in agreement with the FRET data, 5 mol% SM was not able to induce nanodomains (Fig. 5A), which would have been visible as a second NMR sub-spectrum characterized by a larger chemical shielding anisotropy.
For the ternary systems composed of DOPC/Chol/SM (70-65/25/5-10) lipid mixtures, static NMR spectra were clearly composed of two sub-spectra with different widths (Fig. 5B). Although multiple components (sub-spectra) are present in the system, the spectral properties of each are inconsistent with the broader spectra that would be expected of lipids in their gel phase. The broader of the two components has an intense 90° edge at −14 ppm, whilst the remaining component has a smaller chemical shift anisotropy, similar to that of DOPC/Chol bilayers in the absence of SM. Fitting the lineshape to two axially symmetric powder patterns indicates that the SM-rich component contributes approximately 44% of the entire spectral intensity, in good agreement with the MC-FRET results.
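The two-component model underlying these fits can be illustrated numerically. The snippet below is a simplified stand-in for the matNMR fitting routines: it simulates an axially symmetric CSA powder pattern by powder averaging and sums two weighted sub-spectra using the anisotropies and weights reported for DOPC/SM (90/10); the line-broadening value and the sign convention are assumptions of the sketch:

```python
# Sketch of a two-component axially symmetric 31P powder lineshape model.
import numpy as np

def axial_powder_pattern(ppm_edges, delta_iso, delta_sigma, lb_ppm=1.5):
    """Axially symmetric CSA powder pattern via numerical powder averaging.
    delta(theta) = d_perp + (d_par - d_perp)*cos^2(theta), where the spectral
    width d_par - d_perp = delta_sigma and d_iso = (d_par + 2*d_perp)/3."""
    d_perp = delta_iso - delta_sigma / 3.0
    d_par = delta_iso + 2.0 * delta_sigma / 3.0
    u = np.random.uniform(0.0, 1.0, 500_000)  # u = cos(theta), uniform for a powder
    freqs = d_perp + (d_par - d_perp) * u ** 2
    hist, _ = np.histogram(freqs, bins=ppm_edges)
    centers = 0.5 * (ppm_edges[:-1] + ppm_edges[1:])
    kernel = np.exp(-0.5 * ((centers - centers.mean()) / lb_ppm) ** 2)  # broadening
    spec = np.convolve(hist, kernel / kernel.sum(), mode="same")
    return spec / spec.max()

ppm = np.linspace(-40.0, 40.0, 801)
# Two sub-spectra with the reported anisotropies (38.4 and 49.0 ppm), weights 47/53
spectrum = 0.47 * axial_powder_pattern(ppm, -0.7, 38.4) \
         + 0.53 * axial_powder_pattern(ppm, -0.7, 49.0)
```

With these parameters the intense 90° edges fall near −13.5 and −17 ppm, close to the edge positions read off the experimental spectrum.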
Our MAS 31P NMR results further confirm the Ld character of the nanodomains found in the studied bilayers. As seen in Fig. 6, the isotropic NMR signal occurred at −0.71 ppm, which is the isotropic chemical shift value expected for DOPC bilayers. Upon addition of SM, or of Chol and SM, the variations in this value were minor (upfield to −0.74 ppm), but the NMR linewidth increased markedly from 15 Hz to 45 Hz. Despite this increase, the resonances remained relatively narrow, supporting the disordered character of the lipid headgroups within the domains. The lipids undergo fast dynamics, and the isotropic lineshapes of the MAS spectra are influenced by exchange processes; such behaviour is also typical for the Ld phase. The isotropic linewidths in the MAS 31P NMR spectra of lipids are largely dominated by the spin-spin (transverse) relaxation times, which are sensitive to motion on the ms to µs timescale 38. The similarity of the linewidths obtained for DOPC in the presence of SM, Chol or both SM and Chol indicates that both species are likely to interact with DOPC headgroups, slowing down the motions on the ms to µs timescale, which results in a reduction of T2 and an increase in the corresponding linewidth. The absence of more significant changes in the 31P powder lineshape indicates that the exchange occurs between populations of lipids with similar isotropic and anisotropic chemical shielding, suggesting that both populations have similar dynamic properties.
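Assuming the linewidth is T2-dominated, the Lorentzian relation Δν½ = 1/(πT2) gives a quick estimate of the change in T2; this is a back-of-the-envelope sketch, not part of the original analysis:

```python
# T2 estimated from isotropic MAS linewidths, assuming Lorentzian, T2-limited lines.
import math

for width_hz in (15.0, 45.0):
    T2_ms = 1.0 / (math.pi * width_hz) * 1e3
    print(f"linewidth {width_hz:.0f} Hz -> T2 ~ {T2_ms:.1f} ms")
# 15 Hz -> ~21 ms; 45 Hz -> ~7 ms: the threefold broadening implies a threefold drop in T2.
```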
The fact that the nanodomains were detected both by NMR and by FCS helps to restrict the range of possible nanodomain lifetimes. The readouts of both techniques are influenced by processes that occur on the micro- to millisecond timescale; therefore, the lifetimes of the nanodomains should be roughly in this range. The broadening present in the SM/DOPC MAS 31P NMR spectra (Fig. 6, red colour), together with the underlying broad subspectrum (ranging from 0.3 to −1.5 ppm), indicates that the DOPC is in exchange between different environments on the NMR timescale (ms/µs), in a manner analogous to that reported, e.g., by Bonev et al. 39. This observation is consistent with SM lipids forming a dynamic complex with multiple DOPC molecules over the timescale of the NMR experiment (ms/µs). Interestingly, the g-GM1, r-GM1, CF-DSPE-PEG and Rh-PEG-DSPE probes seem to exhibit increased affinity for this dynamic complex (see Table 1 for K_D values). The preferential localization of these probes in the nanodomains presumably originates from the structural similarity of their hydrophobic regions with SM and allows the determination of nanodomain sizes by MC-FRET.
Of note, transient and dynamic heterogeneities of much shorter lifetime, about 100 ns, and sizes of about 10 nm were revealed by neutron scattering 29,40 and MARTINI simulations 41. These were reported to exist in DPPC/Chol and DMPC/Chol bilayers close to the phase separation boundary where Chol crystals start to form. Moreover, longer-lived fluctuations in composition, of about 0.8 ms, were observed in ternary mixtures of high and low melting temperature lipids near miscibility critical points 42. In this work, in contrast to the above-mentioned cases, the nanodomains are found further away from the phase separation boundaries (Fig. 2).
Although the mechanism by which these nanodomains are formed is not yet well understood, we expect the process to be facilitated by the following factors. First, geometrical factors result in different packing preferences of DOPC and SM; consequently, SM tends to be surrounded by other SM molecules rather than by DOPC. Moreover, it has been documented by a variety of experimental approaches that SM and Chol preferentially interact with each other (for a review see ref. 43). It is also known that Chol promotes segregation of different PC components at low Chol contents, whereas it suppresses the segregation at higher concentrations (above 50 mol%) 44. All these interactions seem to be reinforced by hydrogen bonding between the -NH group of SM and the hydroxyl group of Chol, and between SM and DOPC 43,45-47. In addition, temporal thermal fluctuations and fluctuations in concentration may be involved in the formation of nanodomains 41. Groupings of three to five molecules have been found even in ideal binary mixtures with only nearest-neighbour interactions 48. Thus, hypothetically, it is possible that these temporal fluctuations act as seeds for the liquid-disordered nanodomains, in a similar way as the nanodomains themselves serve as formation platforms for microscopic Lo-phase domains.
Implications for the raft theory. So far, scientists have frequently used Lo microdomains as a model system for rafts, despite insufficient experimental evidence for the existence of such domains in cells. The size and physical properties of such microdomains are extreme in the context of the plasma membrane, yet they are still used as a model system for putative nanoscale rafts in cellular membranes. Given the complex composition of a cellular plasma membrane, where sharp and well-defined phase transitions can hardly be expected, the differences between various local environments are presumably more subtle than the differences between the Ld and Lo phases encountered in model systems. To bridge the gap between the extremes of Ld or Lo phase bilayers in artificial model systems and the plasma membrane of living cells, scientists have started using giant plasma membrane vesicles (GPMVs) as an intermediate model system that more or less preserves the biological complexity of native plasma membranes 5,49. Similarly to synthetic GUV model membranes, GPMVs may phase separate into two distinct microscopic phases. However, the differences between the Ld and Lo phases are significantly smaller than those observed in GUVs, presumably corresponding better to what is encountered in the plasma membranes of living cells. A disadvantage might be the poorer control of GPMV composition 6 and phase behaviour, which depends on the detergent used 49.
The Ld nanoscale domains found and characterized in this work have, in analogy to the Lo microdomains encountered in GUVs, a very simplified composition. However, in terms of domain sizes and physical properties, the nanodomains seem to represent a good model system for cellular rafts. Moreover, the plasma membrane of a living cell is constantly changing; rafts can thus be expected to form, disappear, or change their properties during their lifetime. In this context, the transient nature of the Ld nanodomains may also correspond better to the properties of cellular rafts.
Summary.
In this work, we discovered and characterized nanodomains in binary DOPC/SM and ternary DOPC/Chol/SM bilayers at compositions that should result in homogeneous bilayers according to published phase diagrams. The results of our MC-FRET, solid-state NMR and z-scan FCS experiments are summarized in the phase diagram of Fig. 2 and in Table 3. Briefly, all three methods indicate that the binary mixtures DOPC/SM (100-92/0-8) are homogeneous and that DOPC/SM (90-85/10-15) exhibits nanodomains. In the ternary lipid mixtures containing Chol, nanodomains were revealed at DOPC/Chol/SM (70-65/25/5-10).
The nanodomains of approximately 10 nm can be estimated to consist of roughly 400 to 500 molecules; they are enriched in SM but still contain a high amount of DOPC molecules, which is sufficient to keep the nanodomains fluid and disordered. Despite their Ld character, the nanodomains exhibit subtle differences in average environment and dynamics compared to their surroundings. The nanodomains appear long-lived, with a lifetime in the range of microseconds to several milliseconds. In terms of their size, fluidity, order and lifetime, these nanodomains may represent a relevant model system for cellular membranes and may be more closely related to the heterogeneities, e.g. nanocompartments, observed in cellular plasma membranes.
Methods
GUV preparation. GUVs were prepared by the electroformation method as described previously by Angelova et al. 50. All lipid mixtures were made from stock solutions in chloroform. The lipid mixture (100 nmol in approximately 200 μL of chloroform) containing the additional labelled lipids was spread onto two hollowed titanium plates. These were placed on a heating plate at approximately 47 °C to facilitate solvent evaporation. The plates were subsequently put under vacuum for at least 1 h to evaporate remaining solvent traces. The lipid-coated plates were assembled using one layer of Parafilm as an insulating material. The electroswelling chamber was filled with 1 ml of preheated sucrose solution (with an osmolarity of 103 mOsm/kg) and sealed with Parafilm. An alternating electric field at 10 Hz, rising from 0.02 V to 1.1 V (peak-to-peak voltage) during the first 45 min, was applied and then kept at 1.1 V and 47 °C for an additional 1.5 h. This sequence was followed by a so-called detaching phase at 4 Hz and 1.3 V for 30 min. Finally, the GUVs were transferred to a microscope chamber containing glucose buffer (~80 mM glucose, 10 mM HEPES and 10 mM NaCl, pH 7.2) with an osmolarity of 103 mOsm/kg. All lipid mixtures contained 2 mol% of biotinyl-PE to immobilize the GUVs on the bottom of a chamber coated with BSA-biotin/streptavidin. For the FCS experiments, the probe-to-lipid ratio was 1:100,000, whereas for the FRET experiments the donor (acceptor)-to-lipid ratio was 1:1000 (1:200) in the case of the g-GM1/DiD pair and 1:200 (1:200) in the case of the g-GM1/r-GM1 pair.
Sample preparation for NMR experiments. The lipid mixtures were prepared by dissolving the appropriate lipids in a 2/1 vol/vol HCCl3/MeOH solution, followed by evaporation, resuspension in water and freeze-drying, as described previously 51. To produce multilamellar vesicles, an appropriate amount (around 20 mg) of dry lipid powder was then rehydrated in the same buffer as used above (except that D2O was used instead) at a one-to-one weight ratio, followed by several freeze-thaw cycles and vortexing. Finally, the membrane suspensions were pelleted into 4 mm MAS NMR rotors (Bruker, Germany) and measured immediately or kept at −20 °C prior to the NMR experiments.
FCS and FLIM-FRET measurements.
Both types of measurements were performed on a home-built confocal microscope consisting of an inverted confocal microscope body IX71 (Olympus, Hamburg, Germany) and pulsed diode lasers (LDH-P-C-470, 470 nm, and LDH-D-C-635, 635 nm, PicoQuant, Berlin, Germany) operated at a 10 MHz repetition rate. The lasers were pulsed alternately to avoid artifacts caused by signal bleed-through. The laser light was coupled into a polarization-maintaining single-mode optical fiber and re-collimated at the output with an air-spaced objective (UPLSAPO 4X, Olympus). The light was reflected up to a water immersion objective (UPLSAPO 60x, Olympus) by a 470/635 dichroic mirror. The signal was split between two single-photon avalanche diodes using 515/50 and 697/58 band-pass filters (Chroma, Rockingham, VT) for the green and red channels, respectively. z-scan measurements were conducted on the top of selected GUVs. First, the membrane was placed at the laser beam waist, then moved 1.5 µm below the waist, and finally scanned vertically in 20 steps (spaced 150 nm apart). A 60-second measurement was performed at each step. The laser intensity at the back aperture of the objective was around 6 µW for each laser line. To obtain the average diffusion coefficients presented in Fig. 4, z-scan FCS measurements were performed on 5-10 different GUVs. Further details of the data analysis are given elsewhere 19.
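Details of the z-scan analysis are given in ref. 19; in outline, the diffusion time fitted from the autocorrelation at each step follows a parabola in the membrane-to-waist distance. A minimal sketch of the final fitting step, assuming the standard parabolic z-scan relation and placeholder numbers, is:

```python
# Sketch of z-scan FCS analysis: fit the diffusion time vs. membrane position.
# tau_D(dz) = (w0^2/4D) * (1 + lam^2*dz^2 / (pi^2*n^2*w0^4)); values are placeholders.
import numpy as np
from scipy.optimize import curve_fit

lam, n = 0.635, 1.33  # excitation wavelength (um) and refractive index (assumed)

def tau_d(dz, D, w0):  # dz, w0 in um; D in um^2/s; tau in s
    return (w0**2 / (4.0 * D)) * (1.0 + lam**2 * dz**2 / (np.pi**2 * n**2 * w0**4))

dz = np.arange(20) * 0.15 - 1.5  # 20 steps spaced 150 nm, in um
rng = np.random.default_rng(1)
tau_meas = tau_d(dz, 5.0, 0.25) * (1 + 0.03 * rng.standard_normal(20))  # synthetic

(D_fit, w0_fit), _ = curve_fit(tau_d, dz, tau_meas, p0=(3.0, 0.3))
print(f"D = {D_fit:.2f} um^2/s, w0 = {w0_fit * 1e3:.0f} nm")
```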
FLIM-FRET measurements were done by acquiring an image (512 × 512 pixels, 0.6 ms/pixel) of a GUV at its cross-section. The experimental fluorescence decay of the donor taken for further analysis was obtained by summing the measured fluorescence decays from at least five different GUVs; variability between fluorescence decays obtained from individual GUVs was negligible (see Fig. SI6). The intensity of the 470 nm laser was set to 1 µW, low enough to avoid pile-up effects in the FLIM-FRET measurements. The experiments were performed at 25 °C.
NMR experiments. All 31P NMR experiments were acquired using a 500 MHz Avance III spectrometer (Bruker, Switzerland). Static wide-line NMR spectra of multilamellar vesicles were acquired at 298 K using a Hahn echo pulse sequence with a single π/2 pulse of 7.8 µs length, an inter-pulse delay of 50 µs and a recycle delay of 4 s. During acquisition, TPPM proton decoupling 52 was applied (40 W) and ca. 10000 scans were accumulated. For high-resolution MAS NMR spectra, the samples were spun at 5 kHz and single-pulse excitation followed by proton decoupling (parameters as for the static NMR experiments) was used. Between 200 and 600 scans were accumulated.
NMR data were processed in matNMR 53, with all spectra zero-filled to 4096 points and 30 Hz line broadening applied prior to Fourier transformation. Powder lineshapes were analyzed by fitting to one or two axially symmetric powder patterns, using the fitting routines within matNMR.
Analysis of FLIM-FRET data. Förster resonance energy transfer (FRET) was analyzed from fluorescence lifetime images (FLIM). Each pixel contains information on the arrival times of individual photons. These times are used to construct the fluorescence decay, whose shape can be modified by FRET; analysis of the decay with an appropriate mathematical model yields further information. In this work, the so-called Baumann-Fayer (BF) model was used (see SI) to (i) determine the experimental surface concentration of the acceptors, which was required as one of the input parameters for the MC simulations, and (ii) obtain information about how donors and acceptors were distributed in the lipid bilayer. The relative FRET efficiency E_rel used in the manuscript is defined as the ratio between the FRET efficiency 54 for a heterogeneous bilayer, E_hetero (with nanodomains), and that for a homogeneous bilayer, E_homo (without nanodomains): E_rel = E_hetero/E_homo. Homogeneous bilayers were selected to contain 0% SM and the same amount of Chol as the heterogeneous bilayers.
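As a minimal numerical illustration of this definition (the decays below are synthetic single exponentials standing in for the measured FLIM histograms; the BF model itself is described in the SI):

```python
# Sketch: relative FRET efficiency E_rel = E_hetero / E_homo from donor decays.
import numpy as np

t = np.linspace(0.0, 25.0, 500)  # time axis (ns)

def fret_efficiency(decay_donor, decay_donor_acceptor):
    """E = 1 - integral(F_DA) / integral(F_D)."""
    return 1.0 - np.trapz(decay_donor_acceptor, t) / np.trapz(decay_donor, t)

F_D = np.exp(-t / 3.5)            # donor-only decay (placeholder lifetime)
F_DA_homo = np.exp(-t / 2.0)      # donor quenched in a homogeneous bilayer
F_DA_hetero = np.exp(-t / 1.6)    # donor quenched in a bilayer with nanodomains

E_rel = fret_efficiency(F_D, F_DA_hetero) / fret_efficiency(F_D, F_DA_homo)
print(f"E_rel = {E_rel:.2f}")
```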
The determination of nanodomain sizes was performed by analyzing the experimental fluorescence decay with Monte Carlo simulations. Two sets of GUVs were always prepared: the first set contains GUVs with homogeneous bilayers and is used to calculate the number of acceptors in the GUVs by the Baumann-Fayer model (see SI). The number of acceptors is assumed to be the same in the other set of GUVs, where nanodomains of unknown dimensions might exist; this is done to reduce the number of optimized parameters. The entire fitting procedure has been described in detail elsewhere 24 and is briefly summarized in what follows. A defined number of donors, acceptors and circular domains with a given radius R_D was generated in the lipid bilayer. Whereas the number of donors was kept sufficiently high for statistical reasons, the number of acceptors had to be determined by the BF model (SI) to correspond to the actual experimental conditions. First, the donors and acceptors were distributed according to the distribution constants defined as K_D(D) = [D_inside]/[D_outside] and K_D(A) = [A_inside]/[A_outside]. In the next step, a donor was randomly excited and the time at which an energy transfer event took place was calculated. This process was random and modulated by the overall energy transfer rate Ω_i according to Δt_i = −ln(γ)/Ω_i, where γ is a randomly generated number between 0 and 1. The outcome of each simulation step was the time interval Δt_i between the excitation and the energy transfer event. To achieve good statistics, each generated configuration was used 100 times before a new configuration was generated. The total number of excitation events was 3 × 10^5. By constructing a histogram of the Δt_i intervals, the total survival probability function G(t) was obtained and the simulated decay of donors quenched by the acceptors was calculated. The simulated decay was fitted to the experimental one by varying the input simulation parameters, i.e. the domain radius R_D, the area fraction occupied by the domains, Ar, and K_D(D,A). The global minimum was found by scanning the chi-squared space of physically acceptable parameters R_D, Ar and K_D(D,A). Because of the structural similarity between donors and acceptors, and the weak dependence of R_D and Ar on the actual values of K_D, K_D(D) was kept identical to K_D(A).
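The core of this simulation loop can be sketched as follows; the geometry, Förster radius, probe numbers and partition coefficient are illustrative placeholders, and periodic boundary handling is omitted for brevity:

```python
# Sketch of the MC-FRET procedure: domains + probes, then Dt_i = -ln(gamma)/Omega_i.
import numpy as np

rng = np.random.default_rng(0)
L, R0, tau_D = 200.0, 5.0, 3.5             # box side (nm), Forster radius (nm), lifetime (ns)
R_dom, area_frac, K_part = 10.0, 0.3, 4.0  # domain radius, area coverage, K_D(D) = K_D(A)

n_dom = int(area_frac * L**2 / (np.pi * R_dom**2))
domains = rng.uniform(0.0, L, size=(n_dom, 2))

def inside_domain(p):
    return bool(np.any(np.hypot(*(domains - p).T) < R_dom))

def place(n):
    """Distribute n probes so that the inside/outside density ratio equals K_part."""
    pts = []
    while len(pts) < n:
        p = rng.uniform(0.0, L, 2)
        accept = K_part / (K_part + 1.0) if inside_domain(p) else 1.0 / (K_part + 1.0)
        if rng.random() < accept:
            pts.append(p)
    return np.array(pts)

donors, acceptors = place(200), place(60)

dts = []
for d in donors:                                     # each donor reused 100x, as in the text
    r = np.hypot(*(acceptors - d).T)
    omega = np.sum((1.0 / tau_D) * (R0 / r) ** 6)    # total transfer rate Omega_i
    dts.extend(-np.log(rng.random(100)) / omega)

# Histogram of Dt_i -> survival probability G(t) -> simulated quenched donor decay
t_edges = np.linspace(0.0, 20.0, 201)
hist, _ = np.histogram(dts, bins=t_edges)
G = 1.0 - np.cumsum(hist) / len(dts)
decay = G * np.exp(-t_edges[1:] / tau_D)
```

In the full analysis this simulated decay is compared to the experimental one while R_D, Ar and K_D(D,A) are varied, and the chi-squared landscape is scanned for the global minimum.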
Analysis of z-scan FCS data has been described many times before 19,55 and is briefly summarized in SI. | 8,356.6 | 2017-07-14T00:00:00.000 | [
"Biology",
"Chemistry",
"Physics"
] |
Circuit Implementation of Function Cascade Synchronization
The Rössler chaotic system was chosen for implementing function cascade synchronization using electronic components. Utilizing the cascade technique with a functional relationship, we achieve the synchronization of the signals of a specific chaotic system. On the basis of the Lyapunov stability theory, and by designing appropriate controllers with estimated parameters, we successfully make the driving system synchronize with the final response system. Finally, the circuit implementation of function cascade synchronization is in good agreement with the numerical and circuit simulations. Circuits implemented using the function cascade technique could be applied to sensors designed to detect the signals of a specific chaotic system.
The cascade technique has the advantage of high adaptability even when the parameters of the driving system are unknown to the response system. Additionally, in accordance with the projective synchronization method, an error function incorporating the functional relationship can be designed. Therefore, cascade synchronization with the functional error function can successfully achieve the synchronization of a specific system. In this study, the Rössler chaotic system is used to achieve function cascade synchronization. Furthermore, the numerical simulation, circuit simulation, and circuit implementation are all successfully carried out to verify the feasibility of function cascade synchronization.
Function Cascade Synchronization
Consider n-dimensional chaotic systems of the form

dX/dt = f(X) + F(X)Π (driving system),
dY/dt = g(Y) + G(Y)Π̂ + ξ (response system),

where X and Y are the state vectors of the driving and response systems, respectively. Moreover, f and g are continuous vector functions; F and G are state matrix functions; Π and Π̂ are the theoretical and estimated parameter vectors, respectively; and ξ is the controller vector. The Lyapunov function and its derivative are of the form

V = (1/2)(eᵀe + Π̃ᵀΠ̃), dV/dt = eᵀ(de/dt) + Π̃ᵀ(dΠ̃/dt),

where Π̃ = Π − Π̂. Then, design a suitable error function e = Y − QX, where Q is the vector function encoding the functional relationship, and choose the adaptive controller ξ appropriately; function cascade synchronization is achieved if dV/dt < 0, so that e → 0 as t → ∞.
Numerical simulation
The driving system, the Rössler chaotic system, is described as

dx/dt = −y − z,
dy/dt = x + ay,
dz/dt = b + z(x − c),

where a, b, and c are completely unknown parameters. When (a, b, c) = (0.2, 0.2, 5.77), the driving system is chaotic (Fig. 1).
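For reference, a minimal numerical integration of the driving system with these parameter values (initial condition chosen arbitrarily) can be sketched as:

```python
# Sketch: integrate the Rossler driving system with (a, b, c) = (0.2, 0.2, 5.77).
from scipy.integrate import solve_ivp

a, b, c = 0.2, 0.2, 5.77

def rossler(t, state):
    x, y, z = state
    return [-y - z, x + a * y, b + z * (x - c)]

sol = solve_ivp(rossler, (0.0, 500.0), [1.0, 1.0, 1.0], max_step=0.01)
x, y, z = sol.y  # chaotic trajectory, cf. the attractor in Fig. 1
```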
The subresponse system and the final response system are constructed in analogous forms, where α1 and α2 are the estimated values of the parameter a; β and γ are the estimated values of the parameters b and c, respectively; and ξ1, ξ2, ξ3, and ξ4 are the controllers. The Lyapunov function and its derivative follow the general form given above, and the controllers and estimated-parameter update laws are designed so that dV/dt < 0. According to the Lyapunov stability theorem, the function cascade method can indeed achieve synchronization of the Rössler chaotic system with uncertain parameters, as shown in Fig. 2.
The special feature of the cascade method is that it proceeds in two steps. In each step, one of the signals of the first system is kept identical to that of the second system, the error functions are quadratic, and all the parameters are completely unknown; together, these elements constitute our method, function cascade synchronization. Figure 2 shows that the error functions tend to zero once the controllers are applied to the subresponse and final response systems, which means that function cascade synchronization is achieved.
Owing to the sensitivity of the Rössler chaotic system, the circuit must be kept free of noise interference for the synchronization circuit to be implementable. Therefore, the supply voltages of all the amplifiers in the circuit design are decoupled through an additional 0.1 μF capacitor to ground. In this way, the signals in the circuit remain clean.
Finally, the phase diagrams obtained by circuit simulation (Fig. 4) and by circuit implementation (Fig. 5) show diagonal lines, which means that the two variables are identical; that is, the driving system and the final response system successfully achieve function cascade synchronization.
Conclusions
Firstly, the result of the numerical simulation shows the feasibility and effectiveness of the function cascade method. Secondly, despite the specific characteristics of the Rössler chaotic system, the complexity of the function cascade method, and the difficulty of real circuit implementation, we overcame the problems encountered and achieved the circuit implementation of function cascade synchronization of the Rössler chaotic system by careful circuit design, namely by decoupling the supplies with grounded capacitors to avoid noise interference. Finally, the result of the circuit implementation verifies the success of our scheme for implementing the circuit of function cascade synchronization of the Rössler chaotic system with fully unknown parameters. | 1,100.4 | 2019-08-09T00:00:00.000 | [
"Computer Science"
] |
‘Me inda nampak’ – Pronoun Use in Malay-English Codemixed Social Media Texts
Available online: 30/12/2020 Abstract. This paper investigates the use of the English first-person singular object pronoun ‘me’ as a subject in conversations on WhatsApp and Telegram between university students in their twenties. It was found that the feature occurs more when interlocutors are code switching, especially in paired chats, where ‘me’ often replaces the Malay pronoun aku or saya. This paper explores the reasons for this and how the feature has come to be used in synchronous electronically mediated conversations between young Bruneians. The findings show that using ‘me’ serves as a polite speech marker, perceived as a softer expression than Malay aku in conversations, depending on the interlocutors.
INTRODUCTION
In a bilingual or multilingual society, code switching is a common phenomenon (Fatimah Haji Awang Chuchu, 2007). Code switching is the use of two or more languages or varieties in a conversation, mostly by bi- or multilingual speakers. In Brunei, Malay-English language alternation is common, as Malay is generally the first language of the population and English serves as a second language (Jones, 2007). With more than one language at their disposal, interlocutors are able to choose whichever is most convenient to convey their meaning. In fact, code switching is a common choice for Bruneians in both written and spoken contexts (Deterding, 2009; McLellan & Noor Azam Haji-Othman, 2012). The ability to code switch between languages arises because interlocutors have high levels of language proficiency (Wood, 2016) as a result of bilingual education, family background, and exposure to social media and entertainment, which are mainly in English.
Bruneians, especially those of the younger generation, tend to use common phrases in English, such as 'I love you' or 'I'm sorry', because they are accustomed to them, as opposed to the equivalent Malay phrases, which they might find unnatural and awkward as they are rarely used. Most studies on code switching in Brunei examine the functions of and reasons why interlocutors code switch (Fatimah Haji Awang Chuchu, 2007; Deterding & Salbrina, 2013; Faahirah Rozaimee, 2016), but they rarely look at the choice of pronouns used.
In Malaysia, the use of pronouns is influenced by gender. In a Malay-medium sentence or conversation, females have a tendency to use English pronouns while males are more likely to use Malay pronouns (Normala Othman, 2006). Women from urban areas use English pronouns, whereas men's choice of pronouns is affected by whom they are talking to. According to Lukman (2009), the use of pronouns is influenced by age, social status and the level of closeness in a relationship. Nor Shahila Mansor, Normaliza Abd Rahim, Roslina Mamat and Hazlina Abdul Halim (2018) also reported that the use of pronouns is heavily affected by the social status and relationship of the interlocutors as well as the context of the conversation.
In other countries where Malay pronouns are used, such as Indonesia, only the use of first-person plural pronouns differs between formal and informal Indonesian, with little variation in the other pronouns (Sneddon, 2002). Instead, speakers substitute pronouns with kinship terms or personal names. Another politeness strategy Indonesians use is softeners or hedging, which make them sound softer (Sneddon, 1996). As in Malaysia, the choice of personal pronouns is affected by factors such as age, social status and social setting (p. 134). The first-person singular pronoun 'gua' or 'gue' is associated with informal situations and is used between equals or from higher to lower status. This paper investigates the use of the English first-person singular pronoun 'me' as the subject of a sentence, since English is considered to be the language of the young (Ożóg, 1992). Sometimes, 'me' is also used as a possessive pronoun. This pattern can be seen when interlocutors are code switching between Malay and English.
PRONOUNS
Malay personal pronouns differ from those of English. Firstly, Malay does not distinguish between subject and object pronouns (Othman Sulaiman, 2010), while English does. For possessive pronouns, Malay adds the suffixes -ku, -mu and -nya (Asmah Haji Omar, 1982). Informal pronouns such as aku (I), kamu (you), engkau (you), ia (him/her), kami (we) and kita (us) are indigenous to Malay. In both Brunei and Malaysia, formality and respect are complex matters, depending on age, social ranking and the closeness of a relationship.
Unlike English, Malay does not have any gender-specific pronouns, but it does distinguish between formal and informal pronouns. Asmah Haji Omar (1982) classifies Malay pronouns into three categories: polite, neutral and intimate, of which the first two are considered formal. Formal pronouns (saya/kita) are rarely used in daily conversation and are mostly associated with the workplace or interviews. This is seen as a form of politeness, as is the use of the specific terms of address that accompany them, reflecting seniority and/or the social status of the individual.
The Malay first-person pronoun used by Bruneians in informal contexts is aku/ku, which can be used both as a subject and as an object pronoun, typically between close friends or by 'a superior to an inferior, either in age or in social status' (Othman Sulaiman, 2010). English, on the other hand, has different pronouns for these functions, namely 'I' and 'me'. The subject 'I' comes before the verb, while the object 'me' comes after the verb; Wales (1996) refers to the subject as the reflection of the ego, or the speaker. This paper emphasizes the use of the first-person singular pronoun 'me' and how it mimics the functions of 'I' and aku in Brunei's context in electronically mediated communication (EMC) through social media platforms such as WhatsApp and Telegram. This may help us to understand how young people's pronoun use in Brunei has come about and how code switching affects their choices.
RESEARCH METHOD
The data are drawn from electronic conversations (WhatsApp and Telegram) of Universiti Brunei Darussalam students, collected with their consent over a period of two weeks between May 2016 and February 2017. The data were collected from dates prior to the research being conducted, thus eliminating the participants' feeling of being 'observed' in what they said and ensuring that the conversations were reasonably natural (Labov, 1972). The chats were then narrowed down to focus on synchronous conversation. Names and places were anonymized to ensure confidentiality. All 11 participants are in their twenties and are bilingual in Malay and English.
There are four paired chats and two group chats, making six datasets in total. Two paired chats are female-male interactions and the other two are female-female interactions, while one of the group chats involves three females and one male and the other involves three female participants. The female participants are henceforth referred to as F(n) and the males as M(n), with (n) being the participant's number; for example, F1 is the first female participant and M2 the second male participant.
Using Myers-Scotton's (1997) Matrix-Language-Frame Model (MLF), the analysis focuses on code switching where the Matrix Language (ML) is the dominant language supplying the majority of the morphemes, and the Embedded Language (EL) supplies only a proportion of the lexical content. For simplicity, only the language switches are examined, adopting Jacobson's (2001, p. 61) view that 'one language occupies a dominant position and the other is subordinated', together with the MLF model. In this context, the dominant language is the ML and the other language present is the EL. For instance, example [1] has English as the ML, as it has a higher number of words than Malay, which is the EL. Table 1 below shows an overview of the use of first-person singular pronouns in English and Malay in the data. The paired chats are labelled samples A, B, C and D, while the group chats are labelled samples A1 and B1. Samples A and D are chats between female-female participants, and samples B and C between female-male participants. Sample A1 has four participants, three females and one male, while sample B1 consists of three female participants. Looking at the data, there are some differences in the use of pronouns between the two languages. Overall, the subject first-person singular pronoun 'I' has the highest percentage of use, 49.4%, making it the most popular pronoun in both paired and group chats. This could be due to the high number of monolingual English (44.21%) and predominantly English (13.37%) messages found in the data. It is followed by the Malay first-person singular pronoun aku with 25.6% and the English object first-person singular pronoun 'me' with 25%. Developing the argument that interlocutors are using the 'me' pronoun as a subject, it is not surprising that it has almost the same percentage as its Malay counterpart. This suggests that these two pronouns might be used interchangeably, with the same function. The analysis will only consider the use of the 'me' pronoun by the interlocutors as a subject, an object first-person and a possessive pronoun.
Token Analysis
In the keyboarded conversations, it was found that the use of 'me' is common between interlocutors in three different categories: 'me' as subject, 'me' as object and, finally, 'me' as possessive pronoun. Table 2 below shows the use of the 'me' pronoun in its object, subject and possessive forms in the paired chats. There is a higher percentage of 'me' as a subject pronoun, 62.8%, in paired chats, especially in Samples B and D. Sample B has an equal percentage of 'me' as object and possessive pronouns, 6.1% each, in comparison to 'me' as a subject first-person pronoun (87.9%). Meanwhile, Sample D has an almost equal use of 'me' as object (45.1%) and subject (49%) pronouns, and only 5.9% is used as a possessive. Interestingly, Sample C has only one function of the 'me' pronoun, namely as an object (100%). This contrasts with Sample A, in which 'me' is not used as an object pronoun at all, but 83.3% of its uses act as subject and 16.7% as possessive. Table 3 shows the use of the 'me' pronoun in group chats. In comparison with the paired chats, the group chats have a higher percentage of the 'me' pronoun being used as an object, 58.1%. However, it is difficult to say that group chats have a stronger tendency to use 'me' in its object form than paired chats, as Sample A1 has a higher percentage of 'me' as a subject pronoun (80%) than as an object (13.3%). It is possible that the number of participants in a conversation affects the interlocutors' choice of pronouns. The following sections discuss the linguistic patterns of the pronoun 'me' followed by English and by Malay.
'me' followed by English
There are cases in which the 'me' pronoun is followed by English, although most of the time these utterances are shorter. In both examples [2] and [3], the 'me' pronoun is followed by English, regardless of whether it occurs at the beginning or the end of the utterance. In example [4], all the pronouns used are in English, followed by Malay words inter-sententially. In example [5], 'me' is introduced at the beginning of the utterance and then followed by Malay. It seems typical in the data that interlocutors start utterances in English and then switch to Malay after using 'me'. This suggests that using English pronouns does not necessarily trigger the interlocutors to switch back to English. It should be noted that, in example [5], M3 used both Malay and English pronouns in the same utterance. It can be said that the 'me' pronoun is interchangeable with aku and has a similar function in the sentence. Both 'me' and aku here appear in object form, but in translation they function as the subject. This suggests that the 'me' pronoun is following Malay syntax, as takut and nervous are both adjectives. In examples [9] and [10], F1 and M3 used 'me' as an object and a subject pronoun, respectively. In [9], the first section of the utterance follows English as the ML, as it complies with English grammatical structure. However, in the second section of the utterance, F1 switched to Malay after 'me', which seems to comply with Malay sentence structure, as it translates to 'aku balum liat hari ini'. She then switched back to English at the end of the utterance, perhaps unconsciously correcting her choice of language back to the one she started with.
Analysis of data extracts
'me' as the object pronoun
[9] Dont tell me! Me balum liat today not yet see
('Don't tell me! I haven't seen it today') (Sample D: F1)
In example [10], the concept of 'me' is the same as saying aku (I). In a loose translation, what M3 meant to say would be 'kalau aku, aku lari', which means 'if it were me, I'd run'. However, in order to simplify his message, M3 shortened it by mixing the two languages together following the Malay grammatical structure, with English words except for 'lari' (run) in the last part. Example [11] has Malay as the ML, although it starts with the English object pronoun 'me'. The sentence translates to 'aku inda nampak', which means 'I don't see'. This is an instance where participants use 'me' synonymously with the Malay pronoun aku.
'me' as possessive pronoun
There are also cases in which the 'me' pronoun functions neither as an object nor as a subject pronoun but as a possessive instead. This relates to the Malay possessive having the same form as the first-person pronoun, i.e. -ku, as in buku aku or buku ku (my book). This allows interlocutors to adapt the pattern to the English pronoun 'me' to make it simpler.
('Mine is at 8.30') (Sample B: F3)
In the example above, the 'me' pronoun functions as an independent possessive pronoun. F3 responds to a question about what time her class would be, which prompted her to respond simply with a reference to herself and the time. This could be read in two ways, one of which is 'aku 8.30', a direct translation of the sentence. This could mean that Malay sentence structure has an influence on the use of the 'me' pronoun. In example [13], the pronoun functions as a possessive pronoun. At this point, the use of the 'me' pronoun could be considered habitual (Table 2), as F4 could easily have used 'my' instead of 'me'.
Discussion
The use of 'me' for aku instead of 'I' is found to be common in the Bruneian context, especially among the younger generation, and could be seen as a form of creolisation between English and Malay. This is similar to the situation with the Sranan Tongo and Standard English pronoun systems (Sebba, 1997). Sranan Tongo bases its first-person 'mi' and second-person pronoun 'yu' on the English pronouns 'me' and 'you'. However, it has simplified the system by turning the English pronoun 'me' into a subject form instead of using 'I' (p. 153), which concurs with what was found in the data. Apart from Brunei, Malaysia is also known to adapt English pronouns to fit the community's language. However, instead of using 'me' in a sentence, Malaysians use the subject first-person pronoun 'I', for example 'I tak suka' (I do not like), which is a direct translation of the phrase from English to Malay. In Brunei, however, English pronouns tend to conform to Malay syntactic patterns in an attempt to simplify the language, which coincides with what is shown in examples [5] and [10] of this study and with what was found by Ożóg (1987).
The use of 'me' instead of 'I' by Bruneians could serve to dissociate themselves from Malaysians, claiming this usage as a marker of their own identity and solidarity, just as they are proud of Brunei Malay and feel it is superior to and different from Standard Malay (Martin, 1996).
In Malaysia, aku and kau do not occur freely for men, depending on whom they talk to, while women tend to use 'I' and 'you' more often (Normala Othman, 2006). Normala Othman's study compared three different settings, namely mixed-group, male-only and female-only conversations, and found consistent patterns in the use of Malay and English pronouns: male-male interactions used Malay and male-female interactions used English, while female-female interactions used both Malay and English, with the latter dominant. This concurs with what was found in the data in Tables 2 and 3, in which most of the female-female and male-female conversations show heavy use of English pronouns and mixed Malay and English pronouns, respectively. Therefore, while Malaysians have 'I' and Bruneians have 'me', both show that English pronouns are used to replace Malay pronouns, particularly in code switching, which could be argued to be an emerging feature of Brunei English.
There may be several factors behind this phenomenon, apart from the avoidance of repetition; one of them is politeness. The Malay pronouns aku and kau can seem rude or sound rough, and should be avoided especially when talking to strangers or to someone older or superior (Normala Othman, 2006). In a way, aku and kau are terms used only by the older generation towards the younger generation (Nik Safiah Karim, 1995). Because Malay pronouns encode a hierarchical system of respect and seniority, younger people tend to lean on English pronouns, as these do not mark any status (Noor Azlina Abdullah, 1979).
By using English pronouns, speakers make themselves equal to the other participants, regardless of age, without offending them. However, the data consist only of interlocutors who are close friends within the same age group, and participants still tend to use English pronouns. Perhaps, as Krumholz et al. (1995) reported (cited in Siewierska, 2004, p. 219), the use of 'I' is considered authoritarian; in Sierra Popoloca, speakers therefore use 'we', which is felt to be more normal, although in our case most people are more comfortable using 'me'. This leads Bruneians to accommodate politeness strategies in their discourse by using 'me' instead (Kamsiah Abdullah, 2016) as a softer, less assertive and more intimate form of address.
As the majority of the participants are women, they generally tend to steer away from what they consider rude and opt for a politer form of communication, which has become known as women's language (Lakoff, 1973). This is supported by one of the participants, who claimed that it is easier and friendlier to use English than Malay when asked about their use of pronouns in texts. However, English pronouns were not limited to women but were used by men as well, as seen in examples [5] and [10]. It could be said that in a female-dominant group, the male might be influenced to use English pronouns as a form of politeness and accommodation. Normala Othman (2006, p. 25) concluded that while men are able to switch between the two languages, women are not flexible in their choices because Malay pronouns "are not available to them", which could be one of the reasons why aku is lacking in the all-female interactions, as shown in Table 2 for Samples D and B1, with the exception of Sample A. Ożóg (1996) claims that mixed-language pronouns "occur very infrequently in Brunei" (p. 186), unlike in Malaysia, but given the research of the last 20 years we can say this is no longer true, although importance is still attached to using "the correct form of address within Bruneian society" (p. 187). Bruneians still strongly believe in the hierarchy system; however, it does not prevent them from using English pronouns in their interactions.
CONCLUSION
This paper reveals that although EMC favours informal language, there is an avoidance of being impolite or rude between interlocutors. This can be seen through the use of English pronouns instead of Malay ones. Participants were seen to use the object first-person pronoun 'me' instead of 'I' when referring to themselves, as it gives a sense of closeness. Following Ożóg's (1996) observation, politeness is one motivation to code switch from Malay to English, to avoid addressing people impolitely. Arguably, English pronouns sound tamer, softer and shorter than their Malay counterparts, and perhaps it is due to these characteristics that younger people are more inclined to use them. There is a notion that the use of English pronouns is friendlier and more intimate towards the speaker than Malay. | 4,866.4 | 2020-12-29T00:00:00.000 | [
"Linguistics"
] |
Association of Virulence Markers With Resistance to Oral Antibiotics in Escherichia coli Isolates Causing Uncomplicated Community-Acquired Cystitis
Introduction: Uropathogenic Escherichia coli (UPEC) strains equipped with putative virulence factors (VFs) are known to cause approximately 90% of lower urinary tract infections (UTIs), or cystitis, affecting individuals of all age groups. Only limited laboratory-based data on the correlation of antimicrobial resistance patterns and VFs of UPEC are available. Materials and methods: A total of 100 non-duplicate E. coli isolates associated with community-acquired UTIs in sexually active women were analysed for antimicrobial susceptibility patterns and putative virulence-associated genes. Antimicrobial susceptibility testing (AST) was carried out by the Kirby-Bauer disk diffusion method, and results were interpreted as per Clinical and Laboratory Standards Institute (CLSI) guidelines. Isolates non-susceptible to ≥1 agent in ≥3 different antimicrobial categories were considered multidrug-resistant (MDR). A multiplex polymerase chain reaction assay was performed on each E. coli isolate to characterize putative virulence genes (VGs) such as papA, malX, PAI, ibeA, fimH, fyuA, sfa/focDE, papGIII, iutA, papGI, kpsMTII, hlyA, papGII, traT, afa/draBC, cnf1, vat, and yfcV. Results: The capsule synthesis gene kpsMTII (59%) was the most predominant VG present, followed by the serum resistance-associated transfer protein gene traT (58%) and the adhesin gene fimH (57%), whereas the adhesin gene papGI (2%) was the least frequent. The prevalence of antimicrobial resistance was relatively high for commonly used oral antimicrobials of UTI treatment, such as trimethoprim-sulfamethoxazole (68%) and fluoroquinolones (63%). The majority of isolates were MDR (78%) and resistant to extended-spectrum cephalosporins (63.5%). Isolates resistant to norfloxacin and trimethoprim-sulfamethoxazole were also resistant to almost all available oral antimicrobials. Isolates resistant to extended-spectrum cephalosporins showed increased resistance to aztreonam and trimethoprim-sulfamethoxazole (84.6% each) and fluoroquinolones (ciprofloxacin and norfloxacin; 81.5% each). Fosfomycin and nitrofurantoin were the most active antimicrobials against all these resistant isolates. In a multivariate analysis, MDR isolates were found to be associated with many of the VGs, fimH (65.4%) being the most frequent, followed by traT (64.1%). traT (66.2%) and iutA (60.3%) were most commonly present in E. coli isolates resistant to trimethoprim-sulfamethoxazole, while 66.7% of norfloxacin-resistant isolates harboured them. Isolates resistant to extended-spectrum cephalosporins were most commonly associated with fimH and traT (66.2% each). However, E. coli isolates positive for sfa/focDE and vat were more often sensitive to norfloxacin and trimethoprim-sulfamethoxazole and were predominantly non-MDR strains (p < 0.05). Only two VGs (fimH and traT) were significantly associated with MDR strains. Discussion: The results of the present study clearly show the association of VFs with resistance to some of the commonly used oral antibiotics, emphasizing the need for further molecular studies and surveillance programs to monitor drug-resistant UPEC so as to inform optimized diagnostic stewardship and appropriate treatment regimens. The reason behind this association has not been studied in much detail here, but it can be assumed that genes responsible for drug resistance may share neighbouring loci with VGs on mobile genetic elements (e.g., plasmids), which are transferred together from one bacterium to another.
Introduction
Cystitis or lower urinary tract infection (UTI) is one of the most frequent bacterial infections affecting individuals of all age groups, including both outpatients and inpatients, causing significant morbidity throughout the world [1]. The chances of developing UTI are significantly higher in females than males due to their anatomical structure (short urethra) and hormonal milieu. It is estimated that around 50% of women will develop UTI once in their lifetime. Approximately 90% of UTIs are caused by Escherichia coli, also termed uropathogenic E. coli (UPEC) strains [1]. These UPEC strains are known to express certain putative virulence factors (VFs), such as adhesins, toxins, and capsules, which help them to invade, establish, and survive in the urinary tract, and prevent their detachment while urinating [2]. It has been observed that the severity of UTIs from being asymptomatic and uncomplicated to complicated infection with sequelae depends upon the subset and frequency of VFs present in UPEC strains, adhesive molecules perhaps being the most important determinants of pathogenicity [3].
The phenomenon of antimicrobial resistance has been a major problem for many years. Treatment of uncomplicated community-acquired UTI is most of the time achieved empirically, without waiting for the culture report of the urine specimen. Cephalosporins, fluoroquinolones, and trimethoprim-sulfamethoxazole are often used to treat patients with UTIs, but excessive and erratic use of antibiotics has led to the development of multidrug-resistant (MDR) strains, rendering these drugs ineffective in many cases. The increased frequency of drug-resistant/MDR UPEC is related to inadequate empirical antibiotic therapy without laboratory evidence of the antibiotic susceptibility profile, which finally leads to ineffective treatment of UTIs [4]. Only limited laboratory-based data on resistant UPEC causing community-acquired UTIs are available; furthermore, these studies do not include detailed molecular characterization of the isolates [5].
Details of resistance patterns and molecular characterization of UPEC are not frequently available; hence, reliance on clinical presentation alone should be avoided. The detection and identification of drug resistance, together with virulence and virulence-related gene profiles, provide a more robust and accurate characterization of drug-resistant UPEC pathotypes, as drug resistance among UPEC has led to unsuccessful or prolonged treatment [6].
Resistance to many antimicrobials in bacterial strains is often associated with the transfer of plasmids from one strain to another, which may also carry some virulence genes with them. The presence of antimicrobial resistance genes can be attributed to DNA mutations or to horizontal transfer of drug resistance among UPEC strains [6]. Like antimicrobial resistance genes, virulence genes are located on chromosomes or on mobile genetic elements such as plasmids and transposons; an association between antimicrobial resistance and virulence genes is therefore readily understandable [6]. Many studies have reported some correlation between resistance patterns and virulence genes, which may help bacteria to survive effectively. An earlier study by Neamati et al. reported that traT was more prevalent in multidrug-resistant E. coli and could be considered a potential target for therapeutic intervention [7].
Proper understanding, detection, and identification of antibiotic resistance patterns and their association with virulence genes can be used to develop better and more targeted regimens for drug-resistant UPEC and to prevent antibiotic misuse. Thus, the present study was planned to examine the antibiotic susceptibility profile and any correlation of VFs with antibiotic resistance in E. coli isolates associated with community-acquired UTIs in sexually active women.
Materials And Methods
A total of 100 non-duplicate E. coli isolates associated with community-acquired UTIs in sexually active women attending the OPD of Obstetrics and Gynaecology were studied in the Department of Microbiology at King George's Medical University (KGMU), Lucknow, India. The study protocol was approved by the ethical committee of the host institution (reference no.: 78th ECM II BMD-Ph.D./P1).
Patient enrolment criteria, urine sample processing, and antimicrobial susceptibility testing (AST) were performed as per our previously published paper [8]. AST was carried out using the Kirby-Bauer disk diffusion method, and results were interpreted according to the Clinical and Laboratory Standards Institute (CLSI) guidelines, following the AST interpretation criteria in Table 1 [9]. Isolates non-susceptible to ≥1 agent in ≥3 different antimicrobial categories were considered MDR [10].
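As an illustration of this rule, MDR classification can be sketched as follows; the antibiotic-to-category mapping shown is illustrative and not the exact category table used in the study:

```python
# Sketch of the MDR rule: non-susceptible to >=1 agent in >=3 antimicrobial categories.
CATEGORY = {  # illustrative mapping of agents to antimicrobial categories
    "ampicillin": "penicillins",
    "cefazolin": "1st-gen cephalosporins",
    "cefotaxime": "extended-spectrum cephalosporins",
    "ceftazidime": "extended-spectrum cephalosporins",
    "ciprofloxacin": "fluoroquinolones",
    "norfloxacin": "fluoroquinolones",
    "trimethoprim-sulfamethoxazole": "folate pathway inhibitors",
    "nitrofurantoin": "nitrofurans",
    "fosfomycin": "fosfomycins",
    "aztreonam": "monobactams",
}

def is_mdr(ast):
    """ast: dict antibiotic -> 'S', 'I' or 'R'; 'I' counts as non-susceptible."""
    hit = {CATEGORY[ab] for ab, res in ast.items() if res in ("I", "R")}
    return len(hit) >= 3

print(is_mdr({"ampicillin": "R", "ciprofloxacin": "R",
              "trimethoprim-sulfamethoxazole": "R", "nitrofurantoin": "S"}))  # True
```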
Statistical analysis
Data analysis was carried out using SPSS version 20.0 statistical software package (IBM Corp., Armonk, NY). The relationship between VFs and antibiotic susceptibility was determined using Pearson's chi-square test or Fisher's exact test. To facilitate the final analysis, the isolates showing intermediate susceptibility were grouped with the sensitive strains. The descriptive statistics for various variables were reported as percentages for qualitative variables and a p-value < 0.05 was considered significant.
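A minimal sketch of such a test on a 2x2 contingency table follows; the counts are approximate, reconstructed from the reported fimH/MDR percentages purely for illustration:

```python
# Sketch: association between a virulence gene and MDR status on a 2x2 table.
from scipy.stats import chi2_contingency, fisher_exact

#            fimH+  fimH-
table = [[51, 27],   # MDR isolates (65.4% of 78 carried fimH)
         [6, 16]]    # non-MDR isolates
chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)  # preferred when expected counts are small
print(f"chi-square p = {p_chi2:.4f}, Fisher exact p = {p_fisher:.4f}")
```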
Resistance pattern of antibiotics commonly used in the treatment of uncomplicated community-acquired UTI
Details of the antimicrobial resistance pattern of E. coli isolates resistant to antimicrobials commonly used in the treatment of community-acquired UTI are shown in Table 2. Isolates resistant to norfloxacin and trimethoprim-sulfamethoxazole were also resistant to almost all available oral antimicrobials, such as ampicillin, cefazolin, and extended-spectrum cephalosporins (Table 2). Isolates resistant to extended-spectrum cephalosporins were most commonly associated with fimH and traT (66.2% each) and showed increased resistance to aztreonam and trimethoprim-sulfamethoxazole (84.6% each) and to fluoroquinolones (ciprofloxacin and norfloxacin; 81.5% each). None of the isolates was resistant to fosfomycin; apart from fosfomycin, nitrofurantoin was the most active antimicrobial against all these resistant isolates. The prevalence of antimicrobial resistance was relatively high for antimicrobials commonly used in UTI treatment, such as trimethoprim-sulfamethoxazole (68%) and fluoroquinolones (63%). The majority of isolates were resistant to the extended-spectrum cephalosporins ceftazidime (63%) and cefotaxime (64%) (Figure 1). Among the cefotaxime-resistant isolates, two were sensitive to ceftazidime, while one isolate was resistant to ceftazidime only and sensitive to cefotaxime; thus, 65% of E. coli isolates were found resistant to extended-spectrum cephalosporins. Multidrug resistance was found in 78% of isolates (Figure 1).
FIGURE 1: Percentage frequency of resistance to antibiotics commonly used in the treatment of UTIs
MDR: multidrug-resistant.
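The 65% figure above can be checked with simple set arithmetic: with n = 100 isolates, percentages equal counts, and the overlaps follow directly from the sentences above. The variable names below are ours.

```python
cefotaxime_r = 64                    # 64% resistant to cefotaxime
both = cefotaxime_r - 2              # 2 of them ceftazidime-sensitive -> 62
ceftazidime_r = both + 1             # plus 1 ceftazidime-only isolate -> 63
esc_resistant = both + 2 + 1         # union of the two resistance sets
print(ceftazidime_r, esc_resistant)  # 63 65
```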
Virulence genotyping
The capsule synthesis gene kpsMTII (59%) was the most prevalent virulence gene, followed by the serum resistance-associated transfer protein gene traT (58%) and the adhesin gene fimH (57%) (Figure 2). The P-fimbrial structural subunit gene papA, associated with the formation of P-fimbriae, was present in only 33% of isolates, whereas the adhesin gene papGI (2%) was the least frequent gene among E. coli isolates obtained from the urine of community-acquired uncomplicated UTIs (Figure 2).
Correlation of virulence factors with antibiotic sensitivity pattern
Statistical associations between antibiotic sensitivity patterns and virulence genes of the isolates were then investigated. On further analysis, a few virulence genes showed significant associations with isolates resistant to various antibiotics (Tables 3-5). The P-fimbrial structural subunit genes papA, papGI, and papGIII and the toxin-producing gene hlyA were equally distributed in resistant and sensitive E. coli isolates (Table 3). However, E. coli isolates positive for sfa/focDE and vat were predominantly sensitive to norfloxacin and trimethoprim-sulfamethoxazole and were mostly non-MDR strains (p < 0.05), i.e., these genes were more associated with sensitive strains. In a multivariate analysis of the correlation between virulence genes and antimicrobial resistance, MDR isolates were found to be associated with many of the virulence genes, the fimbrial gene fimH (65.4%) being the most frequent, followed by the serum resistance gene traT (64.1%) (Table 4 and Figure 3).
FIGURE 3: Distribution of virulence genes according to antibiotic resistance profile among E. coli isolates
MDR: multidrug-resistant.
sfa/focDE was present in 17.9% of MDR strains and 45.5% of non-MDR strains, while vat was present in 8.9% and 45.5% of MDR and non-MDR strains, respectively (Table 4). Only two virulence genes, fimH and traT, were significantly associated with MDR strains. Among trimethoprim-sulfamethoxazole-resistant isolates, only traT was significantly associated, while sfa/focDE, cnf1, and vat showed significant associations with trimethoprim-sulfamethoxazole-sensitive isolates.
Discussion
E. coli inhabits the gastrointestinal tract of humans in a symbiotic relationship, helping to maintain normal intestinal homeostasis by promoting the stability of the intestinal microbial flora [13]. However, if the host is immunocompromised or the gastrointestinal barriers are breached, even non-pathogenic strains of E. coli can cause disease [14]. Strains of E. coli causing extraintestinal disease are known to originate from the normal intestinal flora, diverge from their ecological niche, and cause infection after acquiring certain unique VFs [3]. These diverged E. coli strains obtain various VFs via horizontal DNA transfer by transposons, plasmids, bacteriophages, and pathogenicity islands, resulting in enhanced pathogenic potential [3,15]. UPEC are a special subset of faecal E. coli that can enter and colonize the urinary tract and cause infection.
A steady increase in antibiotic resistance is being reported in many UPEC strains [4,16]. β-lactams, trimethoprim-sulfamethoxazole (TMP/SMX), fluoroquinolones, and nitrofurantoin are the antibiotics most commonly used in the treatment of community-acquired UTIs [17]. Improper stewardship, inappropriate use of unprescribed antibiotics, and over-prescription of broad-spectrum antibiotics are among the factors contributing to the rapid emergence of antibiotic resistance. Multidrug resistance is linked to high rates of inadequate empirical antibiotic therapy, which ultimately leads to treatment failure in patients suffering from UTI [4,16,18]. Therefore, in recent years, the treatment of community-acquired UTIs has become a global concern due to the emergence of MDR E. coli [16].
Although antibiotic resistance genes and virulence genes are believed to have developed on different timescales, there is scope for interplay between them under selection pressure [19]. It was once believed that antibiotic-resistant strains might carry fewer virulence genes, but this is not always accurate. Much published data reveal that the relationship between resistance and virulence is adjusted in ways that benefit pathogen survival [20]. VFs are essential for bacteria to overcome the host defence system, colonize, and survive, while the acquisition of antibiotic resistance helps bacteria overcome antimicrobial therapies and adapt to colonize adverse environments [19,21]. Thus, establishing any correlation between virulence and antibiotic resistance can further help in studying targeted/alternative drug therapy [16,22]. This study was planned to determine the presence of virulence genes among E. coli isolates associated with community-acquired UTI and their correlation with antimicrobials commonly used in treatment.
The capsule synthesis gene kpsMTII (59%) was the most prevalent virulence gene, followed by the serum resistance-associated transfer protein gene traT (58%) and the adhesin gene fimH (57%). The P-fimbrial structural subunit gene papA, associated with the formation of P-fimbriae, was present in 33% of isolates. The distribution of the adhesins papGI, papGII, and papGIII was 2%, 42%, and 16%, respectively, which is quite similar to the observations reported earlier by Kudinha et al. (2012) [23]. Interestingly, the adhesin afa/draBC was detected among our isolates, which contrasts with previously published studies reporting its low prevalence [24].
Earlier studies estimated the prevalence of resistance in high-income countries at 53.4% for trimethoprim and 2.1% for ciprofloxacin [16]. In comparison, low- and middle-income countries showed higher resistance rates for ciprofloxacin (26.8%) [16]. We found similar results, demonstrating relatively high resistance to antimicrobials commonly used for the treatment of UTIs, such as trimethoprim-sulfamethoxazole (68%) and fluoroquinolones (63%).
The majority of the study isolates were resistant to the extended-spectrum cephalosporins ceftazidime (63%) and cefotaxime (64%); overall, resistance to extended-spectrum cephalosporins was found in 65% of isolates, since among the cefotaxime-resistant isolates two were sensitive to ceftazidime, while one isolate was resistant to ceftazidime only and sensitive to cefotaxime. Such a high degree of resistance to extended-spectrum cephalosporins can be attributed to inappropriate prescription by physicians, the negligible toxicity and wide spectrum of these oral drugs, and the over-the-counter availability of antibiotics [16]. Indeed, it has also been noted that isolates resistant to extended-spectrum cephalosporins show more resistance to aminoglycosides, ciprofloxacin, and trimethoprim-sulfamethoxazole, which may be due to the sharing of resistance genes on the same plasmid [25].
About 78% of E. coli isolates in the present study were MDR. Gatya et al. (2022) reported nearly 100% MDR strains among outpatients in their study [26]. High rates of resistance to various antibiotics among UPEC have been reported in many previous studies [27,28]. Such high resistance can be explained by the fact that E. coli has developed resistance to almost every class of antimicrobials introduced to treat human and animal infections [26].
Various theories have been proposed for such high resistance beyond the misuse or inappropriate prescription of antibiotics, such as the irrational introduction of antibiotics into the food chain. Several studies have shown that multidrug resistance is easily transferred from one ecosystem to another via direct or indirect contact with contaminated animals or their products, the environment, or contaminated soil or water [25]. The irrational use of antibiotics in animals to increase production, their prophylactic use to prevent infection, and their use in crop culture are also responsible for the spread of antibiotic resistance [25]. Thus, antibiotic resistance in the food chain has emerged as a global health concern, particularly with the appearance of carbapenem-resistant strains and strains carrying co-resistance genes for many antibiotics. These strains have a high capacity for genetic exchange and are a serious threat, as random genetic exchange could lead to new strains that are more resistant, have higher virulence potential, and are unknown to the human immune system. Such antibiotic-resistant strains can enter the human ecosystem either through direct contact, e.g., with animal handlers and their family members, or through the food chain [25].
Mobile genetic elements like integrons are capable of acquiring new genes through recombination, which leads to the incorporation and subsequent expression of new genetic material in bacteria. This phenomenon of gene acquisition not only drives bacterial evolution by enabling bacteria to adapt to changing environments but also plays an important role in the acquisition of drug resistance [29]. Genes responsible for drug resistance and phylogroup markers can share neighbouring loci and may be transferred from one bacterium to another [29].
Proper understanding, detection, and identification of antibiotic resistance patterns and their association with virulence genes can be used to develop better-targeted regimens for drug-resistant UPEC and to prevent antibiotic misuse. The findings of the current study demonstrate the association of several VFs with resistance to one or more antibiotics. We found that isolates with reduced susceptibility to trimethoprim-sulfamethoxazole were most frequently associated with traT (45/68), followed by iutA (41/68) and kpsMTII (40/68), whereas isolates resistant to extended-spectrum cephalosporins were most often associated with fimH and traT (43/65 each).
We observed that many MDR isolates had an increased frequency of fimH, papGII, kpsMTII, traT, fyuA, etc.; however, only fimH, papGII, and the serum resistance gene traT were significantly associated with MDR isolates, while the adhesin sfa/focDE and the toxin gene vat were considerably associated with non-MDR, trimethoprim-sulfamethoxazole-sensitive, and norfloxacin-sensitive strains. Ochoa et al. (2016) observed that many MDR-UPEC isolates show a high association with fimH and the toxin gene hlyA [30]. The serum resistance gene traT was significantly associated with multidrug resistance, including trimethoprim-sulfamethoxazole- and norfloxacin-resistant isolates (p < 0.05). The iron acquisition gene iutA was significantly distributed among norfloxacin-resistant strains. Neamati et al. (2015) also reported that traT was more prevalent in MDR E. coli and could be considered a potential target for therapeutic intervention [7]. Our results agree with previous studies emphasizing that increased virulence may be related to either antibiotic-resistant or antibiotic-sensitive strains [19].
Though the reason behind this association has not been studied in detail here, it can be assumed that genes responsible for drug resistance may share neighbouring loci with virulence genes on mobile genetic elements (e.g., plasmids). When transfer takes place between bacteria, drug resistance genes may carry virulence genes along with them from one bacterium to another [19,24]. Simultaneously, gene transfer events (e.g., conjugation and transduction) and a large genetic repertoire help bacterial strains compensate for or overcome fitness costs, resulting in the successful colonization and emergence of strains that are both resistant and virulent [19].
Limitations of the present study include the detection of antibiotic resistance using the disk diffusion method rather than the more precise determination of minimum inhibitory concentrations, and a study population of younger women presenting in an outpatient setting at a tertiary care hospital, which may not be truly representative of the community at large.
Conclusions
The findings of the present study indicate the increasing emergence of antibiotic resistance and its association with virulence genes, plausibly because genes responsible for drug resistance share neighbouring loci with virulence genes on mobile genetic elements (e.g., plasmids) and are co-transferred between bacteria. Future in-depth investigations would provide broader insights into the association and co-selection dynamics of antimicrobial resistance among UPEC isolates, which can be explored in comprehensive research emphasizing new therapeutic medicines and vaccines against these putative virulence factors. Upcoming research must continue to track changes in the epidemiology of UPEC isolates to assist in timely intervention for patient treatment, prevention of antibiotic misuse, and the development of optimized diagnostic stewardship.
Additional Information

Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Ethical Committee, King George's Medical University, Lucknow issued approval 78th ECM II BMD-Ph.D./P1. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Trapped Gravitational Waves in Jackiw-Teitelboim Gravity
We discuss the possibility that gravitational waves are trapped in space by gravitational interactions in 2-dimensional Jackiw-Teitelboim gravity. In the standard geon (gravitational electromagnetic entity) approach, an active region is introduced to confine gravitational waves spatially. In our approach, however, spacetime-dependent traceless metric perturbations, i.e., "gravitational waves," are trapped by the vacuum geometry and can be stable against the backreaction due to the metric fluctuations. We expect that our approach may shed light on finding similar self-trapping solutions in 4-dimensional gravity.
I. INTRODUCTION
In 1916, Einstein predicted from his theory of general relativity that gravitational sources could produce waves of spacetime [1]. In 2016, the LIGO (Laser Interferometer Gravitational-Wave Observatory) and VIRGO teams finally announced the detection of gravitational waves generated by a pair of merging black holes [2]. Since then, additional events have been observed, including the merger of two neutron stars [3]. Gravitational waves (GW) are no longer merely a possible mathematical solution of the theory, but a true physical object. The era of gravitational-wave astronomy has begun.
In 1955, Wheeler introduced a particle-like object, the geon (gravitational electromagnetic entity), in which gravitational waves are confined in space by electromagnetic interaction [4]. He hoped to construct the geon as an elementary particle, but that did not prove fruitful. Brill and Hartle elaborated this idea by considering GW trapped by gravitational interactions [5], i.e., GW somewhat localized in space by their self-interaction. Given the dispersive nature of radiation, such objects seem metastable at best. Analyses in general relativity have devoted much effort to the question of whether such a solution is self-consistent and metastable [6][7][8][9][10]. These analyses assumed an empty, asymptotically Minkowski background. We would like to consider the more realistic scenario of an FLRW, or at least asymptotically de Sitter (dS), background, such that the background is also nonstatic. Significant work has been done on asymptotically AdS backgrounds in [11] and references therein.
In this age of gravitational-wave detection, self-confining GW are therefore interesting and attractive, as their very existence could be detected. Self-confining gravitational waves are obtained by splitting the Einstein tensor into a background part G0[γµν] for the background gravitational field γµν and the disturbance G[γµν, hµν], and taking the average over high frequencies and angular momenta. In this paper, we study fluctuations of the gravitational field ("gravitational waves") trapped in space by the vacuum geometry in 2-dimensional gravity, in the framework of Jackiw-Teitelboim (JT) gravity [12,13]. Even though this is two-dimensional gravity, the existence of a Lagrange multiplier makes it non-trivial, as the (vacuum) equation of motion now becomes R = Λ. Using perturbation theory, the zeroth order gives the background solution, the first order gives the "wave" equation, and the averaged second order gives the backreaction on the background geometry. We prefer "trapped gravitational waves" to geon because, in the classical geon solution, the effective energy-momentum that corrects the unperturbed solution is entirely deposited in a thin shell enclosing the geon (the active region). Our motivation is to circumvent the need for an active region, which makes our solution more physically plausible. Clearly, the reason for choosing a 2-dimensional (2D) gravitational theory is that the calculations are tremendously simplified, and the solution could shed light on the more complicated situation of 4D gravity. We assume a non-vanishing cosmological constant: intuitively, the attractive self-gravity and the spacetime expansion yield a potential in which the gravitational waves could be trapped. Our analysis yields trapped GW in some region of space. Furthermore, in JT gravity in two dimensions, we obtain the exact solution of the gravitational field equations in the synchronous gauge, the conformal gauge, and a spatially flat gauge, and we discuss its connection to the self-trapping solution. As such, our perturbative analysis can, in principle, be described as an approximation to an exact solution, given a proper transformation. Nevertheless, our method is a step towards performing a similar analysis in 3D (where gravity-dilaton waves exist) or 4D, where true GW are known to exist.
The paper is organized as follows. In Section II, we apply the method of [9] for finding gravitational geons to JT gravity. In Section III, we derive analytic conditions for gravitational waves to be trapped stably in space and present numerical results on these conditions. In Section IV, we display the exact solution in the synchronous, conformal, and spatially flat frames of reference. In Section V, we summarize our results and discuss future work.
II. HOW TO FIND A GEON IN JACKIW-TEITELBOIM GRAVITY
Our starting point is the 1+1-dimensional gravity introduced by Jackiw and Teitelboim [12,13], whose field equation is R − Λ = T (Eq. (1)), where R is the curvature scalar, Λ is the cosmological constant, and T is the energy-momentum. As in [14,15], we take the metric ansatz gµν = γµν + hµν (Eq. (2)), where γµν is the unperturbed metric with signature (−, +) and hµν is traceless and represents the perturbations, standing for a toy model of gravitational waves.

If we consider no matter, i.e., T = 0, Eq. (1) becomes R = Λ (Eq. (3)). Following [5,9], we expand it perturbatively as R^(0) + R^(1) + R^(2) + · · · = Λ (Eq. (4)), where the superscripts (0), (1), (2), . . . denote the orders in h. We then solve this equation in the following three steps. First, the background geometry for the vacuum state comes from the zeroth-order equation, R^(0)[γ] = Λ (Eq. (5)). Second, the first-order perturbation equation in h, R^(1)[γ, h] = 0 (Eq. (6)), is a wave-type equation, a second-order linear partial differential equation for h; hence, the gravitational waves h trapped in space are determined by Eq. (6). Third, we test the stability of the solution h by considering the backreaction of the gravitational waves on the metric through R^(0)[γ̃] + ⟨R^(2)[γ̃, h]⟩ = Λ (Eq. (7)), where the original metric γ changes into γ̃ under the backreaction and ⟨· · ·⟩ denotes the time average.
A. Background Geometry for the Vacuum Solution
Consider the unperturbed metric ds² = −p(r) dt² + dr²/p(r) (Eq. (8)). The equation of motion, Eq. (1) with T = 0, then reads p'' + Λ = 0 (Eq. (9)), and the solution of Eq. (5) is p(r) = A + Br − (Λ/2) r² (Eq. (10)), where A and B are constants. This is similar to the dS solution in static coordinates in 4D if we suppress the angular part.
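This can be verified symbolically. The minimal sketch below assumes the static ansatz ds² = −p(r) dt² + dr²/p(r) (our reading of Eq. (8), consistent with the dS static-coordinate remark above) and checks that the quadratic p(r) of Eq. (10) indeed gives R = Λ; all symbol names are ours.

```python
import sympy as sp

t, r, A, B, Lam = sp.symbols('t r A B Lambda')
p = A + B*r - Lam*r**2/2            # Eq. (10)

x = [t, r]
g = sp.diag(-p, 1/p)                # assumed ansatz for Eq. (8)
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc}
Gam = [[[sp.simplify(sum(ginv[a, d]*(sp.diff(g[d, b], x[c])
         + sp.diff(g[d, c], x[b]) - sp.diff(g[b, c], x[d]))
         for d in range(2))/2)
         for c in range(2)] for b in range(2)] for a in range(2)]

# Ricci tensor: R_{bc} = d_a Gam^a_{bc} - d_b Gam^a_{ac}
#               + Gam^a_{ad} Gam^d_{bc} - Gam^a_{bd} Gam^d_{ac}
def ricci(b, c):
    expr = 0
    for a in range(2):
        expr += sp.diff(Gam[a][b][c], x[a]) - sp.diff(Gam[a][a][c], x[b])
        for d in range(2):
            expr += Gam[a][a][d]*Gam[d][b][c] - Gam[a][b][d]*Gam[d][a][c]
    return expr

R = sp.simplify(sum(ginv[i, j]*ricci(i, j)
                    for i in range(2) for j in range(2)))
print(R)   # -> Lambda, i.e. the vacuum equation R = Lambda holds
```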
B. Gravitational Waves as Perturbations
Let us now consider the perturbed metric as in [14]. A geon solution would have the separable form h(t, r) = T(t) R(r), where the time part behaves as T(t) ∝ e^(−iωt) and the spatial part R(r) should be confined in a finite region of space. From Eq. (6), we derive a second-order ordinary differential equation for R(r), Eq. (13), where a prime denotes a derivative with respect to r. We expect that the possibility of trapped gravitational waves can be checked by exploring the asymptotic behavior of Eq. (13) for a given p(r). We find two asymptotic behaviors, as follows.
[AB1]: The first asymptotic behavior is that the gravitational waves can be trapped in the region where p → 0. For p → 0, Eq. (13) reduces to Eq. (14). Introducing a new variable, Eq. (14) becomes Eq. (16). Without loss of generality, we can consider the asymptotic behavior of the solutions around r = α. The solution, Eq. (17), shows that R(r) goes to zero as p → 0 for r → α.¹

¹ An interesting situation occurs if there is a single root, i.e., α = β in region AB1. In such a case, the lowest-order approximation becomes a Bessel-type equation, R'' − (p'/2p) R' + ω² R = 0, whose solution shows that the limit r → α can actually be finite.
[AB2]: The second asymptotic behavior is that the gravitational waves cannot be trapped in the region where p → ±∞. For p → ±∞, Eq. (13) reduces to Eq. (18). In this case, with p(r) = A + Br − (Λ/2) r², the solution is given by Eq. (19). In Eq. (19), the C₁ term is definitely divergent; the C₂ term diverges, or goes to zero as r → ∞ only in very particular cases, such as a single degenerate root. Nevertheless, waves extending from some root of p(r) to infinity cannot be considered finite and localized.
These two conditions seem simple but predict where the GW can be confined in space.
If R(r) has some finite support, then we get trapping; if not, we cannot say that the GW are confined. We will return to this point in Section III D.
C. Backreaction of Gravitational Waves
The backreaction of the gravitational waves on the vacuum metric is calculated from Eq. (7), which reduces to Eq. (21), where p(r) is modified into p̃(r) and a dot denotes a derivative with respect to time. After taking the time average ⟨cos²(ωt)⟩ = 1/2, Eq. (21) becomes Eq. (22). It is difficult to find analytic solutions for Eq. (22), and we will solve it by numerical simulations. However, we can mention two main features that are reflected in the numerical results. First, when R and its derivatives are small (R ≪ 1), Eq. (22) gives p̃'' + Λ ≈ 0, reproducing Eq. (9); hence the background geometry barely changes, and p̃ is nearly the same as p.
Second, when ω ≫ 1, the ω terms become important on the right-hand side of Eq. (22). One may expect that modes of large ω cause substantial backreaction on the background metric.
D. Numerical Results
In JT gravity, the metric component p(r) is given by a quadratic curve, as in Eq. (10).

1. If p(r) has no zero for r ≥ 0, GW cannot be trapped.

2. If p(r) has one zero r₁ for r ≥ 0, GW can be trapped in the region where p(r) remains finite (i.e., 0 < r < r₁).

3. If p(r) has two zeros for r ≥ 0, GW can be trapped in two regions: between the origin and the smaller zero (0 < r < r₁), and between the zeros (r₁ < r < r₂).
These conditions apply in all cases, regardless of the value of Λ, as demonstrated in the figures. When p(r) has two distinct zeros r₁ and r₂ (r₂ > r₁ > 0), we have the chance to trap GW in 0 < r < r₁ or in r₁ < r < r₂, as is clear from the asymptotic behaviors AB1 and AB2.
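A minimal numerical sketch of this classification follows; function and variable names are ours, and the example quadratic is the one used for FIG 7 below, p(r) = (r − 5)² − 4, i.e., A = 21, B = −10, Λ = −2.

```python
import numpy as np

def trapping_regions(A, B, Lam):
    """Non-negative zeros of p(r) = A + B*r - (Lam/2)*r**2 and the
    candidate trapping regions for r >= 0 (conditions 1-3 above)."""
    roots = np.roots([-Lam/2, B, A])          # coefficients of r^2, r, 1
    zeros = sorted(z.real for z in roots
                   if abs(z.imag) < 1e-12 and z.real >= 0)
    if not zeros:
        return []                             # condition 1: no trapping
    if len(zeros) == 1:
        return [(0.0, zeros[0])]              # condition 2
    r1, r2 = zeros[:2]
    return [(0.0, r1), (r1, r2)]              # condition 3

print(trapping_regions(21.0, -10.0, -2.0))    # [(0.0, 3.0), (3.0, 7.0)]
```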
The reasoning of FIG 4, FIG 5, and FIG 6 applies to the trapping region 0 < r < r₁, and hence we focus on r₁ < r < r₂, which seems more interesting. In FIG 7, we consider p(r) = (r − 5)² − 4 with Λ < 0, whose two zeros are r = 3 and r = 7; the GWs are confined in the region 3 < r < 7.

Fortunately, the two-dimensional case is amenable to an exact analytical solution. Counting degrees of freedom, any spacetime in any dimension can always be put into synchronous form, and in the 1+1D case into a conformally flat form. In particular, in such forms we do not limit ourselves to a static ansatz plus a time-dependent perturbation. In the synchronous gauge, with F(r, τ) an arbitrary function, the solution of the JT equation of motion (EOM) is given in terms of arbitrary functions f(r) and g(r).
Similarly, the EOM can be solved in the conformal gauge, with constants c₁, c₂, c₃ determined by the boundary conditions. Finally, a "spatially flat" gauge yields a solution that clearly exhibits oscillatory propagating behavior, where again f₁(t) and f₂(t) are arbitrary functions. Switching from our ansatz to this spatially flat gauge is rather simple, as it only requires the definition of a tortoise coordinate.

Our purpose in this paper has been to extract the guiding ideas that could lead us to self-confining gravitational solutions in 4D gravity, which could eventually be detected in our real universe. We need to extend our approach to the possibilities in 4D Einstein gravity; the possibility that a gravitational geon can be attained was already suggested in [5,9]. We hope that, from what we have learned here, we can find self-confining gravitational waves in our expanding universe, either in the Friedmann-Lemaître-Robertson-Walker (FLRW) metric or in the particular case of de Sitter geometries. If such solutions are obtained, it will be important to ask whether we can observe gravitational waves trapped in space by gravitational interactions, or find traces of such trapping by measuring the diffused remnants. Our metric ansatz describes spaces with 'horizons,' which are located at the points p(r) = 0 in our representation. If the GW are as in FIG 4, FIG 5, and FIG 6, then they may be trapped behind a "black hole" horizon and may be unobservable. The most promising example is that of FIG 7 and FIG 8. We may interpret the background geometry as Schwarzschild-de Sitter space, following [14]. In this case, the gravitational waves are trapped between the two horizons, i.e., outside the "black hole" and inside the dS horizon in the expanding dS-like space, and the solution is metastable. In such a case, there are good chances that such an object may be observable. Still, it seems extremely difficult to construct gravitational waves as well localized as a geon particle. We hope the analysis presented here will be useful for tackling the full 4D problem.
WOMEN WORKERS AND THEIR ECONOMIC ROLES DURING COVID-19 OUTBREAK FROM AN ISLAMIC PERSPECTIVE: A CASE OF BENTOR DRIVERS IN GORONTALO, INDONESIA
The objective of this research is to examine and reveal the impact of the adoption of Large-Scale Social Restrictions (Pembatasan Sosial Berskala Besar, PSBB) in Gorontalo province on the roles of women working as drivers of motorized pedicabs (Becak Motor, or Bentor) in addition to being housewives. This is a qualitative study that gathered data through interviews with the drivers and related parties. Data were analyzed using a phenomenological approach together with a thematic study of the Qur'an and Hadith. The results reveal that the family's economic needs were the main reason women chose to be Bentor drivers. The restrictions on working hours and on the number of passengers carried, as well as the prevalence of staying at home, decreased their income; nevertheless, their household duties were still performed, despite the worsened economic conditions. The implementation of PSBB also succeeded in decreasing the basic reproduction number (R0) of Covid-19 transmission. In addition, the Qur'an and Hadith allow women to work outside their houses provided they follow religious instructions, always protect themselves and their dignity, and do not neglect their household duties.
INTRODUCTION
In response to the Covid-19 outbreak, Indonesia's government, on 16 March 2020, decided against a lockdown as a solution. The government only urged the public to adopt health protocols, such as wearing masks and avoiding crowds, and to work, study, and worship from home, supported by ministry-level policies issued according to their respective fields and authorities (Sekretariat Kabinet Republik Indonesia, 2020). As the spread of Covid-19 approached epidemic proportions, the President of the Republic of Indonesia then released a Government Regulation covering the implementation of Large-Scale Social Restrictions (Pembatasan Sosial Berskala Besar, PSBB). It applies to all provinces or cities through a referral process to the Health Minister, as stipulated in Government Regulation Number 21, 2020.
In Gorontalo province, an area known as "Serambi Madinah," when the Covid-19 Task Force announced the first positive patient on 10 April 2020, the Governor immediately moved to implement PSBB by releasing Gorontalo Governor Regulation No. 15 of 2020 and Governor's Decree No. 152/33/V/2020. Among the restrictions in the Decree, public activities are allowed only until 07.00 pm, traditional markets are closed, and public transport modes are limited (Kompas, 2020). One of the jobs involving many Gorontalo people, especially in the middle-to-low-income community, is bentor driving. According to the Head of the Bentor Drivers Association, up to August 2020 the number of drivers in Gorontalo reached 30,000. If each driver supports four people in their household, about 120,000 people depend on the bentors' income; this approaches 10% of Gorontalo's population.
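A quick check of that estimate, using the figures quoted here and the 2019 population projection cited later in the paper (the variable names are ours):

```python
drivers = 30_000
household_size = 4                     # people supported per driver
population = 1_202_631                 # 2019 BPS projection for Gorontalo
dependents = drivers * household_size  # 120,000 people
print(f"{dependents / population:.1%}")  # 10.0% -- 'approaching 10%'
```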
There has been a long debate on gender equality regarding women's role in public-domain careers. First, nature theory argues that, given the biological differences between men and women, different roles in society are necessary and natural (Rajab, 2009). Second, nurture theory positions the different roles of men and women as merely a result of social construction, so that the common understanding of role division is not fixed and can be remodeled (Coleman & Hong, 2008). This theory carries the principle of complete gender equality in social roles (Khuza'i, 2012). Third, equilibrium theory is regarded as a mediator of the two previous theories, with a principle of compromise to create balance: biological distinction is a fact that certainly involves differences in roles between men and women, but the distinction in roles can also be softened by mutual consent in order to create peace between men and women (Aldianto, 2015; Rocca, Mielke, Vemuri, & Miller, 2014; Kamri, Ramlan, & Ibrahim, 2014). These three theories are not yet entirely representative, and each shares some ground with Islamic teachings. The concepts of the Qur'an and Hadith may align with one of the three theories above, or they may constitute their own concept distinct from all three.
Several studies regarding the impact of Covid-19 on the Indonesian economy have been conducted recently, for instance, "The Economic Impact of the Covid-19 Outbreak: Evidence from Indonesia" by Albab Al Umar, Pitaloka, Hartati, and Fitria (2020); "Impact of Covid-19's Pandemic on the Economy of Indonesia" by Susilawati, Falefi, and Purwoko (2020); and "The Impact of the Covid-19 Pandemic on the Indonesian Economy" by Nasution, Erlina, and Muda (2020). More specifically, studies on the impact on particular regions and business fields have also been conducted, such as "The Socio-Economic Impacts of Covid-19 Pandemic: The Case of Bandung City" by Supriatna (2020) and "The Impact of Covid-19 Pandemic on Business and Online Platform Existence" by Taufik and Ayuningtyas (2020).
These studies, however, are limited to providing an overview or mapping of the effects of the pandemic on general economic growth. Their findings provide an initial picture for other researchers to develop further studies. Our study therefore focuses on Gorontalo people's efforts to maintain their families' economy during the PSBB period. In particular, we seek to understand the role of female bentor drivers in supporting their families' economy, and in the last part we compare the results with an Islamic perspective. This study seeks answers to the following questions: What reasons make housewives in Gorontalo choose to be bentor drivers? What is the impact of applying PSBB on their daily income? And what is the Islamic perspective on this phenomenon?
RESEARCH METHOD
This study is qualitative research, which explores the particularities of social phenomena that cannot be measured through a quantitative approach (Saryono, 2010; Ibrahim, 2020). The method is based on the philosophy of postpositivism and is used to examine the condition of natural objects, where the researchers are the key instrument, data collection uses triangulation, data analysis is inductive, and the results emphasize meaning rather than generalization (Sugiyono, 2012). In gathering data, we interviewed and built an emotional relationship with the respondents. We also explored the problem using in-depth observation.
In addition, we employed a phenomenological approach to understand and interpret the informants' experiences related to the research phenomena (Ghony & Almanshur, 2012). There are three concepts of phenomenology: every phenomenon that appears consists of a series of elements that surround it; phenomenology is the root of qualitative research; and the problem in question arises from the subject's views (Sujarweni, 2015). In this study, the researchers gathered information regarding the imposition of large-scale social restrictions aimed at curbing Covid-19 transmission, which ultimately affected the income of women in dual roles working as bentor drivers.
This research was undertaken in the "Bumi Serambi Madinah," the designation for Gorontalo Province. The bentor is a typical vehicle commonly used as reliable transportation in this area. During the pandemic, Gorontalo province implemented large-scale social restrictions for three periods and was chosen by the Covid-19 Task Force as one of the three initial areas for implementing the new normal in Indonesia. The informants in this study are housewives who also work as bentor drivers in Gorontalo. In addition, the thematic study method of the Qur'an and Hadith was used, tracing the related verses and hadiths and then drawing conclusions from them.
Overview of the Research Location
Some sources explain that the name Gorontalo came from the word Hulontalangi (Lembah Mulia, "noble valley"), which was also the name of a kingdom. Gorontalo is also derived from the word "Hulondalo," as the Dutch called Gorontalo (Apriyanto, 2006). Gorontalo has been an old city in Sulawesi for some 400 years, alongside Makassar, Pare-pare, and Manado (Batubara, 2016). With the regional expansion that accompanied regional autonomy in the reformation era, this province was formed on 22 December 2000 through Law Number 38 of 2000 and thus became the 32nd province in Indonesia (Amin, 2013). The area of Gorontalo province is 11,257.07 km², and the 2019 population projection is 1,202,631 people, with a growth rate of 1.45% (BPS, 2020).

The name "Serambi Madinah" is inseparable from Gorontalo's decisive role in the spread of Islam in Eastern Indonesia. The spread of the religion also turned Gorontalo into a center of education and trade. One version of local history states that on the Gorontalo plain stood the Limboto kingdom, preceding the Gorontalo kingdom. For various reasons, there was a civil war between the two from 1485 AD to 1672 AD, which was finally resolved through peace negotiations in 1673 AD; Popa (representing Limboto) and Eyato (representing Gorontalo) were the main actors in this event. Eyato, originally a Khatibida'a (great preacher), was crowned King of Gorontalo after his success as diplomat and negotiator, and above all because he was intelligent and his knowledge of religion was broad and deep. The leadership then passed to Sultan Botutihe, under whom Islamic values were further strengthened, giving rise to the customary philosophy "Adati hula-hula to syaraa, syaraa hula-hula to Quruani," which translates as "Adat (custom) is based on sharia, sharia is based on the Qur'an" (Christianto, 2009).
Islamic values, which have long been a pillar of government, have made Gorontalo the center of Islamic culture in eastern Indonesia, as proclaimed by the then Indonesian Minister of Religion, Said Agil Al-Munawar, in 2002. Gorontalo's nickname, "Serambi Madinah," is a special designation reflecting its role in promoting Islamic values since the beginning of its heyday (Botutihe, 2003), as well as its people's traditional philosophy and religious life. The nickname is thus inseparable from the Gorontalo community, which is predominantly Muslim. Serambi Madinah has even become a tourism brand that is beginning to be recognized by the public.
Becak Motor (Bentor)
Becak Motor, abbreviated "Bentor," is a paratransit vehicle that was first created in Gorontalo and is easy to find even in remote Gorontalo villages. The Gorontalo bentor is quite similar to a pedicab (becak) but is powered by a modified motorcycle, with the two passengers sitting in front. Although the bentor has not been formally recognized as public transportation, it remains the leading transportation choice due to its high accessibility and mobility. It has become entrenched in the community and has even contributed to employment (Moha, 2014). The inventor of the bentor is Ferry Hasan, a workshop owner who experienced the economic crisis of 1998; inspired by the pedicab, he modified a motorcycle and created the bentor (Terrajana, Syam, Basri Amin, Jamil Massa, 2011).
This modification was unlawful, as it does not comply with Law Number 22/2009 on Road Traffic and Transportation. The provincial traffic and transportation authority, DLLAJR, has therefore not issued operating licenses for bentors. Nonetheless, these vehicles have increased day by day and crowd the roads of Gorontalo Province. The potential of the bentor as urban and rural transportation in several Indonesian cities shows significant growth, driven by the increasing need for freight facilities and for service-area transportation that cannot be met by other modes. As local transportation, the bentor reduces unemployment in both urban and rural areas, because many unemployed people take it up as an alternative job. Survey results on frequency of use show that bentors are used often, and usage is based on perceived comfort and security; most respondents declared it safe and comfortable to use motorized pedicab transport in urban and rural areas (Mudana & Heriwibowo, 2018).
Findings
The role of women in boosting their families' economy is no longer a novel topic, given the necessities of life; it has become increasingly complex in the modern era and needs solutions rather than further debate about equality itself (Kartika Qori & Kanada Rabial, 2017; Zuhdi, 2018). From the results of the study, the researchers obtained several facts, presented in Table 1. For example, one respondent, MM, is a widow who has to support her two small children alone; her income was IDR200,000 per day before PSBB and IDR100,000 after. Respondent SH (53), whose husband is a palm sugar farmer, supports the family in her old age in modest conditions; some of their children are married but remain in need. Her income was IDR25,000 per day both before and after PSBB (Source: Data Processed, 2020).

It is clear that the decline in family income is an impact of PSBB, and it mostly affects families without permanent jobs. In the next section, the researchers discuss the reasons for becoming a bentor driver, the contribution to supporting the family, the impact on household tasks, and the impact on the family.
Reason for choosing to be Bentor Driver
The respondents in this study uniformly answered that economic factors pushed them to take up a profession viewed as belonging to men. Several reasons were given by the women in Gorontalo for becoming bentor drivers, the first being to supplement the husband's inadequate income, as respondent SI explained. Based on their experiences, most respondents admitted that becoming a bentor driver was not an easy decision for a woman in Gorontalo. This is consistent with the common stigma, which may be driven by religious or social norms, that women's responsibility is to take care of domestic duties (Pieters & Klasen, 2020). Performing two roles at once requires extra effort, and although there is a real risk of sacrificing one of them, it is a life risk that must be faced. The opportunity to manage both well is nevertheless wide open, and some women play both roles equally.
Impact of PSBB on Female Bentor Driver in the Family Economy
Working as bentor drivers has undoubtedly contributed to boosting the family income of these Gorontalo women. Some of them see it as a way of helping the husband gain additional income for the family: "Pas kita so bawa bentor paitua so tabantu sadiki kasiang. Jadi so lumayan no itu penghasilan." ("After I became a bentor driver, I could reduce my husband's burden, so the income is quite helpful.") However, for those who are the sole earner, the income from this job was not enough even though they worked all day long; one respondent (MM) stated: "Untung ada keluarga yang ikhlas babantu biar cuma kase makan tu ade, sala-sala so nyak ada paitua." ("Fortunately, there are families who are willing to help, even if only by feeding the children; I am a single parent.") For some of them, the job as a bentor driver also reduces the cost of transportation, as SH stated: "Saya bawa bentor hanya misalnya mo pigi babalanja ka pasar, ke kios atau ngantar anak kemana bagitu. Supaya tidak kaluar ongkos transportasi." ("I drive the bentor only, for example, to go shopping at the market or a kiosk, or to take the children somewhere, so that I no longer need to pay transportation costs.")
Impact of PSBB on Female Bentor Driver
To avoid continually declining income, one respondent created a new market system through her cell phone, accepting orders for the delivery of goods and courier services. This idea apparently worked, and her income increased by 25%.
Another respondent, RL, experienced a similar situation, as she was no longer able to help her husband earn a living. She added that during PSBB a bentor was allowed to carry only one passenger, compared to the usual situation of up to three passengers at a time.
Women Workers in Islamic Perspective
The Prophet described women's creation as being from Adam's rib: if they are forced straight, they will break; if left alone, they stay bent, as narrated by Imam Bukhari in the authentic hadith book, number 5186 (Al-Bukhari, 2002). The Prophet also described women as one of the two weak parties whose rights he prayed to God to defend (Al-Nasa'i, 2001). Nevertheless, family insistence and guidance finally made them strong in heart and compassion.
Nevertheless, whether a woman chooses to be active in the domestic sphere or to work in the public domain, she can still abide by the rules outlined in Islamic teachings. Among the most important is to leave the house only after obtaining permission from, and discussing with, the husband or guardian, based on QS. Al-Ahzab (33). The foregoing indicates the limits to be considered in regulating women's activities and movements; this is not discrimination against women's rights but rather a preventive measure to protect women's dignity. In Islamic history, the role of capable women cannot be denied: from the time of the Prophet Muhammad to the Khulafaurrasyidin, and even into the Umayyad and Abbasid caliphates, several female figures played essential roles in the public domain (Mazaya, 2014). Women in Islam are given room to participate as they wish, and there is no specific prohibition in the text of the Qur'an or the Hadith of the Prophet against taking part in the public domain (Ibrahim, 2015). However, they must maintain the norms and boundaries explained in the Qur'an and Hadith (Nisak & Ibrahim, 2014).
Discussion
Women working as bentor drivers are not common in Gorontalo; this kind of job is viewed as belonging to men. However, under economic pressure, that view has had to change. Allah SWT has provided a vast and open world as a field in which to earn a living, through work and patience under the Creator's test, as in QS. Al-Baqarah (2): 177. This verse is the basis that motivates these women to keep working while others choose to stay at home.
Gorontalo Province was in fact the last area in Indonesia infected by the Covid-19 outbreak. The first case was confirmed on 9 April 2020 in a resident of Tumbihe Village, Bone Bolango Regency. When this study was conducted, the Covid-19 outbreak was still ongoing.
Implementation of PSBB
Gorontalo is one of the provinces for which the Indonesian Minister of Health granted the proposal to implement Large-Scale Social Restrictions (PSBB), through decree number HK.01.07/Menkes/279/2020 dated 28 April 2020 (Ministry of Health of the Republic of Indonesia, 2020). Since its first application on 4 May 2020, PSBB demonstrably reduced the reproduction number (R0) of Covid-19 transmission from 2.74 to 2.12, and the number continued to decline to 1.5 when PSBB was extended to a second phase. For this reason, the government then extended PSBB from 3 to 14 June 2020 (Kompas, 2020). After this period, Gorontalo became one of four provinces in Indonesia to apply the new order of life called the new normal (Bialangi, n.d.). Community movement in all sectors was restricted during the PSBB period, and the province's economy suffered as a consequence, across all sectors including micro, small, and medium enterprises (MSMEs) (Wardiah & Ibrahim, 2013). Bentor drivers were affected in their operations, as they were allowed to carry only one passenger per trip, which imposed a higher fixed cost on them.
However, Islam asks people to obey the Ulil Amri (government), as stated in the Qur'an and Hadith, e.g., QS. An-Nisa (4): 59. The social restrictions imposed can be viewed as a collective effort to prevent a greater danger that threatens human life. This is consistent with the Prophet's advice to keep one's distance from those afflicted by disease, as narrated in HR. Ahmad Number 9722 (Hanbal, 1997). Obeying the PSBB is, theologically, a form of obedience to leaders and a concrete effort to suppress the spread of Covid-19, the plague that hit the community, with arrangements that still consider the survival of all levels of society.
Women and Family Division
Islam divides roles between a man and his wife and makes these roles complementary in many respects; the right of one party confers an obligation on the other, and vice versa. This shows how far Islam respects women in the human community and makes their impact felt in society. Men are not given absolute power in their homes; women also enjoy some power that can check that of the man, despite the appellation qawwamun (protector and maintainer) given to man by Allah. This is epitomized in the life of the Prophet with his wives: he used to help them carry out housework and gave them the honor rightly due to them. The marriage contract gives both husband and wife the means to satisfy their human desires, gain the blessings of almighty Allah, and have children who will help when the parents grow old and sustain future generations in an orderly manner. For the success of this contract, the parties have duties to abide by, which the Islamic family framework sets out: the duties of the husband, the duties of the wife, and the duties compulsory on both.
It is not strange that many women have worked to support their families, though not as freely as now, when women can even take over men's roles. Within the family system, however, women can perform both their public and domestic tasks (Budiman, 1985). These housewives are encouraged to work despite the heavy, at times seemingly impossible, burden of doing both.
Women's role in the family economy and housewives' duties

Change in family structure and function is currently a subject of much interest to students of the family. Familial change cannot be fully understood without considering the economic role of women. There is a good deal of evidence to support the view that the impetus for these social role changes may be dissatisfaction with economic roles, particularly the economic roles of women, and the consequent attempt to reinterpret them. Certainly, some of the most distinguishing characteristics of the present social movement are deeply rooted in economic concerns: the redistribution of responsibility for household tasks between spouses, the issue of paid versus unpaid productive roles for wives, and the interest in day-care as a substitute for home care of children are all expressions of views about how the economic roles of women may be carried out.
Viewed from a structure-function conceptual framework, change in the family as a social system may result from the need to support change in the family as an economic system. A central premise here is that the impact of these changes cannot be fully assessed apart from their impact on the economic welfare of families (Fakih, 2001). Nevertheless, the measurement of the economic activity of wives has received little attention, particularly from a lifespan perspective. This may be due to the lack of an adequate conceptual framework for organizing data on the complex set of factors related to the economic role behavior of wives; in his review of the state of family theory development, Broderick reports negligible advancement in this area (Astute, 2007).
A complex range of economic and social factors has driven the changes in women's participation in the paid workforce and helps to explain the features of their involvement in paid work. Among the economic factors identified in the now extensive literature on women's changing participation rates are the need to supplement the family income and changes in the employment and wage-earning opportunities available to women (Solihatin, 2017).
The family is an institution of society, and every institution needs order and discipline, without which it cannot run or even survive. The institution of the family is run through the mutual collaboration and cooperation of husband and wife. In the classical view, the Islamic scheme for a family's management is that the woman should be relieved of all other responsibilities to focus on the family's internal discipline and stability (Zuhdi, 2018), while the man takes on the burden of meeting economic needs. The woman's food, clothing, and shelter are counted among the family's economic needs; if both partners are well-off, a servant or helper for household chores is also included. The man must also cover the expenses of healthcare. This is the legal position of Islam on the responsibilities of the husband. In addition, as encouraged in Islam, good moral conduct demands that a man treat his wife as well as he can and do everything possible for her comfort and happiness (Jasruddin & Quraisy, 2017).
Although women are not born responsible for family finances and are primarily tasked with domestic matters, they also have the right to do other things (Al-Naisaburi, 2006). Islamic history shows that, alongside their duties to family and home, Muslim women have rendered great services outside their homes as well (Al-Thabrani, 1983) and have been involved in economic activities according to the situations in which they found themselves.
CONCLUSIONS
Generally, the situation in which both husband and wife work to meet the family's needs is seen in the working middle class. However, a highly educated, professionally trained, and skilled woman may also find herself in a situation that demands she work and earn; if she adopts a lawful occupation, she has every right to do so. Likewise, a situation may demand that a wife support her husband in earning for the family; for the women of Gorontalo, becoming bentor drivers was such a response to the family's economic pressure.
The economic conditions they experienced worsened after the government implemented PSBB in the Gorontalo area and urged all citizens to stay at home; the economy weakened, and drivers' incomes dropped dramatically with the decreased number of passengers. The actions of these women in Gorontalo are not against Islamic teaching: many studies show that Islam is not the culprit preventing women from being active in the labor market; rather, cultural attitudes shape labor force participation decisions.
Women can adopt any occupation or business according to their situation and circumstances, abilities, and inclinations. They can seek jobs as well as invest in trade, industry, or agriculture; they can manage and supervise the ventures in which they invest or which they own; they can even create new opportunities for themselves. However, this must be done in accordance with Islamic teachings: finances are necessary, but a woman should not engage in economic activities at the cost of the family system and its discipline, nor give herself up to economic struggle at the cost of the warmth of family relations.
Stem cell sources for tooth regeneration: current status and future prospects
Stem cells are capable of renewing themselves through cell division and have the remarkable ability to differentiate into many different types of cells. They therefore have the potential to become a central tool in regenerative medicine. During the last decade, advances in tissue engineering and stem cell-based tooth regeneration have provided realistic and attractive means of replacing lost or damaged teeth. Investigation of embryonic and adult (tissue) stem cells as potential cell sources for tooth regeneration has led to many promising results. However, technical and ethical issues have hindered the availability of these cells for clinical application. The recent discovery of induced pluripotent stem (iPS) cells has provided the possibility to revolutionize the field of regenerative medicine (dentistry) by offering the option of autologous transplantation. In this article, we review the current progress in the field of stem cell-based tooth regeneration and discuss the possibility of using iPS cells for this purpose.
INTRODUCTION
Teeth consist of multiple hard tissues, including enamel, dentin, and cementum, and have an integrated attachment complex with alveolar bone through the periodontal ligament. After completion of tooth formation, the only vascularized tissue containing nerves is the dental pulp, which is encased in mineralized dentin. Because teeth have multiple functions, including feeding, articulation, and esthetics, their loss can cause not only physical but also psychological suffering that compromises an individual's self-esteem and quality of life (Pihlstrom et al., 2005; Polzer et al., 2010; Jung et al., 2011).
With aging, the number of people who lose their teeth increases. Furthermore, the number of people who have more than five congenitally missing adult teeth has increased (Mayama et al., 2001). Thus, tooth loss is a major challenge in contemporary dentistry and accounts for a large part of daily dental practice. Currently, missing teeth are restored using dentures or dental implants prepared from synthetic materials. Although these prostheses serve the purpose, denture therapy is associated with complications such as denture-induced stomatitis and traumatic ulcers (Holm-Pedersen et al., 2008). The use of dental implants may also lead to surgical failure because of many factors that interfere with osseointegration (Esposito et al., 1998). To overcome these shortcomings, the novel approach of stem cell-based tooth regeneration has been suggested as an alternative, considering the advances in tissue engineering and stem cell biology.
Recent progress in tissue engineering techniques and stem cell research has provided important insights for improving tooth regeneration. The main concept in current tooth regeneration is to mimic the natural tooth development process either in vitro or in vivo using stem cells. Because tooth development is characterized by a sequential reciprocal epithelial-mesenchymal interaction between oral epithelial and neural crest (NC)-derived dental ectomesenchymal cells (Thesleff and Sharpe, 1997), numerous studies have attempted to find an optimal source of stem cells that have the potential to differentiate into these cells or their progeny. In particular, the recent discovery of induced pluripotent stem (iPS) cells, which have been genetically reprogrammed to an embryonic stem cell (ESC)-like state, has had a major impact in this field (Takahashi and Yamanaka, 2006). In this review, we focus on the important previous findings in the study of tooth regeneration using stem cells and discuss the potential of iPS cells for tooth regeneration in light of recent results obtained by our group.
CURRENT STEM CELL-BASED TOOTH REGENERATION
Stem cells are unspecialized cells defined as clonogenic cells that have the capacity for self-renewal and the potential to differentiate into one or more specialized cell types (Weissman, 2000; Slack, 2008). Their microenvironment, composed of heterologous cell types, extracellular matrix, and soluble factors, enables them to maintain their stemness (Watt and Hogan, 2000; Spradling et al., 2001; Scadden, 2006). Because of their unique properties, stem cells have the potential to be important in tissue engineering strategies for the regeneration of diseased, damaged, and missing tissues and even organs. In general, stem cells can be divided into three main types: ESCs that are derived from embryos; adult stem cells that are derived from adult tissue; and iPS cells that are generated artificially by reprogramming adult somatic cells so that they behave like ESCs. In this section, we outline recent results obtained using ESCs and adult stem cells for tooth regeneration.
ESCs
The isolation and expansion of murine ESCs in the 1980s ignited interest in regenerative medicine research (Evans and Kaufman, 1981). ESCs are pluripotent stem cells derived from the undifferentiated inner cell mass of the blastocyst (an early stage of embryonic development), and they continue to grow indefinitely in an undifferentiated diploid state when cultured in optimal conditions in the presence of a feeder layer and leukemia inhibitory factor (LIF). The study of ESCs has gained further interest with the successful establishment of primate and human ESCs (Thomson et al., 1995, 1998; Shamblott et al., 1998; Reubinoff et al., 2000), which can differentiate into derivatives of all three primary germ layers: ectoderm, endoderm, and mesoderm (Evans and Kaufman, 1981; Thomson et al., 1998). Because of the pluripotency of ESCs, several attempts have been made to use them to functionally regenerate cardiomyocytes, dopaminergic neurons, and pancreatic islets in animal models, keeping in view future clinical applications (Lumelsky et al., 2001; Kim et al., 2002; Laflamme et al., 2007; Van Laake et al., 2008). In dentistry, ESCs have been used for oral and craniofacial regeneration, including mucosa, alveolar bone, and periodontal tissue regeneration (Roh et al., 2008; Inanç et al., 2009; Ning et al., 2010; Shamis et al., 2011). Ohazama et al. (2004) demonstrated that after recombination with embryonic day (E)10 oral epithelium, ESCs expressed the unique set of genes for odontogenic mesenchymal cells, such as Lhx7, Msx1, and Pax9, suggesting that ESCs can respond to inductive signals from embryonic dental epithelium. Although these approaches have the potential to be useful for tooth regeneration and for understanding basic tooth development, it will be necessary to address several major issues before they can be implemented in clinical practice, including possible tumorigenesis (teratoma formation) upon transplantation, ethical issues regarding the use of embryos, and allogeneic immune rejection.
ADULT STEM CELLS IN DENTAL TISSUES
Adult stem cells have been identified in many tissues and organs and have been shown to undergo self-renewal, to differentiate for the maintenance of normal tissue, and to repair injured tissues. The first adult stem cells isolated from dental tissues were dental pulp stem cells (DPSCs). These cells have a typical fibroblast shape and express markers similar to those of mesenchymal stem cells (MSCs). When transplanted with hydroxyapatite/tricalcium phosphate (HA/TCP) powder in immunocompromised mice, they formed a dentin-like structure lined with odontoblast-like cells that surrounded a pulp-like interstitial tissue. DPSCs could differentiate in vitro into other mesenchymal cell derivatives such as odontoblasts (D'Aquino et al., 2008), adipocytes, chondrocytes, and osteoblasts (D'Aquino et al., 2007; Koyama et al., 2009; Yu et al., 2010) and could also differentiate into functionally active neurons (Arthur et al., 2008, 2009). MSC-like cells have also been isolated from the dental pulp of human deciduous teeth [stem cells from human exfoliated deciduous teeth (SHEDs)] (Miura et al., 2003). They have the ability to differentiate in vitro into neuron-like cells, odontoblasts, osteoblasts, and adipocytes, show higher proliferation rates and increased numbers of population doublings compared with DPSCs, and can form spherical aggregations. When these cells are transplanted mixed with HA/TCP in vivo, they can form dentin and bone but not dentin-pulp complexes. Comparison of the gene expression profiles of DPSCs and SHEDs demonstrated that 4386 genes were differentially expressed by two-fold or more (Nakamura et al., 2009). In addition to genes that participate in pathways related to cell proliferation and extracellular matrix formation, FGF, transforming growth factor (TGF)-β, and collagen I and III showed higher levels of gene expression in SHEDs than in DPSCs. Cordeiro et al. (2008) suggested that SHEDs could be an ideal source of stem cells for repairing damaged teeth or for induction of bone formation.
Stem cells from the apical papilla (SCAPs) are found in the papilla tissue in the apical part of the roots of developing teeth. The third molars and teeth with open apices are an important source of SCAPs. These cells have the potential to differentiate into osteoblasts, odontoblasts, and adipocytes and show higher rates of proliferation in vitro compared with DPSCs (Sonoyama et al., 2006; Huang et al., 2008). Transplantation of SCAPs and periodontal ligament stem cells (PDLSCs) into tooth sockets of minipigs allowed the formation of dentin and periodontal ligament (Sonoyama et al., 2006). Dental follicle stem cells (DFSCs) have also been isolated from the follicles of developing third molars (Morsczeck et al., 2005). They can differentiate into osteoblasts, adipocytes, and nerve-like cells in vitro (Kémoun et al., 2007; Coura et al., 2008; Yao et al., 2008) and form cementum and periodontal ligament in vivo (Handa et al., 2002; Yokoi et al., 2007).
Future therapeutic approaches for the restoration of damaged dentin, pulp, cementum, and periodontal ligaments may make use of autologous stem cells such as DPSCs, SHEDs, SCAPs, and DFSCs that have been stored after removal from the patient.
Dental epithelial stem cells were identified in the continuously growing rodent incisor (Harada et al., 1999). These cells are maintained in the stem cell niche located at the apical end of the incisor, named the "apical bud" region, and they constantly produce enamel-secreting ameloblasts through interaction with mesenchymal cells (Harada et al., 1999). FGF10, Notch, and Sprouty have been suggested to play a role in the continuous growth of rodent incisors and the maintenance of dental epithelial stem cells (Harada et al., 1999; Tummers and Thesleff, 2003; Klein et al., 2006; Yokohama-Tamaki et al., 2006). Although dental epithelial stem cells appear to be attractive for the regeneration of enamel-forming ameloblasts in rodents, this stem cell niche may be specific to rodent incisors; the situation differs in human teeth, in which dental epithelial stem cells and their progeny are lost after tooth eruption.
The epithelial rests of Malassez (ERMs) are quiescent epithelial remnants of Hertwig's root sheath (HERS) that remain in the adult tooth and play a role in cementum repair and regeneration (Rincon et al., 2006). A recent study demonstrated that ERMs contain a unique population of stem cells that are capable of undergoing epithelial-mesenchymal transition and differentiate into diverse lineages indicative of mesodermal and ectodermal origin, including bone, fat, and cartilage as well as neuron-like cells (Xiong et al., 2013). In addition, ERMs can be induced to form enamel-like tissues after transplantation into athymic rat omentum with primary dental pulp cells (Shinmura et al., 2008), suggesting that the stem cells in ERMs may be able to regenerate enamel.
ADULT STEM CELLS IN NON-DENTAL TISSUES
Although most adult stem cells in non-dental tissues have generally been considered to be limited to specific cell fates, recent studies have demonstrated that they have plasticity and can differentiate into cell types derived from different germ layers. In particular, bone marrow-derived adult stem cells have shown considerable capacity to differentiate into diverse cell types such as endothelium, neural tissue, liver, and heart (Asahara et al., 1997; Lagasse et al., 2000; Mezey et al., 2000; Orlic et al., 2001). Notably, MSCs derived from bone marrow can respond to inductive stimulation from dental epithelium and contribute to tooth regeneration (Ohazama et al., 2004). Recombination between odontogenic-inducing epithelium and bone marrow-derived cells has been demonstrated to involve the expression of odontogenic genes such as Pax9, Msx1, and Lhx7 and the formation of a tooth crown with organized enamel, dentin, and pulp surrounded by bone after transplantation under the mouse kidney capsule (Ohazama et al., 2004). Furthermore, c-kit-enriched bone marrow-derived cells were shown to be able to differentiate into ameloblast-like cells (Hu et al., 2006b). The prospective stem cells described above have shown remarkable capability for tooth regeneration. However, with regard to clinical application, they share the common obstacles of ethical concerns arising from embryonic origin, the risk of tumorigenesis, and the possibility of immune rejection after allogeneic transplantation. The development of iPS cells may overcome many of these issues because of their properties, and iPS cell-derived odontogenic cells can be expected to play significant roles in future strategies for clinical translational research on tooth regeneration.
iPS CELLS
Generating patient-specific pluripotent stem cells with properties similar to those of ESCs has long been a central aim in research on stem cell-based regenerative medicine. Through global changes in the epigenetic and transcriptional environment, nuclear reprogramming reverses cell fate, converting differentiated cells back to the undifferentiated state (Jaenisch and Young, 2008). In somatic cell nuclear transfer (SCNT), also referred to as therapeutic cloning, the nucleus of a somatic cell is transferred into the cytoplasm of an enucleated egg to create a blastocyst genetically identical to the parental source and to derive pluripotent ESC-like stem cells (Hochedlinger and Jaenisch, 2006). However, SCNT still requires a donor oocyte to direct the reprogramming of the somatic cell, and most cloned animals exhibit severe phenotypic and gene expression abnormalities (Humpherys et al., 2002; Ogonuki et al., 2002; Tamashiro et al., 2002). Therefore, SCNT is not a feasible option for cell-based transplantation. Although the mechanism by which transformation occurs and the mediators of nuclear reprogramming are largely undefined, the search for factors that are able to induce complete nuclear reprogramming has provided recent breakthroughs in the development of successful iPS technologies.
In 2006, Takahashi and Yamanaka reported the successful derivation of iPS cells from embryonic and adult mouse fibroblasts through the ectopic co-expression of only four genes: Oct4, Sox2, Klf4, and c-Myc (Takahashi and Yamanaka, 2006). The expression of these genes was sufficient to reprogram somatic cells to an ESC-like pluripotent state. Tissues from different species such as mice (Takahashi and Yamanaka, 2006), rats (Liao et al., 2009), rhesus monkeys, and humans (Takahashi et al., 2007) have been used as source materials for iPS cell line generation. Successful reprogramming also quickly translated to a wide variety of other cell types, including pancreatic β-cells (Stadtfeld et al., 2008), neural stem cells (Eminli et al., 2008), mature B cells, stomach and liver cells (Aoi et al., 2008), melanocytes (Utikal et al., 2009), adipose stem cells (Sun et al., 2009), and keratinocytes (Maherali and Hochedlinger, 2008), demonstrating a universal capacity to alter cellular identity. In dentistry, iPS cells have been generated from many types of dental tissues/cells, including SHEDs, SCAPs, DPSCs, tooth germ progenitor cells (TGPCs), buccal mucosa fibroblasts, gingival fibroblasts, and periodontal ligament fibroblasts (Egusa et al., 2010; Miyoshi et al., 2010; Oda et al., 2010; Tamaoki et al., 2010; Yan et al., 2010; Wada et al., 2011). DPSCs show much higher reprogramming efficiency than the conventionally used dermal fibroblasts and high expression of endogenous reprogramming factors such as c-Myc and KLF4 and/or ESC marker genes (Tamaoki et al., 2010). Because these cells are easily accessible by dentists, iPS cells generated from dental tissues are expected to be a promising cell source for tissue regeneration.
iPS cells have shown pluripotency similar to that of ESCs. They can produce cells from all three germ layers in vitro, form teratomas when injected into immunodeficient mice, and can contribute to chimera formation (Takahashi and Yamanaka, 2006). Murine iPS cells also fulfill the strict pluripotency criteria of contribution to the germline (Okita et al., 2007) and tetraploid embryo complementation (Woltjen et al., 2009). Moreover, they can maintain self-renewal when cultured under conditions similar to those used for ESCs. Hence, iPS cells are often described as indistinguishable from ESCs. However, the question of whether iPS cells and ESCs are molecularly and functionally equivalent is raised by the artificial nature of induced pluripotency. Recent analyses have shown a high degree of similarity between ESCs and iPS cells in terms of global gene expression and histone methylation (Maherali et al., 2007; Okita et al., 2007; Wernig et al., 2007; Mikkelsen et al., 2008). However, substantial differences between them have also been reported. In addition, other studies have indicated that iPS cells retain an epigenetic memory of their former phenotype that can limit their differentiation potential (Kim et al., 2010; Polo et al., 2010). Therefore, further study of iPS cells and ESCs is required to determine whether differences between them may affect their differentiation potential and their overall safety and efficiency after transplantation.
There are several advantages of using iPS cells for regenerative medicine. Their use can overcome the ethical and political issues associated with the use of embryonic cells. They can be used as autologous and patient-specific cells, which eliminates issues related to the immune rejection of grafts, and can thus be expected to become the major tool in the advancement of personalized medicine (Ferreira and Mostajo-Radji, 2013). Furthermore, iPS cell production can easily be scaled up, which essentially provides an unlimited source of cells for clinical applications, in contrast with adult stem cells.
In addition to regenerative medicine, newly emerging applications of iPS cells are related to in vitro disease modeling and drug screening (Ebert et al., 2012). Tissue-specific iPS-derived cells generated from patients with complex genetic defects can be used to model diseases in studies to elucidate the complex mechanisms underlying various diseases and to search for new drugs. Primary human cells carrying the disease of interest are usually difficult or impossible to isolate, and even when isolation is possible, in most cases the cells do not proliferate adequately to produce sufficient numbers of cells for analysis. In contrast, iPS cells derived from the patient can proliferate abundantly and differentiate into cells that represent the pathological character of the disease. Numerous groups have reported the creation of iPS cells specific for various diseases, including Parkinson's disease, amyotrophic lateral sclerosis, and familial dysautonomia, in studies to elucidate the mechanism of their development and progression and to search for suitable drugs (Dimos et al., 2008; Park et al., 2008; Lee et al., 2009).
Another therapeutic potential of iPS cells has been demonstrated in proof-of-principle studies. Hanna et al. (2007) used a humanized mouse model of sickle cell anemia to determine the repair potential of progenitor cells derived from autologous iPS cells. Fibroblasts from the diseased mice were reprogrammed into an iPS clone and the mutant gene was corrected by homologous recombination; the pluripotent cells then differentiated into hematopoietic progenitors and were transplanted back into the mice. This therapy resulted in substantial improvement of symptoms. In another milestone study, healthy iPS-derived dopaminergic neurons were implanted into the brain of a rat model of Parkinson's disease. The implanted cells were functionally integrated and the disease condition was improved.
DIFFERENTIATION OF iPS CELLS INTO EPITHELIAL STEM/PROGENITOR CELLS DURING TERATOMA FORMATION
Teratomas that occur naturally in the ovaries are a useful tool for studying the development of tissues and organs because they consist of a variety of tissue elements derived from two or more embryonic germ layers (Linder et al., 1975). They have been shown to contain ectodermal appendages, such as teeth and hair, and are a unique material for investigating the mechanisms involved in morphogenesis. Therefore, although tumorigenesis may be a critical issue in the clinical application of iPS cells, these teratomas should provide an excellent model for investigating tooth formation and organogenesis and lead to novel bioengineering approaches in regenerative medicine (Gerecht-Nir et al., 2004; Nussbaum et al., 2007). We therefore examined the processes of epithelial histogenesis, the properties of epithelial tissues, and whether epithelial stem/progenitor cells, which have the capacity to induce tooth organogenesis, were found in iPS-derived teratomas (Kishigami et al., 2012). After mouse iPS cells were transplanted subcutaneously, iPS cell-derived teratomas (days 7, 14, and 21) were evaluated histologically. In terms of the histomorphological features of the epithelium, compact epithelial mass structures composed of non-polarized cells were dominant during early teratoma growth (day 7), whereas mature structures, such as pseudostratified ciliated epithelium and keratinized stratified squamous epithelium, increased as the teratomas developed (days 14-21) (Kishigami et al., 2012). Furthermore, other mature tissues, such as bone and cartilage, became evident in late teratomas (day 21) (Kishigami et al., 2012). These results suggest that the processes observed during epithelial histogenesis in iPS cell-derived teratomas may mimic those occurring in normal embryonic development and provide a useful model for studying the formation of tissue structures during early development.
To study the presence of epithelial stem/progenitor cells in iPS cell-derived teratomas, immunohistochemical analysis was performed using antibodies against the epithelial stem/progenitor cell markers p63 and CD49f and the dental epithelial cell marker keratin 14 (K14) (Salmivirta et al., 1996; Pellegrini et al., 2001; Kawano et al., 2004; Laurikkala et al., 2006). K14 and p63 were detected in epithelial masses, the basal layer of stratified epithelium, and pseudostratified epithelium. CD49f was detectable in all epithelium types from day 7; in particular, it was strongly expressed in epithelial masses and the basal layer of stratified epithelium (Kishigami et al., 2012). These results provide important insights into the development of epithelial tissues during spontaneous differentiation of iPS cells in vivo.
However, regardless of the presence of putative epithelial stem/progenitor cells, iPS cell-derived teratomas that formed in these conditions did not contain teeth, and no tooth germ-like structures could be found (n = 10, unpublished data). This suggests that a specific signaling network for tooth organogenesis is missing.
APPROACHES TO TOOTH REGENERATION USING iPS CELLS
Because of recent advances in tissue engineering technology, functional teeth can be formed from dissociated tooth germ cells. Several groups have demonstrated that it is possible to produce biological teeth similar in appearance to natural teeth on the basis of tissue-cell or cell-cell recombination using embryonic tooth germ cells (Hu et al., 2005, 2006a; Nakao et al., 2007; Nait Lechguer et al., 2008). In addition, using tissue/cell recombination techniques, non-dental stem cells such as ESCs, neural stem cells, and bone marrow-derived cells have been shown to respond to inductive signals from embryonic dental epithelium (Ohazama et al., 2004). Depending on the stage, dental epithelium or mesenchyme from the tooth germ has an inductive potential for differentiating even non-dental stem cells into odontogenic cells.
To investigate whether dental mesenchymal cells in the tooth germ could induce undifferentiated mouse iPS cells to form dental epithelial cells, DsRed-expressing iPS cells were combined with E14.5 dental mesenchyme and transplanted together with collagen sponges under the kidney capsule in immunodeficient mice. Four weeks after transplantation, tooth germ-like structures in iPS cell-derived teratomas were observed, and the iPS cells expressed an ameloblast marker, amelogenin, indicating that they had differentiated into ameloblasts (Figure 1). However, the results of these transplantation experiments had poor reproducibility (<10%) and the numbers of tooth germ-like structures in the teratomas were very small (<2 per teratoma). Therefore, it appeared that more specific and suitable exogenous signals would be necessary to induce undifferentiated iPS cells to acquire odontogenic characteristics.
Tooth development is controlled by reciprocal interactions between dental mesenchymal cells derived from the NC and dental epithelial cells derived from the ectodermal epithelium (Thesleff and Sharpe, 1997; Jernvall and Thesleff, 2000). Epithelial-mesenchymal interactions also control the terminal differentiation of odontoblasts and ameloblasts (Ruch et al., 1995; Imai et al., 1996). Thus, as a new strategy for tooth regeneration, we speculated that ectodermal epithelial cells and NC cells induced from iPS cells could be the optimal cell source for the regeneration of whole teeth (Figure 2).
A protocol for differentiation to NC (Figure 3), originally developed for human ES cells (Lee et al., 2007; Bajpai et al., 2009), efficiently induced mouse iPS cells to differentiate into neural crest-like cells (NCLCs) (Otsu et al., 2012b). These NCLCs expressed several NC cell markers, including AP-2α, Wnt-1, and p75NTR, and an MSC marker (Stro-1). Pax3, Snail, and Slug (NC-specific transcription factors), as well as human natural killer-1 (HNK-1, also known as CD57 and LEU7; a marker for migrating NC cells), showed higher expression in the derived cells than in undifferentiated iPS cells (Otsu et al., 2012b). Importantly, NCLCs did not form teratomas when they were injected subcutaneously together with collagen gel into immunodeficient mice, possibly because of the disappearance of Nanog, which is a marker of undifferentiated iPS cells (Okita et al., 2007) and is subtly linked to tumorigenesis (Chiou et al., 2008). This result suggests that NCLCs derived from iPS cells can overcome the critical problem of tumorigenesis in the clinical application of iPS cell transplantation in vivo (Ben-David and Benvenisty, 2011).
When NCLCs were cultured in dental epithelial cell-conditioned medium, the expression of DSPP, a precursor protein of dentin sialoprotein (DSP), was significantly increased. Recombinant culture of NCLCs with E14.5 dental epithelium in a collagen gel (Otsu et al., 2012a) or an agar-containing semi-solid medium (Hu et al., 2005; Keller et al., 2011) showed that NCLCs expressed the odontoblast marker DSP (Otsu et al., 2012b). Moreover, after transplantation under the kidney capsule in immunodeficient mice, the recombinant demonstrated calcified tooth germ-like structures with bone (Figure 4), indicating that iPS cell-derived NCLCs have the capacity to differentiate into odontoblasts via their reciprocal interaction with dental epithelium.
In addition to our recent results, several reports have demonstrated the potential of iPS cells for odontogenic differentiation. The hanging drop method on a collagen type-I scaffold combined with BMP-4 induced mouse iPS cells to form odontoblast-like cells without epithelial-mesenchymal interaction (Ozeki et al., 2013). These authors further demonstrated that integrin α2 in iPS cells mediated their differentiation into odontoblasts. BMP-4 was also shown to induce iPS cells to form both ameloblast-like and odontoblast-like cells when used with ameloblast serum-free conditioned medium. Moreover, co-culture with an ameloblastin-expressing dental epithelial cell line led to efficient induction of iPS cells into ameloblasts via neurotrophic factor NT-4 and BMP-4 signaling (Arakaki et al., 2012). These results strongly suggest that BMP-4 is a key molecule for odontogenic differentiation from iPS cells.
The ability of iPS cells to form tooth-like structures in vivo has also been confirmed by using recombination with tooth germ cells followed by transplantation under the kidney capsule (Wen et al., 2012; Cai et al., 2013). Furthermore, the combination of iPS cells with enamel matrix derivatives was shown to greatly enhance periodontal tissue regeneration by promoting the formation of cementum, alveolar bone, and periodontal ligaments (Duan et al., 2011), indicating the possibility of iPS cell-based periodontal tissue regeneration.
CONCLUDING REMARKS
In this review, we have discussed the potential of stem cell-based tooth regeneration, including the use of iPS cells. This field of research provides an attractive alternative to traditional and current practices for the replacement of missing teeth, such as implants and classic procedures based on synthetic materials. Because of rapidly increasing research efforts and progress, it is anticipated that clinically satisfactory functional tooth regeneration will be available in the near future. In particular, as part of a new technology, patient-specific iPS cells are a highly promising cell source for personalized regenerative dental medicine because of their potential to overcome the shortcomings of adult (tissue) stem cells and embryonic cells. The future establishment of this technique may considerably change therapeutic approaches to dental syndromes and diseases. However, some challenges remain to be addressed before successful tooth regeneration can be achieved. For example, natural tooth development generally takes several years to complete in humans, which is too long a wait for a patient who needs regenerated teeth. Therefore, this issue must be considered carefully when planning clinical applications. In addition, because tooth morphology and size differ depending on the tooth type, these aspects also need to be addressed. We also need to develop more efficient protocols to induce stem cells to form cell types in vitro that are relevant to the tissues and organs targeted for regeneration. To meet these challenges, further basic studies elucidating the regulatory mechanisms of stem cells and tooth development are needed.
ACKNOWLEDGMENTS
This work was supported by KAKENHI (19562128) and an Open Research Project grant (2007-2011) from the Ministry of Education, Culture, Sports, Science and Technology of Japan. | 6,435.8 | 2014-02-04T00:00:00.000 | [
"Materials Science",
"Medicine"
] |
Coaxial multi-mode cavities for fundamental SRF research in an unprecedented parameter space
Recent developments in superconducting radio-frequency (SRF) research have focused primarily on high frequency elliptical cavities for electron accelerators. Advances have been made in both reducing RF surface resistance and pushing the readily achievable accelerating gradient by using novel SRF cavity treatments including surface processing, custom heat treatments, and flux expulsion. Despite the global demand for SRF based hadron accelerators, the advancement of TEM mode cavities has lagged behind. To address this, two purpose-built research cavities, one quarter-wave and one half-wave resonator, have been designed and built to allow characterization of TEM-mode cavities with standard and novel surface treatments. The cavities are intended as the TEM mode equivalent to the 1.3 GHz single-cell cavity, which is the essential tool for high frequency cavity research. Given their coaxial structure, the cavities allow testing at the fundamental mode and higher harmonics, giving unique insight into the role of RF frequency on fundamental loss mechanisms from intrinsic and extrinsic sources. In this paper, the cavities and testing infrastructure are described and the first performance measurements of both cavities are presented.
I. INTRODUCTION
Nuclear physics experiments rely on superconducting radio-frequency (SRF) heavy ion particle accelerators such as the ISAC-II [1] facility at TRIUMF to study the nuclear structure of rare isotopes, among other topics of research. New large driver accelerators for hadron facilities such as FRIB [2], RAON [3], PIP-II [4], ESS [5,6], and C-ADS [7] are being installed or developed to support a variety of research interests. To increase the energy of the beam in the velocity regime up to β ≤ 0.6, these accelerators use different types of TEM-mode SRF cavities, such as quarter-wave resonators (QWR) and half-wave resonators (HWR), at frequencies ranging from 80 to 400 MHz. SRF research is essential to advance particle accelerator technology. Higher gradients result in shorter, more economical linear accelerators (LINACs) or higher energies for the same accelerator length. As these SRF cavities are typically cooled with liquid helium at temperatures near 2.0 or 4.2 K, the RF losses in the cavity walls are a major cost driver in capital investment and in the operating budget for the cryoplant and its infrastructure. Higher quality factors Q_0 mean smaller cryoplants can be used for the same amount of accelerating voltage. Despite the strong interest in TEM mode cavities for new hadron projects, the bulk of the recent developments to enhance cavity performance has been performed on 1.3 GHz β = 1 cavities in support of existing projects such as EU-XFEL [8] and LCLS-II [9], and future projects such as the ILC [10,11]. Advances have been made both in enhancing the quality factor Q_0, which corresponds to a lowered surface resistance R_s, and in pushing the readily achievable accelerating gradient E_acc to higher levels by using novel SRF cavity treatments with a focus on improved surface processing, customized heat treatments, and a better understanding of flux expulsion [12][13][14][15][16][17]. Systematic studies have not been reported on low frequency, low β TEM-mode cavities. Much of the research for 1.3 GHz applications has been done on single-cell cavities. These are compact cavities, not intended for acceleration, but designed with similar features such as RF frequency, peak surface field to accelerating gradient ratios, and an accelerating mode identical to the typical 9-cell variants designed for on-line acceleration. Single cell cavities are relatively inexpensive and have been duplicated around the world to allow treatment comparison between research centers throughout the SRF community and greatly enhance development progress. For TEM mode cavities such a focused global development is much more difficult since the design space is broader in terms of hadron velocity and RF frequency. Projects typically optimize cavity parameters within the project and design a few unique cavity designs to span the intended velocity range of a particular LINAC. A coaxial test cavity analogous to the 1.3 GHz single cell cavity would serve to shift SRF research away from project-driven design to a more focused study of the TEM geometry and frequency range, and offer a systematic way to enhance cavity performance. In addition, the coaxial geometries can be tested not only at the fundamental eigenmode but also at higher harmonics, enabling data sets at several RF frequencies within the same cooldown cycle and for the same cavity treatment, surface roughness, RRR, and environmental conditions.
This paper reports on the design and first performance characterizations of two coaxial test cavities: a quarter-wave resonator and a half-wave resonator. The cavity geometries represent the two main structure types used in hadron LINACs to date, namely a QWR and a HWR. Each cavity is designed to operate in the fundamental and several higher order modes with similar RF characteristics in terms of the peak surface field ratio E_p/B_p. The two resonators are intended to be used for a broad array of fundamental studies. These studies include the measurement of the RF surface resistance as a function of peak magnetic surface field B_p and temperature T, and the sensitivity of the geometries to trapped magnetic flux, all as a function of RF frequency and for different cavity treatments. The design and implementation of the cavities are presented, as well as first results. This paper is structured as follows: Section II motivates the development of the presented SRF cavities and tools. Section III describes the cavity design and details the surface preparation, testing methodology, and available tools. Section IV shows the cavity performance as a function of peak surface field B_p for a conventional surface treatment. Also, Q_0(T) data collected during the cooldown from 4 K to 2 K are analysed. In addition, performance measurements of the QWR after 120 °C baking are presented, and flux sensitivity data are shown for the QWR as an example of characterizing flux expulsion in TEM mode cavities. Section V presents a summary of the presented work, including an outlook on future work.
II. MOTIVATION
More and more hadron LINACs using SRF technology are being designed and constructed as centerpieces of facilities such as FRIB, RAON, PIP-II, ESS, and C-ADS. Despite this trend, a systematic analysis of the surface resistance in TEM mode cavities has not been undertaken globally, due to the broad parameter space in choosing cavity types, geometric β = v/c values, and RF frequencies. Heat treatments developed on 1.3 GHz single cell cavities and rolled out on production nine-cell units have not been employed on TEM mode cavities, except for degassing at 650-800 °C and the 120 °C in-situ vacuum bake [18]. Flux expulsion studies, which were instrumental in understanding how to achieve the highest quality factors in continuous wave (cw) 1.3 GHz applications, have not been systematically undertaken. Several open questions remain concerning the performance of TEM mode cavities. What is the source of the medium field Q-slope at 4 K that has forced some projects to choose operation at 2 K over 4.2 K, despite the reduced losses that come with low frequency and the added technical complexity of 2 K operation? What customized heating or doping treatments optimized for 1.3 GHz would help to lower the surface resistance R_s at 4 K for low frequency TEM mode cavities? Is there a flux expulsion technique that would benefit TEM mode cavities by lowering the residual resistance? Using a dedicated, purpose-built set of coaxial cavities allows tackling these questions, advancing the understanding of TEM mode cavities, and shedding light on the role of the RF frequency in cavity performance in a systematic way. Two cavities were designed: one QWR and one HWR. The QWR has a fundamental resonance frequency of 217 MHz and the HWR has a fundamental resonance frequency of 389 MHz. The fundamental RF frequencies of the cavities were chosen to be as low as possible to cover commonly used frequencies and, at the same time, to fit in a pre-existing induction furnace sized for 1.3 GHz single cell cavities, to allow customized heat treatments. Both cavities are shown in Fig. 1. The performance of these cavities is characterized via measurement of Q_0 as a function of the RF field amplitude, expressed in the form of the peak surface fields E_p and B_p, not only in their fundamental eigenmode but also in their higher order modes (HOMs), to determine the dependence of the cavity performance on frequency without changing the cavity or environmental influences. The field distributions of the fundamental mode and the HOMs of interest for the two cavities are shown in Fig. 2. Multi-mode performance characterization allows for an expansion of the parameter space in terms of frequency and field amplitude. Combined with the available parameter space in temperature, external magnetic field, and surface treatment, a previously unavailable parameter space is now accessible without changing factors intrinsic and extrinsic to the cavity. To fully explore this parameter space, several infrastructure developments were carried out at TRIUMF to be able to determine the dependence of the surface resistance on all aforementioned factors.
A critical part of understanding SRF cavity performance is the temperature dependence of the surface resistance R_s, which can be expressed as

$$R_s(T) = R_{Td}(T) + R_{Ti},$$

with $R_{Td}$ as the temperature dependent term and $R_{Ti}$ as the temperature independent term. $R_{Td}$ can be calculated numerically based on the Mattis-Bardeen theory [19] and is approximated [20] as

$$R_{Td} \approx \frac{\mu_0^2 \omega^2 \lambda^3 \sigma_n \Delta}{k_B T} \ln\!\left(\frac{C_1 k_B T}{\hbar \omega}\right) e^{-\Delta/(k_B T)},$$

with ω the resonance frequency, λ the London penetration depth, Δ the energy gap, σ_n the normal state conductivity, C_1 ≈ 9/2, and T the temperature. Assuming that these parameters are not frequency dependent, Eq. 2 predicts a frequency dependence of $R_{Td} \propto \omega^{\sim 1.87}$. One specific goal of the cavities discussed in this paper is to determine the frequency dependence of the surface resistance and to investigate any deviations from theory. Previous research has been done with lead-on-copper cavities [21] at low power levels. Other studies have used several elliptical cavities of the same shape but different sizes [22]. Here the challenge is to ensure that the surface and environmental conditions are comparable for the different test cavities. Another approach is to use a sample cavity which can be excited at multiple frequencies, such as the Quadrupole Resonator (QPR). The QPR was originally designed for measurements at 400 MHz [23]; it was later refurbished for multiple frequencies [24] and optimized by HZB [25].
There are still open questions about how to translate results from the QPR to accelerating cavity performance. A HWR-type cavity similar to the cavities described here has been developed at the Center for Accelerator Science at Old Dominion University [26,27].
While Eq. 2 explicitly shows a frequency dependence of $R_{Td}$, the field dependence of the overall surface resistance $R_s$ and its components is a topic of active research. Several models can describe the commonly observed increase of the surface resistance with applied field. For example, pair-breaking [28], thermal feedback [29], and impurity scattering [30] models, or the so-called percolation model [31], predict a $R_s(B_p) \propto B_p^2$ dependence, while other models, for example a weak superconducting layer on top of the bulk material [32], suggest a $R_s(B_p) \propto \exp(B_p)$ dependence. Another non-linear model [33] attempts to include the decrease of the surface resistance with increasing RF field that is observed in nitrogen-doped cavities [17].
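Since the quoted $\omega^{\sim 1.87}$ scaling follows directly from the logarithmic factor in the approximation reconstructed above (Eq. 2), a short numerical check can make this concrete. The sketch below is illustrative only: the niobium parameters (penetration depth, normal-state conductivity, gap) are generic assumed values, not results from this work.

```python
import numpy as np

# Constants (SI units)
kB = 1.380649e-23        # Boltzmann constant [J/K]
hbar = 1.054571817e-34   # reduced Planck constant [J s]
mu0 = 4e-7 * np.pi       # vacuum permeability [H/m]

# Assumed generic niobium parameters (illustration only)
lam = 40e-9                         # penetration depth [m]
sigma_n = 3.3e8                     # normal-state conductivity [S/m]
Delta = 1.5e-3 * 1.602176634e-19    # energy gap ~1.5 meV [J]
C1 = 9.0 / 2.0

def R_Td(f, T):
    """Low-temperature Mattis-Bardeen approximation, as reconstructed above."""
    w = 2.0 * np.pi * f
    return (mu0**2 * w**2 * lam**3 * sigma_n * Delta / (kB * T)
            * np.log(C1 * kB * T / (hbar * w)) * np.exp(-Delta / (kB * T)))

# Effective frequency exponent d(ln R_Td)/d(ln w) between 217 MHz and 1.3 GHz
f1, f2, T = 217e6, 1.3e9, 4.2
n_eff = np.log(R_Td(f2, T) / R_Td(f1, T)) / np.log(f2 / f1)
print(f"effective exponent ~ {n_eff:.2f}")  # close to the ~1.87 quoted in the text
```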
A. Cavities
The two cavities are used in a similar way as 1.3 GHz single cell cavities, as pure test cavities in a bath cryostat. To avoid perturbations of the TEM-mode field configuration, beam ports have been removed and all RF ports have been moved to one end plate of the cavities. This is possible as the cavities will not be used for beam acceleration. Since the cavity will be submerged in liquid helium in a bath cryostat, a helium jacket is not necessary. A high shunt impedance and low surface field ratios were not design goals as they would be in an accelerating cavity. Instead, the design focused on achieving similar peak surface field ratios E_p/B_p for all the relevant modes, as well as the usage of common components such as identically dimensioned outer and inner conductors, identical rinse ports, and the same mechanical components and fixtures. One design limitation was imposed by the size of the induction furnace, which was designed for 1.3 GHz single cell cavities. This restricted the maximum outer dimensions of the cavities to a diameter of 200 mm and a length of 490 mm. Based on these restrictions, the lowest achievable frequency for the fundamental mode of the QWR was 153 MHz, regardless of the gap between the inner conductor and the bottom plate. A choice was made for the fundamental QWR frequency to be around 200 MHz and for the HWR to be around 400 MHz. A straight inner conductor (IC) was chosen to mitigate field distortion in HOMs and at the same time to simplify fabrication. This also allows a moving T-mapping system to be inserted into the inner conductor. The diameters of the inner and outer conductor were chosen to be 60 mm and 180 mm respectively, matching the ISAC-II QWR cavities [34] and allowing for reuse of forming dies. The top and bottom plates are flat, eliminating higher-level multipacting barriers and simplifying fabrication. Further design choices were made to minimize the peak field ratio E_p/B_p to push a potential field emission onset to higher B_p values. For a HWR-type cavity of coaxial length L with constant inner and outer conductor radii a and b and peak RF current and voltage of I_0 and V_T respectively, the radial electric field E_r and the tangential magnetic field B_θ at a ≤ r ≤ b and 0 ≤ z ≤ L are given by

$$E_r(r,z) = \frac{V_T}{\ln(b/a)}\,\frac{\sin(p\pi z/L)}{r}, \qquad B_\theta(r,z) = \frac{\mu_0 I_0}{2\pi}\,\frac{\cos(p\pi z/L)}{r},$$

with $\omega = p\pi c/L$, $p = 1, 2, 3, \dots$, and $\eta = \sqrt{\mu_0/\varepsilon_0}$. Using $V_T = \frac{\eta \ln(b/a)}{2\pi} I_0$, it follows from Eqs. (3) and (4) that the peak fields E_p, B_p, and their ratio are

$$E_p = \frac{\eta I_0}{2\pi a}, \qquad B_p = \frac{\mu_0 I_0}{2\pi a}, \qquad \frac{E_p}{B_p} = \frac{\eta}{\mu_0} = c,$$

with $c = 1/\sqrt{\varepsilon_0 \mu_0}$ the speed of light. Attention has to be paid to the ports, as sharp edges in areas with high surface currents, such as the end plates, can enhance the magnetic fields, increasing the peak field ratio. The fillet radius at this edge was optimized to mitigate this field enhancement, resulting in no increase of the peak field ratio, as can be seen in Tab. I. For a QWR-type cavity, the E_p/B_p values are determined by the geometry of the IC tip and are therefore higher than the HWR values. Optimization of the QWR geometry focused on the IC tip cap, described by the ratio of vertical to horizontal size of the tip, and the capacitive gap, with the parameter space shown in Fig. 3. The optimization considered both the fundamental mode at 200 MHz and the next higher TEM mode at around 600 MHz and is shown in Fig. 4. A peak field ratio of 0.47 (MV/m)/mT was reached. Both cavities are equipped with four cleaning ports for accessing the RF volume with a wand to high pressure rinse (HPR) the cavity. All four ports are on the same flat plate.
From each port, the water jet from a nozzle covers about 1/3 of the cavity surface, providing sufficient overlap between rinse ports to cover the whole cavity. To prevent RF losses on non-niobium parts, the rinse ports are 60 mm long. The cavities are made from pure niobium to prevent contamination with foreign materials during heat treatments. High residual resistivity ratio (RRR) niobium is used for the main body, while the port flanges and the QWR bottom plate are made from reactor-grade niobium. Since these components see minimal if any RF fields, the reactor-grade niobium can be used without any loss of performance to reduce fabrication costs. Each cavity is a single body with all parts electron-beam welded together, without a removable bottom plate. This prevents the RF field from reaching any non-niobium surface, such as vacuum gaskets. Vacuum seals are realized with indium wire seals on the four ports. Further details of the cavity design can be found in [35]. Resonant frequencies, as well as numerically calculated peak field ratios E_p/B_p and geometry factors

$$G = \frac{\omega \mu_0 \int_V |H|^2 \, dV}{\int_S |H|^2 \, dS},$$

with ω the resonant frequency and H the magnetic field, are listed for the TEM modes of interest in Table I.
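As a quick numerical check of the coaxial field expressions above, the following sketch evaluates the ideal peak surface fields at the inner conductor and confirms that E_p/B_p reduces to the speed of light, about 0.3 (MV/m)/mT, independent of the drive current and the conductor radii. The current value is an arbitrary illustration.

```python
import numpy as np

mu0 = 4e-7 * np.pi             # vacuum permeability [H/m]
eps0 = 8.8541878128e-12        # vacuum permittivity [F/m]
eta = np.sqrt(mu0 / eps0)      # impedance of free space [ohm]

a, b = 0.030, 0.090            # inner/outer radii [m] (60 mm / 180 mm diameters)
I0 = 100.0                     # assumed peak RF current [A], illustrative only

VT = eta * np.log(b / a) / (2 * np.pi) * I0   # peak voltage from line impedance
Ep = VT / (a * np.log(b / a))                 # peak electric field at r = a [V/m]
Bp = mu0 * I0 / (2 * np.pi * a)               # peak magnetic field at r = a [T]

# E_p/B_p = eta/mu0 = c for an ideal coaxial line, ~0.300 (MV/m)/mT
print(f"E_p = {Ep/1e6:.2f} MV/m, B_p = {Bp*1e3:.2f} mT")
print(f"E_p/B_p = {Ep / 1e6 / (Bp * 1e3):.3f} (MV/m)/mT")
```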
B. Available Infrastructure
A crucial part of the novel cavity treatments is high temperature treatment in the range from 100 °C to 1000 °C for a specified amount of time in either an ultra-high vacuum or a low pressure environment. For this, the TRIUMF induction furnace is used. To study the effects of external magnetic fields and flux expulsion, a set of 3D Helmholtz coils was designed and built around the cavities and the existing cryostat. To control the cavities, the existing RF setup is used with some modifications.
In this section, this infrastructure is described.
Induction Furnace
For high temperature heat treatments such as degassing [13], nitrogen-doping [17], or nitrogen-infusion [16], the TRIUMF induction furnace is used. The design is based on the JLab induction furnace [36] and is dedicated to be used only for Nb SRF cavities. In this furnace, a niobium susceptor is heated via RF induction. The heat generated in the susceptor is transferred to the cavity via radiation. Conventional ultra-high vacuum (UHV) furnaces pose a potential contamination risk, requiring the use of caps on the cavity ports [13]. In the induction furnace, the RF surface of the cavity has line of sight to only Nb surfaces by design, reducing the risk of contamination. In addition, slotted Nb caps (shown in Fig. 5) placed on the ports of the cavity provide additional line-of-sight cover while allowing gas flow with a defined and reproducible leak between the UHV space and the RF volume of the cavity. An advantage of the caps lies in the reduced effort to clean and refresh the surface via BCP, compared to removing and etching the susceptor of the furnace. A residual gas analyser provides data during the degassing. A sample degassing spectrum during the 800 °C treatment of the HWR is shown in Fig. 6 along with the temperature profile.
Helmholtz Coils
Flux trapping and the expulsion of external magnetic fields can be detrimental to SRF cavity performance [37]. For example, the high-Q_0 performance of nitrogen-doped cavities is very sensitive to external magnetic fields, so much so that the specification for the LCLS-II cryomodules calls for no more than 5 mG of background field to preserve the high Q_0 of the cavities [38]. To control and manipulate the external magnetic field around the cavity in the TRIUMF cryostat, a set of 3 pairs of Helmholtz coils has been designed and built, shown in Fig. 7. These coils can be used to either cancel the external field or set it to a specific value in all three spatial orientations. The current in each pair of coils can be controlled independently to allow control of the direction of the field. One of the design criteria for the coils was a field uniformity greater than 95% over the cavity surface. To measure the magnetic field, three Bartington Mag F [39] cryogenic flux-gate probes are used. Magnetic field data as well as corresponding temperature data are collected via a LabVIEW [40] program. This setup allows studies of how the performance of the coaxial cavities changes under different external magnetic field configurations and cooldown characteristics; a sketch of the corresponding cancellation-current estimate is given after the etching description below.

BCP Setup

The cavities are etched with a buffered chemical polish (BCP) mixture of hydrofluoric acid, nitric acid, and phosphoric acid. Fig. 8 shows the design of the mechanical setup. Acid is supplied through a manifold and pumped to the bottom of the cavity via a diffuser, which prevents fast-flowing jets of acid. An overflow reservoir at the top of the cavity ensures that all of the RF surface is in contact with the acid. From the reservoir the acid flows back into the acid storage barrel, ensuring a constant flow of fresh acid through the cavity. The whole cavity is strapped into a water cooling jacket to regulate the cavity temperature. The acid temperature in the storage barrel is controlled with a heat exchanger, which draws from the same cooling water. To cool the water, a 7 kW chiller from Advantage Engineering [41] is used. Water temperatures between 10 °C and 12 °C are typically used. This results in etching rates of around 1 µm/min. The manifold is also used to pump out the acid and supply the cavity with rinse water once the etch is done.
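As referenced in the Helmholtz coil description above, a rough estimate of the current needed to cancel one ambient field component can be sketched with the textbook center-field expression for an ideal Helmholtz pair. The turn count, coil radius, and ambient field below are assumed for illustration and are not the actual TRIUMF coil parameters.

```python
import numpy as np

mu0 = 4e-7 * np.pi  # vacuum permeability [H/m]

def helmholtz_center_field(N, I, R):
    """On-axis center field [T] of an ideal Helmholtz pair (coil spacing = R)."""
    return (4.0 / 5.0) ** 1.5 * mu0 * N * I / R

# Assumed, illustrative parameters
N = 50               # turns per coil
R = 0.5              # coil radius [m]
B_ambient = 50e-6    # ambient field component to cancel [T] (~500 mG)

# Current required for full cancellation of this component
I_cancel = B_ambient * R / ((4.0 / 5.0) ** 1.5 * mu0 * N)
print(f"I = {I_cancel:.2f} A")                                # ~0.56 A here
print(f"check: {helmholtz_center_field(N, I_cancel, R)*1e6:.1f} uT")
```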
RF Setup
A self-excited loop (SEL) is utilized in the low-level RF (LLRF) control of cryostat cold tests at TRIUMF. The SEL frequency tracks the resonant frequency of the cavity. The frequency control is stable in either open or closed amplitude loop and free of ponderomotive instabilities. The SEL, in the absence of phase loop feedback, is ideal for cavity performance characterization, multipacting conditioning, and high-power pulse conditioning. The LLRF boards developed for the ISAC-II and ARIEL e-Linac projects [42] control at 140 MHz and allow for either pulsed or continuous wave (cw) operation. An intermediate frequency is employed to down-convert the cavity frequency to 140 MHz for input, and to up-convert the output signal to the cavity resonant frequency for driving the RF amplifier. One essential part of the frequency converter is the high-performance bandpass filter. Discrete filters with <-20 dBc rejection at ±30 MHz were chosen for 200 MHz and 400 MHz, while cavity filters were used for the higher-frequency modes. Two wide-band solid-state amplifiers from BEXT [43] (70 to 650 MHz, 500 W) and R&K [44] (650 to 2800 MHz, 350 W) are used to cover the frequency spectrum from 70 MHz to 2.8 GHz with up to 500 W of RF power. Two variable RF couplers are used for the two cavities: one based on an antenna coupler for the QWR, which transfers power via the electric field, and one with a loop antenna for the HWR, which couples to the magnetic field. Q_ext for both couplers varies by 5 orders of magnitude over a travel of 30 mm, while a maximum travel distance of 40 mm is available. The couplers provide a large range of Q_ext to enable operation at critical coupling for any RF mode. To accurately measure Q_0, the coupler is moved to critical coupling. From a decay time measurement at low RF field, coupled with power and frequency measurements, Q_0, Q_pu, and B_p are determined. This calibrates the setup for further measurements in continuous wave operation. Measurement uncertainty in both Q_0 and B_p is determined by the remaining mismatch between cavity and coupler Q-values during the calibration measurement, expressed as the deviation of the standing wave ratio (SWR) from 1. This typically results in relative uncertainties of around 5-10% for Q_0 and 2-5% for B_p. Other systematic sources of uncertainty, such as the instrument precision of power meters and frequency counters, are considerably smaller and are therefore not considered.
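A minimal sketch of the decay-time calibration described above, under the standard relations Q_L = ωτ (τ being the 1/e decay time of the stored energy) and Q_0 = (1 + β)Q_L, with β = 1 at critical coupling. The frequency and decay time are assumed, illustrative numbers, not measured values.

```python
import numpy as np

def q0_from_decay(f0, tau_E, beta=1.0):
    """Intrinsic quality factor from a decay-time measurement.

    f0    : resonant frequency [Hz]
    tau_E : 1/e decay time of the stored energy [s]
    beta  : input coupling factor (1.0 at critical coupling)
    """
    QL = 2.0 * np.pi * f0 * tau_E   # loaded quality factor Q_L = omega * tau
    return (1.0 + beta) * QL        # Q_0 = (1 + beta) * Q_L

# Illustrative: a 217 MHz mode with a 2 s stored-energy decay at critical coupling
print(f"Q0 = {q0_from_decay(217e6, 2.0):.2e}")  # ~5.5e9
```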
C. Data Preparation and Fitting
Initial analysis is done by converting the quality factor data to the average surface resistance, R*_s, through the well known approximation

$$R_s^* = \frac{G}{Q_0},$$

where G is the geometry factor defined as $G = \omega \mu_0 \int_V |H|^2 \, dV / \int_S |H|^2 \, dS$. Field distributions and values of G for all modes have been computed using COMSOL [45] and are given in Table I. Due to the non-uniform field distribution over the cavity surface and the field dependence of Q_0, the conversion R*_s = G/Q_0 does not reveal the true field dependence of the surface resistance, and a correction has to be applied. This correction is especially important in TEM mode cavities, as the fields are significantly less uniform over the cavity surface compared to elliptical cavities. A variety of methods [46][47][48][49] can be used to extract the true field dependence of the surface resistance. In the methodology [50] adopted here, the R*_s(B_p) data is first fitted with a power law series

$$R_s^*(B_p) = \sum_i r_{\alpha_i} B_p^{\alpha_i},$$

with the $r_{\alpha_i}$ as fit parameters. The $\alpha_i$ can be any non-negative real values, chosen to best fit the data. The coefficients $r_{\alpha_i}$ are then corrected using parameters $\beta_{\alpha_i}$, which are derived from the field distribution over the surface of the cavity, resulting in the surface resistance

$$R_s(B_p) = \sum_i \beta_{\alpha_i} r_{\alpha_i} B_p^{\alpha_i}.$$

For a fairly uniform field distribution over the surface, such as in elliptical cavities, the factors $\beta(\alpha_i)$ are close to unity, while for TEM mode cavities these factors are significantly larger than 1. Values of $\beta(\alpha_i)$ for the QWR and HWR modes have been calculated numerically and are given in Tab. II. Note that the HWR values are consistent for each mode, indicating that the field pattern is purely coaxial. The QWR values vary between modes due to the changing field pattern around the tip of the inner conductor. An example of the conversion from Q_0 to R*_s to R_s at three different temperatures is shown in Fig. 9 for the 217 MHz mode of the QWR. To extract the temperature dependence of R_s(B_p), Q_0 is repeatedly measured during the cooldown from 4.2 K to 2 K for a number of fixed peak field amplitudes B_p in 10 mT intervals up to a maximum field of B_max. Each ramp-up of the RF field up to B_max is considered as one set, measured roughly at the same temperature T, with differences of around 50 mK between the first and last measurement point in each set. All sets are converted into R*_s using Eq. (10) and fitted to Eq. (11) to extract the parameters $r_{\alpha_i}$. In the investigated case, a polynomial of second order was determined to be sufficient to describe the field dependence accurately in the range of the available data, with very small residuals well within the measurement uncertainty. The parameters $r_{\alpha_i}$ are then multiplied by the corresponding $\beta_i$ to determine R_s at the measured field and temperature. All sets are combined, sorted, and split by field amplitude to create new sets of R_s(T). These are fitted using the WinSuperFit [51] code v1.1 for each value of B_p individually to a parametrized version of Eq. (2) in the form

$$R_s(T) = \frac{a_0}{T} \exp\!\left(-\frac{a_1(T)\, T_c}{T}\right) + a_2,$$

with $a_0$, $a_1(T)$, and $a_2$ as free fit parameters and $T_c = 9.25\,\mathrm{K}$ as the critical temperature. $a_1(T)$ represents the superconducting gap Δ, including its temperature dependence. $R_{Td}$ and $R_{Ti}$ are the extracted temperature dependent and temperature independent components of the surface resistance, respectively. Fit uncertainties in $a_0$, $a_1$, and $a_2$ are propagated into $R_{Td}$ and $R_{Ti}$.
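The correction methodology can be condensed into a few lines. The sketch below converts invented Q_0(B_p) data into R*_s, fits the second-order power-law series used in the text, and rescales each coefficient with a placeholder β(α) factor; the geometry factor and β values are stand-ins, not the entries of Tab. I and Tab. II.

```python
import numpy as np

G = 37.5   # geometry factor [ohm], placeholder value

# Invented demonstration data
Bp = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])        # peak field [mT]
Q0 = np.array([2.1e9, 2.0e9, 1.9e9, 1.8e9, 1.7e9, 1.6e9])  # quality factors

Rs_star = G / Q0 * 1e9   # measured average surface resistance [nOhm]

# Fit R*_s(B_p) = r0 + r1*B_p + r2*B_p^2  (alpha_i = 0, 1, 2)
r2, r1, r0 = np.polyfit(Bp, Rs_star, 2)

# Placeholder correction factors beta(alpha); beta(0) = 1 exactly, since a
# field-independent resistance is unaffected by averaging over the surface
beta = {0: 1.0, 1: 1.3, 2: 1.6}
Rs = beta[0] * r0 + beta[1] * r1 * Bp + beta[2] * r2 * Bp**2  # corrected R_s(B_p)
print(np.round(Rs, 2))
```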
Shifts in ω during the cooldown from 4.2 K to 2 K, which are primarily caused by the pressure sensitivity and Lorentz-force detuning of the cavity, are small compared to the frequency and are therefore ignored in the analysis. A collection of these R_s(T) fits for the QWR 648 MHz mode at fields up to 60 mT is shown in Fig. 10. This is done for all measured modes to extract not only the field dependence but also the frequency dependence of these fit parameters and the derived values of R_Td at temperatures of interest. The quality of the fits is generally acceptable, with R² values above 0.99, producing fits well within the determined measurement uncertainty. At higher field amplitudes, a distinct step in R_s is observed at the λ-point of liquid helium at 2.17 K. This is assumed to be an effect caused by a change in cooling capability between normal and superfluid helium. A thorough analysis of this effect is in progress, but beyond the scope of this paper. Further data fitting is done in the Origin 2020 suite [52], which directly provides uncertainties for the fit parameters as well as R² values.
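An equivalent per-field R_s(T) fit can be sketched with scipy in place of the WinSuperFit code used by the authors. The model is the parametrized form reconstructed above, with the simplification that a_1 is treated as temperature independent here; all data are invented for demonstration.

```python
import numpy as np
from scipy.optimize import curve_fit

Tc = 9.25  # critical temperature of niobium [K]

def rs_model(T, a0, a1, a2):
    """Parametrized surface resistance R_Td(T) + R_Ti; a1 held constant here."""
    return a0 / T * np.exp(-a1 * Tc / T) + a2

# Invented demonstration data: R_s [nOhm] between 4.2 K and 2 K at one fixed B_p
T = np.linspace(4.2, 2.0, 12)
rng = np.random.default_rng(0)
Rs = rs_model(T, 2.0e4, 1.9, 5.0) + rng.normal(0.0, 0.2, T.size)

popt, pcov = curve_fit(rs_model, T, Rs, p0=[1e4, 1.8, 1.0])
a0, a1, a2 = popt
print(f"R_Ti = {a2:.1f} nOhm, gap parameter a1 = {a1:.2f}")
print(f"R_Td(2.0 K) = {a0 / 2.0 * np.exp(-a1 * Tc / 2.0):.2f} nOhm")
```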
A. Cavity Performance Characterization
The baseline surface treatment for both cavities presented in this paper includes a bulk surface removal of 120 µm via BCP, 800 °C degassing in the TRIUMF induction furnace (6 h for the QWR, 3.5 h for the HWR; the difference is due to a larger hydrogen content in the QWR) to remove hydrogen from the cavities and prevent Q disease, and a final 15 µm BCP surface etch to remove remaining contaminants. The cavity is then rinsed via high-pressure rinsing (HPR) with ultrapure water, dried, and equipped with its pick-up probe, variable coupler, and vacuum connections in a class 10 clean-room environment. Initial measurements with the QWR were done in a horizontal orientation (with the coaxial axis horizontal); subsequent QWR tests were done in a vertical orientation. All HWR tests were done in a vertical orientation. Once installed in the cryostat, the quality factor $Q_0$ as a function of peak surface field $B_p$ is characterized at 4.2 K and 2 K at critical coupling with the movable coupler. Combined QWR and HWR performance characterizations after the initial treatment are shown in Fig. 11 for 4 K and Fig. 12 for 2 K. The presented data are for the uncorrected surface resistance $R_s^*$. The data for the two QWR modes were collected during a single cooldown of the QWR, as were the data for the three HWR modes. No field emission was observed during the measurements, indicating a cavity surface free from particulate contamination. For the presented data, the background field was compensated as close to zero as possible (< 1 µT) using the Helmholtz coils.
In the 4 K measurements, the surface resistance increases both with increasing field amplitude and with increasing frequency. The overall field dependence follows a similar behaviour in all cavity modes. The field amplitude is limited by quench, except for the 1166 MHz mode, which is limited by available amplifier power. The QWR has a reduced quench field compared to the HWR due to the different cavity orientation in the initial tests: the horizontal test orientation of the QWR reduced the liquid-helium requirement in the dewar but produced early cw quenches at 4.2 K due to limited cooling/He-gas buildup in the inner conductor. The maximum quench field in the QWR was 100 mT ($E_p$ = 47 MV/m); the HWR reached 115 mT ($E_p$ = 35 MV/m).

FIG. 11. Measured, uncorrected surface resistance $R_s^*$ (∝ 1/$Q_0$) of the QWR and HWR at 4.2 K after degassing and 15 µm surface removal. The measurement was free of detectable field emission. The amplitude was limited by quench, except at 1166 MHz where the amplifier power limit was reached.

FIG. 12. Measured, uncorrected surface resistance $R_s^*$ of the QWR and HWR at 2.1 K after degassing and 15 µm surface removal. The measurement was free of detectable field emission.

At 2 K, the average surface resistance decreases significantly compared to the results at 4.2 K, from hundreds of nΩ to single-digit nΩ in the lowest-frequency mode. At medium fields up to 100 mT, the field dependence, especially of the HWR modes, is reduced significantly as well. In the QWR, features in the $R_s^*$ curve can be seen at around 60-75 mT, especially in the 648 MHz mode; these could indicate insufficiently removed surface contamination after heat treatment. Above 100 mT peak surface field, a strong increase in $R_s^*$ is measured without any detected field emission, which is characteristic of high-field Q slope (HFQS) [53]. The quench field was determined to be 150 mT ($E_p$ = 71 MV/m) for the QWR and 130 mT ($E_p$ = 40 MV/m) for the HWR. In the following sections, the results of the field-distribution-corrected $R_s(T)$ fits are presented in terms of the temperature-dependent resistance $R_{Td}$ at 4.2 K and 2.0 K and the temperature-independent resistance $R_{Ti}$, based on Eq. (13). These components of $R_s$ are analysed regarding their field and frequency dependence.
B. Temperature Dependent Surface Resistance
Field Dependence
Shown in Figs. 13 and 14 are the calculated values of the temperature-dependent component $R_{Td}$ as a function of peak surface field $B_p$ at 4.2 K and 2 K, respectively. At both temperatures, an accelerated increase of $R_{Td}$ is observed as the RF field increases. Two field dependencies are investigated to describe this increase: a simple exponential growth
$$R_{Td}(B_p) = R_{0,e}\, e^{\gamma_e B_p/B_0}, \qquad (16)$$
with $R_{0,e}$ as zero-field resistance, $\gamma_e$ as dimensionless growth-rate parameter, and $B_0$ as a normalizing parameter, which can be freely chosen; and a quadratic increase
$$R_{Td}(B_p) = R_{0,q}\left[1 + \gamma_q (B_p/B_0)^2\right], \qquad (17)$$
with $R_{0,q}$ as zero-field resistance and $\gamma_q$ as dimensionless slope parameter. Within the determined uncertainty of $R_{Td}$, both Eqs. (16) and (17) describe the data fairly well, as can be seen in Figs. 13 and 14, where dashed lines represent Eq. (16) and dash-dot lines Eq. (17). $R^2$ values for all fits are above 0.90, with most above 0.97. Residual differences between the data and the two fit functions are generally of similar magnitude, but slightly lower for the exponential fit. Both describe the data within the determined uncertainties; thus a definitive statement on the most appropriate field dependence cannot be made from the presented data alone. Supplemental measurements, for example with material-science probes such as the β-SRF beamline [54] at TRIUMF, would be needed to determine the physics behind the field dependence.

At 2 K, shown in Fig. 14, $R_{Td}$ is, as expected, significantly lower than at 4.2 K. This is expressed in a lower zero-field resistance $R_0$. The $\gamma_{e/q}$ parameters, on the other hand, do not show a clear trend between the two temperatures, indicating that the perceived reduced field dependence at lower temperature is a result of the overall reduced magnitude of the zero-field resistance $R_{0,e/q}$. The fit results for all modes at both temperatures are listed in Tab. III (fit parameters of Eqs. (16) and (17), with $B_0$ = 100 mT; $R_{0,e/q}$ in nΩ).
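Since both candidate laws are simple two-parameter models, the comparison in Figs. 13 and 14 is easy to emulate. The following sketch fits Eqs. (16) and (17) to synthetic $R_{Td}(B_p)$ data and reports $R^2$ for each; all numbers are illustrative, not the Tab. III values.

```python
import numpy as np
from scipy.optimize import curve_fit

B0 = 100.0  # normalizing field in mT, as used for Tab. III

def exp_model(Bp, R0e, ge):    # Eq. (16): exponential growth
    return R0e * np.exp(ge * Bp / B0)

def quad_model(Bp, R0q, gq):   # Eq. (17): quadratic increase
    return R0q * (1.0 + gq * (Bp / B0) ** 2)

def r_squared(y, yfit):
    return 1.0 - np.sum((y - yfit) ** 2) / np.sum((y - np.mean(y)) ** 2)

# Synthetic R_Td(B_p) data in nOhm with a few percent of scatter.
Bp = np.arange(10.0, 110.0, 10.0)
RTd = 40.0 * np.exp(1.2 * Bp / B0) * (1.0 + np.random.normal(0, 0.02, Bp.size))

for name, model in (("exponential", exp_model), ("quadratic", quad_model)):
    popt, _ = curve_fit(model, Bp, RTd, p0=[RTd[0], 1.0])
    r2 = r_squared(RTd, model(Bp, *popt))
    print(f"{name:11s}: R0 = {popt[0]:5.1f} nOhm, gamma = {popt[1]:.2f}, R^2 = {r2:.4f}")
```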
Frequency Dependence
The frequency dependence of $R_{Td}$ is analysed by fitting Eq. (2) in the simplified power-law form
$$R_{Td}(\omega) = A_0\,\omega^x, \qquad (18)$$
with $A_0$ and $x$ as free fit parameters. Eq. (18) appears in the log-log plots as a straight line with a slope equal to the exponent $x$. Based on the fit lines in Figs. 15 and 16, the exponent $x$ seems to have a field dependence; Fig. 17 shows $x$ as a function of RF field for both temperatures. At low field and 4.2 K, the exponent is determined as 1.9(1), which matches well with the predicted value of 1.87. At 2 K and low field, the exponent is lowered to 1.80(7), also matching the predicted value. While there seems to be a downward trend of $x$ with increasing RF amplitude, the fairly substantial uncertainty of the fits at higher fields and at 2 K makes it difficult to determine any trend with certainty. Examination of this trend is the subject of further studies.
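Extracting the exponent of Eq. (18) amounts to a straight-line fit in log-log space, as the short sketch below shows. The 217, 648, and 1166 MHz frequencies appear in the text; the other two mode frequencies here are placeholders, and the $R_{Td}$ values are synthetic.

```python
import numpy as np

# Mode frequencies in MHz; 217 and 648 (QWR) and 1166 (HWR) appear in the
# text, the remaining HWR mode frequencies here are placeholders.
freq = np.array([217.0, 389.0, 648.0, 778.0, 1166.0])

# Synthetic R_Td at fixed B_p and T, generated with a true exponent of 1.9.
RTd = 3.0 * (freq / 217.0) ** 1.9 * (1.0 + np.random.normal(0, 0.05, freq.size))

# Slope of log(R_Td) vs log(omega) is the exponent x of Eq. (18).
x, log_A0 = np.polyfit(np.log(freq), np.log(RTd), 1)
print(f"frequency exponent x = {x:.2f}")  # expect ~1.9 at 4.2 K and low field
```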
C. Temperature Independent Resistance

Figures 18 and 19 show the field and frequency dependence of $R_{Ti}$, respectively. The sharp increase of $R_{Ti}$ at 648 MHz at fields of 40 mT and higher may be attributed to insufficient removal of contaminants after the heat treatment. Otherwise, a fairly field-independent trend is observed for the lower-frequency modes, while a decrease in $R_{Ti}$ is observed for the high-frequency modes. Regarding the frequency dependence, an overall increasing trend is extracted from the cooldown data. Averaged over the measured RF field amplitudes, $R_{Ti}$ is ∝ $\omega^{0.6}$. This is close to the frequency dependence of normal-conducting losses in the anomalous limit of $\omega^{2/3}$.

FIG. 19. Combined QWR and HWR data for $R_{Ti}(\omega)$ reveal an increasing trend with increasing frequency for all field amplitudes, although there is a large scatter in the data.
D. QWR 120 °C Baking
A common cavity preparation is 120 °C baking for 48 h. In the presented case, the baking is done with resistive heaters strapped to the cavity while the cavity is installed in the cryostat. During the bake, both sides of the cavity wall, the RF space and the helium space surrounding the cavity, are under vacuum. The effect of this bake on $R_s^*$ of the QWR is shown in Figs. 20 and 21 for 4.2 K and 2 K, respectively. A clear decrease in both the amplitude and the field dependence of $R_s^*$ is seen at 4.2 K, while at 2 K a slight increase in $R_s^*$ is visible. It can be concluded that the 120 °C/48 h treatment reduces $R_{Td}$, which dominates at 4.2 K, while slightly increasing $R_{Ti}$; the reduction of $R_{Td}$ at 2 K is insignificant compared to the increase of $R_{Ti}$. At the time of writing, the HWR is in preparation for this surface treatment, and once it is completed, a full analysis including frequency dependence will be done.
E. Helmholtz Coil Demonstration
The capabilities of the Helmholtz coils were demonstrated with the QWR. The cavity was first cooled down in a fully compensated external field, with the current in all coils tuned such that the external field was <0.5 µT in all three spatial dimensions. After characterization, the cavity was warmed up above transition to around 20 K, the vertical coils were tuned to 10 µT at the geometric center of both the cavity and the coils, and the cavity was then cooled down again below transition. This thermal cycle was repeated with a field of 20 µT. The resulting surface resistance $R_s^*$ as a function of peak surface field is shown in Figs. 22 and 23 for the 217 and 648 MHz modes of the QWR at around 2.1 K. The slopes in the medium-field range up to 80 mT are identical between the different external fields but are offset from each other, suggesting a constant addition to the surface resistance caused by the external field. This amounts to a sensitivity $S$ of ∼0.5 nΩ/µT at 217 MHz and ∼1.5 nΩ/µT at 648 MHz. In [55], the magnetic field sensitivity $S$ is specified as
$$S \approx 3\,\mathrm{n\Omega/\mu T}\cdot\sqrt{f},$$
with $f$ as the resonant frequency in GHz. Using this, sensitivities of around 1.4 and 2.4 nΩ/µT would be expected at 217 and 648 MHz, respectively. The difference between the textbook and measured values suggests that either not all of the field is trapped in the cavity walls, or that the flux is trapped in locations that do not contribute strongly to the surface resistance. Those would be areas with low magnetic surface fields, like the tip of the inner conductor, which contribute significantly less to the losses than high-field areas. At 4.2 K, the additional surface resistance is too small to be significant. Following similar measurements with the HWR, a full analysis including frequency dependence will be done. Further studies are needed and planned to explore the role of trapped magnetic flux in TEM-mode cavities and specific techniques to mitigate reduced performance due to flux trapping.
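A quick numerical cross-check of the sensitivity argument: the sketch below evaluates the √f scaling quoted above (the 3 nΩ/µT prefactor is inferred here from the expected values of 1.4 and 2.4 nΩ/µT, so treat it as an assumption) and compares it with the measured offsets.

```python
import math

def expected_sensitivity(f_GHz, prefactor=3.0):
    """Trapped-flux sensitivity S ~ prefactor * sqrt(f) in nOhm/uT; the
    prefactor is inferred from the quoted expected 1.4/2.4 nOhm/uT values."""
    return prefactor * math.sqrt(f_GHz)

measured = {0.217: 0.5, 0.648: 1.5}  # measured S in nOhm/uT (Figs. 22-23)
for f, s_meas in measured.items():
    s_exp = expected_sensitivity(f)
    print(f"{f * 1e3:.0f} MHz: expected {s_exp:.1f} nOhm/uT, measured {s_meas:.1f}, "
          f"ratio {s_meas / s_exp:.2f}")
```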
V. SUMMARY
The TRIUMF multi-mode coaxial SRF cavities are an excellent tool to study TEM-mode cavities. In particular, the dependence of the surface resistance on temperature, surface treatment, RF frequency, external magnetic field, and RF field amplitude can be studied, opening an unprecedented parameter space to be explored. The infrastructure in place at TRIUMF allows for exploration of this large parameter space; RF amplitudes of up to 150 mT peak surface magnetic field have been reached. From the presented data, some early conclusions are drawn on the field and frequency dependence. Characterization of both the QWR and HWR after degassing at 800 °C and a flash BCP surface removal shows excellent performance at both 4.2 K and 2 K, on par with the performance of 1.3 GHz single-cell elliptical cavities with the same surface treatment; the cavities show a low surface resistance and a high quench field. Data collected during the cooldown at several RF field amplitudes and multiple resonant modes make it possible to separate $R_s$ into its components $R_{Td}(T, B_p, \omega)$ and $R_{Ti}(B_p, \omega)$ and to analyse the frequency and field dependence of these parameters. The data reveal that the temperature-dependent term $R_{Td}$ at low RF fields is ∝ $\omega^{1.9(1)}$ at 4.2 K and ∝ $\omega^{1.80(7)}$ at 2 K, matching the predicted dependence of $\omega^{1.87}$. The RF field dependence of $R_{Td}$ matches both a quadratic and an exponential growth model in the investigated range of field amplitudes; the change in slope between 4.2 K and 2 K is dominated by the reduction of the zero-field resistance $R_0$ rather than by the slope parameter $\gamma$. The temperature-independent component $R_{Ti}$ gives a less clear picture due to a large scatter in the data. An overall increasing trend with frequency, ∝ $\omega^{\sim 0.6}$, is found, which is consistent with anomalous losses; no clear conclusion can be drawn on its RF field dependence. Capabilities to bake the QWR at 120 °C have also been demonstrated, which resulted in a significantly higher $Q_0$ at 4.2 K and a small decrease in $Q_0$ at 2 K. This is attributed to a strong decrease in $R_{Td}$, which is the dominant term at 4.2 K, and a small increase in $R_{Ti}$, which is of comparable order to $R_{Td}$ at 2 K. The functionality of the Helmholtz coils has been demonstrated, and a first estimate of the external magnetic field sensitivity for a vertical field orientation yields, for the QWR, 0.5 nΩ/µT at 217 MHz and 1.5 nΩ/µT at 648 MHz.
Future Work
This paper shows the range of research areas covered by the coaxial multi-mode cavities at TRIUMF, with examples of early performance measurement results. Future work will include comprehensive studies of the effects of various surface treatments, as well as of changes in the background magnetic field, on the surface resistance. A further step in data preparation will include corrections between the measured helium bath temperature and the RF surface temperature of the cavity, as well as measurements at lower temperatures to fully observe the expected levelling off of the surface resistance at low temperature. Further planned infrastructure improvements are electro-polishing for surface treatments and a temperature-mapping system to further advance understanding of the details of the surface resistance.
"Physics"
] |
Long-time stability of the quantum hydrodynamic system on irrational tori
We consider the quantum hydrodynamic system on a $d$-dimensional irrational torus with $d=2,3$. We discuss the behaviour, over a "non-trivial" time interval, of the $H^s$-Sobolev norms of solutions. More precisely, we prove that, for generic irrational tori, the solutions evolving from $\varepsilon$-small initial conditions remain bounded in $H^s$ for a time scale of order $O(\varepsilon^{-1-1/(d-1)+})$, which is strictly longer than the time scale provided by the local theory. We exploit a Madelung transformation to rewrite the system as a nonlinear Schr\"odinger equation. We then implement a Birkhoff normal form procedure involving small divisors arising from three-wave interactions. The main difficulty is to control the loss of derivatives coming from the exchange of energy between high Fourier modes. This is due to the irrationality of the torus, which prevents "good separation" properties of the eigenvalues of the linearized operator at zero. The main steps of the proof are: (i) to prove precise lower bounds on the small divisors; (ii) to construct a modified energy by means of a suitable \emph{high/low} frequency analysis, which gives an \emph{a priori} estimate on the solutions.
INTRODUCTION
We consider the quantum hydrodynamic system (QHD) on an irrational torus of dimension 2 or 3, whose first equation reads
$$\partial_t \rho = -m\Delta\phi - \mathrm{div}(\rho\nabla\phi),$$
where $m > 0$, $\kappa > 0$, the function $g$ belongs to $C^\infty(\mathbb{R}_+;\mathbb{R})$ and $g(m) = 0$. The function $\rho(t,x)$ is such that $\rho(t,x) + m > 0$ and it has zero average in $x$. The space variable $x$ belongs to the irrational torus
$$\mathbb{T}^d_\nu := (\mathbb{R}/2\pi\nu_1\mathbb{Z}) \times \cdots \times (\mathbb{R}/2\pi\nu_d\mathbb{Z}), \quad d = 2,3, \qquad (1.1)$$
with $\nu = (\nu_1,\ldots,\nu_d) \in [1,2]^d$. We assume the strong ellipticity condition
$$g'(m) > 0. \qquad (1.2)$$
We shall consider an initial condition $(\rho_0,\phi_0)$ having small size $\varepsilon \ll 1$ in the standard Sobolev space $H^s(\mathbb{T}^d_\nu)$ with $s \gg 1$. Since the equation has a quadratic nonlinear term, the local existence theory (which may be obtained in the spirit of [7,13]) implies that the solution of (QHD) remains of size $\varepsilon$ for times of magnitude $O(\varepsilon^{-1})$. The aim of this paper is to prove that, for generic irrational tori, the solution remains of size $\varepsilon$ for longer times. (Felice Iandoli has been supported by ERC grant ANADEL 757996.)
For $\phi \in H^s(\mathbb{T}^d_\nu)$ we define the norm used below. Our main result is the following.
Phase space and notation. In this paper we work with functions belonging to Sobolev spaces of functions with zero average. Despite this fact, we prefer to work with a pair of variables $(\rho,\phi) \in H^s_0(\mathbb{T}^d_\nu) \times H^s(\mathbb{T}^d_\nu)$, but in the end we control only the norm that is in fact the relevant quantity for (QHD). To lighten the notation we shall write $\|\cdot\|_{H^s_\nu}$ to denote $\|\cdot\|_{H^s(\mathbb{T}^d_\nu)}$. In the following we will use the notation $A \lesssim B$ to denote $A \le CB$, where $C$ is a positive constant depending on parameters fixed once and for all, for instance $d$ and $s$. We will write $\lesssim_q$ when the constant $C$ depends on some other parameter $q$.
Ideas of the proof. The general (EK) is a system of quasi-linear equations. The case (QHD), i.e. the system (EK) with the particular choice (1.6), reduces, for small solutions, to a semi-linear equation, more precisely to a nonlinear Schrödinger equation. This is a consequence of the fact that the Madelung transform (introduced for the first time in the seminal work by Madelung [18]) is well defined for small solutions. In other words, one can introduce the new variable $\psi := \sqrt{m+\rho}\,e^{i\phi/\hbar}$ (see Section 2 for details), where $\hbar = 2\sqrt{\kappa}$, obtaining a nonlinear Schrödinger equation. Since $g(m) = 0$, this equation has an equilibrium point at $\psi = \sqrt{m}$. The study of the stability of small solutions of (QHD) is equivalent to the study of the stability of the variable $z = \psi - \sqrt{m}$. The equation for the variable $z$ involves a smooth function $f$ having a zero of order 2 at $z = 0$, i.e. $|f(z)| \lesssim |z|^2$, and the Fourier multiplier $|D|^2_\nu$ with symbol $|j|^2_\nu := \sum_{i=1}^d j_i^2/\nu_i^2$.

The aim is to use a Birkhoff normal form/modified energy technique in order to reduce the size of the nonlinearity $f(z)$. To do that, it is convenient to perform some preliminary reductions. First of all we want to eliminate the addendum $-i\,mg'(m)\,z$; in other words, we want to diagonalize the matrix in (1.12). To achieve the diagonalization of this matrix it is necessary to rewrite the equation in a system of coordinates which does not involve the zero mode. We perform this reduction in Section 2.2: we use the gauge invariance of the equation as well as the $L^2$-norm preservation to eliminate the dynamics of the zero mode. This idea was introduced for the first time in [11]. After the diagonalization of the matrix in (1.12) we end up with a diagonal, quadratic, semi-linear equation with dispersion law $\omega(j)$, where $j$ is a vector in $\mathbb{Z}^d \setminus \{0\}$.

At this point we are ready to define a suitable modified energy. Our primary aim is to control, for the longest time possible, the derivative $\frac{d}{dt}\|\tilde z(t)\|^2_{H^s}$ (see (1.13)), where $\tilde z$ is the variable of the diagonalized system. Using the equation, such a quantity may be rewritten as a sum of trilinear expressions in $\tilde z$. We perturb the Sobolev energy by expressions homogeneous of degree at least 3, chosen so that their time derivatives cancel out the main contribution (i.e. the one coming from the cubic terms) in (1.13), up to remainders of higher order. In doing so, small divisors appear, i.e. denominators of the form $\omega(j_1) \pm \omega(j_2) \pm \omega(j_3)$. It is fundamental that the perturbations we define are bounded by some power of $\|z\|_{H^s}$, with the same $s$ as in (1.13); otherwise we obtain an estimate with a loss of derivatives. Therefore we need to impose some lower bounds on the small divisors. Here the irrationality of the torus $\nu$ comes into play. We prove indeed that for almost any $\nu \in [1,2]^d$ there exists $\gamma > 0$ such that, if $\pm j_1 \pm j_2 \pm j_3 = 0$, the small divisors admit a lower bound involving negative powers of $\mu_1$ and $\mu_3$; here we denote by $M(d)$ a positive constant depending on the dimension $d$ and by $\mu_i$ the $i$-th largest among $|j_1|$, $|j_2|$, $|j_3|$. It is nowadays well known, see for instance [3,5], that a power of $\mu_3$ is not dangerous if we work in $H^s$ with $s$ big enough. Unfortunately we also have a power of the highest frequency $\mu_1$, which represents, in principle, a loss of derivatives. However, this loss of derivatives may be transformed into a shortening of the lifespan through a partition of the frequencies, as done for instance in [10,17,12,6].
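To get a concrete feel for these three-wave small divisors, one can tabulate them numerically. The sketch below is purely illustrative: it assumes a Bogoliubov-type dispersion $\omega(j) = \sqrt{\kappa|j|_\nu^4 + m g'(m)|j|_\nu^2}$, consistent with the linearization described above but not necessarily with the paper's exact normalization, and the constants and torus parameters are arbitrary.

```python
import itertools
import random
import numpy as np

def omega(j, nu, kappa=1.0, mgp=1.0):
    """Assumed Bogoliubov-type dispersion on the irrational torus:
    omega(j) = sqrt(kappa*|j|_nu^4 + m*g'(m)*|j|_nu^2),
    with |j|_nu^2 = sum_i j_i^2 / nu_i^2."""
    n2 = sum((ji / nui) ** 2 for ji, nui in zip(j, nu))
    return np.sqrt(kappa * n2**2 + mgp * n2)

def smallest_divisor(N, nu):
    """Smallest |omega(j1) - omega(j2) - omega(j3)| over momentum-conserving
    triples j1 = j2 + j3 with nonzero modes bounded by N, in dimension d = 2."""
    modes = [j for j in itertools.product(range(-N, N + 1), repeat=2)
             if j != (0, 0)]
    best = np.inf
    for j2, j3 in itertools.product(modes, repeat=2):
        j1 = (j2[0] + j3[0], j2[1] + j3[1])
        if j1 == (0, 0) or max(abs(j1[0]), abs(j1[1])) > N:
            continue
        best = min(best, abs(omega(j1, nu) - omega(j2, nu) - omega(j3, nu)))
    return best

random.seed(0)
nu = [1.0 + random.random(), 1.0 + random.random()]  # a "generic" torus
for N in (3, 5, 7):
    print(f"N = {N}: min three-wave divisor ~ {smallest_divisor(N, nu):.3e}")
```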
Some comments. As already mentioned, an estimate on the small divisors involving only powers of $\mu_3$ is not dangerous. We may obtain such an estimate when the equation is considered on the square torus $\mathbb{T}^d$, using the mass $m$ as a parameter. In this case, indeed, one can obtain better estimates by following the proof in [11]; this is a consequence of the fact that the set of differences of eigenvalues is discrete. This is not the case for irrational tori with fixed mass, where the set of eigenvalues is not discrete. Having estimates involving only $\mu_3$, one could actually prove almost-global stability. More precisely one can prove, for instance, that there exists a set $\mathcal{N} \subset [1,+\infty)$ of zero Lebesgue measure such that, if $m$ is in $[1,+\infty) \setminus \mathcal{N}$, then for any $N \ge 1$, if the initial condition is sufficiently regular (w.r.t. $N$) and of size $\varepsilon$ sufficiently small (w.r.t. $N$), the solution stays of size $\varepsilon$ for a time of order $\varepsilon^{-N}$. The proof follows the lines of classical papers such as [3,5,4], using the Hamiltonian structure of the equation. More precisely, the system (QHD) can be written in the Hamiltonian form $\partial_t\rho = \partial_\phi H$, $\partial_t\phi = -\partial_\rho H$, where $\partial$ denotes the $L^2$-gradient and $H(\rho,\phi)$ is the Hamiltonian function. We do not know whether the solutions of (QHD) are globally defined. There are positive answers in the case that the equation is posed on the Euclidean space $\mathbb{R}^d$ with $d \ge 3$, see for instance [2], where the dispersive character of the equation is taken into account; for recent developments in this direction see [1] and references therein. It is worth mentioning also the scattering result for the Gross-Pitaevskii equation [16]. Since we are considering the equation on a compact manifold, dispersive estimates are not available. It would be interesting to obtain a long-time stability result also for solutions of the general system (EK). In this case the equation may not be recast as a semi-linear Schrödinger equation; being a quasi-linear system, we expect that a para-differential approach, in the spirit of [17,12], should be applied. However, in this case the quasi-linear term is quadratic, hence big, while in [17,12] the quasi-linear term is smaller. Therefore new ideas have to be introduced in order to improve the local existence theorem. By using para-compositions (in the spirit of [9,14,15]), in the case $d = 1$, i.e. on the torus $\mathbb{T}^1$, it is possible to obtain stronger results. This is the subject of a future work of one of us with other collaborators [8].
2. FROM (QHD) TO NONLINEAR SCHRÖDINGER

2.1. Madelung transform. For $\lambda \in \mathbb{R}_+$ we define the change of variable (Madelung transform) (M); its inverse map has an explicit form. In the following lemma we provide a well-posedness result for the Madelung transform.
The following holds.
(i) Let $s > 1$.

Proof. The bound (2.3) follows from (M) and classical estimates on composition operators on Sobolev spaces (see for instance [19]). Let us check (2.4). By the first identity in (2.1), for any $\sigma \in \mathbb{T}$ the claimed bound holds; therefore, by the arbitrariness of $\sigma$, and then by the second identity in (2.1), by (2.2), by composition estimates on Sobolev spaces and by the smallness condition, the conclusion follows.

We now rewrite equation (QHD) in the variables $(\psi, \bar\psi)$.
for some $\varepsilon > 0$ small enough. Then the function $\psi$ defined in (M) solves (2.8). We remark that the assumptions of Lemma 2.2 can be verified in the same spirit as the local well-posedness results in [13] and [7].
Notice that (2.8) is a Hamiltonian equation.

2.2. Elimination of the zero mode. In the following it will be convenient to rescale the space variable, $x \in \mathbb{T}^d_\nu \mapsto \nu\cdot x$ with $x \in \mathbb{T}^d$, and to work with functions belonging to the Sobolev space in (1.10) with $\nu = (1,\ldots,1)$. Using the notation $\psi = (2\pi)^{-d/2}\sum_{j\in\mathbb{Z}^d}\psi_j e^{ij\cdot x}$, we introduce the set of variables (2.16), which are polar coordinates for $j = 0$ and a phase translation for $j \ne 0$. Rewriting (2.14) in Fourier coordinates, where $H$ is defined in (2.14), we obtain the equations of motion for these variables. We define also the zero-mean variable $z$ in (2.18). By (2.16) and (2.18) one has
$$\psi = (\alpha + z)e^{i\theta}, \qquad (2.19)$$
and it is easy to prove that a suitable quantity is a constant of motion for (2.8). Using (2.16), one can completely recover the real variable $\alpha$ in terms of $\{z_j\}_{j\in\mathbb{Z}^d\setminus\{0\}}$ as in (2.20). Note also that the $(\rho,\phi)$ variables in (2.1) do not depend on the angular variable $\theta$ defined above. This implies that system (QHD) is completely described by the complex variable $z$. On the other hand, taking the real part of the first equation in (2.21), we obtain (2.22). We summarize the above discussion in the following lemma.
There is $C = C(s) > 1$ such that, if $C(s)\delta \le 1$, then the function $z$ in (2.18) satisfies the stated bounds near the equilibrium $z = 0$. Note also that the natural phase space for (2.28) is the complex Sobolev space $H^s_0(\mathbb{T}^d;\mathbb{C})$, $s \in \mathbb{R}$, of complex Sobolev functions with zero mean.

2.3. Taylor expansion of the Hamiltonian. In order to study the stability of $z = 0$ for (2.28) it is useful to expand $K_m$ at $z = 0$. We obtain an expansion in which, for any $r = 3,\ldots,N-1$, the term of degree $r$ is a homogeneous multilinear Hamiltonian function of degree $r$. The vector field of the Hamiltonian in (2.29) has the form (2.36) (recall (1.14)); therefore system (2.32) becomes (2.37).
SMALL DIVISORS
As explained in the introduction, we shall study the long-time behaviour of solutions of (2.37) by means of a Birkhoff normal form approach. Therefore we have to provide suitable non-resonance conditions among the linear frequencies of oscillation $\omega(j)$ in (2.33). This is the aim of this section.
Throughout this section we assume, without loss of generality, $|j_1|_a \ge |j_2|_a \ge |j_3|_a > 0$ for any $j_i$ in $\mathbb{Z}^d$; moreover, in order to lighten the notation, we adopt the convention $\omega_i := \omega(j_i)$ for $i = 1,2,3$. The main result is the following.
The proof of this proposition is divided into several steps and is postponed to the end of the section. The main ingredient is the following standard proposition, which follows the lines of [3,6]. Here we give weak lower bounds on the small divisors; these estimates will be improved later.
Proposition 3.2. Consider $I$ and $J$ two bounded intervals of $\mathbb{R}_+\setminus\{0\}$; $r \ge 2$ and $j_1,\ldots,j_r \in \mathbb{Z}^d$ such that $j_i \ne \pm j_k$ if $i \ne k$; $n_1,\ldots,n_r \in \mathbb{Z}\setminus\{0\}$; and $h : J^{d-1} \to \mathbb{R}$ measurable. Then for any $\gamma > 0$ the corresponding measure estimate holds.

Remark 3.3. We shall apply this general proposition only in the case $r = 3$; however, we prefer to state it in general for possible future applications.
Proof of Prop. 3.2.
For simplicity, in the proof we assume $|j_1|_{(1,b)} > \cdots > |j_r|_{(1,b)}$. Since by assumption we have $j_i \ne j_k$ for any $i \ne k$, one can easily prove the claim below for any $\eta > 0$ (which later will be chosen as a function of $\gamma$). We define $P_\eta = \cup_{i \ne k} P^{i,k}_\eta$, and we have to estimate from above the measure of the last set. We define an auxiliary function; computing its derivatives of any order $\ell \ge 1$, we can write the resulting system of equations.
We denote by $V$ the Vandermonde matrix above. $V$ is invertible since its determinant is bounded away from zero; in the penultimate passage we have used that $b \notin P_\eta$, together with a lower bound of the form $\gtrsim_{r,n} \eta^r$ against the product $\langle j_1\rangle^2\cdots\langle j_r\rangle^2$.
At this point we are ready to use Lemma 7 in Appendix A of [20], and we obtain the desired estimate. Summarizing, we may optimize by choosing $\eta = \gamma^{1/(2r)}(\langle j_1\rangle\cdots\langle j_r\rangle)^{1/r}$, and we obtain the thesis.
As a consequence of the preceding proposition we have the following.
Corollary 3.4. Let $r \ge 1$, consider $j_1,\ldots,j_r \in \mathbb{Z}^d$ such that $j_k \ne j_i$ if $i \ne k$, and $n_1,\ldots,n_r \in \mathbb{Z}\setminus\{0\}$. For any $\gamma > 0$ the corresponding measure bound holds, where the relevant change of coordinates has an inverse whose determinant is bounded by a constant depending only on $d$. Therefore the result follows by applying Prop. 3.2 and the change of coordinates $(a_1,\ldots,a_d) \to (\tfrac{1}{a_1}, b)$.

Owing to the corollary above, in the following we may reduce to the study of the small divisors in the case when two frequencies are much larger than the third, see (3.4). If there exists $i \in \{1,\ldots,d\}$ such that condition (3.5) holds, then for any $\tilde\gamma > 0$ we have the stated estimate.

Proof. We give a lower bound for the derivative of the function $\tilde\Lambda$ with respect to the parameter $a_i$. It follows that $a_i \mapsto \tilde\Lambda$ is a diffeomorphism and, applying this change of variables, we get the thesis.
Proposition 3.6. There exists a set $A_3 \subset (1,4)^d$ of full Lebesgue measure such that for any $a \in A_3$ there exists $\gamma > 0$ such that, for any $\sigma \in \{\pm 1\}$ and any $j_1, j_2, j_3 \in \mathbb{Z}^d$ satisfying $|j_1|_a > |j_2|_a \ge |j_3|_a$, the momentum condition $\sigma j_3 + j_2 - j_1 = 0$, and the stated smallness condition, the corresponding lower bound on the small divisor holds.
We are now in a position to prove Prop. 3.1.
Proof of Prop. 3.1. The case $\sigma_1\sigma_2 = 1$ is trivial; we give the proof for $\sigma_1\sigma_2 = -1$. From Prop. 3.6 we know that there exist a set $A_3$ of full Lebesgue measure and $\gamma > 0$ such that the statement holds if $|j_3| \le J(j_1,\gamma)$. Let us now assume $|j_3| > J(j_1,\gamma)$, and define the set $B_\gamma$ below, where $\tilde\gamma$ will be chosen as a function of $\gamma$, and $M(d)$ is big enough w.r.t. $d$. By (3.6) and Corollary 3.4 with $r = 3$, we have the following.
Let us set
If the exponent $M(d)$ (and hence $p$) is chosen large enough, we get the summability of the r.h.s. of the inequality above. We now choose $\tilde\gamma^{1/6}\gamma^{-p} = \gamma^m$; we eventually obtain $\mu(B_\gamma) \lesssim \gamma^m$. In the remaining cases one can reason similarly. The desired set of full Lebesgue measure is therefore obtained by choosing $A := A_3 \cap (\cup_{\gamma>0} B^c_\gamma)$.
ENERGY ESTIMATES
In this section we construct a modified energy for the Hamiltonian K m in (2.36). We first need some convenient notation.
We define $G$ as below.

Proof. Fix $c_0 > 0$. By (4.5) and Lemma 4.5, we deduce the claimed bounds.

Proof of Lemma 4.8. We study how the equivalent energy norm $E_s(w)$ defined in (4.20) evolves along the flow of (4.17). Notice that $\partial_t E_s(w) = -\{E_s, H\}(w)$.
Proof of Theorem 1.1. In the same spirit as [13] and [7], we have that for any initial condition $(\rho_0, \phi_0)$ as in (1.4) there exists a solution of (QHD) satisfying the local bound for some $T > 0$, possibly small.
"Mathematics"
] |
Thermal Conductivity of Saturated Liquid Toluene by Use of Anodized Tantalum Hot Wires at High Temperatures
Absolute measurements of the thermal conductivity of a distilled and dried sample of toluene near saturation are reported. The transient hot-wire technique with an anodized tantalum hot wire was used. The thermal conductivities were measured at temperatures from 300 K to 550 K at different applied power levels to assess the uncertainty with which it is possible to measure liquid thermal conductivity over wide temperature ranges with an anodized tantalum wire. The wire resistance versus temperature was monitored throughout the measurements to study the stability of the wire calibration. The relative expanded uncertainty of the resulting data at the level of 2 standard deviations (coverage factor k = 2) is 0.5 % up to 480 K and 1.5 % between 480 K and 550 K, and is limited by drift in the wire calibration at temperatures above 450 K. Significant thermal-radiation effects are observed at the highest temperatures. The radiation-corrected results agree well with data from transient hot-wire measurements with bare platinum hot wires as well as with data derived from thermal diffusivities obtained using light-scattering techniques.
Introduction
Saturated liquid toluene has been widely studied and is recommended by the International Union of Pure and Applied Chemistry (IUPAC) as a reference standard for thermal conductivity from 189 K to 360 K [1]. Efforts to extend this temperature range to 553 K have recently been reported by Ramires et al. [2]. The barriers to obtaining reliable high-temperature reference standards for thermal conductivity are a lack of data from multiple experimental techniques and increased uncertainty due to the effects of thermal radiation. Both transient and steady-state measurement techniques for the determination of thermal conductivity are susceptible to errors due to thermal-radiative heat transfer at high temperatures, since temperature gradients are imposed during the measurement and fluids such as toluene absorb and emit the associated thermal radiation [3][4][5]. Even though thermal-radiation errors may be present in data from transient and steady-state techniques, agreement between radiation-corrected data from these two different techniques would provide evidence of the accuracy of the data. Unfortunately, the relative uncertainty (at the level of 2 standard deviations) of the available steady-state thermal conductivity data that have been corrected for radiation exceeds the 1 % that is desired for the development of reference standards [2]. As a result, only data from a single transient hot-wire instrument [4] were designated as primary data during the development of the previous reference standard by Ramires et al. [2].
Recently, the thermal diffusivity of saturated liquid toluene has been measured from 293 K to 523 K using light scattering [6]. These light-scattering data have a relative uncertainty of 2.5 %, and because there are no significant thermal gradients in the sample, thermal-radiation errors are not present. The thermal conductivity $\lambda$ can be calculated from the thermal diffusivity $a$ by using
$$\lambda = a\,\rho\,C_p, \qquad (1)$$
where $\rho$ is the fluid mass density and $C_p$ is the isobaric specific heat. An accurate equation of state is available for toluene [7] to calculate $\rho$ and $C_p$, but uncertainties in $C_p$ must be considered during this process. The light-scattering data were not available during the development of the previous reference standard of Ramires et al. [2]. The present measurements are made using the transient hot-wire technique as used previously [4,5]. The previous measurements were made with bare 12.7 µm diameter platinum wires and were corrected for thermal radiation. The present measurements were made with anodized 25 µm diameter tantalum wires. Anodized tantalum wires have the advantage of being electrically insulated from the fluid under study; this anodized coating allows measurements of electrically conducting fluids such as water. The highest temperature at which toluene had previously been studied with an anodized-tantalum hot-wire instrument is 370 K [8]. The present measurements extend the temperature range over which anodized tantalum hot wires have been used to measure thermal conductivity from 370 K to 550 K. The use of anodized-tantalum hot wires also allows some evaluation of the reliability of the thermal-radiation correction for absorbing media, since the anodized tantalum wires have a different emissivity (that of tantalum pentoxide) than the previous platinum wires, and the diameter of the tantalum wire is twice that of the previous platinum wires.
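As a quick illustration of how the input uncertainties enter Eq. (1), the sketch below combines them in quadrature. This simple treatment is an assumption; the 3 %-4.2 % estimate given later in the paper for light-scattering-derived conductivities may use a slightly different accounting.

```python
import math

def thermal_conductivity(a, rho, cp):
    """Eq. (1): lambda = a * rho * C_p."""
    return a * rho * cp

def combined_rel_uncertainty(u_a, u_rho, u_cp):
    """Relative uncertainty of lambda, assuming uncorrelated inputs."""
    return math.sqrt(u_a**2 + u_rho**2 + u_cp**2)

# Quoted relative uncertainties: 2.5 % in a; 0.2 % in rho; 0.2 % to 3 % in C_p.
for u_cp in (0.002, 0.03):
    u_lam = combined_rel_uncertainty(0.025, 0.002, u_cp)
    print(f"u(C_p) = {u_cp:.1%}  ->  u(lambda) = {u_lam:.1%}")
```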
Experimental
The transient hot-wire technique is widely recognized as an accurate method to measure the thermal conductivity and thermal diffusivity of fluids. The present measurements are absolute and require only knowledge of the geometry of the hot wires, the applied power, the resistance-versus-temperature characteristics of the wires, and time. The ideal working equation is based on the heat transfer from an infinitely long line source into an infinite medium. The temperature rise of the fluid at the surface of the wire, where $r = r_0$, is given [9] by
$$\Delta T_\mathrm{ideal}(r_0,t) = \frac{q}{4\pi\lambda}\,\ln\!\left(\frac{4at}{r_0^2\,C}\right), \qquad (2)$$
where $q$ is the power divided by the length of the wire, $t$ is the elapsed time, and $C = e^\gamma = 1.781\ldots$ is the exponential of Euler's constant. The ideal temperature rise of the wire is linear with respect to the logarithm of elapsed time, as shown in Eq. (2). The thermal conductivity is obtained from the slope, and the thermal diffusivity from the intercept, using linear regression [10]. The temperature associated with a given thermal conductivity data point is given by
$$T = T_0 + \tfrac{1}{2}\left(\Delta T_\mathrm{initial} + \Delta T_\mathrm{final}\right), \qquad (3)$$
where $\Delta T_\mathrm{initial}$ and $\Delta T_\mathrm{final}$ are the temperature rises at the start time and the end time of the linear region selected for the regression. The thermal diffusivity is associated with the initial cell temperature $T_0$, which is obtained from a calibrated reference-standard platinum resistance thermometer (PRT). All temperatures in this work are reported on the 1990 International Temperature Scale (ITS-90), and all uncertainties are expanded uncertainties at the level of 2 standard deviations (coverage factor $k$ = 2, 95 % level of confidence).

The experimental cell is designed to approximate this ideal model as closely as possible. There are, however, a number of corrections that account for deviations between the ideal line-source solution and the actual experimental heat transfer. The ideal temperature rise is obtained by adding a number of corrections $\delta T_i$ to the experimental temperature rise according to
$$\Delta T_\mathrm{ideal} = \Delta T_\mathrm{exp} + \sum_i \delta T_i. \qquad (4)$$
These temperature-rise corrections are described in detail for our case of a coated wire in Refs. [3,11,12]. Our implementation of the corrections follows these references with the following exceptions: the compression-work correction $\delta T_3$ and the radial-convection correction $\delta T_4$ are set to zero following the recommendations of Assael et al. [13], and the thermal-radiation correction $\delta T_5$ is described for absorbing fluids by Nieto de Castro et al. [5].

The data-acquisition system used in this work has been described previously [4]; it consists of a microcomputer with a 16 bit analog-to-digital converter, three digital voltmeters, a digital power supply, and a Wheatstone bridge that contains two hot wires in opposing legs of the bridge. The two hot wires have different lengths, and the Wheatstone bridge, which is initially balanced, subtracts the resistance change of the short hot wire from the resistance change of the long hot wire. Thus, if both wires are immersed in the same fluid, the bridge response behaves as that of a finite length (the difference between the wire lengths) of an infinitely long wire, and the end effects arising from axial conduction are eliminated. Heating voltage is applied to the wires through the Wheatstone bridge, and the bridge imbalance is measured in 250 equal time increments. The total time for the measurements may be varied from 1 s to 40 s, allowing one to verify that the data are obtained prior to the onset of convection.
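Working from Eq. (2), the regression step is easy to prototype. The sketch below generates an ideal temperature-rise record with toluene-like property values (chosen for illustration only) and recovers the thermal conductivity from the slope and the thermal diffusivity from the intercept of the rise versus ln t.

```python
import numpy as np

GAMMA_EXP = 1.781  # C = exp(Euler's constant)

def ideal_rise(t, q, lam, a, r0):
    """Ideal line-source temperature rise, Eq. (2)."""
    return q / (4 * np.pi * lam) * np.log(4 * a * t / (r0**2 * GAMMA_EXP))

# Synthetic experiment with toluene-like values (illustrative only).
q, lam_true, a_true, r0 = 0.35, 0.13, 8.0e-8, 12.7e-6  # W/m, W/(m K), m^2/s, m
t = np.linspace(0.05, 1.0, 250)
dT = ideal_rise(t, q, lam_true, a_true, r0) + np.random.normal(0, 2e-3, t.size)

# Linear regression of dT against ln(t): slope -> lambda, intercept -> a.
slope, intercept = np.polyfit(np.log(t), dT, 1)
lam = q / (4 * np.pi * slope)
a = r0**2 * GAMMA_EXP / 4 * np.exp(intercept / slope)
print(f"lambda = {lam:.4f} W/(m K), a = {a:.2e} m^2/s")
```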
The computer checks the bridge balance prior to each experiment and records the temperature of the reference thermometer and the resistance of each hot wire for calibration purposes. The cell temperature is measured with an uncertainty of 1 mK by use of a current source and a standard resistor in series with the reference-standard PRT. The cell pressure is measured with a quartz pressure transducer from 0 MPa to 70 MPa with an uncertainty of 0.007 MPa.
Hot-Wire Cell
The hot-wire cell used in these measurements was designed for measurements on corrosive solutions at temperatures from 300 K to 550 K and pressures up to 70 MPa. The design of the pressure vessel and temperature-control system is the same as for our previous high-temperature cells [4], so it is only briefly described here. The pressure vessel and the internal components of the hot-wire cell are constructed of nickel alloy UNS N10276, which is particularly resistant to halides and halide salts. The cell contains long and short hot wires located within long and short cavities with diameters of 9.5 mm. The total volume of the cell and supporting pressure system is relatively small, about 50 cm³, to facilitate measurements on scarce or hazardous materials. There are separate voltage and current leads to each end of each hot wire to eliminate the effects of lead resistance during the calibration process. All electrical connections in the cell are spot welded. In the assembly of the tantalum hot wires (25.4 µm diameter), polytetrafluoroethylene-insulated nickel/chromium-alloy wire (254 µm diameter) and polyimide-insulated platinum wire (76 µm diameter) were used to make electrical connections inside the pressure system. The connections were welded, then all the bare leads were coated three times with a polyimide/polytetrafluoroethylene resin, and the assembly was baked at 550 K for several hours. The baking both cured the polymer resin and annealed the tantalum hot wires from their initial hard-drawn condition. The tantalum hot wires were then anodized in aqueous citric acid at up to 50 V to produce a film of tantalum pentoxide with an estimated thickness of 70 nm. Although the temperature-control and pressure systems are rated to 750 K, the upper operating temperature of the present tantalum hot-wire cell assembly is limited to 550 K because of the melting point of the polytetrafluoroethylene used to electrically insulate the lead wires.
Sample Purification
The toluene sample used in these measurements was prepared from spectroscopic-grade toluene. The toluene was further purified by distillation over calcium hydride. Calcium hydride reacts irreversibly with any water in the sample to form calcium hydroxide precipitate, which remains in the distillation flask. The principal impurity is benzene, which has a lower boiling temperature, so the initial condensate is discarded. The sample used for measurement is then collected when the inlet to the condenser is stable at the boiling temperature of toluene. The purified sample was analyzed by gas chromatography and found to have less than 50 ng/g of benzene and less than 100 ng/g of water. The sample preparation procedure was the same as in our previous measurements [4] using bare platinum hot wires.
Wire Calibration
Platinum is preferred for use in resistance thermometry when it is properly annealed and is free from stress because its resistance is very stable for prolonged periods of time. Tantalum has not been used widely for resistance thermometry, so the stability of its resistance must be carefully examined and characterized. The situation is further complicated since the present tantalum wires have been anodized to form a protective layer of tantalum pentoxide. The electrical resistivity of tantalum from 273 K to 1273 K has been shown to increase in proportion to the concentration of oxygen in the sample [14]. Since the concentration of oxygen is not uniform through the wire's cross section, there is a possibility that oxygen from the anodized layer might diffuse into the bulk tantalum wire and alter its resistance. Any oxygen-diffusion process would be enhanced at higher temperatures. The present system is ideally suited for characterizing the anodized tantalum wires since the wire resistance is measured for each wire, along with the temperature from the reference PRT and the pressure from the quartz pressure transducer, during the balance cycle for each measurement. The instrument also maintains a record of the time and date of each measurement to allow examination of the stability of the wire calibration. Since the present measurements were made along the saturation line of toluene, at pressures less than 3.3 MPa, there is not enough pressure range to allow characterization of the pressure dependence. The electrical resistivity of tantalum is known to decrease in proportion to the pressure on the sample [15].
To eliminate uncertainty due to the resistance of the lead wires, there are separate current-supply and voltage-sensing leads to each end of each hot wire. The resistance is measured during the balance cycle by measuring the voltage drop across standard resistors in each leg of the Wheatstone bridge containing the hot wires, together with the voltage drop across each hot wire. The measured value is the average of five readings with a forward current of about 0.3 mA and five readings with the current reversed. This process minimizes uncertainty due to thermoelectric voltages at weld junctions and electrical connectors. The uncertainty of the resistance measurement is about 0.003 Ω. The measurements were made first with temperature increasing from 300 K to 550 K. Then, measurements were made at 550 K in the morning and evening for a period of 4 days. Finally, measurements were made with temperature decreasing from 550 K to 300 K so that hysteresis effects could be examined. The electrical resistances of the long (188.08 mm) and short (49.07 mm) hot wires during this temperature cycle are shown in Fig. 1. It is apparent in the figure that the resistances of both the long and short hot wires increased with the elapsed time at 550 K. Although the resistance of the long wire increased more than that of the short wire, the increase was not in proportion to the wire lengths, as would be expected if the process were uniform over the entire length of each wire.
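The offset-cancelling reversal scheme described above can be written compactly: differencing forward- and reversed-current readings removes any constant thermoelectric voltage from both the hot-wire and standard-resistor channels. The sketch below is a minimal illustration; all variable names and numbers are placeholders.

```python
import numpy as np

def wire_resistance(vw_fwd, vs_fwd, vw_rev, vs_rev, r_std):
    """Four-terminal wire resistance from bridge voltages.

    vw_*: voltage across the hot wire, vs_*: voltage across the standard
    resistor, each an array of repeat readings; differencing forward and
    reversed readings cancels constant thermoelectric offset voltages."""
    dv_wire = np.mean(vw_fwd) - np.mean(vw_rev)
    dv_std = np.mean(vs_fwd) - np.mean(vs_rev)
    return dv_wire / dv_std * r_std

# Five readings per polarity, ~0.3 mA sense current, 10 uV thermal offset.
rng = np.random.default_rng(1)
I, R_true, R_std, emf = 0.3e-3, 50.0, 100.0, 10e-6
vw_f = I * R_true + emf + rng.normal(0, 1e-6, 5)
vw_r = -I * R_true + emf + rng.normal(0, 1e-6, 5)
vs_f = I * R_std + rng.normal(0, 1e-6, 5)
vs_r = -I * R_std + rng.normal(0, 1e-6, 5)
print(f"R_wire = {wire_resistance(vw_f, vs_f, vw_r, vs_r, R_std):.4f} Ohm")
```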
Since both the long and short hot wires come from the same sample of tantalum wire, the resistance of each wire should scale with the length of each wire; this is a requirement for use of these wires in the transient hot-wire experiment. To ensure uniform heat generation over the length of each hot wire, the resistance divided by the length of the long and short hot wires, respectively, must be very nearly equal. It is desirable to characterize both the uniformity of the power generation in the two hot wires and the adequacy of the compensation for the end effects obtained by using two wires in different arms of the measuring bridge. This can be quantified by the ratio $\rho_\mathrm{lw}/\rho_\mathrm{sw}$ between the resistance per unit length of the long wire, $\rho_\mathrm{lw} = R_\mathrm{lw}/L_\mathrm{lw}$, and that of the short wire, $\rho_\mathrm{sw} = R_\mathrm{sw}/L_\mathrm{sw}$, where $R$ is the wire resistance, $L$ is the wire length, and the subscripts lw and sw designate the long wire and short wire, respectively. Following a previous recommendation by Kestin and Wakeham [16], $\rho_\mathrm{lw}/\rho_\mathrm{sw}$ must not deviate from unity by more than 2 % in order to assume correct end-effect compensation with nearly identical wires. With the definition
$$\Delta = \left(\frac{\rho_\mathrm{lw}}{\rho_\mathrm{sw}} - 1\right)\times 100\,\%,$$
Figure 2 shows this percentage difference as a function of elapsed time. It can be seen that this deviation was quite stable and nearly zero during the experiments at increasing temperature, from 300 K to the start of the 550 K isotherm, but began to drift as the resistance of both wires increased; there was a decrease of 2 % by the end of the 550 K isotherm. This behavior is compatible with previous calculations by Kestin and Wakeham [16]. However, the increase in resistance during the 4 days at 550 K is quite dramatic and unexpected based on our previous experience with pure platinum hot wires [4]. With pure platinum, the normal behavior is a slight decrease in resistance due to annealing and stress release in the hot wires if the wires have not been at this temperature recently. Figure 2 clearly shows that the increase in resistance was not consistent with wire length; thus, resistance increases at the welds must be considered a possibility. The only weld locations that can contribute to the measured resistance during a four-terminal measurement occur where the ends of the 25 µm diameter tantalum wires are joined to the 254 µm diameter nickel-chromium alloy lead wires. Each hot wire has two welds that could potentially contribute to the measured resistance, but it is not possible to separate contributions due to the welds from those due to changes in the wires. The use of all-tantalum lead wires should be examined in the future to see whether this resistance increase occurs within the welds or within the wires themselves.
Based on Fig. 1 and the requirement that $\Delta \le 2\,\%$, it was decided that only the data at increasing temperatures, including the first few hours at 550 K, should be considered for the wire-resistance calibration. The wire resistance was fitted to a quadratic polynomial in temperature of the form $R(T) = A_1 + A_2 T + A_3 T^2$ over four regions: 300 K to 450 K, 300 K to 480 K, 300 K to 515 K, and 300 K to 550 K. The results of these fits are given in Table 1. The resistance of a pure metal such as tantalum is known to be well approximated by such a quadratic expression [17], and $A_3$ should be small and negative over this temperature range. Table 1 shows that the sign of the quadratic coefficient changes if the resistance data above 480 K are included in the fits; this is a good indication that the increase in resistance with time becomes significant at temperatures above 480 K.
Given that the increase in resistance with time is significant at 515 K, the best possible calibration must be determined, and the influence of uncertainty in the calibration on the thermal conductivity results must be assessed. The rise in temperature at any elapsed time during a measurement is obtained from the change in resistance of the hot wires using the derivative of the calibration curve for resistance versus temperature,
$$\Delta T(t) = \frac{\Delta R(t)}{dR/dT}.$$
Thus, the uncertainty of the wire's temperature rise, and of the measured thermal conductivity, is directly related to the uncertainty of the derivative of wire resistance with respect to temperature. This resistance derivative, divided by length, is plotted as a function of temperature in Fig. 3, which shows the effect of the change in sign of $A_3$ when resistance data at temperatures above 480 K are included in the fit. Based on the data below 480 K, where there is confidence that the calibration is stable, the resistance derivative decreases by 1 % for a temperature increase of 180 K. Since this temperature dependence is quite small, and the region of extrapolation (480 K to 550 K) is less than half this temperature range, it is anticipated that extrapolation errors should be less than 1 % if only data below 480 K are used for the calibration. If resistance data from temperatures above 480 K are used in the calibration, the thermal conductivity results at 550 K will be about 4 % too high and the results at 300 K will be 2 % too low. Based on these considerations, the wire calibration from 300 K to 480 K is used in the subsequent data analysis, with the estimate that the relative uncertainty of the measured thermal conductivity increases above 480 K by an additive amount of 1.0 % due to extrapolation of the wire calibration.
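The sensitivity of the extracted dR/dT to the fitted temperature range is easy to demonstrate. The sketch below fits the quadratic calibration over the four regions used in Table 1, on synthetic data with an artificial drift-like upturn above 480 K, and shows how the fitted $A_3$ and the extrapolated derivative shift when high-temperature points are included; all numbers are placeholders, not the Table 1 coefficients.

```python
import numpy as np

def fit_calibration(T, R):
    """Quadratic calibration R(T) = A1 + A2*T + A3*T^2 by least squares."""
    A3, A2, A1 = np.polyfit(T, R, 2)
    return A1, A2, A3

# Synthetic resistance data with a drift-like upturn above 480 K.
T = np.arange(300.0, 551.0, 5.0)
R = 20.0 + 0.04 * T - 5e-6 * T**2 + np.where(T > 480, 2e-4 * (T - 480) ** 2, 0.0)

for tmax in (450, 480, 515, 550):
    m = T <= tmax
    A1, A2, A3 = fit_calibration(T[m], R[m])
    dRdT_550 = A2 + 2 * A3 * 550.0   # derivative extrapolated to 550 K
    print(f"fit 300-{tmax} K: A3 = {A3:+.2e}, dR/dT(550 K) = {dRdT_550:.5f} Ohm/K")
```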
Results
The thermal conductivity results for the purified toluene sample are given in Table 2 and are shown in Fig. 4. The results in Table 2 have been corrected for thermal radiation, as were our previous results using bare platinum hot wires [4], and with the same empirical optical parameters that were found for the fluid using the previous platinum hot wires [5]. Since the power divided by length was not equal for the long and short wires after the isotherm at 550 K, as shown in Fig. 2, the data from the decreasing temperature portion of the temperature cycle are not reported. There are 184 thermal conductivity data points at temperatures from 300 K to 550 K.
Repeatability at High Temperatures
The isotherm at 550 K is quite interesting since it includes several replications, over a wide range of power levels, over the course of four days. During this time, the resistance of the long hot wire increased by 5.6 Ω and that of the short hot wire by 1.9 Ω. In addition, since this increase in resistance was not proportional to the wire lengths, the power generation of the two wires differed by up to 1.6 %. Despite these complications, there seems to be little additional scatter in the thermal conductivity results measured during this 4 day period. In Fig. 5, deviations of the results for the 550 K isotherm from the reference standard of Ramires et al. [2] are plotted as a function of applied power level at 550 K. The mean deviation of the data (solid line) is 1.45 % above the earlier reference standard, and the scatter (dashed lines) is ±0.6 % at the 95 % level of confidence. No trend is noted with respect to time throughout this four-day period. Convection in the sample occurs at shorter times for experiments with higher powers and correspondingly larger temperature rises. Consistency between thermal conductivity results at different power levels is considered a good indication that there was no significant convection during the measurements and that compensation for the wire end effects was achieved. Data at power levels above 0.5 W·m⁻¹ appear to show some influence of the onset of convection, since this isotherm is relatively close to the critical point of toluene at 593.95 K. The contribution of convection to the apparent thermal conductivity appears to be less than 0.5 % even at the highest power levels.
Uncertainty Assessment
The contribution of thermal radiation to measurements of the thermal conductivity of fluids such as toluene has been a topic of debate for many years. An empirical technique has been described for correcting for thermal radiation in transient hot-wire measurements [5], and empirical optical parameters have been reported for toluene [5] at these same conditions based on measurements with platinum hot wires 12.7 µm in diameter. If this radiation correction is valid, then these same optical parameters should apply to the present case of a tantalum wire 25 µm in diameter. The radiation correction assumes that thermal emission from the wire is small compared to emission from the expanding thermal front in the fluid, so there should be little effect from changing the emissivity of the wire material. The contribution of thermal radiation is insignificant at 300 K, increases as $T^3$, and is estimated to be 3 % at 550 K. The experimental thermal conductivity data with and without this radiative correction are shown in Fig. 4.
Deviations between the reference standard of Ramires et al. [2] and the present data are shown in Fig. 6. High-temperature data sets from light scattering [6] as well as from transient hot-wire experiments using bare platinum [4,18] and anodized tantalum [8] hot wires are also compared in this figure. The data of Kraft et al. [6] were obtained from thermal diffusivities measured by light scattering, using the equation of state of Goodwin [7]. These light-scattering data are free from uncertainty due to thermal radiation but are believed to have a relative uncertainty of 2.5 % in thermal diffusivity. The uncertainty in the thermal conductivity obtained from the expression $\lambda = a\rho C_p$ is estimated to vary between 3 % and 4.2 %, since the uncertainties in $\rho$ and $C_p$ from the equation of state [7] are 0.2 % for $\rho$ and from 0.2 % to 3 % for $C_p$. The thermal conductivity obtained from light scattering is offset from the direct thermal conductivity measurements by about 2 % to 3 %, but agreement is still within the combined uncertainty of the data sets. It is unlikely that the offset is due to thermal radiation, since it is nearly constant while thermal-radiative errors should increase as $T^3$.
No systematic difference between transient hot-wire measurements using bare platinum and anodized tantalum hot wires is apparent in Fig. 6. The present tantalum hot-wire data are lower than the other transient data [4,8,18] at 300 K by about 2 %. This is partly due to larger temperature fluctuations of the furnace containing the hot-wire cell near ambient temperature; the remainder of this difference is likely due to the drift in the calibration of the tantalum hot wires at elevated temperatures, as shown in Fig. 3. The uncertainty of the present tantalum measurements is evaluated as 1 % at 300 K, 0.5 % from 369 K to 480 K, and 1.5 % from 480 K to 550 K. The transient hot-wire data using platinum or anodized tantalum wires agree within their combined uncertainties over the entire temperature range, with the exception of the data of Yamada et al. [18] at temperatures above 400 K. The purity of the toluene sample used in that case [18] was stated to be 99.7 % by the supplier; the purity of the toluene sample used in the light-scattering study [6] was stated to be 99.9 % by the supplier. The transient hot-wire data of Perkins et al. [4] and Ramires et al. [8] were obtained on purified samples of toluene as described in the present work.
Conclusions
The present measurements demonstrate that anodized-tantalum hot wires can be used to make absolute measurements of the thermal conductivity of a liquid from 300 K to 550 K; previous studies with anodized-tantalum hot wires had been limited to temperatures below 370 K [8]. The present transient hot-wire measurements using anodized-tantalum hot wires have a larger uncertainty at the temperature extremes than our previous measurements using bare platinum hot wires [4] over the same temperature range. This is primarily due to drift in the resistance calibration of the anodized tantalum hot wires at high temperatures; the use of tantalum lead wires may reduce or eliminate this problem in the future and allow accurate measurements of the fluid thermal diffusivity. It was also noted during the experiments that convection occurs earlier, and at lower power levels, as the wire diameter increases. Thus, experiments must be done with lower levels of applied power (smaller temperature rises) when the larger tantalum hot wires are used.
The principal advantage of anodized-tantalum hot wires is for measurements of electrically conducting fluids, although electrical conduction is not an issue for toluene. The anodized-tantalum hot wires have a geometry and emissivity different from those of platinum, so the present measurements support the validity of the thermal-radiation correction for absorbing fluids [5]. There is no significant temperature trend in the deviations between the present radiation-corrected thermal-conductivity data and the thermal-conductivity values derived from light-scattering measurements of thermal diffusivity. This again supports the validity of the radiation correction [5], since the contribution of thermal radiation is expected to increase with T³. | 6,402.2 | 2000-03-01T00:00:00.000 | [
"Physics"
] |
Ultra stable all-fiber telecom-band entangled photon-pair source for turnkey quantum communication applications
We demonstrate a novel alignment-free all-fiber source for generating telecom-band polarization-entangled photon pairs. Polarization entanglement is created by injecting two relatively delayed, orthogonally polarized pump pulses into a piece of dispersion-shifted fiber, where each one independently engages in four-photon scattering, and then removing any distinguishability between the correlated photon pairs produced by each pulse at the fiber output. Our scheme uses a Michelson-interferometer configuration with Faraday mirrors to achieve practically desirable features such as ultra-stable performance and turnkey operation. Up to 91.7% two-photon-interference visibility is observed without subtracting the accidental coincidences that arise from background photons while operating the source at room temperature. ©2005 Optical Society of America

OCIS codes: (270.0270) Quantum optics; (190.4370) Nonlinear optics, fibers; (999.9999) Quantum communications; (060.0060) Fiber optics and optical communications.

References and links
1. N. Gisin, G. Ribordy, W. Tittel, and H. Zbinden, "Quantum cryptography," Rev. Mod. Phys. 74, 145–195 (2001).
2. D. Bouwmeester, J.-W. Pan, K. Mattle, M. Eibl, H. Weinfurter, and A. Zeilinger, "Experimental quantum teleportation," Nature 390, 575–578 (1997).
3. P. G. Kwiat, K. Mattle, H. Weinfurter, A. Zeilinger, A. V. Sergienko, and Y. H. Shih, "New high-intensity source of polarization-entangled photon pairs," Phys. Rev. Lett. 75, 4337–4340 (1995).
4. M. Fiorentino, P. L. Voss, J. E. Sharping, and P. Kumar, "All-fiber photon-pair source for quantum communication," Photon. Technol. Lett. 14, 983–985 (2002).
5. X. Li, P. L. Voss, J. E. Sharping, and P. Kumar, "Optical-fiber source of polarization-entangled photons in the 1550 nm telecom band," Phys. Rev. Lett. 94, 053601 (2005).
6. H. Takesue and K. Inoue, "Generation of polarization-entangled photon pairs and violation of Bell's inequality using spontaneous four-wave mixing in a fiber loop," Phys. Rev. A 70, 031802 (2004).
7. X. Li, J. Chen, P. L. Voss, J. Sharping, and P. Kumar, "All-fiber photon-pair source for quantum communications: improved generation of correlated photons," Opt. Express 12, 3737–3744 (2004).
8. J. G. Rarity, J. Fulconis, J. Duligall, W. J. Wadsworth, and P. S. J. Russell, "Photonic crystal fiber source of correlated photon pairs," Opt. Express 13, 534–544 (2005).
9. J. E. Sharping, J. Chen, X. Li, and P. Kumar, "Quantum-correlated twin photons from microstructure fiber," Opt. Express 12, 3086–3094 (2004).
10. J. Fan, A. Dogariu, and L. J. Wang, "Efficient generation of correlated photon pairs in a microstructure fiber," Opt. Lett. 30, 1530–1532 (2005).
11. P. Kumar, M. Fiorentino, P. L. Voss, and J. E. Sharping, "All-fiber photon-pair source for quantum communications," U.S. Patent No. 6,897,434 (2005).
12. X. Li, C. Liang, K. F. Lee, J. Chen, P. L. Voss, and P. Kumar, "An integrable optical-fiber source of polarization-entangled photon pairs in the telecom band," Phys. Rev. A 73, 052301 (2006).
13. Y. Takushima, S. Yamashita, K. Kikuchi, and K. Hotate, "Single-frequency and polarization-stable oscillation of Fabry-Perot fiber laser using a nonpolarization-maintaining fiber and an intracavity etalon," Photon. Technol. Lett. 8, 1468–1470 (1996).
14. A. Muller, T. Herzog, B. Huttner, W. Tittel, H. Zbinden, and N. Gisin, "Plug and play systems for quantum cryptography," Appl. Phys. Lett. 70, 793–795 (1997).
15. X. Li, P. L. Voss, J. Chen, J. E. Sharping, and P. Kumar, "Storage and long-distance distribution of telecommunications-band polarization entanglement generated in an optical fiber," Opt. Lett. 30, 1201–1203 (2005).
16. C. Liang, K. F. Lee, J. Chen, and P. Kumar, "Distribution of fiber-generated polarization entangled photon-pairs over 100 km of standard fiber in OC-192 WDM environment," postdeadline paper, Optical Fiber Communications Conference (OFC 2006), paper PDP35.
17. H. Takesue, "Long-distance distribution of time-bin entanglement generated in a cooled fiber," http://xxx.lanl.gov/abs/quant-ph/0512163.
18. K. F. Lee, J. Chen, C. Liang, X. Li, P. L. Voss, and P. Kumar, "Generation of high-purity telecom-band entangled-photon pairs in dispersion-shifted fiber," Opt. Lett. 31, 1905–1907 (2006).
Introduction
Quantum entanglement is a key resource in most quantum-communication protocols such as quantum cryptography [1] and quantum teleportation [2]. Therefore, an efficient, stable, and easy-to-operate source of entanglement is highly desirable for practical quantum communications. Spontaneous parametric down-conversion in χ(2) crystals [3] is widely used for generating entangled photon pairs in laboratory environments. However, its free-space and non-telecom-band nature greatly limits its practical applicability in long-distance quantum communications, although these limitations might be overcome by dedicated coupling and frequency-conversion techniques at the cost of source efficiency.
Recently, a new method of creating entangled photon pairs has been proposed and demonstrated [4][5][6][7][8][9][10][11][12], which utilizes four-photon scattering (FPS) in fused-silica fiber through its χ(3) (Kerr) nonlinearity. The fiber nature of this source brings advantages such as no coupling loss to the transmission fiber, spatial purity of the guided modes, and telecom-band operation. Thus this type of source is particularly suitable for long-distance quantum-communication applications and has attracted a lot of research interest. Here is a brief description of the method, with an example of creating polarization-entangled photon pairs. Two orthogonally polarized pump pulses are prepared with a relative delay and a fixed relative phase ϕ_p. They are injected into a piece of optical fiber where each pump pulse independently engages in FPS. In this process, two pump photons at frequency ω_p scatter through the Kerr nonlinearity to create time-energy entangled daughter signal-idler photon pairs at frequencies ω_s and ω_i, respectively, such that 2ω_p = ω_s + ω_i. The pump wavelength is chosen close to the zero-dispersion wavelength of the fiber in order to achieve phase matching for the best FPS efficiency. The generated signal-idler photon pair inherits the polarization of the parent pump due to the isotropic nature of the χ(3) nonlinearity in silica fiber. Before leaving the source fiber, the FPS processes originating from each pump pulse with definite polarization are coherently combined while removing information such as the relative time delay. This makes the photon pairs created by the two FPS processes indistinguishable from each other, and polarization entanglement is established. As in other entangled-photon sources, the pump power can be tuned to a level such that with high probability only one photon pair is emitted from the two FPS processes.
There are two major schemes in the literature for realizing such fiber-based polarization entanglement. In Ref. 5, a free-space Michelson interferometer is employed for preparing a 30 ps relative delay between the two orthogonally polarized 5-ps-duration pump pulses. This time delay is erased by passing the signal-idler photon pairs, which are generated in a 300-m-long fiber Sagnac loop made of dispersion-shifted fiber (DSF), through 20 m of polarization-maintaining fiber. To maintain a fixed relative phase between the pump pulses, sophisticated phase tracking and locking is needed. In addition, fiber-birefringence-induced polarization fluctuations need to be carefully compensated before sending the photon pairs through the polarization-maintaining fiber for accurately removing the time delay. Hence this scheme is too complicated for practical usage. A counter-propagating scheme is proposed in Ref. 11 and demonstrated in Refs. 6 and 12, wherein a piece of DSF is connected to two output ports of a polarization beam splitter (PBS). Linearly polarized pump pulses at 45° are injected through one input port of the PBS so that the DSF is simultaneously pumped with the decomposed orthogonally polarized pump pulses in both clockwise and counter-clockwise directions. Photon pairs generated from the two counter-propagating FPS processes are recombined upon arrival at the PBS, thus establishing polarization entanglement in the photon pairs that emerge from the other port of the PBS. No phase tracking is needed for proper operation. However, a polarization-control paddle is still needed to compensate the environmentally driven polarization fluctuations that occur inside the DSF on time scales of hours. Thus this scheme is still not suitable for real-world practice, where reliable turnkey operation is highly desired.
Here we propose and demonstrate a new alignment-free fiber-based scheme for creating telecom-band polarization-entangled photon pairs. Long-term polarization and/or phase fluctuations are automatically compensated. The scheme utilizes fiber-pigtailed optical components which have been fully integrated for turnkey operation. Our scheme is shown in Fig. 1(a). The system's principal axes are defined by a four-port fiber PBS. An optical pump-pulse train is prepared 45° linearly polarized relative to the principal axes and is injected into a fiber circulator, which has the basic property of transmitting incident light between its three ports in a successive fashion: light incident on port one is passed to port two, and light that returns back into port two is then passed to port three. To avoid unfavorable polarization rotation of the pump pulses, a polarization-maintaining fiber circulator can be used for bringing the pump pulses to the input port of the PBS. At the output ports of the PBS, each 45° linearly polarized pump pulse is decomposed into horizontally and vertically polarized components P_H and P_V, respectively, with a relative phase ϕ_p equal to zero. P_V is reflected and guided to a Faraday mirror (FM1) at the start of its itinerary: PBS→FM1→PBS→FM3→PBS→FM2→PBS. A Faraday mirror is a non-reciprocal optical element which produces a reflection with a state of polarization (SOP) that is orthogonal to the input SOP. Thus, uncontrolled polarization rotations occurring along the incident path are automatically compensated on the return path. Such Faraday mirrors have previously been used in classical as well as quantum applications; see, for example, Ref. 13 for a classical application and Ref. 14 for a quantum application. In our case, P_V becomes horizontally polarized upon returning to the PBS after reflection from FM1. It then passes through the PBS and enters a 400-m piece of DSF where the FPS process takes place. Similarly, P_H starts its journey PBS→FM2→PBS→FM3→PBS→FM1→PBS by going to FM2. It then arrives back at the PBS vertically polarized and reflects to enter the DSF. A small length difference between the paths to FM1 and FM2 is intentionally introduced so that a time delay, much greater than the pump-pulse duration, exists between the arrival times of P_H and P_V at the DSF.
Details of the Scheme and Experimental Configuration
In the DSF, P_H and P_V independently engage in FPS processes and create signal-idler photon pairs. Note that P_H and P_V experience almost the same insertion loss before entering the DSF; thus the two FPS processes have almost the same efficiency. As mentioned above, the daughter signal-idler photon pairs are co-polarized with the parent pump pulses. With the use of FM3, birefringence-induced polarization fluctuations of the light in the DSF between the PBS and FM3 are also automatically compensated. In addition, the polarization states of the two pump pulses and the daughter photon pairs are rotated by 90° when they arrive back at the PBS. Hence, P_V and the photon pairs generated from it are directed towards FM2, while P_H and its daughter photon pairs travel to FM1. It is easy to see that the information on the birth time of the newly born photon pairs is automatically erased when P_H and P_V, along with their daughter photon pairs, are recombined at the PBS traveling backward through the input port. Upon emerging from the PBS, the circulator then directs them away from the pump towards the receivers. With a properly set pump power, the signal-idler photon pairs are emitted in one of the maximally entangled states: |H⟩_s|H⟩_i + |V⟩_s|V⟩_i. It is also important to point out that the pump pulses travel through the DSF twice in this scheme because of FM3. Both P_H and P_V follow a symmetric path through the system. It is this symmetry and the use of Faraday mirrors, which automatically compensate any long-term polarization and/or phase fluctuations, that enable an ultra-stable and alignment-free source of polarization entanglement. After passing through the circulator, the signal-idler photon pairs are separated from the residual pump by fiber interference filters and are then ready for use in various quantum communication applications [15][16][17].
We have built a proof-of-principle prototype to demonstrate this scheme. It uses a 400-m piece of DSF with a zero-dispersion wavelength near 1556 nm. A 50-MHz train of ∼5-ps-duration pump pulses is obtained by sending ~100 fs optical pulses emitted by a fiber laser through cascaded 200-GHz-spacing ITU-grid fiber interference filters centered at 1555.95 nm. A fiber polarizer is used to set the incident polarization at 45° relative to the principal axes. Signal/idler photon pairs are collected by 200-GHz-spacing filters centered at 1550.92 nm and 1561.01 nm, respectively. We evaluate the quality of this photon-pair source by examining two-photon interference (TPI) with joint photon-counting measurements on the generated signal-idler photon pairs. Figure 1(b) depicts details of the measurement apparatus. The signal and idler photons are spatially separated and then directed to different polarization analyzers. Each analyzer set contains a HWP-QWP-HWP polarization rotator, a PBS, and a single-photon detector (SPD). After calibrating the system with the first HWP and the QWP, we measure TPI by recording coincidence counts as a function of the relative angle θ between the orientations of the 2nd HWP in each analyzer. The created polarization entanglement gives rise to a TPI fringe in the coincidence counting rate of the form cos²θ, while the number of photon counts in the individual analyzers remains constant across different angle settings. To detect the telecom-band photons, we operate Epitaxx EPM239BA InGaAs/InP avalanche photodiodes in gated Geiger mode (electrically reverse biased slightly above the breakdown voltage) as single-photon detectors (SPDs) (see Ref. 4). Although the entangled-photon source is operated at 50 MHz, the effective sampling rate of the whole system is limited by the SPDs, which have quantum efficiencies of about 20% and dark-count probabilities of ~10⁻³ counts/gate, to a gate-pulse rate of 785 kHz.
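For a feel of the accidental-coincidence floor implied by these detector figures, the following back-of-the-envelope sketch combines the quoted 20 % quantum efficiency, ~10⁻³/gate dark-count probability, and 785 kHz gate rate with an assumed pairs-per-gate value; all numbers are illustrative, not reproduced results.

```python
# Back-of-the-envelope sketch of the accidental-coincidence floor: with
# uncorrelated per-gate click probabilities p_s and p_i in the two arms,
# accidentals accumulate at p_s * p_i per gate. The 20 % efficiency,
# ~1e-3/gate dark counts, and 785 kHz gate rate come from the text; the
# pairs-per-gate value and the equal-arm assumption are illustrative.

gate_rate = 785e3        # gates per second (from the text)
t_int = 60.0             # integration time per point, s (as in Fig. 2(c))
p_pair = 0.012           # assumed pairs per gate at the lowest pump power
eta = 0.20               # detector quantum efficiency (from the text)
p_dark = 1e-3            # dark-count probability per gate (from the text)

p_click = p_pair * eta + p_dark        # per-gate click probability, one arm
accidentals = p_click**2 * gate_rate * t_int
print(f"expected accidentals in {t_int:.0f} s: ~{accidentals:.0f} counts")
```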
Experimental Results
The results of the TPI measurements are shown in Fig. 2(a)-(c), where the solid lines are cos²θ fitting curves. Solid diamonds are the number of measured coincidences between the signal and idler photon counts, where only the contributions from the detector dark counts have been subtracted. Empty squares are the number of signal-photon counts and the shaded triangles are the number of idler-photon counts. The relative polarization angle θ is changed by rotating the 2nd HWP in the idler's path while fixing the orientation of the HWP in the signal's path. Performances for different pump powers in the DSF are recorded. An average pump power of about 110 μW in the DSF, which corresponds to approximately 440 mW peak power, is used for the set of data shown in Fig. 2(a), where each data point is acquired by sampling for 10 s at 785 kHz. Due to the high peak power used, the number of photon pairs produced inside the DSF during each pump pulse is around 0.376. This number is usually termed the production rate, for estimating the efficiency of the FPS process inside the DSF. For quantum communication applications, the number of photon pairs actually emitted from the source, for instance after the filters in Fig. 1(a), is of more practical relevance. We term this number the emission rate, which is 0.165 for the set of data in Fig. 2(a). As shown, a TPI fringe with a visibility of 47.3% arises in the joint measurements, while almost constant numbers of photon counts are obtained in individual measurements of either the signal or idler photons. The difference in the signal and idler photon counts mainly comes from the different detection efficiencies of the two analyzers. Similarly, in Fig. 2(b), a TPI visibility of 75.9% is observed at a 0.032 pairs/pulse emission rate when the average pump power is adjusted to 42.3 μW (169 mW peak power) inside the DSF. The integration time is increased to 30 s for acquiring each point in this data set. As shown in Fig. 2(c), a TPI visibility of 91.7% is measured at a 0.012 pairs/pulse emission rate when the average pump power in the DSF is further reduced to 27.5 μW (110 mW peak power). Due to the further reduction of the pump power, a longer sampling time of 60 s is used for each point in this set of data. We note from Fig. 2(a)-(c) that the ratio between the signal and idler photon counts changes slightly with pump power. This may be due to the slightly different nonlinear coefficients at the signal and idler wavelengths, together with the quadratic vs. linear pump-power dependence of the FPS and Raman gains, respectively.
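The visibilities quoted above follow from fitting the coincidence fringe to a cos²θ form and taking V = (C_max - C_min)/(C_max + C_min). A minimal sketch with synthetic counts (the data values are made up; only the fringe model comes from the text):

```python
import numpy as np

# Sketch: extract the TPI visibility from a cos^2(theta) fringe via a linear
# least-squares fit in the basis {cos^2(theta), 1}. The count values below
# are synthetic; only the fringe model and the definition
# V = (C_max - C_min) / (C_max + C_min) come from standard practice.

theta = np.linspace(0.0, np.pi, 19)              # analyzer angle (rad)
counts = 500.0 * np.cos(theta)**2 + 25.0         # made-up coincidence data

design = np.column_stack([np.cos(theta)**2, np.ones_like(theta)])
(amp, offset), *_ = np.linalg.lstsq(design, counts, rcond=None)

c_max, c_min = amp + offset, offset
visibility = (c_max - c_min) / (c_max + c_min)
print(f"fitted visibility: {100 * visibility:.1f} %")   # ~90.9 % here
```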
The dependence of the TPI visibility on the emission rate is summarized in Fig. 2(d). At higher pump powers, the probability of creating multiple photon pairs per pulse is high. In addition, there is a higher probability for the pump photons to leak through the filters. Both effects lead to increased accidental-coincidence counts, thus reducing the TPI visibility, as observed in Fig. 2(a). At low pump powers, we believe it is spontaneous Raman scattering that is responsible for the majority of the background photons which prevent us from observing perfect TPI visibility. Studies have shown that the contribution of Raman-scattered background photons can be reduced by cooling the fiber [17,18]. A trend similar to the TPI visibility versus emission rate shown in Fig. 2(d) is also reported in Ref. 18, where significantly more data taken using the counter-propagating scheme were presented in the low-emission-rate regime. The only difference is that in Ref. 18 the curves are plotted as the coincidence to accidental-coincidence ratio versus the pump power. In summary, the TPI results obtained with this turnkey prototype are comparable with those obtained previously using other fiber-based schemes. Finally, no alignment adjustment was needed during the course of the above measurements. To further ascertain the stability of the source, we intentionally disturbed the DSF and the fibers between the PBS and FM1/FM2 during the experiment. No performance penalty was observed.
Conclusion
In conclusion, we have introduced a novel all-fiber scheme for producing telecom-band polarization-entangled photon pairs. A double-pass Michelson-interferometer configuration is used with Faraday mirrors that automatically compensate system drifts such as the polarization and phase fluctuations introduced by environmental perturbations on the fibers. With minimal engineering effort, this scheme could lead to an ultra-stable turnkey polarization-entangled photon-pair source which can be fully integrated for practical use. With our prototype source, up to 91.7% TPI visibility is demonstrated without subtracting the Raman-induced accidental coincidences under room-temperature operation. This level of TPI visibility would lead to a violation of the CHSH form of Bell's inequality, as shown in Ref. 18. Finally, we note that this scheme also has the advantages present in the previously reported counter-propagating scheme [12]: i) the cross-polarized spontaneous Raman scattering is automatically suppressed and ii) all the polarization-entangled photon pairs created by the pump are collected.
Fig. 2. Experimental results with different average pump powers coupled into the DSF: (a) 111 μW; (b) 42.3 μW; (c) 27.5 μW. (d) TPI visibility versus photon-pair emission rate. The curve has no meaning; it is shown only to guide the eye. | 4,307.8 | 2006-07-24T00:00:00.000 | [
"Physics"
] |
Training Gaussian boson sampling by quantum machine learning
We use neural networks to represent the characteristic function of many-body Gaussian states in the quantum phase space. By a pullback mechanism, we model transformations due to unitary operators as linear layers that can be cascaded to simulate complex multi-particle processes. We use the layered neural networks for non-classical light propagation in random interferometers, and compute boson pattern probabilities by automatic differentiation. This is a viable strategy for training Gaussian boson sampling. We demonstrate that multi-particle events in Gaussian boson sampling can be optimized by a proper design and training of the neural network weights. The results are potentially useful to the creation of new sources and complex circuits for quantum technologies.
I. INTRODUCTION
The development of new models and tools for machine learning (ML) is surprisingly affecting the study of many-body quantum systems and quantum optics [1]. Neural networks (NN) enable representations of high-dimensional systems and furnish a universal ansatz for many purposes, like finding the ground state of many-body Hamiltonians [2], including dissipative systems [3,4].
The impact of ML in quantum optics and many-body physics is related to the versatile representation that NN models furnish for functions of an arbitrary number of variables. Also, powerful application programming interfaces (APIs), such as TensorFlow, enable many new features and tools to compute and design many-body Hamiltonians or large-scale quantum gates [18].
Here we show that NN models are also useful when considering representations in the phase space, such as the characteristic function χ or the Q-representation [19]. Unitary operators, such as squeezers or displacers, act on the phase space as variable transformations that correspond to layers in the NN model. Hence, a multilayer NN may encode phase-space representations of complex many-body states. This encoding has two main advantages: on the one hand, one can quickly build complex quantum states by combining NN layers; on the other hand, one can use the automatic graph building and API differentiation technology to compute observables. Also, graphical and tensor processing units (GPUs and TPUs) may speed up the computation.
In the following, we show how to compute the probability of multi-particle patterns when Gaussian states propagate in a system made of squeezers and interferometers. This problem corresponds to the renowned Gaussian boson sampling [20,21], which recently demonstrated the quantum advantage at an impressive scale [22], following earlier realizations [23][24][25][26][27][28] of the original proposal by Aaronson and Arkhipov [29]. The theory of Gaussian boson sampling (GBS) heavily relies on phase-space methods [30], making it an exciting NN test-bed supported by recently reported trainable hardware [31][32][33].
A notable outcome of adopting NN models in the phase space is the possibility of training multi-particle statistics [34] and other features such as the degree of entanglement. Indeed, most of the reported investigations in quantum ML focus either on using NN models as a variational ansatz or on tailoring the input/output response of a quantum gate. On the contrary, ML in the phase space permits optimizing many-particle features, for example, to increase the probability of multi-photon events. NN may open new strategies to generate non-classical light or enhance the probability of observing large-scale entanglement, with relevance in many applications. Here, we derive the NN representing the characteristic function of the Gaussian boson sampling setup. Proper NN training increases the photon-pair probability by orders of magnitude.
Fig. 1 shows the general workflow of the proposed methodology; the different steps define a trainable model for optimizing Gaussian boson sampling. In Section II, we introduce the way we adopt a neural network to compute the characteristic function. In Sec. III, we detail how to compute observables as derivatives of the characteristic-function neural network. In Sec. IV, we show how to compute the Gaussian boson sampling patterns. In Sec. V, we introduce the loss function and describe the training of the model to optimize specific patterns. Conclusions are drawn in Sec. VI.
II. CHARACTERISTIC FUNCTION AS A NEURAL NETWORK
In the phase space, we represent an n-body state by a complex characteristic function χ(x) = χ_R(x) + ıχ_I(x) of a real vector x [19,35]. x has dimension 1 × N with N = 2n. For Gaussian states, χ(x) = exp(−xgx⊤/4 + ıxd) [36], with g the real N × N covariance matrix and d the real N × 1 displacement vector. In our notation, we omit the symbols of the dot product, such that xd and xgx⊤ are scalars. The matrix g and the vector d are defined in terms of the canonical operators R_j (j, k = 0, 1, 2, . . ., N − 1) [36]. In Eq. (2), the canonical variables, q_j = R_{2j} and p_j = R_{2j+1}, with j = 0, 1, . . ., n − 1, are organized in the N × 1 operator array R. As shown in Fig. 2a, the characteristic function is a NN layer with two real outputs χ_R and χ_I. The χ layer has two inputs: x, and an auxiliary N × 1 bias vector a, for later convenience.
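A minimal sketch of such a χ layer, assuming the Gaussian convention χ(x) = exp(−xgx⊤/4 + ıxd) stated above (the exact prefactor in the paper's Eq. (1) may differ):

```python
import numpy as np

# Minimal sketch of the chi layer, assuming the Gaussian convention
# chi(x) = exp(-x g x^T / 4 + 1j * x d). The layer takes x and the bias a and
# returns the real and imaginary parts of chi(x) * exp(1j * x a).

def chi_layer(x, g, d, a=None):
    x = np.asarray(x, dtype=float)
    if a is None:
        a = np.zeros_like(d)
    value = np.exp(-0.25 * x @ g @ x + 1j * x @ (d + a))
    return value.real, value.imag

n = 2                       # two modes -> N = 4 phase-space variables
N = 2 * n
g = np.eye(N)               # vacuum covariance in these units
d = np.zeros(N)             # zero displacement
print(chi_layer(0.3 * np.ones(N), g, d))
```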
The vacuum state is a Gaussian state with g = 1 and d = 0. From the vacuum, one can generate specific states by unitary operators, such as displacement or squeezing operators. These transform the canonical variables as R̂ → MR̂ + d′, where the symplectic matrix M and the vector d′ depend on the specific operator (detailed, e.g., in [36]). The characteristic function changes correspondingly as χ(x) → χ(xM)e^{ıxd′}. We represent the linear transformation as a NN layer with two inputs, x and a, and two outputs, xM and M⁻¹(d′ + a) (Fig. 2b). By this definition, Eq. (4) is a two-layer NN. Figure 2c shows χ as the "pullback" of the linear layer from the χ layer. The two layers form a NN that can be implemented with common APIs. Given the vacuum state with characteristic function χ, one can build the NN model of an arbitrary state by multiple pullbacks. Indeed, we defined the linear layers in a way that they can be cascaded. Figure 4a below shows an n-mode squeezed vacuum as a multiple pullback of single-mode squeezers, each acting on a different mode.
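The cascading rule can be sketched by reusing the chi_layer function from the previous sketch; the single-mode squeezer symplectic matrix below is one common convention and is an assumption, not taken from the paper.

```python
import numpy as np

# Sketch of the linear ("pullback") layer: it forwards x -> xM and folds d'
# into the bias as M^{-1}(d' + a), so the downstream chi layer evaluates
# chi(xM) * exp(1j * x * (d' + a)). Assumes chi_layer from the sketch above.

def linear_layer(x, a, M, d_prime):
    M = np.asarray(M, dtype=float)
    return x @ M, np.linalg.inv(M) @ (d_prime + a)

def single_mode_squeezer(r):
    # Symplectic matrix of a single-mode squeezer (one common convention).
    return np.diag([np.exp(-r), np.exp(r)]), np.zeros(2)

M, d_prime = single_mode_squeezer(r=0.5)
x = np.array([0.2, -0.1])
x_out, a_out = linear_layer(x, np.zeros(2), M, d_prime)
print(chi_layer(x_out, np.eye(2), np.zeros(2), a_out))   # squeezed-vacuum chi
```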
III. OBSERVABLES
Observables are computed as derivatives of the NN model. For example, the mean photon number per mode is related to the second derivatives of the characteristic function at the origin: for mode j, ⟨n̂_j⟩ = −(1/2)∇²_j χ(x)|_{x=0} − 1/2, with ∇²_j = ∂²_{q_j} + ∂²_{p_j}, q_j = x_{2j}, and p_j = x_{2j+1}. The differential photon number of modes j and k is obtained analogously from higher-order derivatives. Automatic differentiation packages enable an efficient computation of the derivatives of the NN model.
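A sketch of the photon-number observable, with central finite differences standing in for the automatic differentiation used in the paper; the −(1/2)∇²_jχ(0) − 1/2 formula is the assumption stated above, and it reproduces sinh²(r) for a squeezed vacuum, matching the value quoted later in the text.

```python
import numpy as np

# Sketch: <n_j> = -(1/2) * (d^2/dq_j^2 + d^2/dp_j^2) chi(0) - 1/2, evaluated
# by central finite differences. This convention gives 0 for the vacuum and
# sinh(r)^2 for a single-mode squeezed vacuum.

def mean_photons(chi, j, N, h=1e-4):
    def second_derivative(idx):
        e = np.zeros(N)
        e[idx] = h
        return (chi(e) - 2.0 * chi(np.zeros(N)) + chi(-e)) / h**2
    lap = second_derivative(2 * j) + second_derivative(2 * j + 1)
    return -0.5 * lap - 0.5

r = 0.88                                          # squeezing used in the text
g = np.diag([np.exp(-2 * r), np.exp(2 * r)])      # single-mode squeezed vacuum
chi = lambda x: np.exp(-0.25 * x @ g @ x)
print(mean_photons(chi, j=0, N=2))                # ~ sinh(0.88)^2 ~ 0.99
```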
IV. GAUSSIAN BOSON SAMPLING WITH THE NEURAL NETWORK MODEL
In the GBS protocol, one considers a many-body squeezed vacuum state propagating in a Haar-random interferometer, which distributes the photons among the output modes. For modelling GBS, we hence need squeezing layers and a layer representing the transmission through random interferometers. The squeezing layers are realized by a proper design of the corresponding symplectic matrices M with d = 0. We implement the Haar matrix operator with the QuTiP software [37]. Figure 3 shows pseudo-code to build the neural network model by composing the different layers.
We introduce the N × 1 real vector k; the probability of a given photon pattern is then obtained from derivatives of the function Q_ρ with respect to k, with ∇̃²_j = ∂²/∂k²_{2j} + ∂²/∂k²_{2j+1} and n_T = Σ_{j=0}^{n−1} n_j. Q_ρ in Eq. (8) can be evaluated explicitly as a multidimensional Gaussian integral, with A_pq = (g_pq + δ_pq)/2 (p, q = 0, 1, . . ., N − 1). Eqs. (9) and (10) can be implemented as further layers of the NN, and the probability of a given pattern computed by running the model. Figure 5a shows an example of the pattern probability distribution with n = 6, obtained by using the NN model in Fig. 4b with squeezing parameters r_j = 0.88 and φ_j = π/4, such that all the single-mode squeezers are identical, each with mean photon number sinh²(r_j) ≃ 1. As in [20], we consider patterns with n_j ∈ {0, 1}.
V. TRAINING GAUSSIAN BOSON SAMPLING
Our interest is in understanding whether we can train the model to maximize the generation of specific patterns, e.g., a photon pair in modes 0 and 1. Using complex media to tailor linear systems is a well-established technique, for example, to synthesize specific gates [38,39] or tame entanglement [40]. Here, we use the NN model in the phase space to optimize multi-particle events.
One could use the squeezing parameters in the model in Fig. 4b as training parameters. However, the degree of squeezing affects the number of particles per mode, and we want to alter the statistical properties of the states without changing the average number of particles. We hence consider a GBS setup with an additional trainable interferometer, as in Fig. 4c, which is typically realized by amplitude or phase modulators.
In Fig. 4c, n squeezed vacuum modes impinge on a trainable interferometer and then travel through a Haar interferometer. Instead of two distinct interferometers, one could use a single device (i.e., combine the Haar interferometer with the trainable interferometer), but we prefer to distinguish the trainable part from the mode-mixing Haar unitary operator.
Given n modes, our goal is to maximize the probability of patterns that contain a pair of photons in modes 0 and 1. For example, for n = 6, this means maximizing the probability of n = (1, 1, 0, 0, 0, 0) with respect to n = (1, 0, 0, 1, 0, 0). We use as loss function (Eq. (11)) L = ⟨(n̂_0 − n̂_1)²⟩, which is minimal when the expected differential number of photons in mode 0 and mode 1 vanishes. This is the case when the state has a particle pair in mode 0 and mode 1. We stress the difference with respect to other cost functions, denoted L_0, that involve only the expected number of photons per mode. The linear interferometer does not affect the average number of photons (which are mixed by the Haar layer). Correspondingly, training using L_0 is not effective for generating entangled pairs. On the contrary, L in Eq. (11) contains ⟨n̂_0 n̂_1⟩, which is maximal with a photon pair in modes 0 and 1. Fig. 5a shows the computed probabilities of pairs for the model in Fig. 4c, with a random instance of the Haar and linear interferometers. Training strongly alters this statistical distribution, as shown in Fig. 5b. Fig. 5c shows the trend during the training epochs of ⟨(n̂_0 − n̂_1)²⟩, which goes to zero while the mean photon numbers ⟨n̂_0⟩ and ⟨n̂_1⟩ remain unaltered.
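Schematically, the training amounts to gradient descent on L. The sketch below uses a fictitious stand-in for the photon-number moments (in the paper these come from derivatives of the characteristic-function network, differentiated automatically in TensorFlow), so only the loop structure is meaningful.

```python
import numpy as np

# Toy training loop: finite-difference gradient descent on a stand-in for
# L = <(n0 - n1)^2>. The `loss` below is a fictitious smooth surrogate in the
# trainable interferometer phases theta, not the physical model.

def loss(theta):
    dn = np.sin(theta[0] - theta[1])      # placeholder differential signal
    return dn ** 2

def grad(f, theta, h=1e-6):
    g = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = h
        g[i] = (f(theta + e) - f(theta - e)) / (2 * h)
    return g

theta = np.array([0.9, 0.1])              # initial trainable phases
for epoch in range(200):                  # plain gradient descent
    theta -= 0.1 * grad(loss, theta)
print(f"final loss: {loss(theta):.3e}")   # approaches 0 as training proceeds
```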
Training also maximizes higher-photon-number events, as in the pattern n = (1, 1, 1, 1, 0, 0) with 4 photons and n = 6. Fig. 6a shows the pattern probabilities with 4 photons. After training with the loss function in Eq. (11), Pr(n) substantially increases for the four-photon patterns containing a pair in modes 0 and 1 (Fig. 6b).
VI. CONCLUSIONS
We have shown that a many-body characteristic function may be reformulated as a layered neural network. This approach enables the construction of complex states for various applications, such as gate design or boson sampling.
A common argument in criticizing quantum neural networks is that linear quantum mechanics does not match the nonlinearity-eager NN models. However, recent investigations show that nonlinearity may be introduced in quantum neural networks [43]. Our remark is that if we formulate quantum mechanics in the phase space, nonlinearity arises in the characteristic function (or other representations). We analyzed this strategy in the simplest case of Gaussian states. The resulting model is universal and may be trained for different purposes. For this reason, phase-space models deal naturally with non-classical states and allow observables to be computed by derivatives. This formulation opens many opportunities. For example, the optimization of multi-particle events can be extended to fermionic fields. As a drawback, computing boson pattern probabilities by NN APIs is not expected to be competitive with highly optimized algorithms running on large-scale clusters [41,42]. Still, it appears to be a versatile and straightforward methodology.
Here, we have shown many-body quantum state design and engineering by TensorFlow. We have demonstrated how to enhance multi-particle generation, with many potential applications in quantum technologies. In addition, the proposed method enables training boson sampling without explicitly computing derivatives of the Hafnian [18,34], instead resorting to automatic computational packages. We have tested the algorithm on a conventional workstation with a single commercial GPU (NVIDIA QUADRO RTX 4000), with a computational time of the order of a few minutes for 6 modes.
The method can be generalized to other boson sampling setups, such as those including Glauber layers and multi-mode squeezers. It also readily allows testing different loss functions for tailoring the boson sampling patterns. Extensions beyond Gaussian states can be envisaged by using general machine-learning networks with an arbitrary number of layers and different nonlinearities.
FIG. 1.
FIG. 1. Workflow of the proposed methodology to train Boson sampling by representing the characteristic function as a neural network.
FIG. 2.
FIG. 2. (a) A neural network model for the characteristic function. Two inputs, a data vector x with shape 1 × N and a bias vector a with shape N × 1, seed the model that computes χ and returns the real and imaginary parts of χ(x)e^{ıxa}. (b) A layer representing a linear transformation of the state by a unitary operator, represented by a symplectic N × N matrix M and a displacement N × 1 vector d′. With such a definition, layers can be cascaded, and one can represent single-mode squeezers, interferometers, and other unitary operators. (c) A model representing a state with characteristic function χ, subject to a unitary transformation. This is a pullback of a linear transform from the original state, which produces the new state's characteristic function [see Eq. (4)].
FIG. 3. Pseudo-code for the creation of a neural network representing a Gaussian boson sampling experiment
FIG. 5.
FIG. 5. (a) Probability distribution of patterns with two photons for n = 6 in the model in Fig. 4c, before training. The insets detail the particle distribution in the patterns. (b) As in (a) after training; the probability of finding a pair in modes 0 and 1 is enhanced by more than one order of magnitude. (c) Mean photon numbers in modes 0 and 1 during the training epochs (green), and expected differential photon number ⟨(n̂_0 − n̂_1)²⟩ in the two modes, which vanishes after thousands of epochs. The statistical distribution of pairs changes at a constant photon number per mode. Data generated by the code at https://github.com/nonlinearxwaves/BosonSampling.
FIG. 6.
FIG. 6. (a) Probability distribution of patterns with 4 photons (n = 6) in the model in Fig. 4c before training. The insets detail the particles in each pattern. (b) As in (a) after training; the probability of patterns with two photons in modes 0 and 1 is maximized. Data generated by the code at https://github.com/nonlinearxwaves/BosonSampling. | 3,416 | 2021-02-24T00:00:00.000 | [
"Physics",
"Computer Science"
] |
The transmission characteristic for the improved wind turbine gearbox
The structure of an improved wind turbine gearbox is presented to meet the operation of the optimized wind turbine power-wind speed curve (P-v curve). When the wind speed is lower than the cut-in wind speed, the operation mode of the wind turbine is changed by extra power, supplied by a motor-excited source, to keep the wind turbine running. Moreover, the transmission principle of the improved wind turbine gearbox is discussed. The impact of various motor powers on the transmission characteristics of the improved transmission structure is investigated, and the results are compared with those from professional software. Results indicate that as the motor power increases, the transverse vibration of the sun gears and the meshing forces of the low-speed and medium-speed planetary stages decrease. The transverse vibration of the pinion gear of the high-speed stage increases with the motor power. Load-sharing coefficients of the planetary gear stages also increase with the motor power. It is found that the meshing forces of the torque-implement parallel stage increase with the motor power.
| INTRODUCTION
Wind energy, one of the most important sustainable energy sources for the future, faces challenges such as the uneven distribution and low energy density of the wind. Wind energy is most suitable for development and use close to where it is consumed. China possesses rich low-wind-speed resources close to high-energy-consumption areas. Therefore, the development of wind turbine gearboxes for capturing low-wind-speed resources is a research focus of wind energy.
Many researchers have focused on the design and the transmission characteristics of the conventional wind turbine gearbox for the double-fed type wind turbine. The conventional wind turbine gearbox of the double-fed type wind turbine has different transmission structures, such as two planetary gear transmission stages plus one parallel gear transmission stage, 1 one planetary gear transmission stage plus two parallel gear transmission stages, 2 and the flexible-pins type transmission structure. 3,4 Some wind turbine gearboxes have only two planetary gear transmission stages. 5 Generally, the flexible-pins type transmission structure provides the best load-sharing performance. In order to investigate the transmission characteristics of the conventional wind turbine gearbox, many factors have been considered, such as uncertainties, 6 the flexibility of the internal ring gear, 7 and assembly errors of the pin shafts. 8 Other studies investigated the effect of variable input loads 9 and gravity 10 on the wind turbine gearbox. A large fluctuation appeared in the dynamic load-sharing coefficient curve because of the gravity of the parts. The parameter sensitivity 11 of the planetary gear was also investigated. For investigating the nonlinear dynamics and the exact dynamic response, flexible multibody modeling 12,13 and backlash 14 were considered in the dynamic modeling of the wind turbine gearbox. Meanwhile, for capturing the wind energy of low-wind-speed areas, numerous papers investigated the structure of the blades, and many researchers adopted the traditional P-v curve to control the operation of the wind turbine. Larwood et al [15][16][17] designed a swept wind turbine blade and developed a dynamic analysis tool for the blade. Pourrajabian et al 18 developed an aerodynamic design and optimization of the blades for the low-wind-speed wind turbine. Asl et al 19 designed a feedback control for the wind turbine gearbox by establishing a classic dynamic model of the wind turbine. Song et al 20 adapted adaptive algorithms to design a control strategy for a variable-speed wind turbine. Wang et al 21,22 proposed a linear feedback control strategy by building a two-degree-of-freedom dynamic model of the wind turbine. Fan et al 23 presented an optimized wind turbine power-wind speed curve for capturing more wind energy in low-wind-speed areas.
The structural design and the transmission characteristics of the traditional wind turbine gearbox have been investigated extensively. For capturing the wind energy of low-wind-speed areas, the design of the blades and the control of the wind turbine have been discussed widely, based on the traditional P-v curve. Few studies discuss the structural design and the transmission characteristics of an improved wind turbine gearbox based on the optimized P-v curve. In the present study, the optimized P-v curve is adopted for capturing the wind energy of low-wind-speed areas, and it is investigated in comparison with the conventional P-v curve. An improved transmission structure of the wind turbine gearbox is presented for low-wind-speed areas, based on the optimized P-v curve of the variable-speed double-fed wind turbine.
Transmission characteristics of the improved transmission system are analyzed.
OF THE TRANSMISSION STRUCTURE FOR THE IMPROVED WIND TURBINE GEARBOX
In order to maximize the efficiency and ensure the safe operation of the wind turbine, it is operated according to the wind turbine power-wind speed curve (P-v curve), also called the operation mode of the wind turbine. Figure 1 shows the power-wind speed curves of the conventional and optimized wind turbines. 23 Figure 2 illustrates the improved transmission structure for the wind turbine, based on the optimized P-v curve. Figure 2 shows that the first, second, and third transmission stages of the conventional wind turbine gearbox are the low-speed, medium-speed, and high-speed planetary transmission stages. Moreover, the fourth transmission stage is the torque-implement parallel transmission stage.
FIGURE 1 The conventional and optimized P-v curve of the variable-speed wind turbine
FIGURE 2 The improved transmission structure of the wind turbine gearbox
The transmission system of the improved wind turbine gearbox consists of two parts, the speed-implement and the torque-implement transmission systems. The speed-implement transmission system is formed by two planetary transmission stages and a parallel transmission stage, while the torque-implement transmission system is constituted by two parallel stages, the high-speed stage and the torque-implement stage. The pinion gear of the torque-implement parallel stage is connected to the motor by an electromagnetic clutch. It should be noted that the wheel gear of the torque-implement stage is identical with the wheel gear of the high-speed stage.
If the wind speed is equal to or greater than the original cut-in wind speed v_in0, the wind turbine operates in region 3 and the clutch is disconnected. On the other hand, if the wind speed is less than the original cut-in wind speed, the wind turbine operates in region 2 and the clutch is connected.
AND PARAMETERS OF THE TRANSMISSION SYSTEM
The wind turbine in this research is a variable-speed fixed-pitch wind turbine. The pitch angle is zero. The optimized tip-speed ratio λ_opt is 6.3. The blade radius is 63 m. The original cut-in wind speed v_in0 is 5 m/s. The new cut-in wind speed v_in1 is 2.86 m/s. Table 1 shows specific parameters of the transmission system for the wind turbine gearbox.

TABLE 1 Specific parameters (one value per gear stage)
Module (mm): 16, 16, 16
Helix angle (°): 10, 10, 10
Pressure angle (°): 20, 20, 20
| Dynamic model of the speed-implement transmission system
Moreover, Figure 3 illustrates the dynamic model of the speed-implement transmission system for the wind turbine gearbox. The dynamic model of the planetary stage is shown in the left picture and that of the parallel stage in the right picture. Zhai et al 8 derived and analyzed the dynamic equations of the speed-implement transmission system. The dynamic equations of the wheel and pinion gears of the speed-implement transmission system are expressed in Equations (1) and (2), respectively.
The mesh displacement between the wheel gear and the pinion gear of the speed-implement system is expressed accordingly. Figure 4 shows the dynamic model of the torque-implement stage. It should be noted that the wheel gear of the torque-implement stage is identical with the wheel gear of the high-speed stage. The subscript of the pinion gear for the torque-implement stage is ppT; Reference 1 lists the meaning of the other subscripts of the parallel stage. Damping terms are neglected. The dynamic equation of the torque-implement stage is shown in Equation (3).
| Dynamic model of the torque-implement stage
The mesh displacement between the wheel gear and the pinion gear of the torque-implement system is expressed analogously.
| Dynamic model of the overall transmission system
The dynamic equation of the overall transmission system for the wind turbine gearbox is MẌ + CẊ + KX = F (Equation (4)). The format of the damping matrix and the stiffness matrix is shown in Figure 5. C, R, P, and S represent the carrier, the ring gear, the planet gear, and the sun gear of the planetary transmission stages, respectively; CP, RP, and SP represent the coupling terms. W, PP, and ppT represent the wheel gear and the pinion gear of the high-speed stage and the pinion gear of the torque-implement stage, respectively; WPP and WppT are the coupling terms. A time-integration sketch for this equation of motion is given after this paragraph. Figure 6 shows the power coefficient of the wind turbine, which describes the capacity of the wind turbine to capture the wind energy. 21 The coefficient is a function of the blade pitch angle (β) and the tip-speed ratio (λ). From the viewpoint of the Betz limit, the maximum wind turbine power coefficient can be achieved in region b and region 3.
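A minimal Newmark-β time-integration sketch for an equation of the MẌ + CẊ + KX = F form is given below; the 2-DOF matrices are arbitrary illustrative values, not the gearbox parameters, and the integration scheme itself is a standard choice rather than the one used in the paper.

```python
import numpy as np

# Newmark-beta sketch for M x'' + C x' + K x = F(t). The 2-DOF matrices are
# arbitrary illustrative values; the paper solves the full multi-stage system
# and cross-checks against the Masta software.

M = np.diag([1.0, 2.0])                              # mass matrix
K = np.array([[4.0e4, -1.0e4], [-1.0e4, 3.0e4]])     # stiffness matrix
C = 1e-4 * K                                         # proportional damping
F = lambda t: np.array([np.sin(200.0 * t), 0.0])     # harmonic forcing

dt, beta, gamma = 1e-4, 0.25, 0.5                    # average acceleration
a0, a1, a2 = 1/(beta*dt**2), gamma/(beta*dt), 1/(beta*dt)
a3, a4, a5 = 1/(2*beta) - 1, gamma/beta - 1, dt/2*(gamma/beta - 2)

u = np.zeros(2)
v = np.zeros(2)
acc = np.linalg.solve(M, F(0.0) - C @ v - K @ u)     # consistent initial accel.
K_eff = K + a0*M + a1*C

for step in range(1, 2001):
    t = step * dt
    rhs = F(t) + M @ (a0*u + a2*v + a3*acc) + C @ (a1*u + a4*v + a5*acc)
    u_new = np.linalg.solve(K_eff, rhs)
    acc_new = a0*(u_new - u) - a2*v - a3*acc
    v = v + dt*((1 - gamma)*acc + gamma*acc_new)
    u, acc = u_new, acc_new

print(u)    # displacements after 0.2 s of simulated time
```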
CHARACTERISTIC FOR THE TRANSMISSION SYSTEM
Expressions follow for the generator power, the wind power at the new cut-in wind speed, the maximum motor power, the maximum motor torque, and the lowest blade speed and tip-speed ratio.
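For orientation, the available wind power at the two cut-in speeds can be estimated from the standard relation P = ρπR²v³C_p/2; the air density and power coefficient below are assumed values, and only R and the cut-in speeds come from the text.

```python
import math

# Back-of-the-envelope estimate of available wind power at the two cut-in
# speeds via P = 0.5 * rho * pi * R**2 * v**3 * Cp. R = 63 m and the cut-in
# speeds come from the text; air density and Cp are assumed values.

rho = 1.225     # air density, kg/m^3 (assumed)
R = 63.0        # blade radius, m (from the text)
Cp = 0.45       # assumed power coefficient near the optimal tip-speed ratio

def wind_power(v):
    return 0.5 * rho * math.pi * R**2 * v**3 * Cp

for v in (2.86, 5.0):
    print(f"v = {v:4.2f} m/s: P ~ {wind_power(v)/1e3:6.1f} kW")
```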
| Speed ratio characteristic of the transmission system
The transmission ratios of the low-speed, medium-speed, and high-speed stages, the ratio of the torque-implement stage, and the overall ratios of the speed-implement and torque-implement systems are expressed in terms of the tooth numbers. Z_R is the tooth number of the ring gear of the low-speed stage, Z_S that of the sun gear of the low-speed stage, Z_MedR that of the ring gear of the medium-speed stage, Z_MedS that of the sun gear of the medium-speed stage, Z_W that of the wheel gear of the high-speed and torque-implement stages, Z_PP that of the pinion gear of the high-speed stage, and Z_PPT that of the pinion gear of the torque-implement stage. Based on Table 1, the transmission ratio of the speed-implement transmission system is 106.427 and that of the torque-implement transmission system is 0.704. Figure 7 illustrates the motor power at various wind speeds.
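How such overall ratios follow from tooth counts can be sketched with the standard planetary formula 1 + Z_R/Z_S (fixed ring, carrier input, sun output) and Z_W/Z_P for a parallel stage; the tooth numbers below are placeholders, since Table 1's gear counts are not reproduced in this excerpt.

```python
# Sketch: overall ratios from tooth counts. A planetary stage with fixed
# ring, carrier input, and sun output has ratio 1 + Z_R/Z_S; a parallel
# stage has ratio Z_W/Z_P. The tooth counts below are placeholders; with
# the actual counts the text obtains 106.427 (speed-implement) and 0.704
# (torque-implement).

def planetary_ratio(z_ring, z_sun):
    return 1.0 + z_ring / z_sun

def parallel_ratio(z_wheel, z_pinion):
    return z_wheel / z_pinion

i_low = planetary_ratio(z_ring=96, z_sun=21)     # placeholder tooth counts
i_med = planetary_ratio(z_ring=89, z_sun=23)
i_high = parallel_ratio(z_wheel=104, z_pinion=23)
print(f"speed-implement ratio (placeholder counts): {i_low*i_med*i_high:.3f}")
```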
| Dynamic characteristic of the transmission system
Here, 100% denotes a motor power of 100% of P_M(max), 50% denotes 50% of P_M(max), and 0% denotes 0% of P_M(max). Moreover, Figure 8A,B show the effects of the various motor powers on the peak values of the transverse vibration for the planetary and parallel transmission stages, respectively. Furthermore, Figure 8C,D illustrate the vibration tracks of the sun gears of the low-speed and medium-speed planetary stages at 0% P_M(max), respectively.
For a steady-state system, the peak value of the transverse vibration for the planetary and parallel stages is the largest value of the gear vibration in the x and y directions. The sun gear is chosen for the planetary stages. In the track diagrams, the horizontal ordinate presents the vibration of the sun gear along the x direction in the time domain, while the vertical ordinate presents the vibration along the y direction.
The maximal peak values of the transverse vibration for the sun gears of the planetary transmission stages appear at 0% P_M(max). The maximal peak values for the sun gears of the low-speed and medium-speed planetary stages are 0.31 and 0.34 μm, respectively. When the motor power increases and the wind power decreases, the torque acting on the planetary stages decreases, and the peak value of the transverse vibration of the sun gears of the planetary stages decreases. The maximal peak value of the transverse vibration for the parallel transmission stages appears at 100% P_M(max). The maximal peak values for the pinion gear of the high-speed stage and for the wheel and pinion gears of the torque-implement stage are 0.064, 0.105, and 0.107 μm, respectively. When the motor power increases, the torque acting on the torque-implement stage increases, and the peak value of the transverse vibration of its gears increases. The peak value of the transverse vibration of the pinion gear of the high-speed stage increases because of the vibration of the torque-implement stage. The extent of the vibration track of the sun gear of the low-speed planetary stage is the same as that of the sun gear of the medium-speed planetary stage. The track diagrams of the sun gears of the two planetary stages are smooth and regular, indicating that the transmission system is steady.
To verify the accuracy and validity of the numerical results, results from the Masta software are added. The load-sharing coefficients, natural frequencies, and dynamic mesh forces are calculated and compared with the results from the Masta software. Figure 9 shows the effect of the various motor powers on the load-sharing coefficients of the low-speed and medium-speed planetary stages for the wind turbine gearbox.
The load-sharing coefficient K is defined in Equation (12). Figure 9 indicates that the minimal load-sharing coefficients of the speed-implement planetary stages appear at 0% P_M(max). The minimal load-sharing coefficients of the low-speed and medium-speed planetary stages are 1.0158 and 1.003, respectively. The load-sharing coefficients of the speed-implement planetary stages increase as the motor power increases. The natural frequencies of the transmission system for the wind turbine gearbox are shown in Table 2.
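One common definition of the load-sharing coefficient, K = N·max(F_i)/ΣF_i over the N planet branches, gives values of the right magnitude; whether Equation (12) uses exactly this form is an assumption, and the branch forces below are made up.

```python
import numpy as np

# Sketch of a common load-sharing definition for a planetary stage with N
# planets: K = N * max_i(F_i) / sum_i(F_i), the most heavily loaded branch
# relative to the mean branch load. Whether Equation (12) uses exactly this
# form is an assumption; the branch forces below are made up.

def load_sharing(branch_forces):
    f = np.asarray(branch_forces, dtype=float)
    return f.size * f.max() / f.sum()

print(load_sharing([231.0, 228.5, 229.8]))   # kN per planet -> K ~ 1.005
```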
Five natural frequency orders lie between 0 and 100 Hz, seven between 100 and 200 Hz, seven between 200 and 300 Hz, five between 300 and 400 Hz, and four between 400 and 500 Hz. The Campbell chart of the transmission system for the wind turbine gearbox is shown in Figure 10.
The left picture is the Campbell chart between 0 and 250 Hz, and the right picture between 250 and 500 Hz. The blue dotted line is the mesh frequency of the low-speed planetary stage (1st), the black dotted line that of the medium-speed planetary stage (2nd), and the red dotted line that of the high-speed parallel stage (3rd) and the torque-implement parallel stage (4th). The largest numbers of cross points between the natural frequency lines and a mesh frequency line occur for the high-speed parallel stage (3rd) and the torque-implement parallel stage (4th). Figure 11 shows the effects of the high-speed mesh frequency and the various motor powers on the meshing forces of the low- and medium-speed planetary stages.
When the motor operates at 0% P_M(max), the meshing forces of the low-speed planetary stage lie near 229.15 kN. Figure 12 shows the effects of the high-speed mesh frequency and various motor powers on the meshing forces of the high-speed and torque-implement stages. When the motor operates at 0% P_M(max), the meshing forces of the high-speed parallel stage are between 53.29 and 54.77 kN. When the motor operates at 50% P_M(max), the meshing forces of the high-speed parallel stage vary from 53.56 to 54.93 kN, while those of the torque-implement parallel stage vary from 25.12 to 25.23 kN. Moreover, when the motor operates at 100% P_M(max), the meshing forces of the high-speed parallel stage are between 53.44 and 55.22 kN, while those of the torque-implement parallel stage are between 49.58 and 51.88 kN. It is found that the meshing forces of the high-speed and torque-implement parallel stages increase as the motor power increases.
| CONCLUSION
An improved transmission structure of the wind turbine gearbox for low-wind-speed areas is presented in this study. The effect of various motor powers on the transmission characteristics of the transmission system is investigated, and the numerical results are compared with those from professional software. The transverse vibration of the sun gears of the planetary stages gradually declines as the motor power increases. The transverse vibration of the pinion gear of the high-speed stage and of the gears of the torque-implement stage grows with the motor power. Moreover, the load-sharing coefficients of the speed-implement planetary stages increase with the motor power. The meshing forces of the low- and medium-speed planetary stages decrease as the motor power increases, while those of the high-speed and torque-implement parallel stages increase. | 3,987.2 | 2019-05-17T00:00:00.000 | [
"Engineering",
"Physics"
] |
The impact of genetic manipulation of laminin and integrins at the blood–brain barrier
Blood vessels in the central nervous system (CNS) are unique in having high electrical resistance and low permeability, which creates a selective barrier protecting sensitive neural cells within the CNS from potentially harmful components in the blood. The molecular basis of this blood–brain barrier (BBB) is found at the level of endothelial adherens and tight junction protein complexes, extracellular matrix (ECM) components of the vascular basement membrane (BM), and the influence of adjacent pericytes and astrocyte endfeet. Current evidence supports the concept that instructive cues from the BBB ECM are not only important for the development and maturation of CNS blood vessels, but also essential for the maintenance of vascular stability and BBB integrity. In this review, we examine the contributions of one of the most abundant ECM proteins, laminin, to BBB integrity, and summarize how genetic deletions of different laminin isoforms or their integrin receptors impact BBB development, maturation, and stability.
Introduction
For cells in certain types of tissue, a basement membrane (BM) is an essential requirement because it directs cell organization within the tissue, as well as imparting strong tissue integrity [1][2][3]. For this reason, it comes as no surprise that cell types exposed to high levels of physical shear stress, such as the skin epidermis, the gut epithelium, and blood vessels, all closely attach to a BM. These structures form early in life as an integral part of the developmental program, and disruption of any of the key components of BMs results in catastrophic failure of embryogenesis, thus highlighting the essential role of these structures [4][5][6]. Blood vessel development and maturation absolutely require the presence of a BM, and the absence of one of the abundant BM proteins, laminin, results in failure of vasculogenesis and embryonic lethality [6].
Blood vessels in different organs exhibit different levels of vascular integrity according to local functional requirements. For instance, blood vessels in the glomerulus of Bowman's capsule in the kidney have a fenestrated, relatively leaky phenotype to allow the blood contents of the afferent arterioles to pass easily into the renal tubule before being selectively reabsorbed at different stages of the nephron [7]. By contrast, blood vessels in the CNS are highly specialized, exhibiting very high levels of vascular integrity and low levels of permeability [8][9][10]. In this manner, CNS blood vessels carefully regulate the passage of blood-borne agents into the CNS, thus protecting the delicate CNS neural cells from any harmful agents present in the blood. This property of CNS blood vessels is known as the blood–brain barrier (BBB).
The BM of CNS blood vessels is a composite of several extracellular matrix (ECM) proteins, the most abundant of which are the laminins, collagen IV, fibronectin and heparan sulphate proteoglycan (HSPG) [8][9][10]. Laminin has attracted a lot of interest because it is expressed at very high levels in the vascular BM and because, in many different cell types, it promotes differentiation and stabilization of cellular behavior [11][12][13][14][15]. In addition, laminin expression within cerebral blood vessels is dynamically altered in many different physiological and pathological conditions. For instance, in ischemic stroke, vascular BM laminin expression is reduced, commensurate with loss of BBB integrity [16][17][18][19], but in sub-clinical exposure to chronic mild hypoxia (CMH), laminin expression is actually increased, which correlates with enhanced endothelial tight junction protein expression [20][21][22][23]. These observations suggest that by understanding more about the relationship between laminin expression and BBB integrity, we might be better placed to positively impact BBB integrity when it could be clinically beneficial. The purpose of this review is to describe what is known about the outcome of genetic deletion of the different isoforms of laminin and their integrin cell surface receptors on BBB structure and function, and then to raise some outstanding questions that need to be addressed in this field.
The blood-brain barrier
Compared to blood vessels in other organs, those in the CNS are unique in having high electrical resistance and low permeability, which protects sensitive neural cells in the brain parenchyma from the potentially harmful impact of blood components [8,9,[24][25][26][27]. This low permeability of CNS blood vessels is referred to as the BBB, which is composed of several different cell types, including endothelial cells, pericytes, and the endfeet of astrocytes (Fig. 1). Recent studies have also highlighted the role of adjacent microglia in contributing to a tighter BBB under different challenging stimuli [28][29][30][31]. The BBB occurs at the level of CNS capillaries, which comprise endothelial cells attached to a vascular BM composed of different ECM proteins [24,25,32]. Pericytes form an integral part of these capillaries and are in close contact with endothelial cells and the vascular BM [33][34][35]. In addition, a vast network of astrocyte endfeet originating from within the brain parenchyma contacts the vascular BM [8,[36][37][38]. What makes CNS blood vessels so much tighter (50-100-fold) than blood vessels in peripheral organs? We now know that three main types of molecular mechanism account for these properties. First, the highly organized expression of endothelial tight junction and adherens proteins makes exceedingly tight connections between adjacent endothelial cells. These tight junction proteins (TJPs) include claudins, occludin, and junctional adhesion molecules (JAMs), which attach to the cellular actin cytoskeleton via zonula occludens proteins such as ZO-1 [39][40][41]. Adherens proteins such as VE-cadherin further act to strengthen the bonds between neighboring endothelial cells [42,43]. Second, astrocytes and pericytes positively impact BBB integrity. The ability of astrocytes to increase the tightness of blood vessels has been known for many years since the seminal work of Janzer and Raff [37], but more recently, the role of pericytes has gained more attention [33][34][35]. Third, and sometimes underestimated, is the impact of the vascular BM to which the endothelial cells and other cells attach [36].
The BBB is disrupted in many different neurological conditions, including meningitis, ischemic stroke, multiple sclerosis, and CNS tumors [44][45][46][47][48]. Accumulating evidence suggests that it also deteriorates as part of the aging process and may be an important contributory factor in the pathogenesis of vascular dementia by triggering neuronal dysfunction and neurodegeneration [49][50][51][52][53]. Further studies have shown that BBB integrity is also transiently disrupted by low oxygen levels (hypoxia) [29,54,55]. It is interesting that several of these neurological conditions are accompanied by alterations in ECM components of the vascular BM [18,[56][57][58], raising the possibility that disordered expression of ECM proteins may itself trigger altered behavior in endothelial cells, resulting in reduced BBB integrity. Based on this knowledge, it becomes clear that if we can identify factors that positively regulate BBB integrity, we might be able to therapeutically delay or attenuate the pathogenesis of many neurological diseases.
Fig. 1 The cellular composition of the blood-brain barrier (BBB). The barrier is formed by a continuum of endothelial cells which line the interior of blood vessels (brown), and which are firmly attached to a basement membrane (BM) composed of a mix of extracellular matrix (ECM) proteins (pink). Pericytes (yellow) are located within the vascular BM. Astrocyte (green) endfeet contact the vascular BM, thus connecting blood vessels to neurons within the brain parenchyma. Microglia (blue) are also in close contact with astrocytes and blood vessels.
Laminins and other vascular basement membrane extracellular matrix proteins
The vascular BM is composed of several different ECM proteins, of which the laminins, collagen IV, fibronectin and heparan sulphate proteoglycan (HSPG) are the most abundant [59][60][61][62][63]. Endothelial cells form strong adhesive attachments to the BM, but it should be remembered that this is not merely an adhesive carpet to which endothelial cells attach; its components also provide important instructive cues that regulate many aspects of cell behavior, including cell adhesion, survival, proliferation, migration and differentiation [60,64]. Cells bind to ECM proteins primarily via cell surface receptors called integrins, and some also use dystroglycan [65][66][67][68][69].
Different laminin isoforms
One of the most abundant ECM proteins found in the vascular basal lamina is laminin, or more correctly the laminin group of molecules, as there are several different laminin isoforms [61,62,70]. Laminins are heterotrimers consisting of α, β and γ subunits (Fig. 2), of which 5 α, 4 β and 3 γ subunits have so far been defined, allowing the generation of up to 20 different laminin isoforms [71]. The current nomenclature dictates that laminins are named according to their subunit make-up; for instance, laminin-111 is composed of the subunits α1, β1 and γ1.
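To make the combinatorics of this naming scheme concrete, the short sketch below (a purely illustrative helper, not part of any published analysis) enumerates the names the subunit rule can generate; only a subset of the 60 formal combinations, up to 20 isoforms, has actually been observed in vivo.

```python
from itertools import product

# Subunit families defined so far (per the text): 5 alpha, 4 beta, 3 gamma.
ALPHAS = range(1, 6)   # alpha1..alpha5
BETAS = range(1, 5)    # beta1..beta4
GAMMAS = range(1, 4)   # gamma1..gamma3

def laminin_name(alpha: int, beta: int, gamma: int) -> str:
    """Name a laminin heterotrimer from its subunit composition,
    e.g. (1, 1, 1) -> 'laminin-111' (alpha1, beta1, gamma1)."""
    return f"laminin-{alpha}{beta}{gamma}"

# All combinatorially possible names; only ~20 isoforms occur in vivo.
possible = [laminin_name(a, b, g) for a, b, g in product(ALPHAS, BETAS, GAMMAS)]
print(len(possible))            # 60 combinations in principle
print(laminin_name(4, 1, 1))    # 'laminin-411', an endothelial isoform
```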
Differential expression and functions of different laminin isoforms in the CNS
Histological studies of the adult CNS reveal that laminin staining is found exclusively in a vascular pattern, with negligible levels in the tissue parenchyma [72][73][74]. The contributions to this vascular BM laminin originate from several different cell types, each of which synthesizes different laminin isoforms. More than 20 years ago, Sixt et al. performed an elegant immunohistochemical study in an experimental autoimmune encephalomyelitis (EAE) animal model of multiple sclerosis, showing that during infiltration of inflammatory leukocytes, the two layers of the vascular BM (endothelial and parenchymal) of CNS blood vessels become separated by infiltrating leukocytes, which congregate in the gap between the two layers, in the so-called perivascular space [74]. In this system it was demonstrated that the inner endothelial layer of BM contains laminin-411 and -511, while the parenchymal layer of BM contains laminin-211 synthesized by astrocytes, and laminin-111 produced by leptomeningeal cells. They also noted that while endothelial cells synthesize both laminin-411 and -511, laminin-411 was widely expressed throughout the vascular BM of blood vessels, while expression of the 511 isoform was discontinuous. Interestingly, at sites of inflammatory cell extravasation in the EAE model, it was noted that leukocytes tend to breach the BBB at sites expressing laminin-411 but not in regions expressing the 511 isoform, suggesting permissive and inhibitory functions for these distinct laminin isoforms, respectively. Recent studies have demonstrated that pericytes express the same laminin isoforms as endothelial cells, laminin-411 and -511 [75]. Interestingly, developmental studies revealed that while laminin-411 is detected in capillaries as early as embryonic day 11 (E11) [76], laminin-511 appears much later, around 3-4 weeks after birth [77,78]. It is also notable that laminin-111, which is produced by leptomeningeal cells, does not surround all types of CNS vessel, but is restricted exclusively to arterioles and venules, and is absent from capillaries [74]. The reason for this is that laminin-111 expression is limited to leptomeningeal fibroblasts within meningeal blood vessels, which early in development are present only on the surface of the brain, but during neurodevelopment these vessels invaginate deep within the brain to make contributions to the arteriolar and venular circulation.
Fig. 2 The molecular structure of laminin. Laminins are heterotrimers consisting of α (purple), β (bright blue) and γ (grey blue) subunits, of which 5 α, 4 β and 3 γ have been so far defined. Laminins are named according to their subunit composition; for instance, laminin-111 comprises the subunits α1, β1 and γ1.
Deletion of different laminin isoforms presents unique phenotypes
As members of the laminin family are composed of three subunits, α, β and γ, the impact of deletion of any one subunit depends largely on how widely that subunit is expressed amongst different tissues and cell types [60-62, 70, 71]. This is well illustrated by global deletion of the γ1 laminin subunit which shows an embryonic lethal phenotype due to total loss of the ability to develop basement membranes and thus blood vessels [6]. Global knockouts of other laminin subunits also impact mortality but tend to affect later stages of development, primarily in the postnatal period [79,80].
Global deletion of the laminin α4 subunit
Global deletion of the α4 subunit, which contributes to one of the endothelial laminins (laminin-411), leads to disrupted vascular development, comprising vascular BM defects due to reduced synthesis of other ECM components, vessel dilation, disordered angiogenesis and reduced vessel integrity [80]. While laminin α4 mutants show increased mortality, the majority of mice survive, most likely due to compensatory increases in the other endothelial laminin, laminin-511 [81]. CNS blood vessels in wild type mice show continuous expression of the α4 subunit throughout the vascular BM but patchy, discontinuous expression of the α5 subunit [74]. Wu et al. exploited the finding that in α4 subunit-deficient mice the laminin α5 subunit becomes ubiquitously expressed throughout the BM, and showed that this correlates with a marked reduction in leukocyte infiltration across the BBB in the inflammatory EAE model [81]. These findings supported previous work from the same lab showing that leukocytes tend to breach the BBB at laminin-411-expressing sites but not where laminin-511 is present [74]. This study therefore presents an interesting paradox: while loss of laminin-411 appears to disrupt vessel stability and integrity early during development, in those mice that survive this period, the compensatory upregulation of laminin-511 actually enhances vessel integrity by reducing leukocyte extravasation.
Deletion of the laminin α2 subunit
Mice globally deficient in the laminin α2 subunit exhibit growth retardation and severe muscular dystrophic symptoms and die by 5 weeks of age [82]. In the CNS, the α2 laminin subunit is expressed predominantly by astrocytes [74]. Analysis of laminin α2 KO mice at an earlier (3-week) timepoint showed that they had defective BBB integrity, indicated by enhanced vascular leak of Evans Blue tracer, which correlated with delayed vascular maturation, indicated by enhanced expression of MECA-32, a marker of immature BBB, as well as reduced levels of VE-cadherin and the tight junction protein claudin-5 [79]. Global loss of the laminin α2 subunit also had profound effects on the behavior of two other CNS vascular cell types, astrocytes and pericytes, such that astrocytes in these mice showed hypertrophic endfeet which expressed higher levels of GFAP and lacked polarized aquaporin-4 channels. At the same time, pericyte coverage of blood vessels was also substantially reduced [79].
Astrocyte-specific laminin deletion
Chen et al. targeted the role of astrocyte laminins by deleting the laminin γ1 subunit specifically in astrocytes, which effectively removes all laminin expression [83]. This astrocyte-specific knockout mouse strain had a milder phenotype than the global or brain-specific α2 laminin subunit knockouts in that the mice survived beyond the 4-week timepoint, but after 2-3 months of age, transgenic mice presented with spontaneous intracerebral hemorrhage (ICH), which became more pronounced with age, such that by 6 months of age, more than 60% of mice had ICH. Most hemorrhages occurred in small arterioles in deep brain regions including the basal ganglia, thalamus and hypothalamus. Closer analysis revealed that while in wild type mice astrocyte endfeet co-localized strongly with α-SMA-positive smooth muscle cells (SMCs) in small arterioles, in the astrocyte-specific laminin KO strain the number of α-SMA-positive cells was markedly reduced. Based on this, the authors concluded that lack of astrocyte laminin disrupts the function and proliferation of vascular SMCs, resulting in weakening of the arteriolar wall and eventually vessel rupture.
In a complementary study, the authors used the same mouse strain to show that astrocyte-specific deletion of laminins resulted in BBB breakdown, most likely as a consequence of altering the behavior of pericytes from a BBB-stabilizing to a BBB-disruptive phenotype [84]. In their model, they suggest that astrocyte laminin promotes the BBB-stabilizing phenotype via signaling through the pericyte α2β1 integrin, and that loss of this signaling leads to a contractile, BBB-disruptive pericyte phenotype. They also showed that loss of astrocyte laminin led to reduced AQP-4 expression by astrocyte endfeet, as well as reduced levels of endothelial tight junction protein expression. These last two events are consistent with the phenotype of the brain-specific laminin α2 subunit knockout described above [79].
Pericyte-specific laminin deletion
More recently, laminins have also been deleted specifically from pericytes by crossing PDGFRβ-Cre mice with floxed laminin γ1 subunit mice [85]. These transgenic mice typically die by 4 months of age due to a severe muscular dystrophy (MD) phenotype. In addition to the MD phenotype, a small percentage (approximately 11%) of progeny also displayed a hydrocephalus phenotype, which became manifest 2 weeks after birth. Interestingly, all mice displaying hydrocephalus also showed BBB disruption, which was associated with reduced expression of the tight junction protein ZO-1 and vascular AQP-4 expression, in addition to dramatically reduced pericyte coverage. Of note, when these transgenic mice were crossed from a C57BL6-FVB mixed background onto a pure C57BL6 background, they failed to develop hydrocephalus, and while BBB integrity was normal at 4 months of age, by 8 months mild BBB disruption became apparent [75]. To examine whether this pericyte-specific laminin KO strain shows worse pathology in disease models, the authors compared responses with wild type mice in a collagenase-induced ICH model. This revealed that the KO strain showed greater pathology, as shown by increased hematoma size, worse neurological function, reduced BBB integrity and increased neuronal death [86].
Deletion of the laminin α5 subunit specifically in endothelial cells or pericytes reveals opposite effects on BBB integrity
As global knockout of the laminin α5 subunit shows an embryonic lethal phenotype, it has been challenging to study the role of this subunit in BBB regulation [87]. Within the BBB, both endothelial cells [74] and pericytes [85] have been shown to express this laminin subunit, making it important to determine the relative contributions of each cell type's laminin α5 to BBB stability. To address this question, the Yao lab recently generated distinct transgenic mouse strains in which laminin α5 was specifically deleted in endothelial cells [88] or pericytes [89]. In both strains of knockout mice, under homeostatic control conditions, loss of laminin α5 had no obvious effect on BBB integrity. In keeping with other studies demonstrating BBB-enhancing effects of laminins, compared with wild type littermates the EC-laminin α5-KO strain displayed increased BBB permeability in an ICH model, correlating with greater injury volume, leukocyte infiltration and gliosis [88]. In contrast, in an ischemic stroke model, the PC-laminin α5-KO strain showed milder BBB disruption compared to wild type controls, which correlated with reduced infarct volume, reduced leukocyte infiltration and an improved neurological score [89]. These findings imply that while endothelial laminin α5 positively contributes to BBB integrity, pericyte laminin α5 surprisingly appears to have detrimental effects on BBB stability, at least in these separate disease models under these conditions. When all outcomes of the various laminin knockout strains are considered, some common themes stand out. The most important is that, generally speaking, loss of laminin, whether global or cell-type specific in any of the BBB cell types examined (endothelial cells, astrocytes or pericytes), results in reduced levels of BBB integrity [79,[83][84][85][86]88]. In the same vein, reduced vascular integrity in all these knockouts is associated with disrupted vascular BM composition, meaning that loss of laminin also negatively impacts the synthesis of other ECM components within the BM. At the same time, reduced expression of endothelial tight junction proteins is also common, as is disordered AQP-4 clustering on astrocyte endfeet, reduced pericyte coverage, and, in the case of astrocyte-specific laminin deletion, loss of arteriolar SMCs. In conclusion, deletion of most laminin isoforms has deleterious effects on BBB integrity and function. The two notable exceptions to this rule are, first, the PC-laminin α5-KO strain, which displays reduced BBB breakdown in an ischemic stroke model, and second, the global α4 laminin subunit KO, because despite showing reduced vascular integrity and disordered angiogenesis early in life, the work of Wu et al. demonstrates that in global α4 laminin subunit KO mice that survive the postnatal period, loss of the α4 laminin subunit leads to compensatory ubiquitous expression of the laminin α5 subunit on all blood vessels, which closely correlates with reduced infiltration of inflammatory leukocytes in the EAE model [81].
Integrins
Cells interact with the ECM by way of cell surface receptors called integrins [65,90]. Once thought of as merely adhesion proteins that cells bind to within tissues, over the last 30 years it has become clear that ECM-integrin interactions play a key instructive role in directing many aspects of cell behavior including cell survival, proliferation, migration, and differentiation [65,[90][91][92][93]. Almost all cells in the body express integrins (red blood cells are a rare exception) and the different cell types making up blood vessels in the CNS are no exception. Broadly speaking, integrins achieve their effects by two means (Fig. 3). First, they provide a strong physical connection between the ECM proteins and the actin cytoskeleton. Specifically, the cytoplasmic domains of integrin β subunits interact with cytoplasmic adaptor proteins such as talin, vinculin and α-actinin to form strong bonds with the actin cytoskeleton. Second, by interacting with different cytoplasmic adaptor proteins, they transduce bi-directional signals (outside-in and inside-out) across the cell membrane [65,91,94]. In this way, integrin cytoplasmic domains interact with signaling proteins such as focal adhesion kinase (FAK), or integrin-linked kinase (ILK) to stimulate additional intracellular signaling cascades.
Differential expression in cerebral blood vessels
It is well established that ECM-integrin interactions play a major role in directing blood vessel formation, maturation, and homeostatic function throughout life. More than 20 years ago, several studies showed that in the adult CNS, β1 integrin expression occurs at the highest levels on blood vessels, with barely detectable signal within the brain parenchyma, suggesting an important role for this class of molecules in the regulation of blood vessel behavior [72,[101][102][103]. Furthermore, we described a developmental switch in the expression of ECM-integrin proteins during blood vessel development in the CNS [72]. Early in the postnatal period when angiogenesis is ongoing, angiogenic blood vessels express high levels of fibronectin and the fibronectin-binding receptors α5β1 and α4β1 integrins, but maturation of the CNS is accompanied by downregulation of these molecules and upregulation of laminin and the laminin receptor α6β1 integrin as well as the collagen receptor α1β1 integrin. Blood vessels in the normal adult CNS express significant levels of the integrins α1β1 (collagen receptor) and α3β1, α6β1 and α6β4 (laminin receptors), in keeping with laminin and collagen IV being the dominant ECM proteins expressed in the vascular BM [101,[103][104][105]. By contrast to the other integrins expressed, α6β4 integrin shows an unusual expression pattern on CNS blood vessels, being restricted to arterioles [106]. The exception to this rule is that during acute neuroinflammation, such as that seen in the EAE model, α6β4 integrin is induced on other types of blood vessel including capillaries, prompting us to suggest that this induction may be part of a protective adaptive response to enhance BBB integrity in times of extreme insult [107]. In addition, along with several others, we have also shown that during the vascular remodeling that occurs during hypoxia or ischemia, angiogenic CNS blood vessels show marked upregulation of fibronectin and of two fibronectin receptors, α5β1 and αvβ3 integrins [18,56,108,109].
Fig. 3 Schematic of laminin-integrin interactions. Laminins in the ECM bind to their cognate cell surface receptors, integrins, which are transmembrane proteins consisting of non-covalently linked αβ heterodimers. Integrins perform two vital functions: (i) the cytoplasmic domain of β subunits binds to the cytoplasmic adaptor proteins talin, vinculin and α-actinin to form a transmembrane link between the ECM and the actin cytoskeleton, and (ii) the cytoplasmic domain of β subunits also binds several different cytoplasmic signaling proteins, including focal adhesion kinase (FAK) and integrin-linked kinase (ILK), to trigger intracellular signaling cascades.
Deletion of endothelial integrins reveals different functions
α6β4: a unique integrin with several functions
α6β4 is an unusual integrin because in all organs examined including the CNS, its expression is limited to endothelial cells lining arterioles [106]. Endothelial-specific deletion of this integrin resulted in total loss of vascular β4 integrin expression, but β4-EC-KO mice are viable and fertile and show no obvious defects in vascular development or BBB integrity [106]. However, when challenged by chronic mild hypoxia (CMH, 8% O2 for periods up to 14 days), in contrast to wild type mice, which mount a strong arteriogenic remodeling response resulting in increased arteriolar density, β4-EC-KO mice showed reduced arteriolar remodeling that correlated with attenuated transforming growth factor (TGF)-β signaling. Of note, in the epidermis, α6β4 integrin plays an essential role in providing structural support to protect skin cells from high shear stress, and deletion of this integrin leads to epithelial detachment in mice [110] and the skin blistering condition junctional epidermolysis bullosa in humans [111,112]. By comparison, as arterioles are exposed to the highest levels of shear stress, it seems likely that α6β4 integrin is induced at this location in order to confer structural support for the endothelial cells exposed to turbulent forces. In addition to playing this protective role, our observations in β4-EC-KO mice also imply that α6β4 integrin may transduce changes in arteriolar shear stress to facilitate arteriolar remodeling under hypoxic conditions, via interactions with the TGF-β signaling pathway [106]. While in the normal CNS, α6β4 integrin expression is restricted to arterioles [106], in the neuroinflammatory CNS such as that seen in EAE or in transgenic mice overexpressing the pro-inflammatory cytokines interleukin (IL)-6 or interferon-α in the CNS, α6β4 integrin expression is also induced on other types of blood vessels including capillaries [73,107]. Based on these observations, we wondered if endothelial α6β4 integrin induction within brain capillaries is part of an endogenous protective response to stabilize BBB integrity under inflammatory conditions. This was tested by comparing EAE severity in β4-EC-KO and wild type littermate mice, which showed that β4-EC-KO mice had worse clinical disease that was underpinned by greater levels of BBB breakdown, leukocyte infiltration, and loss of endothelial tight junction protein expression [107]. Thus, our work defines two distinct roles for the endothelial α6β4 integrin in cerebral blood vessels: (i) to promote arteriogenic remodeling under hypoxic conditions, and (ii) to enhance vascular integrity under neuroinflammatory conditions.
Fig. 4 The "green light-red light" model explaining ECM regulation of endothelial cell behavior at the BBB. According to this model, fibronectin-α5β1 integrin and laminin-α6β1 integrin interactions play opposing roles in directing endothelial behavior within CNS blood vessels. Fibronectin-α5β1 integrin-mediated signaling (green arrow) drives vascular remodeling behaviors including endothelial proliferation and migration, while in contrast, laminin-α6β1 integrin-mediated signaling (red arrow) promotes endothelial differentiation, resulting in BBB maturation and stabilization.
Reduced expression of endothelial β1 integrins leads to increased BBB leak
It is technically challenging to examine the impact of β1 integrin deletion at the BBB because β1 integrin deletion in endothelial cells results in embryonic lethality, highlighting the essential role of these molecules in vascular development [113,114]. However, using an inducible Cre-Lox approach, the del Zoppo lab showed that a 50% reduction in β1 integrin expression on cerebral vessels led to commensurate decreases in vascular expression of the cognate ECM ligands laminin and collagen IV, as well as a marked decrease in BBB integrity, as indicated by increased IgG leakage [115]. This work supported previous findings from the same group showing that intracerebral injection of function-blocking anti-β1 integrin antibodies resulted in greater cerebrovascular leak [116]. Taken together, these studies highlight an important BBB-stabilizing role for β1 integrins in CNS blood vessels.
α5β1 integrin drives cerebral angiogenesis and lack of this integrin leads to delayed vascular remodeling
Building on our observation that cerebral endothelial cells show a developmental switch in their use of β1 integrins, from α5β1 and α4β1 during developmental angiogenesis to α1β1 and α6β1 in mature blood vessels [72], we went on to demonstrate that angiogenic vessels in the adult CNS also strongly upregulate fibronectin and the α5β1 integrin [18,23,117,118]. As the fibronectin-α5β1 integrin axis has been shown to be important for driving angiogenesis in tumors [119,120], we then tested in the chronic mild hypoxia (CMH) model whether deletion of endothelial α5β1 integrin impacts hypoxia-driven angiogenesis. This showed that α5-EC-KO mice displayed an attenuated angiogenic response which correlated with delayed endothelial proliferation and CNS vascularization [121].
As CNS blood vessels also upregulate fibronectin and the α5β1 integrin under inflammatory conditions [117], in a separate project we evaluated how absence of α5β1 integrin impacts vascular remodeling and integrity, and clinical disease in the EAE neuroinflammatory model. Interestingly, this revealed that α5-EC-KO mice showed earlier onset and faster progression of clinical EAE disease, though with time, peak disease and chronic disease severity were no different from wild type littermates [122]. Concordant with the accelerated clinical disease, at this early stage of EAE progression, α5-EC-KO mice displayed greater BBB breakdown and enhanced leukocyte infiltration. Consistent with our previous findings [121], α5-EC-KO mice also showed reduced endothelial proliferation, culminating in reduced levels of vascularity as compared with wild type littermates. These findings suggest that α5β1 integrin-mediated vascular remodeling represents an important vascular repair mechanism that counterbalances vascular disruption at an early stage of EAE development.
In a different disease model, the Bix lab recently reported that α5-EC-KO mice are highly resistant to experimental ischemic stroke, displaying smaller ischemic infarcts and reduced levels of BBB breakdown compared to wild type littermates [48]. This suggests that in this model, absence of α5β1 integrin leads to more stable blood vessels that are less likely to undergo vascular remodeling and thus transient BBB disruption. So how can absence of endothelial α5β1 integrin be harmful in EAE but protective in ischemic stroke? We believe that the answer may lie in the fact that stroke is a much more acute and severe insult, while in EAE, the insult is mild and more chronic. In stroke, the vascular disruption is so fast and severe that any repair that α5β1 integrin promotes is quickly overwhelmed, and so silencing α5β1 integrin function acts to ameliorate the vascular breakdown process. By contrast, the milder, more chronic vascular challenge seen in EAE can be opposed by the vascular remodeling events promoted by α5β1 integrin, resulting in faster vascular repair and vessel stabilization. Interestingly, recent studies have shown that endothelial α5β1 integrin is also upregulated on cerebral blood vessels in the bilateral carotid artery stenosis (BCAS) animal model of vascular dementia, raising the possibility that deletion or inhibition of this integrin may help to stabilize BBB integrity and protect against neurological sequelae [57], as demonstrated in the stroke model [48].
Conclusions and future directions
An intact BBB is an essential prerequisite for the maintenance of good cerebral health, and yet BBB disruption occurs during many neurological conditions [44][45][46][47][48]. Recent evidence indicates that it also slowly deteriorates as part of the normal aging process [49][50][51][52][53]. Overwhelming evidence supports the concept that ECM-integrin interactions are not only important for the development and maturation of cerebral blood vessels; they are also essential for the maintenance of vascular stability and BBB integrity under normal conditions. It is no coincidence that the laminin family of molecules is expressed at such high levels in the vascular BM of CNS blood vessels, because genetic deletions of specific laminin isoforms in specific BBB cell types all confirm the importance of this class of molecules in conferring BBB properties on the endothelial cells, pericytes and astrocytes making up the BBB. Indeed, most laminin KOs show common phenotypes that include reduced vascular integrity, disrupted vascular BM composition of other ECM components, reduced expression of endothelial tight junction proteins, disordered AQP-4 clustering on astrocyte endfeet and reduced pericyte coverage [79,[83][84][85][86]88]. While genetic deletion studies examining roles of specific integrins at the BBB currently lag behind the laminin field, recent work has demonstrated the importance of the β1 class of integrins in maintaining a tight BBB [115], as well as a key role for α5β1 integrin in promoting endothelial proliferation and cerebral angiogenesis under mild hypoxia and chronic inflammatory disease conditions [121,122]. In addition, genetic studies have defined two separate roles for the α6β4 integrin, in promoting arteriogenesis and in conferring enhanced BBB stability under chronic inflammatory conditions [106,107]. Based on current evidence, we propose a simple "green light-red light" model to explain how the fibronectin-α5β1 integrin and laminin-α6β1 integrin axes play important but opposing roles in directing endothelial behavior within CNS blood vessels (Fig. 4). In this model, fibronectin, via its interactions with α5β1 integrin (green arrow), drives endothelial proliferation and migration during vascular remodeling scenarios including hypoxia, ischemia and inflammation. By contrast, the laminins, via interactions with α6β1 integrin (red arrow), promote endothelial differentiation, resulting in maturation and stabilization of the BBB.
In future studies it will be interesting to address several outstanding questions. First, as both α4 and α5 laminin subunits are expressed by endothelial cells and pericytes [74,85], and deletion of the laminin α5 subunit appears to have opposite effects on BBB integrity depending on which cell type it is removed from [88,89], it will be important to confirm these antagonistic roles by comparing both knockout strains in the same disease models. Second, it will be necessary to define which of the specific β1 integrins expressed at the BBB are responsible for conferring high vascular integrity; do they all contribute, or does one, e.g., α3β1 or α6β1, play the major role? Third, it should be determined whether enhanced expression of specific β1 integrins (e.g., α6β1) can reinforce BBB integrity, and how this impacts vascular integrity and clinical progression in different animal disease models of hypoxia, ischemia, and inflammatory demyelinating disease. Fourth, the contribution of the collagen IV-binding integrin α1β1 to vascular integrity, angiogenesis and cellular phenotype remains to be defined. Fifth, it will be interesting to determine whether forced expression of the angiogenic α5β1 integrin in remodeling endothelial cells can amplify new vessel growth at the border of an ischemic insult. | 7,722.4 | 2022-06-11T00:00:00.000 | [
"Biology"
] |
Influence of Scanning Strategies on Processing of Aluminum Alloy EN AW 2618 Using Selective Laser Melting
This paper deals with various selective laser melting (SLM) processing strategies for aluminum 2618 powder in order to obtain material densities and properties close to conventionally produced, high-strength 2618 alloy. To evaluate the influence of laser scanning strategies on the resulting porosity and mechanical properties, a series of experiments was performed. Three types of samples were used: single-track welds, bulk samples and samples for tensile testing. Single-track welds were used to find the appropriate processing parameters for achieving continuous and well-shaped welds. The bulk samples were built with different scanning strategies with the aim of reaching a low relative porosity of the material. The combination of the chessboard strategy with a 2 × 2 mm field size fabricated in an out-in spiral order was found to eliminate major lack-of-fusion defects. However, small cracks in the material structure were found over the complete range of tested parameters. The decisive criterion was the elimination of the small cracks that drastically reduced mechanical properties. Reducing the thermal gradient using support structures or fabrication at elevated temperatures appears to be a promising approach to eliminating the cracks. Mechanical properties of samples produced by SLM were compared with the properties of extruded material. The results showed that the SLM-processed 2618 alloy reached only one half of the yield strength and tensile strength of the extruded material, mainly due to the occurrence of small cracks in the structure of the built material.
Introduction
Selective laser melting (SLM) is a progressive method of additive manufacturing, mainly used for the rapid production of prototypes and lightweight components with complex geometry. For the latter, alloys with a good strength-to-weight ratio, like high-strength aluminum (series 2000 and 7000), are best suited [1]. These alloys are usually considered difficult to weld. Due to its ability to maintain mechanical properties at temperatures of up to 300 °C, the aluminum alloy EN AW 2618 is typically … An SLM 280 HL machine from SLM Solutions with a maximum laser power of 400 W was used. Experiments comprised single-track welds and volume samples (cubes 5 × 5 × 5 mm), all with a layer thickness of 50 µm. A wide range of processing parameters (laser power, laser speed, and hatch distance) was studied. For the cube tests, a relative density above 99% was achieved with LP 200 W, LS 200 mm/s and a hatch distance (HD) of 110 µm. These results correspond with the other studies of Al-Cu alloys mentioned above. However, low surface roughness was observed with the parameters LP 400 W and LS 1400 mm/s. In all of the above-mentioned articles focused on aluminum alloy EN AW 2618 [17,18,20], the authors describe the presence of a large number of cracks in the samples.
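For orientation, these parameters can be summarized by the volumetric energy density E = P/(v·h·t), a metric widely used in SLM parameter studies, although it is not explicitly reported in the passage above; the snippet below is a minimal sketch under that assumption.

```python
def volumetric_energy_density(power_w: float, speed_mm_s: float,
                              hatch_mm: float, layer_mm: float) -> float:
    """E = P / (v * h * t) in J/mm^3, a common SLM process metric."""
    return power_w / (speed_mm_s * hatch_mm * layer_mm)

# Parameters reported for >99% relative density: 200 W, 200 mm/s,
# 110 um hatch distance, 50 um layer thickness.
E = volumetric_energy_density(200.0, 200.0, 0.110, 0.050)
print(f"{E:.1f} J/mm^3")  # ~181.8 J/mm^3
```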
The aim of this study is a detailed examination of the process parameter window found in previous studies [24,25]. Larger cube samples are built and evaluated to explore the influence of different scanning strategies and other SLM process parameters on relative density and mechanical properties, which have not yet been investigated.
Powder Characterization
The metal powder used in all experiments was fabricated by an inert gas atomization process and supplied by TLS Technik GmbH. The particle distribution specified by the vendor was 20-63 µm. The powder was not additionally sieved and was used as received from the vendor. Several analyses were made of the powder samples.
A chemical analysis was performed with the inductively coupled plasma-optical emission spectrometer iCAP 6500 ICP-OES (Thermo Fisher Scientific, Cambridge, UK); the results are given in Table 1. The chemical composition of the analyzed powder corresponds to the EN 573-3 standard [26] for the Al-Cu alloy EN AW 2618. To determine the morphology of the particles, a scanning electron microscopy (SEM) analysis using the Zeiss Ultra-Plus 50 analytical system was carried out. Most particles have a spherical shape satisfactory for SLM processing (Figure 1a). The powder size distribution was measured by a Horiba LA-960 laser particle size analyzer; the results are shown in the distribution chart (Figure 1b). The powder meets the distribution specified by the supplier. The particle mean size is 40.1 µm and the median size is 39.1 µm. A total of 90% of particles have a size between 22 and 59 µm; therefore, a layer thickness of 50 µm was used.
Fabrication of Samples
All samples were fabricated on the SLM 280 HL machine (SLM Solutions Group AG, Lübeck, Germany) equipped with a 400 W ytterbium fiber laser YLR-400-WC-Y11 (IPG Photonics, Oxford, MA, USA) operating in continuous-wave mode. The laser beam was focused to a diameter of 82 µm and had a Gaussian profile. Nitrogen was used as the protective atmosphere. The overpressure in the building chamber during the build process was 10-12 mbar and the oxygen level was kept below 0.2%. The temperature of the building platform was 80 °C.
To achieve the best mechanical properties of EN AW 2618 in the SLM state, several processing strategies were examined. The evaluation process consisted of three steps: single-track welds, volumetric cube samples, and tensile testing.
The aim of the single-track welds is to find the areas with a suitable combination of main process parameters that would be prospective for the building of low-porosity volumetric samples. Because of inconsistent results in the initial study [24], the single-track experiments were performed again for better accuracy. To ensure uniformity of the coated powder layer, a manual recoating device was used. Images of individual tracks from the top view were made for the visual evaluation of the continuity and uniformity of the weld tracks. In the case of a bumpy surface or a balling effect, the evaluation of the track was penalized. A cross-section of the samples was also analyzed and measurements of the main dimensions were made. The main evaluation was based on the weld height (h) to weld width (w) ratio combined with the height above the substrate (a) to weld height ratio. Symmetrical welds (h/w ~ 1) with a symmetrical position on the substrate (a/h ~ 0.5) and an optimal height above the material (a/50 > 0.5) were preferred. A wide range of process parameters was evaluated: LP varied from 100 W to 400 W in 50 W steps and LS from 50 mm/s to 1700 mm/s in 50 mm/s steps. Tracks with a lower energy input (LP below 300 W and LS above 1200 mm/s) were excluded from the experiments.
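The stated geometric preferences translate directly into a simple screening function; the sketch below is illustrative, and the tolerance value is our assumption rather than one taken from the study.

```python
def weld_track_ok(w_um: float, h_um: float, a_um: float,
                  layer_um: float = 50.0, tol: float = 0.25) -> bool:
    """Screen a single-track weld cross-section against the study's stated
    preferences: symmetric shape (h/w ~ 1), symmetric position on the
    substrate (a/h ~ 0.5), and sufficient height above the substrate
    relative to the layer thickness (a/layer > 0.5).
    The tolerance 'tol' is illustrative, not a value from the paper."""
    shape_ok = abs(h_um / w_um - 1.0) <= tol
    position_ok = abs(a_um / h_um - 0.5) <= tol
    height_ok = a_um / layer_um > 0.5
    return shape_ok and position_ok and height_ok

print(weld_track_ok(w_um=120.0, h_um=110.0, a_um=55.0))  # True
print(weld_track_ok(w_um=300.0, h_um=90.0, a_um=20.0))   # False: too wide and flat
```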
Volumetric Cube Samples
The general approach of SLM is layer-based (Figure 2a), and rotation of the scanning pattern between layers is commonly used to minimize material porosity. Thus, for all experiments with volumetric samples, a rotation angle of 73° was used.
The meander strategy is the basic strategy generally used for hatching. Neighboring vectors of the laser path are scanned with a constant hatch distance in opposite directions over the entire layer. The scanning of larger areas with the meander strategy can induce higher residual stresses due to the high temperature difference between the opposite ends of the scanning vectors (Figure 2b).
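A meander pattern like this is straightforward to generate programmatically; the following sketch is a minimal illustration of alternating hatch vectors over a square layer, with coordinates and conventions chosen by us rather than taken from the machine's build format.

```python
def meander_vectors(size_mm: float, hatch_mm: float):
    """Generate meander hatch vectors over a square layer: parallel lines
    at a constant hatch distance, scanned in alternating directions."""
    n_tracks = int(size_mm // hatch_mm) + 1
    vectors = []
    for i in range(n_tracks):
        y = i * hatch_mm
        if i % 2 == 0:
            vectors.append(((0.0, y), (size_mm, y)))   # scan left -> right
        else:
            vectors.append(((size_mm, y), (0.0, y)))   # scan right -> left
    return vectors

# A 13 mm layer at a 110 um hatch distance needs ~119 tracks; opposite
# ends of neighbouring vectors show the largest temperature difference.
print(len(meander_vectors(13.0, 0.110)))  # 119
```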
Splitting the entire area into numerous local areas combined with a different scanning order is considered beneficial for eliminating stresses and the overheating of local areas. This approach is known as island scanning or the chessboard strategy. In this strategy (Figure 2c), the cross-section area of the sample is divided into several sub-areas, where each sub-area (one square of a chessboard) is scanned with the meander strategy. The hatching angle between the "black" and "white" fields is 90° (Figure 2c).
In the case that different process parameters within a single layer are beneficial for the component processing, the hull and core strategy can be used. In this strategy the sample is divided into two areas. On the sample's outer contour, there is the hull area and in the center of the sample is the core area. For each of these areas different contour and hatch parameters can be set. This allows even better distribution of energy in one layer of the sample.
Also, a double layer exposure can be used to eliminate defects. This approach is referred to as re-melting or pre-sintering, according to the energy used in the first exposure. Since the effect of re-melting has already been described in a separate article [27], the effect of pre-sintering is studied here. The first exposure is performed with lower laser power to preheat the layer while maintaining a uniform powder height. It is expected that this would be beneficial for the second exposure at full power, where it could result in lower temperature gradients, and the energy entering the material could be higher while maintaining the same scanning speed.
The samples were mechanically ground on emery paper 1-2 mm below the top surface and, if necessary, polished with 3 µm and 1 µm diamond slurry. Fuss reagent was used as an etchant to differentiate grains and phases. Images of the polished surface were captured by the OLYMPUS SZX7 (Olympus Corporation, Tokyo, Japan). Images were then analyzed in the software ImageJ (1.51 g, National Institutes of Health, MD, USA), where a threshold function was used to determine the relative porosity of the samples.
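The ImageJ threshold step can be mirrored in a few lines of NumPy; the sketch below assumes an 8-bit grayscale cross-section in which pores image darker than the polished bulk, with the threshold value left to the analyst.

```python
import numpy as np

def relative_porosity(gray: np.ndarray, threshold: int = 100) -> float:
    """Fraction of pixels classified as pores in an 8-bit grayscale
    cross-section image, mimicking ImageJ's threshold function.
    Assumes pores appear darker than the polished bulk material."""
    pores = gray < threshold
    return pores.sum() / gray.size

# Toy example: a 100x100 'image' with a dark 10x10 pore region.
img = np.full((100, 100), 200, dtype=np.uint8)
img[40:50, 40:50] = 30
print(f"porosity = {relative_porosity(img):.2%}")  # 1.00%
```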
Tensile Testing
The combination of parameters evaluated as the best for each of the tested processing strategies was used for the fabrication of larger samples for the testing of mechanical properties. SLM billets of prismatic shape with a square cross-section (13 × 13 mm) and a length of 83 mm were fabricated. The axes of the SLM billets were parallel with the plane of the building platform, i.e., the loading axis during the tensile test was perpendicular to the building direction. Cylindrical testing samples with a nominal diameter of 8 mm and a gauge length of 40 mm (according to DIN 50125) were machined out of the SLM-processed material.
Tensile tests were made on a Zwick Z250 testing machine with a loading speed of 2 mm/s at room temperature. For hardness measurement the HV 0.3 device LECO LM 274 AT (LECO Corporation, Saint Joseph, MI, USA) was used. For each tested strategy a set of three samples was fabricated, machined and tested. Average values and ranges of yield strength (YS) and ultimate tensile strength (UTS) were evaluated unless otherwise specified.
For comparison, tensile tests of standard wrought material (supplier Strojmetal Aluminium Forging s.r.o., Kamenice, Czech Republic) in states without heat treatment and with heat treatment T6 were performed.
Single Track Welds
Single-track welds were used for the evaluation of a wide range of processing parameters (Figure 3). Because there were only two laser-related parameter variables, laser power and laser speed, the influence of other parameters was excluded or minimized. Weld tracks were sorted into three groups according to the chosen criteria (Figure 4). Group 1 welds are characterized as too deep and prone to cracking. In addition, deep welds can easily cause keyhole pores. Thus, this type of weld is not considered optimal for the fabrication of volumetric samples with low porosity. Group 2 welds are considered too wide, with low depth and low height above the substrate, which could result in excessive re-melting. Group 3 welds are those with a good height-to-width ratio suitable for producing low-porosity material. Figure 5 is complementary to Figure 4 and shows representative samples of weld tracks for each group, with top view, cross-section, measured parameters and evaluation comments. Two promising processing windows with an optimal track shape were found. The first one is in the area of lower LS (100-400 mm/s) and LP (200-250 W). The second one is in the area of higher LS (1200-1500 mm/s) and higher LP (350-400 W). These results correspond to the results of the cube samples within the initial study [24,25]. In studies [17,19], by contrast, the optimal values for aluminum alloys 2219 and 2024 were found only in the range of low scanning speeds, below 200 mm/s.
Minimization of the influence of other parameters has certain advantages and disadvantages. From the overview of weld track proportions and continuity, the process windows were found. Thus, the hatch distance could be set according to the desired overlap of the weld tracks. The optimum overlap was considered to be 60%, which resulted in the use of a 110 µm hatch distance for the Group 3 welds with lower speeds and a 65 µm hatch distance for the Group 3 welds with higher speeds.
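The underlying arithmetic is HD = w·(1 − overlap); working backwards from the reported hatch distances at 60% overlap implies track widths of roughly 275 µm and 163 µm, which is our inference rather than a value quoted in the text.

```python
def hatch_distance(track_width_um: float, overlap: float) -> float:
    """Hatch distance for a desired track overlap: HD = w * (1 - overlap)."""
    return track_width_um * (1.0 - overlap)

# Working backwards from the reported hatch distances at 60% overlap,
# the implied weld-track widths are ~275 um and ~163 um (our inference).
print(f"{hatch_distance(275.0, 0.60):.1f}")   # 110.0 um (low-speed Group 3 window)
print(f"{hatch_distance(162.5, 0.60):.1f}")   # 65.0 um (high-speed Group 3 window)
```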
The disadvantage of the single-track weld evaluation is that for multiple tracks arranged side-by-side, the thermal situation may be different. For example, the measured parameters of the weld tracks may change due to a higher temperature during the build of volumetric samples.
A rapid change in the depth of the melt pool was observed (Figure 3) at higher laser power (300-400 W) and laser speeds between 200-400 mm/s. This may be due to a change in the absorptivity of the material [28], where more energy is absorbed by the melt pool within this area of the process parameters. This behavior during processing is undesirable and, in connection with the rise of temperature during fabrication of the volumetric samples, could cause a major shift of the optimal processing window far from the expected values.
Meander Strategy
A wide range of processing parameters using the meander strategy was examined within the initial study [24]. The results showed a relative density of over 99% for the samples with a scanning speed of 200 mm/s and a laser power of 200 W. However, only small cube samples (5 × 5 × 5 mm) with the meander strategy were examined. Therefore, the aim of the first volumetric test was to evaluate the influence of sample size on porosity. In this test, the cube edge length varied from 5 mm to 13 mm. All samples within this test were fabricated with the same process parameters, those with the best result in the initial study [24] (LP = 200 W, LS = 200 mm/s and HD = 110 µm). The porosity of the samples rapidly increased with increasing cube size (Figure 6).
This implies that the temperature distribution during sample fabrication changes. With elongation of the specimen edge, the time between the scanning of neighbouring tracks is increased and the temperature drop of the track is higher; thus, the supplied energy is probably insufficient in comparison to the small sample. It is apparent that the meander strategy is not optimal for samples with a larger volume. According to this finding, all following experiments were made with a cube size of 13 × 13 × 5 mm. Mostly irregular-shaped pores, probably induced by hot cracking, were observed (Figure 6). However, the area that was scanned first was without pores.
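This cooling argument can be quantified with a back-of-the-envelope estimate: the return time between neighbouring meander tracks scales with edge length divided by scanning speed. The sketch below uses the test's own parameters and neglects turnaround delays.

```python
def inter_track_time_ms(edge_mm: float, speed_mm_s: float = 200.0) -> float:
    """Approximate time between scanning neighbouring meander tracks:
    one track length divided by scan speed (turnaround time neglected)."""
    return edge_mm / speed_mm_s * 1000.0

for edge in (5, 9, 13):
    print(f"{edge} mm cube: ~{inter_track_time_ms(edge):.0f} ms between tracks")
# 5 mm: ~25 ms vs 13 mm: ~65 ms -- over twice as long for the track to cool
```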
Chessboard Strategy
The influence of several process parameters within the chessboard strategy has been investigated. Firstly, the standard setup of the chessboard was used with LP variation between 190 and 200 W while LS varied in the range of 100-300 mm/s. Secondly, the scanning order of individual fields was changed, and lastly the influence of the chessboard field size, which varied from 5 × 5 to 1 × 1 mm, was evaluated.
Results of the first test show bands with a high number of defects alternating with bands without a major occurrence of defects (Figure 7a).
The influence of different scanning-order settings on the porosity value and defect distribution is shown in Figure 8b. In general, the measured porosity of samples with the field-based setting was 2% greater than with the out-in spiral setting, and up to 75% of the irregularly shaped pores in the sample were located within the "black" fields, i.e., in the area that is scanned last. With the out-in spiral setting, the irregular pores were smaller in size and located near the center of the sample, again the area which is scanned last.
With the chosen out-in spiral scanning order, the last part of the test was conducted. It was expected that a smaller chessboard field size would give a better heat distribution over the layer, which could be beneficial in minimizing the number of defects. The lowest porosity values were reached with a field size of 2 mm, while most defects were localized in the center of the sample (Figure 9).
Hull and Core Strategy
Because the results of the chessboard field-size test (Figure 8b) showed an area of about 3 mm in thickness around the sample without major defects, the hull thickness for this test was set to 3 mm. The aim was thus to preserve these conditions in the hull area and optimize the core-area parameters only. The hull parameters remained the same as in the best chessboard experiments (LP = 200 W and LS = 200 mm/s). The laser speed for the core area was set at a constant value of 100 mm/s, owing to the known poor results for 200 mm/s and the promising porosity values achieved in previous tests at lower speeds. The LP varied from 100 W to 400 W.
As can be seen from Figure 10, the best result was achieved with LP = 280 W. Samples below this value showed hot-cracking defects, and those above it showed increased gas porosity. The relative density of the best sample was 99.62%. However, these samples showed cracks mostly on the border between the hull and core areas (Figure 5a).
Pre-Sintering Strategy
For the AlSi10Mg alloy, Aboulkhair et al. [5] achieved their best relative-density results with the pre-sintering strategy; it was therefore chosen here to explore whether it has a positive effect on the relative density of the EN AW 2618 alloy. Within this strategy, every layer is scanned twice before the next layer of powder is applied. The first scan (pre-sintering) uses a lower laser power, so the powder is only sintered. The second scan (melting) uses standard parameters, so the powder is fully melted. Ten samples with melting parameters of LP = 200 W and LS = 200 mm/s and the chessboard 2 × 2 mm strategy were fabricated. For pre-sintering, LP varied from 15 W up to 60 W, while the scanning strategy and laser speed were identical to the melting scan. Figure 11 shows that the porosity of the samples increased with increasing pre-sintering laser power. Samples without pre-sintering achieved better relative density; pre-sintering therefore has a negative impact on sample quality.
Influence of Support Structures
Ahuja et al. [18] and Karg et al. [17] both observed an increase in the relative density of samples built on support structures compared to those built directly on the build platform. In the initial study for EN AW 2618 [24,25] this influence was examined, but no change in sample quality was observed. However, for larger cube samples this influence had not yet been evaluated. Support structures were created in the Materialise Magics software (21.11, Materialise nv, Leuven, Belgium). The type of support used was block support. Within the first test, the geometry of the support structures was optimized. The best results were observed for a hatching (distance between two lines of block support) of 0.9 mm. To increase the toughness of the support structures, five cone supports were added: four at the corners and one at the center of the sample.
Samples with the meander and the chessboard strategy were fabricated to compare them with samples fabricated directly on the build platform. The results showed a decrease in porosity for both scanning strategies. For the meander strategy, porosity decreased on average by 6% (Figure 12a); for the chessboard strategy the reduction was less significant, on average 1.5% (Figure 12b). The decrease in defects was probably caused by a lower temperature gradient: the support structures isolate the processed material from the build platform, as they have a lower heat capacity than the much larger solid block of the platform, so heat transfer is much slower.
Samples with Higher Platform Heating
The main goal of this experiment was to heat the platform to higher temperatures in order to avoid the occurrence of cracks in the sample, owing to the reduced temperature gradient between the sample and the platform and the reduced heat dissipation from the sample. To achieve higher build-platform temperatures, a high-temperature unit (SLM Solutions Group AG, Lübeck, Germany) was used. This unit allows the build platform to be heated up to 550 °C; for this experiment the platform was heated only to 400 °C to avoid damage to the heating cylinder caused by platform expansion at higher temperatures. To withstand this higher platform temperature, a ceramic recoating blade was used instead of the standard silicone one, and argon was used as the protective atmosphere.
The samples were built with variations of the two relevant process windows identified in single-track welds (LP = 400 W, LS = 1300-1400 mm/s; LP = 200-300 W, LS = 100-300 mm/s). These variations were made with either the meander or the chessboard strategy, and the results can be seen in Figures 13 and 14. Due to the build process, elevated edges could be observed in all samples, but the samples made with the chessboard strategy apparently show a stronger tendency toward these imperfections. In several instances this even led to a stop in processing in order to avoid damage to the ceramic recoating blade. Thus, not all samples were successfully built and analyzed.
In comparison with samples built at a lower platform temperature, an increased incidence of round pores induced by local ablation was observed in all samples. The meander strategy did not show a significant reduction in crack occurrence (see Figure 13). For samples with the chessboard strategy and a laser power of 200 W, a reduced occurrence of cracks was observed at decreased laser speed. However, this lower laser speed raised the gas porosity considerably (see Figure 14). Cracks were visible in all samples and no significant change in their quantity was observed; however, a different direction of crack propagation between the strategies was observed. For the meander strategy the cracks spread along the diagonal of the sample, and for the chessboard strategy cracks spread in all directions.
Mechanical Properties
Fabrication of larger samples for tensile testing was performed after the evaluation of the cube samples and the fixing of parameters for each tested strategy. The results of the tensile tests are summarized in Table 2. In the case of the meander strategy, the achieved UTS was far below the expected values, which prompted the search for other strategies providing a more even distribution of heat in the layer. The chessboard strategy showed partial improvement, but the number of defects was still too high to be comparable with the extruded material. The hull and core strategy minimized the large defects, so an improvement of the mechanical properties was expected. However, cracking at the interface of the hull and core proved to be the major limiting type of defect; one of the samples even broke during machining of the final shape for tensile testing. All samples fabricated directly on the platform exhibited brittle behavior, and the yield strength could not be estimated. The pre-sintering strategy showed no improvement in defect minimization, so fabrication of larger samples and their tensile testing was not performed. The best results were achieved for the meander strategy with samples built on support structures, which is associated with a decrease in the temperature gradient between the previous and the currently fabricated layer. The smaller difference in UTS between the meander and chessboard strategies for the samples built on support structures shows that the effect of the lower temperature gradient between the layers is more significant than the influence of the temperature gradient across the layer.
Unfortunately, tensile samples could not be fabricated at the higher platform temperature due to partial damage to the ceramic blade. To evaluate the influence of high-temperature processing on the mechanical properties, it will be necessary to eliminate the elevated edges in the future.
Generally, material in the SLM state exhibited significantly lower tensile properties in comparison with the extruded state.
Metallographic Analysis
Microstructural analyses were conducted on samples prepared from the cylindrical specimens after the tensile tests (from their threaded heads). Figure 15 shows the microstructure of the etched material in the extruded state, in both the transverse and longitudinal directions (with respect to the main axis of the cylindrical sample). The preferential orientation of grains and intermediary particles typical of wrought aluminum alloys can be seen; this preferential orientation is connected to the working process of the rod. Rather coarse intermediary phases are present both inside and outside individual grains.
As expected, the microstructure of the SLM-processed material is different (Figure 16a,b). Lack-of-fusion porosity, cracks and gas porosity are visible in the samples. At higher magnification (Figure 16c,d), individual weld tracks can be observed. Intermediary particles can be found alongside these tracks; in comparison with the extruded state they are very fine. It is in this area that solidification cracks (also known as hot cracking) initiate.
According to Olakanmi et al. [29], the mechanisms behind hot cracking are not entirely understood and there are several theories as to why it occurs. One of them is the creation of tensile stress between the solid and liquid phases during solidification, due to the high temperature difference between them.
Fractographic Analysis
Fractographic analysis was performed on samples (both wrought and SLM) after the tensile test. A comparison of the fracture surface of the material in both states can be seen in Figure 17.
The fracture surface of the extruded material is not as rough as that of the SLM-processed samples. In both states, a ductile damage mechanism with dimple morphology was observed. In the SLM state the dimples are tiny and shallow (Figure 18b), unlike in the extruded state, where the dimples are more pronounced (Figure 18a); moreover, intermediary particles are visible inside the dimples (Figure 19a). In the SLM state, lack-of-fusion porosity and gas pores are present on the fracture surface. Inside these defects, unmelted particles of metal powder can be seen (Figure 19b). The surfaces of these pores are covered by an oxide layer.
The results showed some differences in the microstructure of the extruded and SLM-state material; however, the hardness measurements for both states are almost identical (100 HV 0.3 for the SLM and 104 HV 0.3 for the extruded state). This suggests that the main reason for the different tensile properties is the defects observed in the SLM state. Most of these defects are solidification cracks, which were present in samples for all evaluated scanning strategies.
A first experiment with higher platform heating suggests that reducing the temperature gradient between the sample and the platform may have a positive effect on crack reduction; however, an enormous number of pores was observed in these samples. A more detailed study with finer steps between process parameters needs to be performed to fully describe the behavior of crack occurrence at higher processing temperatures.
Conclusions
It is clear that scanning and processing strategies strongly affect the mechanical properties of SLM-processed aluminium alloy 2618. The best result was achieved with the meander scanning strategy and the sample fabricated on support structures; however, these results are far behind the mechanical properties of the material in the extruded state, the SLM-processed samples reaching only half of those values. The main cause is attributed to the formation of solidification cracks, because none of the evaluated scanning strategies resulted in crack-free material. An increase in relative density was achieved by a more even distribution of heat in the fabricated layer using the chessboard strategy. However, it was shown that the use of support structures and the reduction of the temperature gradient between the layers had a more significant effect on the mechanical properties. A further increase of the base-plate temperature to 400 °C did not result in a significant reduction in defects and cracks.
"Engineering",
"Materials Science"
] |
Polarization conversion in plasmonic nanoantennas for metasurfaces using structural asymmetry and mode hybridization
Polarization control using single plasmonic nanoantennas is of interest for subwavelength optical components in nano-optical circuits and metasurfaces. Here, we investigate the role of two mechanisms for polarization conversion by plasmonic antennas: structural asymmetry and plasmon hybridization through strong coupling. As a model system we investigate L-shaped antennas consisting of two orthogonal nanorods whose lengths and coupling strength can be independently controlled. An analytical model based on field susceptibilities is developed to extract key parameters and to address the influence of antenna morphology and excitation wavelength on polarization conversion efficiency and scattering intensities. Optical spectroscopy experiments performed on individual antennas, further supported by electrodynamical simulations based on the Green Dyadic Method, confirm the trends extracted from the analytical model. Mode hybridization and structural asymmetry allow addressing different input polarizations and wavelengths, providing additional degrees of freedom for agile polarization conversion in nanophotonic devices.
In this work, we investigate polarization conversion (PC) by orthogonal dimer nano-antennas consisting of two nanorods of unequal length. We address the influence of asymmetry, gap spacing and excitation wavelength on polarization conversion. In the first section, we describe PC in L-shaped dimer antennas with an analytical model based on field susceptibilities. In the second section, the linear optical spectra and polarization conversion properties are investigated using single particle optical spectroscopy and the experimental results are compared to extensive numerical modeling using the Green Dyadic Method.
Polarization conversion in L-shaped antennas: an analytical model
The optical response of orthogonal nano-antennas is calculated using an analytical approach following earlier studies on coupled plasmonic antennas [21][22][23][24][25][26] . Figure 1a shows the investigated nanostructure which consists of two identical gold nano-antennas orthogonal to each other, one aligned along (OX) and the other along (OY). We assume that the particles are much smaller than the optical wavelength and use the quasi-static approximation throughout this section. In short, the two antennas are described as Lorentzian oscillators which are coupled via near-field dipole-dipole coupling. By inverting the coupling matrix a solution is obtained for the collective modes of the coupled system.
An optical excitation of the nano-antennas will polarize each antenna arm. In the quasi-static approximation, this induced polarization can be described by a single dipole moment. For instance, for particle (1), we have:

P_1(r_1, ω) = α_1(ω)·E_1(r_1, ω),

where P_1(r_1, ω) is the dipole induced in particle (1) and E_1(r_1, ω) is the total electric field in the same particle. α_1(ω) is the polarizability tensor of particle (1). The latter can be obtained from a numerical fit of the experimental spectra of individual nanoparticles (or antennas with large gaps) or from analytical formulas in simple cases such as ellipsoids or spheres. In the following, we assume that each component of the polarizability tensor has a Lorentzian profile and that the width of all optical resonances has the same value γ. It is well known that elongated plasmonic particles support two plasmon resonances, called longitudinal and transverse, which are selectively excited for an incident optical excitation polarized, respectively, along their long and short axis. In the following, we do not take into account the transverse plasmon resonance, as we investigate polarization conversion in a different spectral range. We further use polarizabilities normalized at the resonance frequency, hence with a constant amplitude |α_L,i(ω_sp,i)| = 1, independent of the arm length. The resonance frequencies of an antenna arm being denoted ω_sp,i, we have for instance for particle (1) a longitudinal polarizability of Lorentzian form,

α_L,1(ω) = γω_sp,1 / (ω_sp,1² − ω² − iγω),

normalized such that |α_L,1(ω_sp,1)| = 1, and

α_1(ω) = α_L,1(ω) x̂⊗x̂.

A similar equation holds for particle (2), where the longitudinal polarizability α_L,2 is aligned along (OY).

In the framework of the field-susceptibility formalism, the self-consistent electric field in particles (1) and (2) is connected to the incident E-field E_0(r, ω) by the following equations:

E_1(r_1, ω) = E_0(r_1, ω) + S(r_1, r_2, ω)·P_2(r_2, ω), (5)
E_2(r_2, ω) = E_0(r_2, ω) + S(r_2, r_1, ω)·P_1(r_1, ω). (6)

In these equations, S(r, r′, ω) is the Green dyadic function, which relates the electric field E(r, ω) induced at r by a dipole p(r′, ω) located at r′ by the following equation:

E(r, ω) = S(r, r′, ω)·p(r′, ω).

The separation between the two nanoantennas being much smaller than the optical wavelength, we only take into account the near-field contribution to the Green dyadic tensor,

S(r, r′, ω) ≈ (3R⊗R − R²I)/R⁵, with R = r − r′.

Particles (1) and (2) being respectively located at (d, 0, 0) and (0, d, 0), R = (d, −d, 0) and the propagator can be written with in-plane components S_xx = S_yy = K and S_xy = S_yx = −3K (and S_zz = −2K). We define K = d²/R⁵ = 1/(4√2 d³), as well as a coupling constant normalized to the polarizability at resonance, C = K|α_L,1(ω_sp,1)|, the latter being dimensionless in CGS units. The total induced dipole can be deduced from the total electric field on each antenna arm:

P_t(ω) = α_1(ω)·E_1(r_1, ω) + α_2(ω)·E_2(r_2, ω).

The self-consistent electric fields E_i(r_i, ω) are obtained using a matrix formalism as presented in the Methods section. In the (XY) plane, the components of the total induced dipole parallel (P_t,∥) and perpendicular (P_t,⊥) to the polarization of the incident wave are:

P_t,∥ = [α_L,1 cos²ϕ + α_L,2 sin²ϕ − 6K α_L,1 α_L,2 sin ϕ cos ϕ] / (1 − 9K² α_L,1 α_L,2),
P_t,⊥ = [(α_L,2 − α_L,1) sin ϕ cos ϕ − 3K α_L,1 α_L,2 cos 2ϕ] / (1 − 9K² α_L,1 α_L,2),

where for clarity the dependence on ω was omitted in the expression of the polarizabilities α_L,1 and α_L,2, and ϕ is the angle between the polarization of the incident electric field and the (OX) axis (see Fig. 1a). The intensity scattered with either polarization is proportional to the square of the modulus of the corresponding total dipole component. The polarization conversion efficiency e_∥→⊥ can then be calculated as the ratio of the intensities of the P_t,⊥ and P_t,∥ components of P_t(ω):

e_∥→⊥(ω) = |P_t,⊥(ω)|² / |P_t,∥(ω)|².

Case of orthogonal antennas with identical arm lengths. Polarization conversion in symmetric L-shaped antennas has been discussed in detail by Black et al. 6, both in the capacitive and the conductive coupling regime. Their optical response is characterized by two orthogonally polarized, bright plasmon resonances. The high-energy mode, called antibonding (noted A), is polarized at 45°, whereas the low-energy bonding mode (noted B) is polarized at 135° with respect to the antenna axes. Light incident along these principal axes does not undergo polarization conversion. Excitation along either the horizontal (OX) or the vertical (OY) axis sets up a superposition of the principal A and B modes, resulting in polarization conversion in a region of the spectrum in between the two modes where they have unequal phase. In the two-oscillator model presented in ref. 6, an ad hoc oscillator was associated with each eigenmode of the coupled system. On the contrary, in the analytical model presented here, the eigenfrequencies are not an input but an output of the model. The analytical model thus accounts both for the hybridization into antibonding and bonding modes and for the polarization conversion in symmetric L-shaped antennas in a self-consistent manner.
For an optical excitation polarized along (OX), the components of the total dipole moment induced are given by

P_t,XX = α_L,1 / (1 − 9K² α_L,1 α_L,2), P_t,XY = −3K α_L,1 α_L,2 / (1 − 9K² α_L,1 α_L,2),

while an excitation along (OY) gives

P_t,YY = α_L,2 / (1 − 9K² α_L,1 α_L,2), P_t,YX = −3K α_L,1 α_L,2 / (1 − 9K² α_L,1 α_L,2).

The polarization conversion efficiencies in both cases follow as

e_X→Y = 9K² |α_L,2|² and e_Y→X = 9K² |α_L,1|².

For uncoupled antennas, K = 0 and the total dipole on the nanostructure has no component along the direction orthogonal to the incident polarization, and therefore no polarization conversion is possible. As expected for a near-field effect, the polarization conversion efficiency is proportional to K² ∝ 1/d⁶. For weak couplings, only a very small spectral splitting and a weak polarization conversion are observed (Fig. 1b). For stronger couplings, two resonances appear in |P_t,YY(ω)|² (Fig. 1c), where the resonance at high energy is the A mode whereas the low-energy mode is the B mode. The intensity |P_t,YX(ω)|², scattered with a polarization perpendicular to the optical excitation, reaches a maximum between the two eigenmodes. In this spectral range, the oscillation of the two eigenmodes is dephased: an incident wave polarized along Y induces two dipole moments with opposite phases at 45° and 135°, and the resulting interference is polarized along X. Whereas the dephasing between the eigenmodes approaches π when their spectral splitting increases, the intensity scattered along (OX) in this case is expected to remain very small, as it is not possible to excite both resonances effectively. The results of the hybridization model are in excellent agreement with the full electrodynamical simulations presented below and with the experimental results given in ref. 6.
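As a quick consistency check, the closed-form result for the symmetric case can be compared against the direct 2x2 solve from the sketch above (K = 0.5 is an arbitrary example value):

```python
# e_Y->X = 9 K^2 |alpha_L,1|^2 versus the numerical solution (symmetric arms).
K = 0.5
closed = 9 * K**2 * np.abs(alpha_L(w, 1.0)) ** 2
solved = np.array([pc_efficiency(wi, np.pi / 2, K=K) for wi in w])
print("max deviation:", np.abs(closed - solved).max())  # ~ numerical zero
```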
Case of orthogonal antennas with different arm lengths.
The two-oscillator model predicts a polarization conversion based on the phase difference of two resonances at different wavelengths. Apart from hybridization through strong coupling, a similar wavelength splitting can be obtained by introducing a structural asymmetry in the antenna, by changing the length of one of the nanorods in the dimer. We now apply the analytical model to the general case of orthogonal antennas with arms of different lengths. We consider that one arm has a fixed resonance wavelength λ_sp,1 = 1000 nm while the second arm has a resonance at a variable λ_sp,2. In the coupled dipole model we define the asymmetry ratio by these wavelengths, namely λ_sp,2/λ_sp,1. In order to excite the longitudinal modes of both arms simultaneously, an excitation along 45° or 135° seems adequate in the asymmetric case. Similar to conversion from (OY) to (OX), we can calculate the efficiency for conversion from an incident polarization along the directions of the hybridized antibonding (A: 45°) and bonding (B: 135°) modes of a symmetric antenna (see Fig. 1a). For A-polarized incidence this yields

e_A→B = |α_L,2 − α_L,1|² / |α_L,1 + α_L,2 − 6K α_L,1 α_L,2|²,

and for B-polarized incidence we get

e_B→A = |α_L,2 − α_L,1|² / |α_L,1 + α_L,2 + 6K α_L,1 α_L,2|².

Figure 1(d-f) shows spectra for uncoupled asymmetric dimers with increasing wavelength splittings, of strength comparable to the coupled symmetric case in Fig. 1c. The antennas are driven along 135° (pink, "B"), hence the polarization conversion is along 45° (green, "A"). As expected from the two-oscillator model, PC occurs in the spectral range between the two resonances, similarly to the symmetric antennas. In the following, we analyze this case in more detail, and we will show that the interplay between asymmetry and coupling always results in a reduced polarization conversion efficiency if the coupled antennas are driven with a polarization along one of the antenna arms.
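The asymmetric regime can be scanned numerically with the same sketch; the snippet below reuses the functions defined earlier for B-polarized incidence on a nearly uncoupled dimer. The scanned ratios and the small K value are illustrative choices, and note that λ_sp,2/λ_sp,1 = r corresponds to w_sp2 = w_sp1/r:

```python
# B-polarized incidence (135 deg) on a weakly coupled asymmetric dimer.
phi_B = 3 * np.pi / 4
for r in (1.05, 1.10, 1.20):
    e_ba = np.array([pc_efficiency(wi, phi_B, w_sp2=1.0 / r, K=0.05)
                     for wi in w])
    print(f"lambda ratio {r:.2f}: max e_B->A = {e_ba.max():.3f}")
```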
In Figure 2, we compute the intensity scattered with polarization along (OY) (I_YY, Fig. 2a) or along (OX) (I_YX, Fig. 2b) as a function of the asymmetry ratio λ_sp,2/λ_sp,1. Figure 2(a) shows that I_YY mainly follows the resonance of the vertical arm for large asymmetry, while near λ_sp,2/λ_sp,1 = 1 it shows a splitting corresponding to the hybridization with the horizontal nanorod. The eigenfrequencies of the coupled system display an anti-crossing behaviour, typical of a system of two coupled oscillators. The intensity scattered along (OX), shown in Fig. 2(b), increases when the two resonances overlap and decreases drastically as their spectral overlap is reduced.
From the spectral dependence in Fig. 2(b), the maximum PC intensity I_YX can be extracted. Figure 2(c) shows the intensity I_YX at the wavelength of maximum e_Y→X, scattered with polarization along (OX), as a function of the asymmetry ratio λ_sp,2/λ_sp,1 for different values of the coupling C. This figure confirms that for symmetric antennas (λ_sp,2/λ_sp,1 = 1), maximum polarization conversion is obtained for an intermediate coupling strength (C ≈ 2), where the spectral splitting amounts to the resonance linewidth 6. When moving away from the symmetric condition, the PC intensity drops rapidly.
Let us now analyze the case of excitation polarized along the direction of the bonding mode of symmetric antennas (135°). Figure 2(e) shows the polarization-conserving component P_t,BB, which yields the strongest scattering (I_BB) for a resonant excitation of the bonding mode at asymmetry ratios around 1. Away from the symmetric condition, the intensity contribution of the individual nanorod resonances becomes visible, as the effect of hybridization is reduced when the modes do not spectrally overlap. The intensity scattered with orthogonal polarization, I_BA = |P_t,BA|², shown in Fig. 2(f), reaches its highest values away from the symmetric condition and when the system is excited close to one of the resonance wavelengths λ_sp,1 and λ_sp,2. The parametric map (Fig. 2(g)) shows a significant B → A polarization conversion away from the symmetric configuration, for intermediate asymmetry ratios around 0.85 and 1.2. As discussed above, the B → A configuration does not allow any polarization conversion in the case of symmetric antennas, as it corresponds to the excitation of a pure eigenmode.
At the highest polarization conversion intensity, however, for both B → A as well as Y → X conversion, the degree of polarization (DOP), defined as

DOP = (I_∥ − I_⊥) / (I_∥ + I_⊥),

is close to zero, as shown in Fig. 2(d,h). This means that concurrently to the polarization-converted scattering I_⊥, a large amount of scattering I_∥ is generated with polarization parallel to the optical excitation. This is indicative of the fact that in all cases, maximum PC intensity requires efficient in- and out-coupling of the optical excitation into both resonances under simultaneous satisfaction of the proper phase relation 6. For larger splittings, the DOP further increases; however, the coupling efficiency to the modes is reduced as the mode splitting increases beyond the resonance linewidth. Both in the symmetric and the asymmetric case we observe beyond this optimum that a further increase in PC efficiency comes with a decrease of the cross-polarized scattering intensity.
Transition between hybridization-mediated and asymmetry-mediated polarization conversion. The results from Fig. 2 show the general trend that strongly coupled symmetric antennas allow polarization conversion in the XY basis, whereas asymmetric antennas without coupling are best suited for PC in the AB directions. However, the parameter maps of Fig. 2 are computed in fixed polarization conditions. These do not take into account the fact that the basis of eigenmodes tilts when combining asymmetry and coupling. To explore this effect further, we investigate the intermediate regime and the transition between the two limiting cases by solving for the principal axes in presence of both asymmetry and coupling. In Fig. 3, we identify for different geometries the polarization angles ϕ_PC yielding the largest PC efficiency e_∥→⊥ at the wavelength of maximum PC. Figure 3(a) shows the angles of maximum e_∥→⊥ obtained for the symmetric antenna λ₂/λ₁ = 1.0. Corresponding polar plots at five selected values of the coupling strength C, shown in Fig. 3(b,c), give the parallel and PC intensities I_∥ and I_⊥ (b) and the PC efficiency (c).
In the symmetric case of Fig. 3(b), scattering with conserved polarization I_∥ has maxima at 45° and 135°, which correspond to the directions of the pure eigenmodes A and B. Conversely, the cross-polarized scattering I_⊥ is maximum for X and Y incident polarizations. The maxima in PC efficiency, shown in Fig. 3(a,c), are aligned with the X and Y axes for weak coupling. Due to the highly asymmetric intensities for bonding and antibonding excitation at increasing coupling strength, the angles of maximum PC efficiency e_∥→⊥ become slightly offset with respect to the X/Y directions for large couplings, but always remain close to 0° and 90°.
The case of structural asymmetry is illustrated in Fig. 3(d-f) for an antenna with resonance wavelength ratio λ₂/λ₁ = 1.2. In absence of coupling, the pure eigenmodes of this system are now along the X/Y directions. The maximum intensity of polarization conversion is obtained without any coupling for incident polarizations close to the A/B directions, as can be seen in Fig. 3(d). Increasing the coupling between the antennas introduces mode hybridization, which gradually tilts the eigenmode angles toward the A/B directions for strongly coupled antennas. As a consequence, the directions of maximum PC efficiency are gradually tilted towards the X/Y basis. Both the maximum cross-polarized intensity I_⊥ in Fig. 3(e) and the PC efficiency e_∥→⊥ in Fig. 3(f) follow this trend. We note that, whereas the cross-polarized intensity patterns have a perfect four-fold symmetry, the two angles of maximum PC efficiency are generally skewed and not perpendicular to each other due to the asymmetry in the eigenmode intensities contributing to I_∥.
Note that the PC intensities in Fig. 3(a) and (d) are of similar magnitude, with a slightly larger value in the symmetric case. Thus, it appears that the combination of asymmetry and coupling does not increase the PC efficiency beyond the case of strong coupling in symmetric dimers. This conclusion supports the trends in the parameter maps of Fig. 2(c,g) which were taken at fixed polarizations. In the following, we perform optical spectroscopy experiments on individual antennas and compare our measurements with the results of the analytical model as well as with electro-dynamical simulations.
Single particle optical spectroscopy experiments
Orthogonal gold dimer antennas of 30 nm thickness were fabricated using electron beam lithography following methods described in ref. 6. The antennas consist of two perpendicular metallic rods of lengths L 1 and L 2 separated by a gap of width g. Figure 4a shows Scanning Electron Microscopy (SEM) images of selected antennas, corresponding to the case where the horizontal nanorod length was kept fixed at L 1 = 230 nm, while the vertical nanorod length L 2 was varied to 180 nm, 230 nm, 280 nm, and 330 nm. All nanorods have a fixed width of 120 nm.
Quantitative measurements of the scattering cross-sections for different polarizations were obtained using Spatial Modulation Spectroscopy 6 . To investigate the effect of different incident polarizations, intensities in the parallel and perpendicular scattering polarizations were determined in both the (X, Y) and (A, B) basis. Figure 4b shows experimental cross sections for orthogonal antennas with a gap of 95 nm, while results for antennas with a gap of 29 nm are shown in Fig. 4f. The antenna size parameters correspond to those of Fig. 4a, as labelled by the colored frames. The parallel polarized scattering cross sections are labelled as σ XX , σ YY , σ AA , and σ BB , where the first subscript represents the incident polarization and the second the detection polarization.
To interpret these results and go beyond the analytical model, we perform numerical simulations using the Green Dyadic Method (GDM) 27,28, for which the dielectric response of gold was taken from Johnson and Christy 29. The GDM simulation technique relies on a volume discretization of the nanostructure into a collection of polarizable entities placed on a cubic lattice. The method allows the computation of a generalized field propagator to describe the near-field and far-field optical response of nanostructures of arbitrary shape placed in complex environments [30][31][32]. The knowledge of this generalized propagator allows computing the electric field and polarization induced inside the nanostructure by any type of illumination 27; the intensity scattered inside the solid angle defined by the collecting optics is then computed from the polarization distribution created inside the nano-antennas. The contribution from the substrate, which was not taken into account in the analytical model, is fully accounted for by adding to the field susceptibility of vacuum an additional term describing the contribution of the substrate 28,33. A mesh step of 7 nm is used throughout this study, yielding typically between 6000 and 9000 dipoles for a structure. Figure 4c,g shows results from GDM simulations using the experimental parameters. For large antenna gaps, the arms are only weakly coupled and excitation along X and Y polarizations results in excitation of the longitudinal modes of the individual nanorods. The experimental and calculated σ_YY spectra both show optical resonances with wavelengths increasing from 1.0 μm to 1.6 μm as the vertical antenna length is increased from 180 nm to 330 nm. For completely uncoupled arms, excitation along A and B polarizations results in equal superpositions of the individual arm modes. With increasing coupling, the antenna arms are hybridized, resulting in a difference between the σ_AA and σ_BB spectra. For both gaps under study, the A and B modes are different both in the experiment and in the GDM model, showing that coupling already exists for the largest gaps under study. However, for the smallest gaps the coupling is increased and the difference between σ_AA and σ_BB is pronounced. In this case, the long-wavelength resonance is only present for illumination along the B-polarization.
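To give a feeling for the size of such a GDM calculation, the following sketch builds the cubic-lattice volume discretization for an L-shaped dimer. The meshing routine and the relative placement of the arms are our own illustration, not the authors' GDM code; only the 7 nm step and the rod dimensions are taken from the text.

```python
import numpy as np

STEP = 7.0  # mesh step in nm, as used in the paper

def mesh_rod(x0, y0, lx, ly, thickness=30.0, step=STEP):
    """Fill a rectangular rod with dipoles on a cubic lattice (illustrative)."""
    xs = np.arange(x0 + step / 2, x0 + lx, step)
    ys = np.arange(y0 + step / 2, y0 + ly, step)
    zs = np.arange(step / 2, thickness, step)
    return np.array([(x, y, z) for x in xs for y in ys for z in zs])

# L-shaped dimer: horizontal arm 230 x 120 nm2, vertical arm 120 x 330 nm2,
# 30 nm thick, separated by a 29 nm gap (dimensions from the experiments;
# the exact relative placement is an assumption and does not change the count).
gap = 29.0
arm1 = mesh_rod(0.0, 0.0, 230.0, 120.0)            # rod along (OX)
arm2 = mesh_rod(-120.0 - gap, gap, 120.0, 330.0)   # rod along (OY)
print("dipoles:", len(arm1) + len(arm2))  # a few thousand cells, of the same
                                          # order as the 6000-9000 quoted above
```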
Next to spectra taken at parallel polarizations, polarization conversion results in scattering cross-sections in the perpendicular polarization direction. Figure 4d,e shows measured and simulated PC spectra for the cases of σ_YX and σ_BA with a large gap. For weakly coupled arms, the Y → X PC results in overall low intensities for the different components. For the X → Y PC, the analytical model of Fig. 3 predicts zero intensity in the absence of coupling; the remaining scattering intensity in the σ_YX PC is a result of the finite amount of coupling at these relatively large gaps. In comparison, B → A PC results in much higher intensities for the asymmetric antennas, consistent with the theoretical model calculations of Fig. 4e. Only for the symmetric case (red curve) is σ_BA very low, confirming the results of our analytical model. Results of the PC for more strongly coupled antenna arms are presented in Fig. 4h,i. The increased coupling strength results in a significant increase of the PC intensity in the Y → X direction for all antennas.
Qualitatively good agreement is obtained between the trends observed in the experimental and calculated spectra. Some differences can be attributed to the intrinsic variation of single-antenna experiments, while generally the calculated resonances are somewhat narrower and more defined than the experimental antenna resonances. We point out a systematic inaccuracy in the absolute cross sections of around a factor 2, which is similar to the level of agreement found in earlier works 6 and which may reflect some inaccuracy in the assumptions underlying the calibration. However, relative magnitudes between antennas are in agreement and are not influenced by a systematic offset.
To further assess the agreement between experiment, numerical model and analytical theory, we compare in Fig. 5 experimental and calculated values for the mode dispersion and PC intensities. Figure 5a presents the resonance wavelengths measured on several individual antennas. In addition to the L_2/L_1 ratios of Fig. 4, we measured a number of other antenna combinations with asymmetry ratios in the range 0.5-1.8. The results are normalized to the resonance wavelength of a single arm of length L_1 to allow direct comparison with the analytical model. The solid lines in Fig. 5a illustrate the resonance splittings obtained using the analytical model for coupling coefficients C = 1.5 (red) and 7.0 (magenta). As expected for a coupled-oscillator system, a stronger coupling increases the spectral splitting between the antenna eigenmodes. The variation of λ/λ_sp,1 with λ_sp,2/λ_sp,1 can be accurately reproduced by the analytical model for a coupling strength of C = 1.5. Figure 5b-d compares the cross-sections for the maximum intensity scattered with polarization along the perpendicular direction, denoted σ_⊥, as a function of the asymmetry ratio L_2/L_1 for the experiment (b), the numerical model (c) and the analytical model (d). The data points in Fig. 5b,c correspond to values taken from the spectra of Fig. 4. Figure 5d shows results from the analytical model of Section 1, which was refined by scaling the polarizability α_2 proportionally to the antenna length (i.e., the particle volume). This refinement was made to improve the agreement with the experimental geometry, where the increased antenna length results in an increased resonance cross-section. The analytical theory of Fig. 5d and the numerical model of Fig. 5c show good agreement in the observed trends. The experimental results show global trends that confirm the model, within the uncertainty given by the complexity of single-antenna experiments as seen in Fig. 4.
The difference between (B → A) and (Y → X) PC with L 2 /L 1 ratio is clearly illustrated and confirms the validity of the PC model for antennas combining anisotropy and coupling. The numerical simulations clearly demonstrate that the maximum σ YX is in all cases obtained for symmetric antennas. In comparison, PC along the B → A is zero for the symmetric case and increases strongly for asymmetric antennas. When coupling is introduced to the antennas, the B → A conversion becomes weaker while the Y → X conversion increases. This behaviour is in perfect agreement with the theoretical considerations in the context of our analytical model, reflecting the fact of antenna-mode driven polarization conversion for asymmetric antennas (B → A) and PC induced by mode hybridization due to coupling in the symmetric case (Y → X).
Conclusion
In conclusion, we have investigated polarization conversion in plasmonic dimer nanoantennas. Using an analytical model for two coupled dipolar antennas based on field susceptibilities, we showed that the polarization conversion efficiency can be tuned by carefully adjusting the asymmetry and the coupling, both of which affect the spectral splitting between the two arms, either by a simple change in the resonance frequency of the antenna arms or by hybridization of the modes due to coupling. In the case of coupled symmetric antennas, polarization conversion is most efficient along the X and Y directions and does not occur along the bonding and antibonding modes. Uncoupled asymmetric antennas, on the other hand, show efficient PC for incident polarizations along the diagonal directions corresponding to 45° and 135°. Finally, we found that introducing coupling to asymmetric antennas tilts the angles of highest PC efficiency. These results indicate that it is possible to control the scattering from L-shaped antennas both via their structural morphology and via interparticle interactions. The dependencies upon antenna morphology, excitation wavelength and polarization can be faithfully captured by a simple analytical model. Optical spectroscopy experiments performed on individual antennas are in good agreement with this model and with electrodynamical simulations based on the Green Dyadic Method. Our results show that optimal polarization conversion is obtained in the case of symmetric antennas. They provide useful design rules for the integration of these nanostructures in phase-discontinuity surfaces for applications like flat lenses or spiral waveplates.
Methods
Analytical two-oscillator model. The coupled set of equations is solved in the following way. Equations (5) and (6) form a linear system relating the vectors E(ω) = {E(r1, ω), E(r2, ω)} and E0(ω) = {E0(r1, ω), E0(r2, ω)}, which contain the total and incident electric fields at the two particle locations, respectively; solving this system yields the induced dipole moments. It can be noted from equation (26) that the magnitude of the electromagnetic coupling between the antenna arms depends upon the product of K by the polarizabilities. Therefore, in the following, we define the coupling constant C = K|αL,1(ωsp,1)|. C is dimensionless as the polarizabilities are homogeneous to a volume (CGS units).
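To make the structure of the model concrete, the following minimal numerical sketch solves the coupled-dipole system for two arms. The Lorentzian form of the polarizabilities and all parameter values are illustrative assumptions, not values from the experiment.

```python
import numpy as np

def alpha_lorentz(omega, omega_sp, gamma=0.05, amp=1.0):
    # Illustrative Lorentzian polarizability of a single antenna arm
    # resonant at omega_sp (arbitrary units).
    return amp / (omega_sp**2 - omega**2 - 1j * gamma * omega)

def coupled_dipoles(omega, omega_sp1, omega_sp2, K):
    """Solve p1 = a1*(E1 + K*p2), p2 = a2*(E2 + K*p1) for unit driving."""
    a1 = alpha_lorentz(omega, omega_sp1)
    a2 = alpha_lorentz(omega, omega_sp2)
    M = np.array([[1.0, -K * a1],
                  [-K * a2, 1.0]], dtype=complex)
    E0 = np.array([a1, a2], dtype=complex)  # alpha_i * E0(r_i), with E0 = 1
    return np.linalg.solve(M, E0)

# Frequency sweep: for K > 0 the single resonance of two identical arms
# splits into hybridized (bonding / anti-bonding) modes.
omegas = np.linspace(0.6, 1.4, 1500)
spectrum = [np.abs(coupled_dipoles(w, 1.0, 1.0, K=0.25)).sum() for w in omegas]
```

Increasing K in this sketch widens the splitting between the two peaks, mirroring the role of the coupling constant C in the analytical model.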
In the following, we focus on a spectral range far from the transverse resonance of the antenna arms and therefore assume that αT(ω) = 0. | 6,311.2 | 2017-01-19T00:00:00.000 | [
"Physics"
] |
An efficient hand gestures recognition system
Talking about gestures takes us back to the historical beginning of human communication, because there is no language completely free of gestures. People cannot communicate without gestures; any action or movement without gestures lacks real feeling and cannot express thought. The purpose of any hand gesture recognition system is to recognize the hand gesture and use it to convey a certain meaning or to control a computer and/or a device. This paper introduces an efficient system to recognize hand gestures in real time. The system is divided into five phases: first, image acquisition; second, pre-processing of the image; third, detection and segmentation of the hand region; fourth, feature extraction; and fifth, counting the number of fingers for gesture recognition. The system has been implemented in Python with the PyAutoGUI library, the Python OS module, and the OpenCV library.
Introduction
Historically, the Electronic Visualization Lab was the first to create a data glove, called the Sayre Glove, in 1977. Thirty-five years later, researchers adopted the camera to interact with the computer. Compared to the data glove, the camera is considered more direct and natural for achieving Human-Computer Interaction [1].
Recently, interaction by gesture has become widely used, and in the future vision-based devices may replace the mouse and/or keyboard. The main feature of using hand gestures is interacting with the computer as an input unit. A gesture is defined as a form of nonverbal or non-vocal communication in which the body's movement conveys certain messages. Gestures originate from different parts of the human body, but the most common ones come from the hand or face.
Gestures provide a new form of interaction that reflects the user's experience in the real world. Interaction by gesture is more natural and does not require any hindering or additional hardware.
There are two kinds of hand gestures, static and dynamic. In [2], Liang introduced the best-known definitions of the static hand gesture (hand posture) and the dynamic hand gesture: "Posture is a specific combination of hand position, orientation, and flexion observed at some time instance." "Gesture is a sequence of postures connected by motion over a short time span." Good examples of static hand gestures are "OK" or "STOP", and "No", "Yes", and "goodbye" for dynamic gestures. Three approaches are used to obtain the information needed by a hand gesture recognition system: data glove approaches, vision-based approaches, and colored-marker approaches, as shown in Figure (1).
In vision-based approaches, human motion is captured by one or more cameras, and vision-based devices can exploit many properties useful for interpreting gestures, for example color and texture, which sensors do not provide [3] [4]. Although these approaches are simple, many challenges can appear, for example lighting diversity, complex backgrounds, and the presence of objects with skin-like color (clutter); the system also has to meet criteria such as recognition time, speed, robustness, and computational efficiency [5] [6].
Data glove approaches use various sensors to capture the position and motion of the hand. These approaches can compute the coordinates of the fingers, the palm, and the hand configuration easily and accurately [6] [7] [8]. However, the sensors do not offer an easy connection with the computer, because the user must be physically wired to it, which hinders the movement of the hand. These devices are also expensive and unsuitable for operation in virtual reality environments [8] [9]. According to Moore's Law, sensors will become smaller and cheaper over time, and we believe they will be prevalent in the future.
Colored-marker approaches use marked, colored gloves worn on the human hand to help in hand tracking and in locating the fingers and palm. Marker gloves can capture the shape of the hand by extracting geometric features [10]. In [11], a wool glove with three different colors was used to represent the palm and fingers. This approach is considered simple and inexpensive compared with sensors or data gloves [11], but the natural interaction between humans and computers is still not sufficient [9].
However, there are some challenges in designing a robust, real-time gesture recognition system. Some challenges are related to the complex structure of the hand, which makes tracking and recognition difficult. In addition, there are challenges related to the shape of the gesture, varying lighting conditions, the real-time requirement, and the presence of noise in the background. These challenges are taken into account in this paper by using the running average principle in the background subtraction technique to detect and extract the hand from the background, and by using the contour of the hand as a feature.
Related works
There are many systems designed to recognize hand gestures; some of them are mentioned here. In [12], Amiraj and Vipul introduced a system to recognize hand gestures for HCI. They used more than one approach for the preprocessing step and used two methods to perform the segmentation process, one with a static background and another without the background constraint.
In the static-background mode a constant threshold value was used via the Otsu thresholding algorithm, and in the dynamic-thresholding mode color was used in real time. In the background-free mode, thresholding methods based on the color feature and on subtraction of the background model were used. For detecting the hand, they found its contour and then computed the convex hull and convexity defects to find the number of fingers. They provided three approaches to interact with devices: finger tracking, hand orientation, and finger counting.
Shwetha et al. [13] provided a review of many hand gesture recognition systems implemented in MATLAB. They used the Canny edge algorithm to determine the edge of the hand and used hue and saturation values for skin-color detection. They concluded that the systems gave better results when Artificial Neural Networks (ANNs) and edge detection methods were used.
In [14], Nancy et al. introduced a hand gesture recognition system using the color-marker approach. The user wears a white cloth on the hand and places a red color marker on a fingertip. The gestures in this system are used to point at the computer screen by detecting the single finger carrying the red marker. However, this system does not achieve direct contact with the devices because of the use of the color marker.
In [15], Tasnuva Ahmed introduced a real-time hand gesture recognition system based on neural networks. The researcher divided the system into four steps: image capturing, pre-processing, feature extraction, and recognition. The system succeeded in distinguishing hand gestures taken from different angles, sizes, or orientations, but there is a delay in the system due to the training phase of the Artificial Neural Network, as well as a delay in switching between the nodes.
Badgujar et al. [16] presented a recognition system for dynamic hand gestures using contour analysis. It is an efficient system for computer control, but it applies only to PowerPoint presentations.
Nagarajan et al. [17] introduced a system to recognize gestures in real time depending on the number of fingers, from one to five. The system is divided into four phases: the first phase captures the image in real time from the camera; the second phase segments the hand region using the HSV color space followed by morphological operations; in the third phase, the contour of the hand is detected by the convex hull approach; finally, the gesture is recognized according to the number of fingers. The pose orientation of the hand is considered the weakness of this system.
System architecture
The general structure of any hand gesture recognition system can be explained as shown in Figure (2).
Proposed system architecture
The proposed system for hand gesture recognition consists of five phases, as shown in the block diagram of the system architecture in Figure (3). Figure 3. The block diagram of the proposed system.
The system receives the hand gesture as input and executes the action associated with this gesture as output. The algorithm of the system is shown below. Start: Start the camera. Step 1: Capture an image from the camera.
Step 2: Extract the region of interest.
Step 3: Convert the RGB image to the grayscale image.
Step 4: Smooth the image with a Gaussian blur.
Step 5: Subtract the background from the current frame (running-average background subtraction, described below).
Step 6: Threshold the image.
Step 7: Perform erosion and dilation (morphological operations).
Step 8: Find the contour of the hand region.
Step 9: Recognize the gesture by using two methods, Convex Hull and Convexity Defects.
Step 10: Execute the action assigned to the recognized gesture. Stop.
The image capturing
This phase uses a webcam to acquire the image (frame by frame) and relies on the bare hand only, without a glove or colored marker that could hinder the user.
Pre-processing
In this phase, in order to minimize the computation time, only the important area is taken from each frame of the video stream instead of the whole frame; this area is called the Region Of Interest (ROI). In image processing it is preferable to convert color images into grayscale to speed up processing (the image can be restored to its original color space after processing is complete), therefore the region of interest is converted into a grayscale image. The ROI is then blurred with a Gaussian blur to suppress high-frequency content that does not belong to the target. Note that in this phase the algorithm will fail if the camera vibrates.
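A minimal OpenCV sketch of this phase is given below; the camera index, the ROI coordinates and the blur kernel size are illustrative assumptions, not values from the paper.

```python
import cv2

# Pre-processing sketch: grab a frame, crop the region of interest,
# convert to grayscale and apply a Gaussian blur.
cap = cv2.VideoCapture(0)                          # assumed camera index
ret, frame = cap.read()
if ret:
    top, bottom, left, right = 10, 350, 300, 640   # illustrative ROI
    roi = frame[top:bottom, left:right]
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)   # cheaper to process
    blurred = cv2.GaussianBlur(gray, (7, 7), 0)    # suppress high-frequency noise
cap.release()
```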
Hand region segmentation
This phase is important in any hand gesture recognition system and helps enhance performance by removing unwanted data from the video stream. In general, there are two methods to detect the hand in an image. The first method depends on skin color; it is simple but is affected by the lighting conditions in the environment and the nature of the background, and it also suffers from clutter due to objects such as the face or arm having the same color as the hand. This method can be implemented as a thresholding technique that exploits the color distribution map in a suitable color space. The color of the skin varies significantly among people, especially among people of different races, in addition to the impact of lighting. To solve this problem, researchers have suggested relying on the chromaticity of the skin, because it is similar across races and contains important information, in contrast to the luminance, which is heavily influenced by lighting [18]. Thus, it can be said that the best color space for detecting skin color is one that separates the luminance from the chromaticity.
The second method does not depend on skin color but on the shape of the hand, and it benefits from the principle of convexity in detecting the hand. In this paper, contour analysis, which depends on shape, is used in order to avoid the problems of skin-color detection.
Several techniques are used to extract the hand region from the image:
1. Edge detection.
2. RGB values, since the RGB values of the hand differ from those of the background.
3. Background subtraction.
This work uses the background subtraction technique, which treats all static objects as background and then separates the hand from the background. This technique needs an estimate of the background, which can be obtained using the running average principle. The initial background is computed by letting the system observe a fixed scene for at least 30 frames and taking the average, as in equation (1). After determining the initial background, the hand is placed in front of the camera, and the absolute difference between the initial background and the current frame (which contains the hand as a foreground object) is computed, as given in equation (2). Finally, the running average is calculated to update the background using equation (3). Average is the destination image (the average background); it has the same channels as the source image and 32-bit or 64-bit floating-point depth. Alpha is the weight of the source image and can be considered a threshold that determines how quickly the running average adapts over the frames. Finding the background and then computing the difference is what is called background subtraction.
In general, the background subtraction technique faces many challenges, such as interference between objects, noise from camera motion, shadows, and changes of illumination; all of these challenges are taken into account in this paper. The next step is to threshold the image output from the background subtraction; the result is the hand in white and the rest of the image in black. The thresholding process is important and must be done before finding the contours in order to achieve high accuracy. Mathematically, the thresholding principle can be represented as: f(x) = 1 if x ≥ threshold, and f(x) = 0 if x < threshold (4), where x is the intensity of the pixel.
All of the above processes together are called motion detection. Figure (4) shows the output of the hand region segmentation process. Finally, a chain of morphological operations such as erosion and dilation is performed to remove any small noisy regions.
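The segmentation phase can be sketched as follows. The threshold and alpha values follow the paper (50 or 90, and 0.5), while the helper names and the morphology kernel size are our own assumptions.

```python
import cv2
import numpy as np

background = None  # running-average background model

def update_background(gray, alpha=0.5):
    global background
    if background is None:
        # Initialized from the first frame; the paper averages ~30 frames,
        # cf. equation (1).
        background = gray.astype("float")
        return
    # Running average, cf. equation (3): bg = (1 - alpha)*bg + alpha*frame.
    cv2.accumulateWeighted(gray, background, alpha)

def segment_hand(gray, threshold=50):
    # Absolute difference between background and current frame, equation (2).
    diff = cv2.absdiff(cv2.convertScaleAbs(background), gray)
    # Binary threshold, equation (4): hand white, rest black.
    _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    # Erosion and dilation remove small noisy regions.
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.erode(mask, kernel, iterations=2)
    mask = cv2.dilate(mask, kernel, iterations=2)
    return mask
```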
Contour-Extraction
The contour can be defined as the boundary or outline of an object (the hand in our case) located in the image. In other words, the contour is a curve connecting points that have similar color values; it is a very important feature in shape analysis, object detection, and recognition.
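A short sketch of contour extraction, assuming the OpenCV 4 signature of findContours; the hand is taken to be the largest contour in the binary mask.

```python
import cv2

def largest_contour(mask):
    # OpenCV 4 signature: findContours returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    # The hand is assumed to be the largest connected region in the ROI.
    return max(contours, key=cv2.contourArea)
```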
Features extraction and recognition
After extracting the contour of the hand as a feature, we turn to the second part of the work: determining the number of fingers. The hand gesture can be recognized from the number of fingers, and to perform this task two methods are merged: one uses the Convex Hull to locate the extreme points (top, bottom, left and right), and the other depends on Convexity Defects. Here we must clarify the principle of the convex set, which means that all lines between any two points within the hull lie entirely within it. From the extreme points the center of the palm can be computed, see Figure (6). The next step is to draw a circle around the fingers whose center is the center of the palm and whose radius is seventy percent of the maximum Euclidean distance between the palm's center and the extreme points. After that, a bitwise AND operation between this circle and the thresholded image produces finger slices that can be used to compute the number of fingers. Once the gesture is determined from the number of fingers, the corresponding operation is performed. In fact, the process of recognizing the hand gesture is a dynamic process: after performing the instruction required by the gesture, the system returns to the first step to take another image, and so on. Figure (5) explains the Convex Hull; the defects formed to preserve the convexity are shown in Figure (7). A defect occurs where the object's contour recedes from the Convex Hull towards the object itself. A convexity defect is a vector containing three points (start, end, farthest) and the approximate distance between the farthest point and the convex hull, as shown in Figure (8). After the defects are found, the angle between two fingers must be obtained to determine whether a finger is held up. The angle can be computed from the triangle formed by the points (start, end, farthest), using the Euclidean distance equation to find the lengths of the sides of the triangle. Figure 8. Components of Convexity Defects.
Convexity defects method.
After that, the farthest angle can be found by the law of cosines: farthest = cos⁻¹((B² + C² − A²)/(2·B·C)). If the farthest angle is less than or equal to 85°, the two adjacent fingers are considered held up. This can be used to count the fingers from two to five by: Number of fingers = number of defects + 1 (10).
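The finger-counting step can be sketched as below. It follows the description above (convexity defects, the ≤ 85° angle test, fingers = defects + 1); the function name and the handling of the zero- and one-finger cases, which the paper covers with the extreme-point method, are our own simplifications.

```python
import cv2
import numpy as np

def count_fingers(contour):
    # Convex hull as indices into the contour, then its convexity defects.
    hull_idx = cv2.convexHull(contour, returnPoints=False)
    defects = cv2.convexityDefects(contour, hull_idx)
    if defects is None:
        return 0
    held_up = 0
    for i in range(defects.shape[0]):
        s, e, f, _ = defects[i, 0]
        start, end, far = contour[s][0], contour[e][0], contour[f][0]
        # Triangle side lengths via the Euclidean distance.
        a = np.linalg.norm(end - start)
        b = np.linalg.norm(far - start)
        c = np.linalg.norm(far - end)
        # Law of cosines: angle at the farthest point.
        cos_far = (b**2 + c**2 - a**2) / max(2 * b * c, 1e-9)
        if np.degrees(np.arccos(np.clip(cos_far, -1.0, 1.0))) <= 85:
            held_up += 1  # this defect lies between two raised fingers
    # Equation (10): number of fingers = number of (finger) defects + 1.
    # Zero- and one-finger gestures are handled by the extreme-point method.
    return held_up + 1 if held_up else 0
```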
Results and analysis
This paper recognized sixteen gestures, as shown in Figure (9) and Figure (10). The first method formed six gestures, but the recognition of the Five gesture was not perfect. After combining the Convex Hull method with the Convexity Defects method, the recognition of the Five gesture became perfect and another ten gestures were added, as shown in Figure (9). The parameters used to recognize the last ten gestures are the number of defects, the number of fingers, the distance between the start point and the end point, the distance between the end point and the farthest point, the distance between the start point and the farthest point, the coordinates of the extreme points, the farthest angle, and the distances between the extreme points and the coordinates of the center point. Table 1 shows the experimental results of the proposed system with threshold values of 50 (when the background is brighter than the skin color) and 90 (when the skin color is brighter than the background) and an alpha value of 0.5. The performance of the system is given in Chart 1. The results show that the recognition rate is 97.5%, which is very good compared with other research papers, as explained in Table 2.
Conclusion
For a long time, the problem of recognizing gestures has been important in computer vision because of the challenge of extracting the target object, such as a hand, from a cluttered background, all in real time. A human looking at an image can easily detect what is inside it, but this is very difficult for a computer looking at the same image, because it deals with the image as a three-dimensional matrix.
In this paper, we obtained the same results whether the right or the left hand was used. The proposed system uses only the bare hand and the laptop webcam, so it is very flexible for the user. The system does not need a database; it distinguishes the gesture directly, which keeps the system fast. The contribution of this paper is the combination of two methods, Convex Hull and Convexity Defects, to recognize sixteen hand gestures. In the future, to enhance the system, both hands could be used instead of only the right hand, which would increase the number of gestures. The experimental results showed that the best recognition rate is obtained when the background is clear and the lighting is medium, so these limitations must be addressed in the future in order to increase the accuracy of the system. | 4,375 | 2020-03-21T00:00:00.000 | [
"Computer Science"
] |
Oscillatory and Fourier Integral operators with degenerate canonical relations
We mostly survey results concerning the $L^2$ boundedness of oscillatory and Fourier integral operators. This article does not intend to give a broad overview; it mainly focuses on a few topics directly related to the work of the authors.
The operators under consideration are of the form

(1.1) T_λf(x) = ∫ e^{ıλΦ(x,y)} σ(x, y) f(y) dy.

In (1.1) it is assumed that the real-valued phase function Φ is smooth in Ω_L × Ω_R, where Ω_L, Ω_R are open subsets of R^d, and that the amplitude σ ∈ C_0^∞(Ω_L × Ω_R). (The assumption that dim(Ω_L) = dim(Ω_R) is only for convenience; many of the definitions, techniques and results described below have analogues in the nonequidimensional setting.) The L^2 boundedness properties of T_λ are determined by the geometry of the canonical relation

C = {(x, Φ_x, y, −Φ_y) : (x, y) ∈ supp σ} ⊂ T*Ω_L × T*Ω_R.
The best possible situation occurs when C is locally the graph of a canonical transformation; i.e., the projections π_L, π_R to T*Ω_L, T*Ω_R, resp., are locally diffeomorphisms. In this case Hörmander [37], [38] proved that the norm of T_λ as a bounded operator on L^2(R^d) satisfies

(1.2) ‖T_λ‖_{L^2→L^2} ≲ λ^{−d/2}.

The proof consists in applying Schur's test to the kernel of T*_λT_λ; see the argument following (1.6) below.
It is also useful to study a more general class of oscillatory integrals which naturally arises when composing two different operators T_λ, T̃_λ and which is also closely related to the concept of Fourier integral operator. We consider the oscillatory integral kernel with frequency variable ϑ ∈ Θ (an open subset of R^N), defined by

(1.3) K_λ(x, y) = ∫ e^{ıλΨ(x,y,ϑ)} a(x, y, ϑ) dϑ,

where Ψ ∈ C^∞(Ω_L × Ω_R × Θ) is real-valued and a ∈ C_0^∞(Ω_L × Ω_R × Θ). Let T_λ be the associated integral operator,

(1.4) T_λf(x) = ∫ K_λ(x, y) f(y) dy.
Again the L^2 mapping properties of T_λ are determined by the geometric properties of the canonical relation

C = {(x, Ψ_x, y, −Ψ_y) : Ψ_ϑ = 0}.

It is always assumed that C is an immersed manifold, which is a consequence of the linear independence of the vectors ∇_{(x,y,ϑ)}Ψ_{ϑ_i}, i = 1, …, N, at {Ψ_ϑ = 0}. In other words, Ψ is a nondegenerate phase in the sense of Hörmander [37], although Ψ is not assumed to be homogeneous. As before, the best possible situation for L^2 estimates arises when C is locally the graph of a canonical transformation. Analytically this means that

(1.5) det ( Ψ_{xy}  Ψ_{xϑ} ; Ψ_{ϑy}  Ψ_{ϑϑ} ) ≠ 0.

Under this assumption the L^2 result becomes

(1.6) ‖T_λ‖_{L^2→L^2} ≲ λ^{−(d+N)/2},

so that we recover (1.2) when N = 0. The proof of (1.6) can be given by using methods in [37], or alternatively by a straightforward modification of the argument in [38]. Indeed, consider the Schwartz kernel H_λ of the operator T*_λT_λ, which is given by

H_λ(u, y) = ∫∫∫ e^{−ıλ[Ψ(x,u,w)−Ψ(x,y,ϑ)]} γ(x, u, w, y, ϑ) dw dϑ dx,

where γ is smooth and compactly supported. By using partitions of unity we may assume that σ in (1.1) has small support; thus γ has small support. Change variables w = ϑ + h and, after interchanging the order of integration, integrate by parts with respect to the variables (ϑ, x); in view of the small support of γ and the nondegeneracy (1.5), this yields rapid decay of H_λ in λ(|h| + |u − y|). It follows that ‖T*_λT_λ‖_{L^2→L^2} ≲ λ^{−N−d} and hence (1.6).
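The quantitative step can be written out as follows; this is a sketch, with constants depending on the support of γ and on the lower bound for the phase gradient furnished by (1.5).

```latex
% Integration by parts in (\vartheta,x): by (1.5) the phase gradient is
% bounded below by c(|h| + |u-y|) on the small support of \gamma, so
|H_\lambda(u,y)| \le C_M \int_{\mathbb{R}^N}
   \big(1 + \lambda(|h| + |u-y|)\big)^{-M}\, dh
 \lesssim \lambda^{-N} (1 + \lambda |u-y|)^{-(M-N)}.
% Schur's test for the kernel of T_\lambda^* T_\lambda then gives
\sup_u \int |H_\lambda(u,y)|\, dy \lesssim \lambda^{-N-d},
\qquad
\|T_\lambda\|_{L^2 \to L^2}
  = \|T_\lambda^* T_\lambda\|_{L^2 \to L^2}^{1/2}
  \lesssim \lambda^{-(d+N)/2}.
```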
Reduction of frequency variables.
Alternatively, as in the theory of Fourier integral operators, one may compose T λ with unitary operators associated to canonical transformations, and together with stationary phase calculations, deduce estimates for operators of the form (1.3-4) from operators of the form (1.1), which involve no frequency variables; in fact this procedure turns out to be very useful when estimating operators with degenerate canonical relations.
We briefly describe the idea based on [37], for details see [25].
Consider the operator T_λ with kernel ∫_{R^N} e^{ıλφ(x,y,z)} a(x, y, z) dz. Let A_i, i = 1, 2, be symmetric d × d matrices and define S_{λ,i}g(x) = (λ/2π)^{d/2} ∫ e^{ıλ(½⟨A_iw,w⟩ − ⟨x,w⟩)} g(w) dw; clearly the S_{λ,i} are unitary operators on L^2(R^d). A computation yields that the operator λ^{−d}S_{λ,1}T_λS*_{λ,2} can be written as the sum of an oscillatory integral operator with kernel O_λ(x, y) plus an operator with L^2 norm O(λ^{−M}) for any M. The oscillatory kernel O_λ(x, y) is again of the form (1.3), where the phase function is given by

Ψ(x, y, ϑ) = ⟨y, w̃⟩ − ⟨x, w⟩ + ½(⟨A_1w, w⟩ − ⟨A_2w̃, w̃⟩) + φ(w, w̃, z)

with frequency variables ϑ = (w, z, w̃) ∈ R^d × R^N × R^d, and the amplitude is compactly supported. One can choose A_1, A_2 so that for tangent vectors δx, δy ∈ R^d at a reference point the vector (δx, A_1δx, δy, A_2δy) is tangent to the canonical relation C associated with S_{λ,1}T_λS*_{λ,2}. Let π_space be the projection C → Ω_L × Ω_R, which with our choice of A_1, A_2 has invertible differential. Since the number of frequency variables (N + 2d) minus the rank of Ψ_{ϑϑ} is equal to 2d − rank dπ_space, we deduce that det Ψ_{ϑϑ} ≠ 0.
In the integral defining the kernel of S_{λ,1}T_λS*_{λ,2} we can now apply the method of stationary phase to reduce the number of frequency variables to zero, and gain a factor of λ^{−(2d+N)/2}. Thus we may write S_{λ,1}T_λS*_{λ,2} = λ^{−N/2}T̃_λ + R_λ, where T̃_λ is an oscillatory integral operator (without frequency variables) and R_λ is an operator with L^2 norm O(λ^{−M}) for any large M. Since the S_{λ,i} are unitary, the L^2 bounds for λ^{−N/2}T̃_λ and T_λ are equivalent.
Fourier integral operators.
The kernel of a Fourier integral operator F : C_0^∞(Ω_R) → D′(Ω_L) of order µ, F ∈ I^µ(Ω_L, Ω_R; C), is locally given as a finite sum of oscillatory integrals

(1.7) ∫ e^{ıΨ(x,y,θ)} a(x, y, θ) dθ,

where now Ψ is nondegenerate in the sense of Hörmander [37], satisfies the homogeneity condition Ψ(x, y, tθ) = tΨ(x, y, θ) for |θ| = 1 and t ≫ 1, and a is a symbol of order µ + (d − N)/2. We assume in what follows that a(x, y, θ) vanishes for (x, y) outside a fixed compact set. The canonical relation is locally given by C = {(x, Ψ_x, y, −Ψ_y) : Ψ_θ = 0} and we assume that C stays away from the zero-sections, i.e. C ∩ ((0_L × T*Ω_R) ∪ (T*Ω_L × 0_R)) = ∅, where 0_L, 0_R denote the zero-sections in T*Ω_L and T*Ω_R. Staying away from the zero-sections implies

(1.8) |Ψ_x(x, y, θ)| ≈ |θ| ≈ |Ψ_y(x, y, θ)|

for large θ (when Ψ_θ is small). Let β ∈ C_0^∞(1/2, 2), let a_k(x, y, θ) = β(2^{−k}|θ|)a(x, y, θ), and let F_k be the dyadic localization of F; i.e. (1.7) but with a replaced by a_k. The assumptions Ψ_x ≠ 0 and Ψ_y ≠ 0 can be used to show that for k, l ≥ 1 the operators F_k are almost orthogonal, in the sense that F*_kF_l and F_kF*_l have operator norms O(min{2^{−kM}, 2^{−lM}}) for any M, provided that |k − l| ≥ C for some large but fixed constant C. This follows from a straightforward integration by parts argument based on (1.8) and the assumption of compact (x, y) support. Using a change of variable θ = λϑ, the study of the L^2 boundedness (and L^2-Sobolev boundedness) properties is reduced to the study of oscillatory integral operators (1.3-4) and, in the nondegenerate case, an application of estimate (1.2) above. The result is that if F is of order µ and if the associated homogeneous canonical transformation is a local canonical graph, then F maps the Sobolev space L^2_α to L^2_{α−µ}. An important subclass is the class of conormal operators associated to phase functions linear in the frequency variables (see [37, §2.4]). The generalized Radon transforms

(1.11) Rf(x) = ∫_{M_x} f(y) χ(x, y) dσ_x(y)

arise as model cases.
Here the M_x are codimension ℓ submanifolds of R^d, dσ_x is a smooth density on M_x varying smoothly in x, and χ ∈ C_0^∞(Ω_L × Ω_R). One assumes that the M_x are sections of a manifold M ⊂ Ω_L × Ω_R, so that the projections to Ω_L and to Ω_R have surjective differentials; this assumption ensures the L^1 and L^∞ boundedness of the operator R. We refer to M as the associated incidence relation.
Assuming that M is given by an R^ℓ-valued defining function Φ, the distribution kernel of R is χ_0(x, y)δ(Φ(x, y)), where χ_0 ∈ C_0^∞(Ω_L × Ω_R) and δ is the Dirac measure in R^ℓ at the origin. The assumptions on the projections to Ω_L, Ω_R imply that rank Φ_x = rank Φ_y = ℓ in a neighborhood of M = {Φ = 0}. The Fourier integral description is then obtained by writing out δ by means of the Fourier inversion formula in R^ℓ; this has been used in [35], where R is identified as a Fourier integral operator of order −(d − ℓ)/2, see also [55]. More general conormal operators are obtained by composing Radon transforms with pseudo-differential operators (see [37]). The canonical relation associated to the generalized Radon transform is the twisted conormal bundle N*M′ of the incidence relation. We can locally (after possibly a change of coordinates) parametrize M as a graph; using (1.5) with Ψ(x, y, τ) = τ · Φ(x, y) one verifies that the condition for N*M′ being a local canonical graph is equivalent to the nonvanishing of the determinant (1.14) for all τ ∈ S^{ℓ−1}. Under this condition R maps L^2 to L^2_{(d−ℓ)/2}. We note that the determinant in (1.14) vanishes for some τ if ℓ < d/2. In particular, if ℓ = d − 1 then the expression (1.14) is a linear functional of τ and thus, if (x, y) is fixed, it vanishes for all τ in a hyperplane. Therefore degeneracies always occur for averages over manifolds of high codimension, in particular for curves in three or more dimensions. Here we shall restrict ourselves to maps (or pairs of maps) which have corank ≤ 1.
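For orientation, in the codimension one case ℓ = 1 the determinant condition is the classical rotational curvature (Monge-Ampère) condition; the following standard form of this special case is a sketch, not the general display (1.14) from the source.

```latex
% \ell = 1, \Phi scalar: N^*\mathcal{M}' is a local canonical graph iff
\det
\begin{pmatrix}
 0                  & \partial_{y_1}\Phi        & \cdots & \partial_{y_d}\Phi \\
 \partial_{x_1}\Phi & \partial^2_{x_1 y_1}\Phi  & \cdots & \partial^2_{x_1 y_d}\Phi \\
 \vdots             &                           &        & \vdots \\
 \partial_{x_d}\Phi & \partial^2_{x_d y_1}\Phi  & \cdots & \partial^2_{x_d y_d}\Phi
\end{pmatrix}
\neq 0
\qquad \text{on } \mathcal{M} = \{\Phi = 0\}.
```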
Let M, N be n-dimensional manifolds, P_0 ∈ M and Q_0 ∈ N, and let f : M → N be a C^∞ map with f(P_0) = Q_0. A vector field V is a kernel field for the map f on a neighborhood U of P_0 if V is smooth on U and if there exists a smooth vector field W on f(U) so that Df_P V = det(Df_P) W_{f(P)} for all P ∈ U. If rank Df_{P_0} ≥ n − 1 then it is easy to see that there is a neighborhood U of P_0 and a nonvanishing kernel vector field V for f on U. Moreover, if Ṽ is another kernel field on U then Ṽ = αV + det(Df)W̃ in some neighborhood of P_0, for some smooth function α and some smooth vector field W̃.

Definition. Suppose that M and N are smooth n-dimensional manifolds and that f : M → N is a smooth map with dim ker(Df) ≤ 1 on M. We say that f is of type k at P if there is a nonvanishing kernel field V near P so that V^j(det Df)_P = 0 for j < k but V^k(det Df)_P ≠ 0.
This definition was proposed by Comech [13], [15] who assumes in addition that Df drops rank simply on the singular variety {det Df = 0}.
The finite type condition is satisfied for the class of Morin singularities (folds, cusps, swallowtails, ...) which we shall now discuss.
2.2 Morin singularities. We consider as above maps f : M → N of corank ≤ 1.
We say that f drops rank simply at P_0 if rank Df_{P_0} = n − 1 and if d(det Df)_{P_0} ≠ 0. Then near P_0 the variety S_1(f) = {x : rank Df = n − 1} is a hypersurface and we say that f has an S_1 singularity at P_0 with singularity manifold S_1(f).
Next let S be a hypersurface in a manifold U and let v be a vector field defined on S with values in TU (meaning that v_P ∈ T_PU for P ∈ S). We say that v is transversal to S at P ∈ S if v_P ∉ T_PS. We say that v is simply tangent to S at P_0 if there is a one-form ω annihilating vectors tangent to S so that ⟨ω, v⟩|_S vanishes of exactly first order at P_0. This condition does not depend on the particular choice of ω. Next let P ↦ ℓ(P) ⊂ T_P(U) be a smooth field of lines defined on S, and let v be a nonvanishing vector field so that ℓ(P) = Rv_P. The definitions of transversality and simple tangency carry over to fields of lines (and do not depend on the particular choice of the vector field).
Next consider F : U → N where dim U = k ≥ 2 and dim N = n ≥ k, and assume that rank DF ≥ k − 1. Suppose that S is a hypersurface in U such that rank DF = k − 1 on S, and suppose that Ker DF is simply tangent to S at P ∈ S. Then there is a neighborhood U′ of P in S such that the variety {Q ∈ U′ : Ker DF_Q ⊂ T_QS} is a hypersurface in S. With these notions we can now recall the definition of Morin singularities ([78], [47]).
Definition. Let 1 ≤ r ≤ n. Let S_1, …, S_r be submanifolds of an open set U ⊂ M so that S_k is of dimension n − k in U and S_1 ⊃ S_2 ⊃ ⋯ ⊃ S_r; we also set S_0 := U.
We say that f has an S 1r singularity in U, with a descending flag of singularity manifolds (S 1 , . . . , S r ) if the following conditions hold in U.
(i) For P ∈ U, either Df P is bijective or f drops rank simply at P .
Definition. We say that f has an S 1r,0 singularity at P , if the following conditions hold.
(i) There exist a neighborhood U of P and submanifolds S_k of dimension n − k in U so that P ∈ S_r ⊂ S_{r−1} ⊂ ⋯ ⊂ S_1 and so that f : U → N has an S_{1_r} singularity in U, with singularity manifolds (S_1, …, S_r).
The singularity manifolds S k are denoted by S 1 k (f ) in singularity theory (if the neighborhood is understood). An S 1,0 (or S 11,0 ) singularity is a Whitney fold; an S 1,1,0 (or S 12,0 ) singularity is referred to as a Whitney or simple cusp.
If f is given in adapted coordinates vanishing at P, i.e.

(2.2) f(t_1, …, t_n) = (t_1, …, t_{n−1}, h(t)),

then f has an S_{1_r} singularity in a neighborhood of P = 0 if and only if

(2.3) ∂h/∂t_n(0) = ⋯ = ∂^r h/∂t_n^r(0) = 0

and the gradients ∇(∂^j h/∂t_n^j), j = 1, …, r, are linearly independent at 0. Moreover f has an S_{1_r,0} singularity at P if in addition

(2.4) ∂^{r+1}h/∂t_n^{r+1}(0) ≠ 0.

The singularity manifolds are then given by

(2.5) S_k = {t : ∂h/∂t_n = ⋯ = ∂^k h/∂t_n^k = 0}, 1 ≤ k ≤ r.

In these coordinates the kernel field for f is ∂/∂t_n and the map f is of type r at P. Normal forms of S_{1_r} singularities are due to Morin [47], who showed that there exist adapted coordinate systems so that (2.2) holds with

h(t) = t_n^{r+1} + Σ_{j=1}^{r−1} t_j t_n^j.

Finally we mention the situation of maximal degeneracy for S_1 singularities, which occurs when the kernel of Df is everywhere tangential to the singularity surface S_1(f). In this case we say that f is a blowdown; see example 2.3.3 below.
Examples.
We now discuss some model examples. The first set of examples concern translation invariant averages over curves, the second set restricted X-ray transforms for rigid line complexes. The map f above will always be one of the projections π L : C → T * Ω L or π R : C → T * Ω R . Note that S 1 (π L ) = S 1 (π R ).
2.3.1. Consider the operator on functions in R^d

(2.7) Af(x) = ∫ f(x − Γ(α)) χ(α) dα,

where Γ is a smooth regular curve in R^d and χ ∈ C_0^∞. The canonical relation is given by

C = {(x, ξ, x − Γ(α), ξ) : ⟨ξ, Γ′(α)⟩ = 0, ξ ≠ 0}.

Considering the projection π_L, it is not hard to see that S_{1_k}(π_L) is the submanifold of C where in addition ⟨ξ, Γ^{(j)}(α)⟩ = 0 for 2 ≤ j ≤ k + 1. If Γ is nondegenerate, i.e. Γ′(α), …, Γ^{(d)}(α) are linearly independent for every α, then clearly S_{1_{d−1}}(π_L) = ∅, so that we have at most an S_{1_{d−2}},0 singularity. The behavior of π_R is of course exactly the same; moreover, for small perturbations the projections π_L and π_R still have at most S_{1_{d−2}},0 singularities. Note that in the translation invariant setting we have S_{1_k}(π_L) = S_{1_k}(π_R), but for small variable perturbations the manifolds S_{1_k}(π_L), S_{1_k}(π_R) are typically different if k ≥ 2. By Fourier transform arguments and van der Corput's lemma it is easy to see that A maps L^2(R^d) to the Sobolev space L^2_{1/d}(R^d), and it is conjectured that this estimate remains true for variable coefficient perturbations. This is known in dimensions d ≤ 4 (cf. §5 below).
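The Fourier transform argument can be made explicit; the following is a sketch under the nondegeneracy assumption on Γ.

```latex
% A is a convolution operator: \widehat{Af}(\xi) = m(\xi)\,\widehat{f}(\xi), with
m(\xi) = \int e^{-\imath \langle \xi, \Gamma(\alpha) \rangle}\, \chi(\alpha)\, d\alpha.
% Since \Gamma'(\alpha), \dots, \Gamma^{(d)}(\alpha) are linearly independent,
% for |\xi| large some derivative of the phase satisfies
% |\langle \xi, \Gamma^{(j)}(\alpha) \rangle| \gtrsim |\xi|, and van der
% Corput's lemma gives
|m(\xi)| \lesssim (1 + |\xi|)^{-1/d},
% which is exactly the asserted L^2 \to L^2_{1/d}(\mathbb{R}^d) bound for A.
```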
2.3.2. Consider the example (2.7) with d = 3 and Γ(α) = (α, α^m, α^n), where m, n are integers with 1 < m < n.
2.3.3.
For an example of one-sided behavior we consider the restricted X-ray transform (2.8) associated to a rigid line complex, where γ is now a regular parametrization of a curve in R^{d−1} and χ_0, χ are smooth and compactly supported. The complex of lines is d-dimensional and is referred to as rigid because of the translation invariance in the x′ variables. The singular set S_1(π_L) = S_1(π_R) is the submanifold of the canonical relation on which τ · γ′(x_d) = 0. One computes that V_L = ∂/∂y_d is a kernel vector field for π_L and V_R = ∂/∂x_d is a kernel vector field for π_R. Clearly V_L is everywhere tangential to S_1(π_L), so that π_L is a blowdown. The behavior of the projection π_R depends on assumptions on γ. The best case occurs when γ′(x_d), …, γ^{(d−1)}(x_d) are linearly independent everywhere. The singularity manifolds S_k = S_{1_k}(π_R) are then given by the equations τ · γ^{(j)}(x_d) = 0, j = 1, …, k; thus S_{1_{d−1}}(π_R) = ∅ and π_R has (at most) S_{1_{d−2}},0 singularities. For the model case given here it is easy to derive the sharp L^2-Sobolev estimates: modulo the cutoff function, R defines a translation invariant operator, and by van der Corput's lemma one deduces that R maps L^2 to L^2_{1/(2d−2)}. It is conjectured that the X-ray transform for general well-curved line complexes satisfies the same estimate locally; here χ is supported in (−ε, ε) for small ε and it is assumed that for each fixed x_d the vectors γ′(x_d), …, γ^{(d−1)}(x_d) are linearly independent.
Strong Morin singularities.
We now discuss the notion of strong Morin singularities, or S⁺_{1_r} singularities, for maps into a fiber bundle W over a base manifold B with projection π_B. Here it is assumed that dim W = n and dim B = q ≤ n − r, so that the fibers W_b = π_B^{−1}(b) are (n − q)-dimensional manifolds (see [26]). The relevant W is T*Ω_R, the cotangent bundle of the base B = Ω_R.
We remark that for the examples in 2.3.1 both π L and π R have strong Morin singularities while for the example in 2.3.3 π R has strong Morin singularities. This remains true for small perturbations of these examples.
In order to verify the occurence of strong Morin singularities for canonical relations which come up in studying averages on curves the following simple lemma is useful.
Lemma. Let I be an open interval, let ψ : I → R^n be a smooth parametrization of a regular curve not passing through the origin, and let M = {(t, η) ∈ I × R^n : ⟨η, ψ(t)⟩ = 0}. Let π : M → R^n be defined by π(t, η) = η. Then π has at most S_{1_{n−2}},0 singularities if and only if ψ(t), ψ′(t), …, ψ^{(n−1)}(t) are linearly independent for every t ∈ I.
For the proof assume first the linear independence of the ψ^{(j)}(t). We may work near t = 0 and, by a linear change of variables, normalize the derivatives ψ^{(j)}(0). Then (η′, t) and (ξ′, ξ_1) form adapted coordinates (cf. (2.2)) for the map π, and in these coordinates the relevant gradients are linearly independent; thus π has at most S_{1_{n−2}},0 singularities. Conversely, assume that π has at most S_{1_{n−2}},0 singularities. Since ψ does not pass through the origin, we may assume that ψ_n(t) ≠ 0 locally. Then the map π is given in adapted coordinates of the form (2.2), and the linear independence follows easily from (2.3-5).
Mixed finite type conditions. We briefly discuss mixed conditions for pairs of maps f_L : M → N_L, f_R : M → N_R of corank ≤ 1 which are volume-equivalent, i.e., there is a nonvanishing function α so that det Df_L = α det Df_R in the domain under consideration. Let V_L, V_R be nonvanishing kernel fields on M for the maps f_L, f_R. Let U be a neighborhood of P in M. We define D_{j,k}(U) to be the linear space of differential operators spanned by operators of the form V_1V_2⋯V_{j+k}, where the V_i are kernel fields for the maps f_L or f_R in U, and k of them are kernel fields for f_L and j of them are kernel fields for f_R. Let h be a real-valued function defined in a neighborhood of P ∈ M; we say that h vanishes of order (j, k) at P if L̃h_P = 0 for all L̃ ∈ D_{j′,k′}(U) with j′ ≤ j, k′ ≤ k, (j′, k′) ≠ (j, k), and if there is an operator L ∈ D_{j,k} so that Lh_P ≠ 0. The pair (f_L, f_R) is said to be of type (j, k) at P if det Df_L vanishes of order (j, k) at P; because of the assumption of volume equivalence, det Df_L in this definition can be replaced by det Df_R. In the canonical example of interest here we have M = C ⊂ T*Ω_L × T*Ω_R, a canonical relation, and f_L ≡ π_L, f_R ≡ π_R are the projections to T*Ω_L and T*Ω_R, respectively.
Fourier integral operators in two dimensions
In this section we examine the regularity of Fourier integral operators in two dimensions, in which case one can obtain the sharp L^2 regularity properties with the possible exception of endpoint estimates. We shall assume that Ω_L, Ω_R ⊂ R^2, that C is a homogeneous canonical relation, and that F ∈ I^{−1/2}(Ω_L, Ω_R, C) has compactly supported distribution kernel; we assume that the rank of dπ_space : C → Ω_L × Ω_R is ≥ 2 everywhere. The generalized Radon transform (1.11) (with ℓ = 1, d = 2) is a model case in which rank(dπ_space) = 3.
In order to formulate the L^2 results we shall work with the Newton polygon, as in [58] where oscillatory integral operators in one dimension are considered. We recall that for a set E of pairs (a, b) of nonnegative numbers the Newton polygon associated to E is the closed convex hull of all quadrants {(x, y) : x ≥ a, y ≥ b} with (a, b) ∈ E.

Definition. For c ∈ C let N(c) be the Newton polygon associated to the set of all (j, k) for which C is of type (j, k) at c, and let t_c be determined by the intersection of the boundary of N(c) with the diagonal, i.e. (t_c, t_c) ∈ ∂N(c).

Using the notion of type (j, k) in §2.5 we can now formulate

3.1. Theorem. Let Ω_L, Ω_R ⊂ R^2 and C as above and let F ∈ I^{−1/2}(Ω_L, Ω_R; C), with compactly supported distribution kernel. Let α = min_c (2t_c)^{−1}.
Then the operator F maps L^2 boundedly to L^2_{α−ε} for all ε > 0. In the present two-dimensional situation one can reduce matters to operators with phase functions that are linear in the frequency variables (i.e., the conormal situation). We briefly describe this reduction.
First, our operator can be written, modulo smoothing operators, as a finite sum of operators of the form

Ff(x) = ∫ e^{ıϕ(x,ξ)} a(x, ξ) f̂(ξ) dξ,

where a is of order −1/2 and has compact x-support. We may also assume that a(x, ξ) has ξ-support in an annulus {ξ : |ξ| ≈ λ} for large λ. By scaling we can reduce matters to proving the appropriate bound for the L^2 operator norm of the oscillatory integral operator T_λ defined by

T_λf(x) = ∫ e^{ıλϕ(x,ξ)} χ(x, ξ) f(ξ) dξ;

here χ has compact support and vanishes for ξ near 0. We introduce polar coordinates in the last integral, ξ = σ(cos y_1, sin y_1), and put S(x, y_1) = ϕ(x, cos y_1, sin y_1). Then the asserted bound for T_λ is equivalent to the same bound for the L^2 norm of the operator obtained in the variables (y_1, σ), for a suitable cutoff χ̃; here we have used the homogeneity of ϕ. Now we rescale again and apply a Fourier transform in σ, and see that the bound follows from the L^2 → L^2_α bound for the conormal Fourier integral operator with distribution kernel

∫ e^{ıτΦ(x,y)} b(x, y, τ) dτ,

where Φ(x, y) = S(x, y_1) − y_2, and b is a symbol of order 0, supported in {|τ| ≈ λ} and compactly supported in x.
Thus it suffices to discuss conormal operators of this form; in fact for them one can prove almost sharp L p → L p α estimates. Before stating these results we shall first reformulate the mixed finite type assumption from §2.5 in the present situation.
3.2. Mixed finite type conditions in the conormal situation. We now look at operators with distribution kernels of the form (3.3). The singular support of such operators is given by M = {(x, y) : Φ(x, y) = 0}, and it is assumed that Φ_x ≠ 0, Φ_y ≠ 0. The canonical relation is the twisted conormal bundle N*M′ as in (1.12). In view of the homogeneity, the type condition at c_0 = (x_0, y_0, ξ_0, η_0) ∈ N*M′ is equivalent to the type condition at (x_0, y_0, rξ_0, rη_0) for any r > 0, and since the fibers in N*M′ are one-dimensional it is natural to formulate finite type conditions in terms of vector fields tangent to M and their commutators. We now describe these conditions, referring to [67] for a more detailed discussion. Related ideas have been used in the study of subelliptic operators ([36], [63]), in complex analysis ([41], [2]) and, more recently, in the study of singular Radon transforms ([11]).
Two types of vector fields play a special role: we say that a vector field V on M is of type (1, 0) if it is tangent to M and annihilates functions of y, and of type (0, 1) if it is tangent to M and annihilates functions of x. The notation is suggested by an analogous situation in several complex variables ([41], [55]).
Note that at every point P ∈ M the vector fields of type (1, 0) and (0, 1) span a two-dimensional subspace of the three-dimensional tangent space T_PM. Thus we can pick a nonvanishing 1-form ω which annihilates vector fields of type (1, 0) and (0, 1); with M written as a graph y_2 = S(x, y_1) we may choose ω = dy_2 − S_{y_1}dy_1, and then X = S_{x_2}∂_{x_1} − S_{x_1}∂_{x_2} and Y = ∂_{y_1} + S_{y_1}∂_{y_2} are (1, 0) and (0, 1) vector fields, respectively. With this choice ω does not vanish, and we set

(3.4) ∆(x, y_1) := ⟨ω, [X, Y]⟩.

The quantity (3.4) is often referred to as "rotational curvature" (cf. [55]). Now let µ and ν be two positive integers. For a neighborhood U of P let W_{µ,ν}(U) be the module generated by vector fields ad W_1 ad W_2 ⋯ ad W_{µ+ν−1}(W_{µ+ν}), where µ of these vector fields are of type (1, 0) and ν are of type (0, 1). The mixed finite type condition of §2.5 can be reformulated as follows. Let P ∈ M and let c ∈ N*M′ with base point P. Then C is of type (j, k) at c if there is a neighborhood U of P so that for all vector fields W ∈ W_{j+1,k}(U) ∪ W_{j,k+1}(U) we have ⟨ω, W⟩_P = 0, but there is a vector field W̃ in W_{j+1,k+1} for which ⟨ω, W̃⟩_P ≠ 0. Now coordinates can be chosen so that Φ(x, y) = −y_2 + S(x, y_1), and the generalized Radon transform is given by

(3.5) Rf(x) = ∫ χ(x, y_1, S(x, y_1)) f(y_1, S(x, y_1)) dy_1;

at P = (x, y_1, S(x, y_1)) the mixed finite type condition of order (j, k) amounts to conditions (3.6), (3.7) on the derivatives X^{j_1}Y^{k_1}∆(P). For the equivalence of these conditions see [67].
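With the graph parametrization Φ(x, y) = S(x, y_1) − y_2 the quantity (3.4) can be evaluated explicitly; the following short computation is a routine verification under that parametrization.

```latex
% With X = S_{x_2}\partial_{x_1} - S_{x_1}\partial_{x_2},
%      Y = \partial_{y_1} + S_{y_1}\partial_{y_2},
%      \omega = dy_2 - S_{y_1}\,dy_1, one computes
[X,Y] = X(S_{y_1})\,\partial_{y_2}
       - S_{x_2 y_1}\,\partial_{x_1} + S_{x_1 y_1}\,\partial_{x_2},
% and since \omega annihilates \partial_{x_1}, \partial_{x_2},
\Delta(x,y_1) = \langle \omega, [X,Y] \rangle = X(S_{y_1})
 = S_{x_2}\, S_{x_1 y_1} - S_{x_1}\, S_{x_2 y_1},
% which vanishes exactly where the rotational curvature degenerates.
```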
We now relate the last condition to the finite type condition above. Notice that, in coordinates (x_1, x_2, y_1, τ) on N*M′, a kernel vector field for the projection π_R is given by V_R = S_{x_2}∂_{x_1} − S_{x_1}∂_{x_2}; this can be identified with the vector field X on M. Moreover, a kernel vector field for the projection π_L projects to the vector field Y on M. Thus C is of type (j, k) at the point c (with coordinates (x, y_1, τ)) if conditions (3.6), (3.7) are satisfied, and this is just a condition at the base point P.
We shall now return to the proof of Theorem 3.1 and formulate an L p version for the conormal situation.
3.3. Theorem. Suppose that C is of type (j, k) as above. Then R maps L^p to L^p_α for (1/p, α) in a region given by the convex hull of the points (1, 1), (0, 0) and further vertices determined by the type (j, k); see [67] for the precise statement. The L^2 estimate of Theorem 3.1 for conormal operators follows as a special case, and for the general situation we use the above reduction. Theorem 3.3 is sharp up to the open endpoint cases (cf. also §3.5.1-3 below).
We now sketch the main ingredients of the proof of Theorem 3.3. We may assume that S x2 is near 1 and |S x1 | ≪ 1. Suppose that Q = (x 0 , y 0 ) ∈ M and suppose that the type (j ′ , k ′ ) condition holds for some choice of (j ′ , k ′ ) with j ′ ≤ j and k ′ ≤ k at Q, and suppose that this type assumption is still valid in a neighborhood on the support of the cutoff function χ in (3.5) (otherwise we work with partitions of unity).
Since we do not attempt to obtain an endpoint result, it is sufficient to prove the required estimate for operators with the frequency variable localized to |τ| ≈ λ for large λ. We then make an additional dyadic decomposition in terms of the size of |∆| (i.e., the rotational curvature): define a Fourier integral operator F_{λ,l_0} by inserting the cutoff χ(2^{l_0}∆(x, y_1)) into the kernel. By interpolation arguments our goal is then achieved by proving the two crucial estimates (3.8) and (3.9). A variant of this interpolation argument goes back to investigations on maximal operators in [18] and [71], [72], and (3.9) can be thought of as a version of an estimate for damped oscillatory integrals. The type assumption is only used for the estimate (3.8). We note that by integration by parts with respect to the frequency variable the kernel of F_{λ,l_0} is bounded by

(3.10) λ(1 + λ|y_2 − S(x, y_1)|)^{−N} χ(2^{l_0}∆(x, y_1)).
We can use a well-known sublevel set estimate related to van der Corput's lemma (see [8]) to see that for each fixed x the set of all y_1 such that |∆(x, y_1)| ≲ 2^{−l_0} has small measure, with a bound determined by the type assumption; there is an analogous estimate for each fixed y_1. The two sublevel set estimates together with (3.10) and straightforward applications of Hölder's inequality yield (3.8), see [67].
We now turn to the harder L 2 estimate (3.9). We sketch the ideas of the proof (see [66] and also [67] for some corrections).
If we tried to use the standard TT* argument we would need good lower bounds for S_{y_1}(w, y_1) − S_{y_1}(x, y_1) in the situation where S(w, y_1) − S(x, y_1) is small, but the appropriate lower bounds fail to hold if the rotational curvature is too small. Thus it is necessary to work with finer decompositions. Solve the equation S(w, y_1) − S(x, y_1) = 0 by w_2 = u(w_1, x, y_1) and expand S_{y_1}(w_1, u(w_1, x, y_1), y_1) − S_{y_1}(x, y_1) in powers of w_1 − x_1, with coefficients γ_j(x, y_1), j = 0, …, M, and a remainder term (3.14), where M ≫ 100/ε. In particular

γ_0(x, y_1) = S_{y_1x_1}(x, y_1) + u_{w_1}(x_1, x, y_1)S_{y_1x_2}(x, y_1) = ∆(x, y_1)/S_{x_2}(x, y_1);

the higher coefficients γ_k can be expressed in terms of the derivatives V^jγ_0, j ≤ k, where the coefficients α_k are smooth and V is the (1, 0) vector field ∂_{x_1} − (S_{x_1}/S_{x_2})∂_{x_2}. We introduce an additional localization in terms of the size of the γ_j(x, y_1): for l = (l_0, …, l_M), with l_j < l_0 for j = 1, …, M, define T_{λ,l} by inserting cutoffs which localize to the sets where |γ_j(x, y_1)| ≈ 2^{−l_j}. A modification of the definition is required if |γ_j| ≤ 2^{−l_0} for some j ∈ {1, …, M}.
Since we consider at most O((1 + l_0)^M) = O(2^{εl_0}) such operators, it suffices to bound each individual T_{λ,l}, and the main estimate is a bound of the type (3.9) for each such piece. In what follows we fix λ and l and work with scales µ_i and ν_i provided by a lemma. Note that while µ_i and ν_i are close, there may be "large" gaps between µ_i and ν_{i+1} for which the favorable lower bound (iii) of the lemma holds. The elementary but somewhat lengthy proof of the lemma, based on induction, is in [66]. A shorter and more elegant proof (of a closely related inequality), based on a compactness argument, is due to Rychkov [64].
In order to describe the orthogonality argument we need some terminology. Let I be a subinterval of [0, 1]. We say that β is a normalized cutoff function associated to I if β is supported in I and |β^{(j)}(t)| ≤ |I|^{−j} for j = 1, …, 5, and we denote by A(I) the set of all normalized cutoff functions associated to I.
Fix I and β in A(I); we then define another localization T[β] of T = T_{λ,l} by inserting the cutoff β. It follows quickly from the definition and the property ν_s ≤ µ_s ≤ Cν_s that localizations at scale µ_s can be decomposed into boundedly many localizations at scale ν_s: for any interval Ĩ of length µ_s a function β ∈ A(Ĩ) can be written as a sum of a bounded number of functions associated to subintervals of length ν_s.
One uses the Cotlar-Stein lemma in a form (3.19) adapted to a (finite) sum of operators Σ_j A_j on a Hilbert space (see [73, ch. VII.2]; as pointed out in [7] and elsewhere, the version (3.19) follows by a slight modification of the standard proof). Now if J is an interval of length ν_s/8 and β ∈ A(J), then we split β = Σ_n β_n, where for a fixed absolute constant C the function C^{−1}β_n belongs to A(I_n), the I_n are intervals of length µ_{s−1}, I_n and I_{n′} are disjoint if |n − n′| > 3, and the sum extends over no more than O(ν_s/µ_{s−1}) terms and thus over no more than O(2^{l_0}) terms. Now let |I_n| = |I_{n′}| ≈ µ_{s−1} and dist(I_n, I_{n′}) ≈ |n − n′||I|, and assume |n − n′||I| ≤ ν_s/8. Let β_n, β_{n′} be normalized cutoff functions associated to I_n, I_{n′}. Then

(3.20) T[β_n]*T[β_{n′}] = 0 if |n − n′| > 3,

by the disjointness of the intervals I_n, I_{n′}. The crucial estimate (3.21) is a rapidly decaying bound for the operator norm of T[β_n]T[β_{n′}]*. Together, (3.20) and (3.21) allow us to apply (3.19) with θ = 0 (the standard version does not apply, as is erroneously quoted in [66]); this yields the desired bound. To see (3.21) one examines the kernel K of T[β_n]T[β_{n′}]*, for which the definition of µ_{s−1}, ν_s, I_n, I_{n′} and the above lemma provide favorable lower bounds on the relevant phase differences. To analyze the kernel K and prove (3.21) by Schur's test one integrates by parts once in y_1 and then many times in τ; for the somewhat lengthy details see [66], [67]. Analogous arguments also apply to the estimation of T[β]T[β]* when β is associated to an interval of length ≪ ν_1; this gives (3.18).

3.5.1. In the semi-translation invariant case

(3.24) Rf(x) = ∫ f(y_1, x_2 + s(x_1, y_1)) χ(x, y_1) dy_1

with real-analytic s, the endpoint L^2 → L^2_α estimate in Theorem 3.1 holds true. An only slightly weaker result for the case s ∈ C^∞ has been obtained by Rychkov [64]. For related work see also some recent papers by Greenblatt [23], [24].
3.5.2.
It is not known exactly which endpoint bounds hold in the general case of Theorem 3.3. As an easy case, the L^p → L^p_{1/p} estimate holds if p > n and a type (0, n − 2) condition is satisfied (in the terminology of Theorem 3.3). A similar statement for 1 < p < n/(n − 1) is obtained for type (n − 2, 0) conditions by passing to the adjoint operator.
3.5.3. Some endpoint inequalities in Theorem 3.3 fail: M. Christ [9] showed that convolution with a compactly supported density on (t, t^n) fails to map L^n → L^n_{1/n}. The best possible substitute is an L^{n,2} → L^n_{1/n} estimate in [69]; here L^{n,2} is the Lorentz space.
3.5.4. Interpolation of the bounds in Theorem 3.3 with trivial L 1 → L ∞ bounds (with loss of one derivative) yields almost sharp L p → L q bounds ( [66], [67]). Endpoint estimates for the case of two-sided finite type conditions are in [1]. For endpoint L p → L q estimates in the case (3.24), with real-analytic s, see [57], [79], [42].
3.5.5. It would be desirable to obtain almost sharp L 2 versions such as Theorem 3.1 for more general oscillatory integral operators with a corank one assumption. Sharp endpoint L 2 results where one projection is a Whitney fold (type 1) and the other projection satisfies a finite type condition are due to Comech [15].
3.5.6. Interesting bounds for the semi-translation invariant case (3.24) where only lower bounds on s xy (or higher derivatives) are assumed were obtained by Carbery, Christ and Wright [6]. Related is the work by Phong, Stein and Sturm ( [60], [62], [61]), with important contributions concerning the stability of estimates.
Operators with one-sided finite type conditions
We now discuss operators of the form (1.1) and assume that one of the projections, π L , is of type ≤ r but make no assumption on the other projection, π R . The role of the projections can be interchanged by passing to the adjoint operator.
4.1. Theorem ([25], [26], [28]). Suppose π_L is of corank ≤ 1 and type ≤ r, and suppose that det dπ_L vanishes simply. If r ∈ {1, 2, 3} then

(4.1) ‖T_λ‖_{L^2→L^2} ≲ λ^{−(d−1)/2 − 1/(2(r+1))}.

It is conjectured that this bound also holds for r > 3. The estimate (4.1) is sharp in cases where the other projection exhibits maximal degeneracy. In fact, if π_L is a fold and π_R is a blowdown then more information is available, such as a rather precise description of the kernel of T_λT*_λ, cf. Greenleaf and Uhlmann [32], [33]. Applications include the restricted X-ray transform in three dimensions for the case where the line complexes are admissible in the sense of Gelfand ([21], [30], [34]); for an early construction and application of a Fourier integral operator with this structure see also [43].
In the discussion that follows we shall replace the assumption that det dπ_L vanishes simply (i.e., ∇_{x,z}(det dπ_L) ≠ 0) by the more restrictive assumption (4.2). In the case r = 1 this is automatically satisfied, and it is shown in [26], [28] that in the cases r = 2 and r = 3 one can apply canonical transformations to reduce matters to this situation. For the oscillatory integral operators coming from the restricted X-ray transform for well-curved line complexes, the condition (4.2) is certainly satisfied. We shall show that for general r the estimate (4.1) is a consequence of sharp estimates for oscillatory integral operators satisfying two-sided finite type conditions of order r − 1 in d − 1 dimensions. The argument is closely related to Strichartz estimates and can also be used to derive L^2 → L^q estimates (an early version can be found in Oberlin [48]).
We shall now outline this argument. After initial changes of variables in x and z separately we may assume a convenient normal form for the phase (4.3.1); moreover, by our assumption on the type, we may assume (4.3.2). We may also assume that the amplitudes are supported where |x| + |z| ≤ ε_0 ≪ 1.
We form the operator T_λT*_λ and write its kernel as the sum of a main contribution H^{x_dy_d} and a remainder R^{x_dy_d}. By an integration by parts argument the operator corresponding to R^{x_dy_d} is bounded on L^2(R^{d−1}) with norm O(λ^{−N}) for any N. For the main contribution H^{x_dy_d} we aim for the estimate (4.6). From (4.4), (4.5), (4.6) and the L^2(R) boundedness of an associated integral operator in the x_d, y_d variables, the bound (4.1) follows in a straightforward way. Now observe that the operator H^{x_dy_d} is local on cubes of diameter ≈ |x_d − y_d|, and we can use a trivial orthogonality argument to put the localizations to cubes together. For a single cube we may then apply a rescaling argument. Specifically, let c ∈ R^d and define rescaled operators H^{x_dy_d}_c localized to the cube centered at c; note that H^{x_dy_d}_c does not vanish only for small c. A calculation shows that the kernel of H^{x_dy_d}_c is given by an oscillatory integral with small parameters c, α = |x_d − y_d|, y_d, and phase function Ψ_± + ρ_±; here the choice of Ψ_+ is taken if x_d > y_d and Ψ_− if x_d < y_d, and for the error we have ρ_± = O(α(|y_d| + c)) in the C^∞ topology. Observe in particular that for α = 0 we get essentially the localization of a translation invariant operator. We now examine the canonical relation associated to the oscillatory integral when α = 0. In view of (4.3.1), the critical set {∇_zΨ_± = 0} for the phase function at α = 0 is defined by the vanishing of Φ_{x_dz}; in view of (4.2) this defines a smooth manifold. Consequently the canonical relation is a smooth manifold. By (4.2) we may assume (after performing a rotation) that Φ_{x_dz_dz_1} ≠ 0 and then solve the equation Φ_{x_dz_d}(0, z) = 0 near the origin in terms of a function z_1 = z_1(z″, z_d). In these coordinates ∂/∂z_d is a kernel field for the projection π_L, and implicit differentiation reveals that ∂^k_{z_d}(det dπ_L) belongs to the ideal generated by the Φ_{x_dz_d^j}, j ≤ k; thus, by our assumption (4.3.2), we see that π_L is of type ≤ r − 1. The same holds true for π_R, by symmetry considerations. Although we have verified these conditions for α = 0, they remain true for small α since Morin singularities are stable under small perturbations.
We now discuss estimates for the oscillatory integral operator S^±_µ whose kernel is given by (4.8) (we suppress the dependence on c, α, y_d). The number of frequency variables is N = d, and thus we can expect the uniform bound (4.9) for small α. Indeed, the case α = 0 of (4.9) is easy to verify; because of the translation invariance we may apply Fourier transform arguments together with the method of stationary phase and van der Corput's lemma. Given (4.9), we obtain (4.6) from (4.7) and from (4.9) with µ = λ|x_d − y_d|. Of course the Fourier transform argument does not extend to the case where α is merely small. However, if r = 1 the estimate follows from (1.6) (with d replaced by d − 1 and N = d), since then C_{Ψ_±} is a local canonical graph. Similarly, if r = 2 then the canonical relation C_{Ψ_±} projects with two-sided fold singularities, so that the desired estimate follows from known estimates for this situation (see the pioneering paper by Melrose and Taylor [44], and also [53], [19], [27]). For the case r = 3, inequality (4.6) follows from a recent result by the authors [28] discussed in the next section, plus the reduction outlined in §1.2. The case r ≥ 4 is currently open.
Remarks.
4.2.1. The argument above can also be used to prove L 2 → L q estimates (see [48], [25], [26]). Assume r = 1, and thus assume that π L : C → T * Ω L projects with Whitney folds. Then a stationary phase argument gives a pointwise kernel bound, and interpolation with (4.5)–(4.6) yields L q ′ → L q estimates for K x d y d and then L 2 → L q bounds for T λ ; the resulting estimate of [25] is (4.10). The estimate (4.10) may be improved under the presence of some curvature assumption. Assume that the projection of the fold surface S 1 (π L ) to Ω L is a submersion; then for each x ∈ Ω L the projection of S 1 (π L ) to the fibers is a hypersurface Σ x in T * x Ω L . Suppose that for every x this hypersurface has l nonvanishing principal curvatures (this assumption is reminiscent of the so-called cinematic curvature hypothesis in [46]). Then (4.10) can be replaced by (4.11), and (4.11) holds true for a larger range of exponents, namely (4.12). The version of this estimate for Fourier integral operators [25], with l = 1, yields Oberlin's sharp L p → L q estimates [48] for the averaging operator (2.7) in three dimensions (assuming that Γ is nondegenerate), as well as variable coefficient perturbations. It also yields sharp results for certain convolution operators associated to curves on the Heisenberg group ([65], see §7.3 below) and for estimates for restricted X-ray transforms associated to well curved line complexes in R 3 ([25]).
In dimensions d > 3 the method yields L 2 → L q bounds ([26]) which should be considered as partial results, since in most interesting cases the endpoint L p → L q estimates do not involve the exponent 2.
4.2.2. The analogy with the cinematic curvature hypothesis has been exploited by Oberlin, Smith and Sogge [52] to prove nontrivial L 4 → L 4 α estimates for translation invariant operators associated to nondegenerate curves in R 3 . Here it is crucial to apply a square function estimate due to Bourgain [3] that he used in proving bounds for cone multipliers. The article [51] contains an interesting counterexample for the failure of L p → L p 1/p−ε estimates when p < 4.
4.2.3. Techniques of oscillatory integrals have been used by Oberlin [49] to obtain essentially sharp L p → L q estimates for the operator (2.7) in four dimensions; see also [29] for a related argument for the restricted X-ray transform in four dimensions, in the rigid case (2.8).
4.2.4. More recently, a powerful combinatorial method was developed by Christ [10] who proved essentially sharp L p → L q estimates for the translation invariant model operator (2.7) in all dimensions (for nondegenerate Γ). L p → L q bounds for the X-ray transform in higher dimensions, in the model case (2.8), have been obtained by Burak-Erdogan and Christ ([4], [5]); these papers contain even stronger mixed norm estimates. Christ's combinatorial method has been further developed by Tao and Wright [75] who obtained almost sharp L p → L q estimates for variable coefficient analogues.
Two-sided type two singularities
We consider again the operator (1.1) and discuss the proof of the following result mentioned in the last section. 5.1. Theorem ([28]). Suppose that both π L and π R are of type ≤ 2. Then for λ ≥ 1
A slightly weaker version of this result is due to Comech and Cuccagna [17] who obtained the bound
The proof of the endpoint estimate is based on various localizations and almost orthogonality arguments. As in §2 we start with localizing the determinant of dπ L/R and its derivatives with respect to a kernel vector field. The form (5.2) below of this first decomposition can already be found in [15], [17].
We assume that the amplitude is supported near the origin and assume that (4.3.1) holds.
then kernel vector fields for the projections π L and π R are given by V L and V R , respectively. Also let h(x, z) = det Φ xz ; by the type two assumption we can assume that |V 2 L h| and |V 2 R h| are bounded below. Emphasizing the amplitude in (1.1), we write T λ [σ] for the operator T λ and will introduce various decompositions of the amplitude.
Let β 0 ∈ C ∞ (R) be an even function supported in (−1, 1) and equal to one in (−1/2, 1/2), and for j ≥ 1 let β j (s) = β 0 (2 −j s) − β 0 (2 −j+1 s). Denote by ℓ 0 the largest integer ℓ such that 2 ℓ ≤ λ 1/2 (we assume that λ is large). Define the localized amplitudes σ j,k,l accordingly; thus if j, k > 0 then |h| ≈ 2 −l , |V L h| ≈ 2 k−l/2 , |V R h| ≈ 2 j−l/2 on the support of σ j,k,l . It is not hard to see that the estimate of Theorem 5.1 follows from the following proposition. 5.2. Proposition. The bounds (5.3) and (5.4) hold. We shall only discuss (5.3), as (5.4) is proved similarly. In what follows j, k, l will be fixed and we shall discuss the main case where 0 < k ≤ j ≤ l/2, 2 l ≤ λ 1/2 . As in the argument in §2, standard T * T arguments do not work and further localizations and almost orthogonality arguments are needed. These are less straightforward in the higher dimensional situation considered here, and the amplitudes will be localized to nonisotropic boxes of various sides depending on the geometry of the kernel vector fields. For a point P , let π ⊥ aP and π ⊥ bP be the orthogonal projections onto the orthogonal complements of Ra P in T x 0 Ω L and of Rb P in T z 0 R d , respectively. Suppose 0 < γ 1 ≤ γ 2 ≪ 1 and 0 < δ 1 ≤ δ 2 ≪ 1, and let B P (γ 1 , γ 2 , δ 1 , δ 2 ) denote the corresponding box. We say that χ ∈ C ∞ 0 is a normalized cutoff function associated to B P (γ 1 , γ 2 , δ 1 , δ 2 ) if it is supported in B P (γ 1 , γ 2 , δ 1 , δ 2 ) and satisfies the natural derivative estimates. We denote by A P (γ 1 , γ 2 , δ 1 , δ 2 ) the class of all normalized cutoff functions associated to B P (γ 1 , γ 2 , δ 1 , δ 2 ).
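The telescoping property of the dyadic decomposition β j introduced above is elementary to verify numerically; the following sketch (an illustration, with one concrete admissible choice of β 0 built from the standard exp(−1/x) mollifier) checks that the pieces sum to 1 on a large range:

```python
import numpy as np

# beta0: smooth even bump, equal to 1 on [-1/2, 1/2], supported in (-1, 1),
# built from the standard exp(-1/x) mollifier (one admissible choice).
def beta0(s):
    s = np.abs(np.asarray(s, dtype=float))
    out = np.zeros_like(s)
    out[s <= 0.5] = 1.0
    mid = (s > 0.5) & (s < 1.0)
    t = (s[mid] - 0.5) / 0.5                  # transition zone rescaled to (0, 1)
    f = lambda x: np.exp(-1.0 / np.maximum(x, 1e-300))
    out[mid] = f(1 - t) / (f(1 - t) + f(t))
    return out

def beta(j, s):
    return beta0(2.0**-j * s) - beta0(2.0**(-j + 1) * s) if j >= 1 else beta0(s)

s = np.linspace(-40, 40, 2001)
J = 8
total = sum(beta(j, s) for j in range(J + 1))
# The sum telescopes to beta0(2^-J s), hence equals 1 for |s| <= 2^(J-1):
assert np.allclose(total, beta0(2.0**-J * s))
print("dyadic partition verified for |s| <=", 2**(J - 1))
```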
In the argument it is crucial that we assume (5.11): min{γ 1,small /γ 2,small , δ 1,small /δ 2,small } ≥ max{γ 2,large , δ 2,large }, since from (5.11) one can see that the orientation of small boxes B Q (γ small , δ small ) does not significantly change if Q varies in the large box B P (γ large , δ large ).
A combination of these estimates (with (5.8)) yields the desired bound (5.3); here the outline of the argument is similar to the one given in §3. In each instance we are given a cutoff function ζ ∈ A P (γ large , δ large ) and we decompose it into pieces ζ XZ , where each ζ XZ is, up to a constant, a normalized cutoff function associated to a box of dimensions (γ small , δ small ); the various boxes have bounded overlap and comparable orientation. More precisely, if (P, Q) is a reference point in the big box B P (γ large , δ large ), then each of the small boxes is comparable to a box defined by conditions on the projections π ⊥ . If T XZ denotes the operator T λ [ζ XZ σ j,k,l ], then in each case we have to show suitable almost orthogonality bounds, with errors O(λ −N ) for large N . For the estimation in the case (5.12) it is crucial that in any fixed large box V L h does not change by more than O(ε2 k−l/2 ) and thus is comparable to 2 k−l/2 in the entire box; similarly V R h is comparable to 2 j−l/2 in the entire box. For the orthogonality we use that Φ x ′ z ′ is close to the identity. In the other extreme case (5.14), V R h and V L h change significantly in the direction of kernel fields, and this can be exploited in the orthogonality argument. (5.13) is an intermediate case. This description is an oversimplification and we refer the reader to [28] for the detailed discussion of each case.
Geometrical conditions on families of curves
We illustrate some of the results mentioned before by relating conditions involving strong Morin singularities to various conditions on vector fields and their commutators. A nonvanishing (1, 0) vector field X and a nonvanishing (0, 1) vector field Y are given by (6.1). Let π L,x 0 be the restriction of π L to N L,x 0 as a map to T * x 0 Ω L ; then π L has strong Morin singularities if for fixed x 0 the map π L,x 0 has Morin singularities.
Similarly, if y 0 ∈ Ω R , let M y 0 = {x ∈ Ω L : (x, y 0 ) ∈ M}; then the adjoint operator R * is an integral operator along the curves M y 0 . Now we define N R,y 0 as the set of all (x, λ) where x ∈ M y 0 , λ ∈ T * ,⊥ (x,y 0 ) M, and π R,y 0 : N R,y 0 → T * y 0 Ω R is the restriction of the map π R .
Proposition.
(a) Let x 0 ∈ Ω L and y 0 ∈ M x 0 and let P = (x 0 , y 0 ). The following statements are equivalent.
(i) Near P , the only singularities of π L,x 0 are S 1 k ,0 singularities, for k ≤ d − 2.
(b) Let y 0 ∈ Ω R and x 0 ∈ M y 0 and let P = (x 0 , y 0 ). The following statements are equivalent.
(i) Near P , the only singularities of π R,y 0 are S 1 k ,0 singularities, for k ≤ d − 2.
It suffices to verify statement (a). There are coordinate systems x = (x ′ , x d ) near x 0 , vanishing at x 0 , and y = (y ′ , y d ) near y 0 , vanishing at y 0 , so that near P the manifold M is given by y ′ = S(x, y d ), where g(0) = 0 in the corresponding expansion.
In these coordinates we compute the vector fields X and Y in (6.1). By induction one verifies that for m = 1, 2, . . .
Consequently we see that the linear independence of the vector fields (adY ) m X at P is equivalent to the linear independence of the ∂ m g j /∂y m d at y d = 0. Next, the map π L,x 0 : N L,x 0 → T * x 0 Ω L is given explicitly in the above coordinates, and from (2.3)–(2.5) we see that statement (i) is also equivalent to the linear independence of the vectors ∂ m g j /∂y m d at y d = 0. This proves the proposition.
6.2. Families of curves defined by exponentials of vector fields. Let now {γ t (·)} t∈I be a one-parameter family of diffeomorphisms of R n which we can also consider as a family of parametrized curves, t → γ t (x) := γ(x, t).
We shall assume that x varies in an open set Ω, that the open parameter interval I is a small neighborhood of 0, and that γ 0 = Id and γ̇ ≠ 0, where γ̇ denotes d/dt (γ t ). Thus for each x, t → γ(x, t) defines a regular curve passing through x. As in the article by Christ, Nagel, Stein and Wainger [11], we may write such a family as (6.2) for some vector fields X 1 , X 2 , . . . and N ∈ N. The generalized Radon transform is now defined by Rf (x) = ∫ f (γ(x, t)) χ(t) dt, and the incidence relation M is defined correspondingly. Besides using the projections π L and π R , there are other ways of describing what it means for the family {γ t (·)} to be maximally nondegenerate, in either a one- or two-sided fashion. One is given in terms of the structure of the pullback map with respect to the diffeomorphisms γ t (·), and another is given by the linear independence of certain linear combinations of the vector fields X j and their iterated commutators. We formulate the conditions on the right, with the analogous conditions on the left being easily obtained by symmetry.
6.3. Strong Morin singularities and pull-back conditions. We are working with (6.2) and formulate the pullback condition (P ) R in terms of the curve Γ R . Proposition. Let c 0 = (x 0 , ξ 0 , x 0 , η 0 ) ∈ N * M ′ . Then condition (P ) R is satisfied at x 0 if and only if π R has only S + 1 k ,0 singularities at c 0 , with k ≤ d − 2. To see this, note that M ⊂ R n × R n is the image of the immersion (x, t) → (x, γ(x, t)). Thus (x, ξ; y, η) belongs to N * M ′ if and only if y = γ(x, t) for some t ∈ R and (DΦ (x,t) ) * (ξ, −η) = (0, 0) ∈ T * (x,t) R n+1 . For each fixed t, let y = γ t (x), so that x = γ −1 t (y) and γ̇ t (x) = γ̇ t (γ −1 t (y)) = d/ds (γ t+s ◦ γ −1 t (y))| s=0 = Γ R (y, t). We thus have a parametrization of the canonical relation, which is favorable for analyzing the projection π R . Indeed, the equivalence of (P ) R with the strong cusp condition follows immediately from the Lemma in §2.4.
6.4. Pullback and commutator conditions. The bracket condition (B) R for families of curves (6.2) states the linear independence of vector fields X̃ i , i = 1, . . . , n, where X̃ 1 = X 1 , X̃ 2 = X 2 and, for k = 2, . . . , n, the X̃ k are given by (6.6) with universal coefficients a I,k which can be computed from the coefficients of the Campbell–Hausdorff formula ([40, Ch.V.5], see also the exposition in [11]); in particular (6.7) holds. See [56], [26] for the computation of the vector fields X̃ 3 , X̃ 4 and their relevance for folds and cusps. Assuming (P ) R , we shall now show that (B) R holds and how one can determine the coefficients in (6.6). By Taylor's theorem in the s variable one obtains an expansion from which it follows that Γ R (x, t) = ψ(t, X 1 , . . . , X n , . . . ), and thus condition (P ) R becomes the linear independence of ψ, ψ ′ , . . . , ψ (n−1) . We will work modulo O(s 2 ) + O(st n+1 ) and so can assume that there are only n vector fields, X 1 , . . . , X n . We compute the expansion; the first few terms are given explicitly. For notational convenience, we let the sum start at m = 2 instead of m = 3 and set c̃ ∅ = 1/2; for the higher coefficients we get c̃ (1) = −c̃ (2) = 1/12 and c̃ (1,2) = c̃ (2,1) = −1/48. These are enough to calculate the coefficients in (B) R in dimensions less than or equal to five, which is the situation corresponding to at most S + 1,1,1,0 (strong swallowtail) singularities. Returning to (P ) R , since we have C 1 = A, C 2 = B, we can use the Kronecker delta notation to write C j = (−1) j Σ n i=1 (t + δ j2 s) t i−1 X i . From this we obtain the coefficients; since the c̃ J 's are known (cf. [40, Ch.V.5], [77]), this allows one to compute the X̃ i 's, and this shows that the condition (P ) R is equivalent to a bracket condition (B) R for some coefficients a I,k .
To illustrate this, we restrict to n ≤ 5 and, to get a manageable expression, we work mod O(t 5 ) and use (6.10); the expression for Γ R (x, t) then becomes X̃ 1 + 2t X̃ 2 + 3t 2 X̃ 3 + 4t 3 X̃ 4 + 5t 4 X̃ 5 , where the X̃ i are given in (6.7). Thus condition (B) R in dimension n ≤ 5 is the linear independence of the X̃ i for 1 ≤ i ≤ n.
Curves on some nilpotent groups.
Let G be an n-dimensional nilpotent Lie group with Lie algebra g. Let γ : R → G be a smooth curve and define G R (t) = (DR γ(t) ) −1 (γ ′ (t)), where DR g denotes the differential of right-translation by g ∈ G. Note that G R : R → T 0 G = g defines a curve in the Lie algebra g.
Lemma. The pullback condition (P ) R for the family of curves t → x·γ(t) −1 is satisfied if and only if the vectors G R (t), G ′ R (t), . . . , G (n−1) R (t) are linearly independent everywhere. | 15,345.4 | 2002-05-10T00:00:00.000 | [
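As a concrete check of the Lemma, one can carry out the computation symbolically for a hypothetical example: the (polarized) Heisenberg group realized on R 3 together with the model curve γ(t) = (t, t 2 , t 3 ). The group law, the curve, and all names in the sketch below are assumptions chosen purely for illustration:

```python
import sympy as sp

# Hypothetical worked example: the (polarized) Heisenberg group on R^3 with
# group law (a,b,c)*(x,y,z) = (a+x, b+y, c+z+a*y) and the model curve
# gamma(t) = (t, t^2, t^3).
t, a, b, c = sp.symbols('t a b c')
gamma = sp.Matrix([t, t**2, t**3])

# Right translation R_g(h) = h*g with g = gamma(t) and h = (a, b, c):
g1, g2, g3 = gamma
Rg = sp.Matrix([a + g1, b + g2, c + g3 + a * g2])

# Differential of R_g at the identity h = (0, 0, 0):
J = Rg.jacobian([a, b, c]).subs({a: 0, b: 0, c: 0})
GR = sp.simplify(J.inv() * gamma.diff(t))   # G_R(t) = (DR_gamma(t))^-1 gamma'(t)

M = sp.Matrix.hstack(GR, GR.diff(t), GR.diff(t, 2))
print(GR.T)        # Matrix([[1, 2*t, 2*t**2]])
print(sp.det(M))   # 8, a nonzero constant: G_R, G_R', G_R'' are linearly
                   # independent for every t, so the Lemma's condition holds.
```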
"Mathematics"
] |
Characterization of Luminescent Materials with 151Eu Mössbauer Spectroscopy
The application of Mössbauer spectroscopy to luminescent materials is described. Many solids doped with europium are luminescent, i.e., when irradiated with light they emit light of a longer wavelength. These materials therefore have practical applications in tuning the light output of devices like light emitting diodes. The optical properties are very different for the two possible valence states Eu2+ and Eu3+, the former producing ultraviolet/visible light that shifts from violet to red depending on the host and the latter red light, so it is important to have knowledge of their behavior in a sample environment. Photoluminescence spectra cannot give a quantitative analysis of Eu2+ and Eu3+ ions. Mössbauer spectroscopy, however, is more powerful and gives a separate spectrum for each oxidation state, enabling the relative amounts present to be estimated. The oxidation state can be identified from its isomer shift, which is between −12 and −15 mm/s for Eu2+ compared to around 0 mm/s for Eu3+. Furthermore, within each oxidation state, there are changes depending on the ligands attached to the europium: the shift becomes more positive with increased covalency of the bonding ligand X or with increasing Eu concentration, and decreases with increasing Eu–X bond length.
Introduction
In the last half of the 20th century, solid state physics was responsible for a remarkable revolution in electronics with the replacement of the vacuum tube by semiconductor transistors. Not quite so dramatic, but more visible, literally, is the replacement of incandescent lighting by light emitting diodes (LEDs) with luminescent materials. Among these materials are the lanthanides, which have unique magnetic, luminescent, and electrochemical properties and are therefore used for many different applications, such as magnets, batteries, superconductors, and optical devices. In particular, europium has attracted much attention due to its red emission in the trivalent state (Eu 3+ ), which is widely used in fluorescent lamps, displays, and recently for solid-state lighting [1,2]. The broad emission of divalent europium (Eu 2+ ) is used in lighting applications [3] as well as in euro bank notes [4].
As a rule, when substituting for trivalent ions (Al 3+ , Y 3+ , La 3+ ), europium forms Eu 3+ , while for divalent ions (Mg 2+ , Ca 2+ , Sr 2+ , Ba 2+ ) it is incorporated as Eu 2+ . In some cases, the optical activation with divalent europium turns out to be difficult, since Eu 2+ is easily oxidized to Eu 3+ . An exact knowledge of the Eu 2+ and Eu 3+ content is necessary to guarantee an efficient light output and to optimize the material for practical applications. Eu 2+ and Eu 3+ are clearly distinguishable by photoluminescence spectroscopy. Eu 2+ is characterized by a broad emission, while Eu 3+ shows narrow emission lines. Both Eu 2+ and Eu 3+ are sensitive to changes in the surrounding crystal field. Photoluminescence spectroscopy, however, does not allow a quantitative analysis of Eu 2+ and Eu 3+ ions. For example, Eu 2+ does not fluoresce in the fluorozirconate base glass, although it is clear from electron paramagnetic resonance (EPR) [5] and Mössbauer studies [6] that it is present in the glass.
X-ray absorption spectroscopy is one method to determine not only the charge of the doped europium ions but also allows a quantitative analysis [7,8]. However, it requires access to synchrotron radiation sources. 151 Eu Mössbauer spectroscopy has been established as a sensitive tool to distinguish quantitatively between Eu 2+ and Eu 3+ ions and to investigate the local structure around europium ions in solids. Unlike electron paramagnetic resonance (EPR), which cannot be used for trivalent europium ions (since the 7 F 0 ground state is not paramagnetic), information about both oxidation states emerges directly from the Mössbauer isomer shift. The Mössbauer Effect is the recoilless resonant fluorescence of gamma-radiation. It was discovered in 191 Ir by Rudolf L. Mößbauer in 1958 [9], for which he received the Nobel Prize in 1961. After it was observed in 57 Fe, the field developed so fast that the first International Mössbauer Conference took place in 1960 at the University of Illinois [10]. Since then, the Mössbauer Effect has been found in many isotopes, including 119 Sn, 121 Sb, and 151 Eu, and has been applied in many fields of science, such as physics, chemistry, biology, and medicine. A review of 151 Eu work has been written by Grandjean and Long [11]. An index of Mössbauer data is available from the Mössbauer Effect Data Center [12].
Many review papers assessing Mössbauer spectroscopy as a characterization technique for a wide variety of materials have been published. A general review paper was published in 1968 [13], followed by reviews of Mössbauer spectroscopy on iron [14], glass [15][16][17], Fe/S proteins [18], and europium chalcogenides [19], among others. No review paper is known for the investigation of europium luminescence using Mössbauer spectroscopy. In this article, its application to europium-containing inorganic luminescent materials is reviewed.
The Lanthanide Ions Eu 2+ /Eu 3+ and Their Optical Properties
Europium is the chemical element with the atomic number 63 and the electron configuration [Xe] 4 f 7 6s 2 . As for all lanthanides, the most stable oxidation state is +3, but europium forms divalent compounds as well. Divalent europium is rather unstable and oxidizes in air to form trivalent Eu compounds. Both Eu 2+ and Eu 3+ have an incompletely filled 4 f shell, which is shielded by the filled 5s and 5p shells. This peculiar electronic configuration is responsible for their unique optical and luminescent behaviour. The energy level diagrams for Eu 2+ and Eu 3+ are shown in Figure 1. Due to the shielding, 4 f n energy levels are only weakly influenced by the host material and can be depicted as solid lines, each with a characteristic energy. Using the term symbol 2S+1 L J , where S is the total spin quantum number, L is the total orbital quantum number, and J is the total angular momentum quantum number, the ground states for Eu 2+ and Eu 3+ are 8 S 7/2 and 7 F 0 , respectively.
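These two ground terms can be reproduced from Hund's rules alone; the following short sketch (an illustration, using a minimal textbook filling rule for the f shell) recovers 8 S 7/2 for 4 f 7 and 7 F 0 for 4 f 6 :

```python
# Illustrative check of the ground-state term symbols via Hund's rules for an
# f shell (l = 3); the filling rule below is a minimal textbook assumption.
def ground_term(n_electrons, l=3):
    ml = list(range(l, -l - 1, -1))              # m_l = 3, 2, ..., -3
    n_up = min(n_electrons, 2 * l + 1)           # maximize S: fill spin-up first
    n_down = n_electrons - n_up                  # then pair spin-down
    S = 0.5 * (n_up - n_down)
    L = sum(ml[:n_up]) + sum(ml[:n_down])        # maximize L within maximal S
    J = abs(L - S) if n_electrons <= 2 * l + 1 else L + S  # Hund's third rule
    Jstr = str(int(J)) if J == int(J) else f"{int(2 * J)}/2"
    return f"{int(2 * S + 1)}{'SPDFGHIKLMN'[int(L)]}{Jstr}"

print(ground_term(7))   # Eu2+, 4f7 -> 8S7/2
print(ground_term(6))   # Eu3+, 4f6 -> 7F0
```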
Intra-configurational 4 f → 4 f transitions are forbidden by the parity selection rule. However, the 4 f wave function mixes with an opposite parity wave function, e.g., 5d, and the 4 f n transitions gain intensity. The inter-configurational 4 f → 5d transitions are allowed and appear in the spectra as broad absorption and emission bands; they are depicted as grey bands in Figure 1. Additionally, their energy depends strongly on the composition of the host structure.
Whereas the red luminescence of trivalent europium results from forbidden 4 f → 4 f transitions, the luminescence of divalent europium results from allowed 4 f → 5d transitions. Both are discussed in more detail in the following sections.
Eu 2+ in Fluorozirconate Glasses
Fluorozirconate glasses were first developed by Poulain et al. [23] at the University of Rennes in France. They were typically used for optical fibers due to their high infrared transmittance [24,25]. ZBLAN glass, however, is fragile and sensitive to acids. Interestingly, its manufacture has been initiated on the International Space Station to avoid defects [26]. In 2003, Schweizer et al. proposed substituting some of the BaF 2 with BaCl 2 and adding Eu 2+ [27]. On heating, the BaCl 2 precipitates as crystallites, incorporating the optically active divalent europium. This initiated an investigation into their suitability as X-ray image plates for medical diagnosis [28]. Further applications of fluorozirconate glass are optical devices, such as colour displays [29] or up-converting and down-converting glass layers for solar cells [30,31]. Glass ceramics with hexagonal phase crystallites can be used as scintillators [32], while the orthorhombic phase is suitable for storage phosphors [33].
Europium can be incorporated in its divalent and trivalent states in ZBLAN glass. Eu 3+ shows its typical emissions in the red spectral range (see Section 2.2) in the fluorozirconate base glass. Upon annealing, some of the Eu 2+ ions are incorporated into the BaCl 2 nanocrystals, leading to an intense Eu 2+ -related fluorescence under ultraviolet excitation. After 20 min of annealing at 260 °C, the Eu 2+ fluorescence spectrum shows a main emission band at 407 nm (Figure 2, solid curve) and a weaker but broader emission at 485 nm. After annealing at 290 °C (Figure 2, dashed curve), the 407 nm emission is shifted to 402 nm, while the 485 nm band has completely disappeared. The 407-nm and 402-nm emission bands are attributed to Eu 2+ in hexagonal and orthorhombic BaCl 2 , respectively, while the origin of the additional weaker but broader band at 485 nm is unknown. These observations are described in detail in [34].
Figure 2. Normalized PL emission spectra of Eu 2+ -doped fluorochlorozirconate glass processed at 260 °C (solid curve) and 290 °C (dashed curve) for 20 min. The PL was excited at 285 nm and recorded at room temperature. The 260 °C processed sample only contains hexagonal BaCl 2 in a glass matrix, while the sample processed at 290 °C contains mainly orthorhombic BaCl 2 in a glass matrix [34]. The quantum efficiency spectrum (on the left side) has been recorded for the 290 °C processed sample.
Oxide glasses containing divalent lanthanide ions have attracted considerable attention for optical and magneto-optical devices. These glasses are advantageous for applications such as frequency and time domain optical memories. Aluminoborate glasses containing a large amount of Eu 2+ show a Faraday effect [35]. The paramagnetic Faraday effect can be used for magneto-optical devices, such as optical isolators, optical switches, and optical shutters [36].
Eu 2+ in (Persistent) Phosphors/Aluminates
Persistent luminescence (also known as phosphorescence) is a phenomenon in which light (UV, visible or IR) is emitted for minutes, hours or even days after the initial excitation. The mechanism underlying this phenomenon is not fully understood but is known to involve energy traps. These traps are filled during excitation. After excitation, the stored energy is released to emitter centres, which gradually emit the light. There are many persistent phosphors, but the most studied are the strontium aluminates doped with Eu 2+ and Dy 3+ due to their high brightness, long lifetime, and stability. The slow decay gives the ion the ability to store information, which may be read later by optical (laser) or thermal stimulation [37]. However, large-scale production of these materials needs to be developed for them to realize their full commercial potential. For a detailed review of persistent phosphors, the reader should look at the following references [38,39].
BaMgAl 10 O 17 :Eu 2+ has two broad excitation bands at approximately 270 nm and 300 nm and shows a blue luminescence with the peak near 450 nm. Together with a red-emitting phosphor and a green-emitting phosphor, it yields a white-emitting blend for fluorescent lamps and plasma display panels [40,41]. However, it is unstable under a variety of lamp-related processing conditions and also degrades during the lamp life.
Eu 2+ in Other Luminescent Materials
CaS-based phosphors are studied for dosimetry applications [47]. For Eu 2+ -doped CaS containing additional impurities (such as Sm 2+ ), the stored dose can be read out upon optical stimulation in the infrared spectral range (in a so-called optically stimulated luminescence (OSL) process) instead of thermoluminescence [48].
Nitridosilicates provide efficient luminescent materials that are industrially applied in commercial phosphor-converted LEDs [49,50]. Here, Eu 2 SiN 3 is of particular interest since it is the only mixed-valence europium nitridosilicate, i.e., it has two different crystallographic Eu sites, of which one is occupied by Eu 2+ ions while the other is occupied by Eu 3+ ions [51]. It is also noteworthy that Eu 2 SiN 3 has a black colour due to its small band gap of 0.2 eV [50].
Oxonitridosilicates combine structural features and properties of both, oxosilicates and nitridosilicates. EuSi 2 O 2 N 2 shows a narrow yellow emission and is investigated for 2-and 3-phosphor-converted light-emitting diodes [52].
Optical Properties of Eu 3+
Eu 3+ shows an efficient red luminescence with high quantum efficiencies in many different materials. In 1989, the quantum efficiency of Eu 3+ in BaCa 2 Y 6 O 12 was found to be up to 25% [53]. The quantum efficiency of nanocrystalline Lu 2 O 3 :Eu 3+ powder reaches 90% [54] and that of bulk Y 2 O 3 :Eu 3+ 92% [55]. In borate and in fluorozirconate glass, the Eu 3+ quantum efficiency was determined to be 86% [56] and 94% [57], respectively.
Eu 3+ in Luminescent Glasses
Eu 3+ was used in fibre lasers in the early 1990s, mainly as a co-dopant with other rare earths to enhance efficiency [58,59]. More recently, it has been used as a down-converter in photovoltaic applications [60,61] and to produce white light in LEDs, also as a co-dopant [56,57,62]. Energy transfer is the key to increasing efficiency in both cases, the details of which are discussed within the references given here. The literature on Eu 3+ -doping of borate glasses is more plentiful than that involving ZBLAN. Again, recent research has focused on the creation of white light, and many glasses are not pure borates but combinations such as fluoroborate, borogermanate, aluminoborate, lead borate, and borosilicate, for example. Whatever the host, Eu 3+ is responsible for the red component of emission, and other rare earths complement Eu 3+ in order to emit other colours, although white light is currently the most desirable. Older papers, starting in the early 1990s, focused on the study of fundamental luminescence properties.
The absolute PL quantum efficiency (QE) and PL emission spectra of Eu 3+ -doped borate (66B 2 O 3 ·33BaO·1Eu 2 O 3 , values in mol %) and ZBLAN glass (51ZrF 4 ·20BaF 2 ·20NaF·3.5LaF 3 ·3AlF 3 · 0.5InF 3 ·2EuF 3 , values in mol %) are depicted in Figure 3. The maximum quantum efficiency value is found in ZBLAN glass and amounts to 94% for 395-nm excitation ( 7 F 0 to 5 L 6 transition). The glasses can also be excited in the blue spectral range, for instance at 465 nm ( 7 F 0 to 5 D 2 transition), resulting in QE values of 70% and 82% for ZBLAN and borate glass, respectively. In comparison to ZBLAN glass, borate glass provides a higher QE in the longer wavelength range, but a lower one in the short wavelength range.
In both glass systems, transitions from the excited state, 5 D 0 , to the ground state levels, 7 F J (J = 1, 2, 3, 4, 5, and 6), are observed, leading to the typical emission in the red spectral range. In ZBLAN glass, additional emissions in the ultraviolet/blue spectral range are observed, originating from the excited states, 5 D 1 and 5 D 2 , to the ground states, 7 F J (J = 0, 1, 2, and 4). In borate glass, emissions from the excited states, 5 D 1 and 5 D 2 , are quenched due to non-radiative relaxation, whereas in ZBLAN glass the probability for radiative emissions from these levels is significantly higher. The 5 D 0 and 5 D 1 states are separated by 1750 cm −1 [20]. In the case of borate glass, with a maximum phonon frequency of 1400 cm −1 [63,64], only one to two phonons are needed to bridge the gap, while, for ZBLAN glass, with a maximum phonon frequency of 580 cm −1 [65], more than three phonons are necessary. Thus, the non-radiative transition rates in ZBLAN glass are significantly smaller than in borate glass, enabling radiative emission. The electric-dipole transition 5 D 0 to 7 F 2 is hypersensitive to variations in crystal symmetry [1]. The high intensity of this transition in borate glass indicates the amorphous nature of the matrix material without inversion symmetry for the Eu 3+ ion. For ZBLAN glass, the intensity of the 5 D 0 to 7 F 1 transition is higher than in borate glass, which implies a higher crystallinity of ZBLAN glass compared to borate glass. Therefore, Eu 3+ is a useful spectroscopic probe of the environment surrounding the lanthanide ion.
Eu 3+ in Lanthanide Oxides
Eu-doped sesquioxides Ln 2 O 3 (Ln = In, Sc, Y, La, Gd, Lu) are very important materials as they are used as the red-emitting phosphors in fluorescent lamps and colour television projection tubes [66]. Y 2 O 3 :Eu 3+ nanoparticles in spherical morphology are used for flat-panel displays [67]. Lu 2 O 3 :Eu is a very attractive host for scintillators [68] or x-ray phosphors [69], due to a high density of lutetia, which ensures that ionizing radiation is efficiently absorbed in relatively thin layers of lutetia-based phosphors.
Eu 3+ in Other Luminescent Materials
The production of III-V semiconductor-based LEDs with efficient emission in the green and red spectral ranges is still challenging. GaN doped with erbium and europium enables emission in the green and red spectral ranges, respectively, for lighting applications in optoelectronic devices [70].
Pyrochlore materials are known for their good thermal properties. Lanthanide-doped pyrochlores are used for contact-free surface temperature measurements [71].
Europium-doped titanium dioxide is developed as a substitute for the high-cost red-emitting phosphor Y 2 O 3 :Eu. TiO 2 :Eu is demonstrated to be a good sensitizer to absorb light and transfer energy to Eu 3+ ions [72,73]. It is also advantageous for practical applications due to its low cost, its chemical and thermal stability, and its good mechanical properties [72,74]. YVO 4 :Eu is a well-known red phosphor applied in cathode ray tubes, fluorescent lamps, and plasma displays. It provides high efficiency, colour purity, and thermal stability [75,76]. In addition, it is used as a fluorescent biological label for the detection of Na + channel dynamics on cell membranes [77]. EuVO 4 might be suitable to track biological systems, such as histidine and bovine serum albumin [78].
Zirconia is a widely-used material in optics due to its wide band-gap, high transparency, high refractive index and hardness [79]. Eu-doped ZrO 2 has been investigated for lamp and display applications [80].
Mössbauer Isotope 151 Eu
Eu has two naturally occurring isotopes, namely 151 Eu (47.82%) and 153 Eu (52.18%), of which the former is the more useful for Mössbauer spectroscopy owing to its low γ-ray energy of 21.54 keV. The source used is 151 Sm, and its decay scheme is shown in Figure 4. The half-life of 151 Sm, 90 years, is very long [83]. Two different β − -decays occur: only 0.9% decays into the excited state of 151 Eu, while the remaining 99.1% goes to the ground state [83]. The 21.54 keV transition from the excited state with spin +7/2 to the ground state of 151 Eu with spin +5/2 has a lifetime of 14 ns and a resulting linewidth of 47 neV. Mössbauer isotopes must have a sufficiently long lifetime of the excited state and very low-lying excited states; these criteria exclude some isotopes. 151 Eu is the most used Mössbauer isotope of the lanthanide elements. To produce a spectrum, the γ-ray energy is varied using the Doppler effect by moving the 151 Sm source relative to the 151 Eu-containing absorber. A plot of transmitted γ-ray counts against velocity yields the Mössbauer spectrum. The velocities needed are of the order of millimetres per second (in non-SI units this is roughly a furlong per fortnight, a snail's pace!). The linewidth is governed by the lifetime of the 21.54 keV transition and is 2.52 mm/s. The 21.54 keV radiation may also be produced with synchrotron radiation and an appropriate monochromator. Such facilities can be found at the European Synchrotron Radiation Facility (ESRF) in Grenoble (France), the Advanced Photon Source (APS) at the Argonne National Laboratory (USA), or at the Super Photon ring (SPring-8) at the Harima Science Park (Japan).
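The numbers quoted here are straightforward to cross-check; a back-of-envelope sketch (constants rounded, no inputs beyond the values given in the text):

```python
# Cross-check of the numbers quoted in the text (constants rounded):
hbar = 6.582e-16        # eV*s
c = 2.998e11            # speed of light in mm/s
E_gamma = 21.54e3       # eV, gamma energy of the 151Eu transition
tau = 14e-9             # s, lifetime of the excited state

Gamma = hbar / tau
print(f"natural linewidth: {Gamma:.2e} eV")              # ~4.7e-8 eV = 47 neV
print(f"Doppler shift per mm/s: {E_gamma / c:.2e} eV")   # ~7.2e-8 eV
print(f"linewidth as velocity: {Gamma * c / E_gamma:.2f} mm/s")
# ~0.65 mm/s for a single line; an ideally thin absorber doubles this, and the
# 2.52 mm/s quoted in the text is the typical experimentally observed width.
```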
Mössbauer measurements are usually performed in transmission geometry, but using backscattering geometry, γ-rays, x-rays, conversion electrons or Auger electrons may be detected as the excited nuclear state decays back to the ground state. For the 151 Eu isotope, there are no conversion electrons [84,85], but Bibicu et al. [85] found that this isotope emits Auger electrons. The transmission geometry gives volume information, while the backscattering geometry gives predominantly surface information.
Isomer Shift
The isomer shift arises from the electric monopole interaction between the nucleus and the surrounding electron shell. The overlap of the charge density distribution of s-electrons with the nucleus causes a shift of the energy levels of the nucleus. The shift is observable because the ground and excited nuclear states have different radii and hence different overlaps with the electron cloud. It cannot be measured directly and therefore is quoted relative to a known absorber. In this paper, the values are given with respect to EuF 3 . The isomer shift is a good indicator for the valence state of the isotope. It is useful for the investigation of valence states, ligand bonding states, and electron shielding.
Eu 2+ compounds exhibit isomer shifts between about −12 and −15 mm/s, while Eu 3+ compounds exhibit isomer shifts between 0 and +3 mm/s. The large difference in isomer shift of about 12 mm/s between divalent and trivalent Eu compounds results mainly from the shielding effect of the additional 4 f electron in Eu 2+ compounds; Eu 2+ has the configuration 4 f 7 , while Eu 3+ has 4 f 6 . Other effects like bond lengths, covalency, and coordination numbers produce less pronounced variations of up to ±2 mm/s for Eu 3+ and ±1 mm/s for Eu 2+ , but these are clearly resolved due to the large difference between the nuclear radii of the ground and the excited state of the 151 Eu resonance. The isomer shifts of Eu in binary compounds are listed in Table 1. It is evident from Table 1 that fluorine is the most ionic ligand, and it is therefore discussed first in the following sections.
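The quantitative Eu 2+ /Eu 3+ analysis that this large shift difference permits amounts, in its simplest form, to fitting one absorption line per oxidation state and comparing areas. The following sketch is illustrative only: the two fixed line positions, the shared width, and the synthetic test spectrum are assumptions, not data from any cited study, and in practice the areas must still be corrected for the different recoil-free fractions of the two states (cf. the Debye temperatures discussed below):

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(v, area, v0, fwhm):
    g = fwhm / 2.0
    return area * g / np.pi / ((v - v0)**2 + g**2)

def two_sites(v, a2, a3, w):
    # line positions fixed near typical shifts: -13.5 mm/s (Eu2+), 0 mm/s (Eu3+)
    return lorentzian(v, a2, -13.5, w) + lorentzian(v, a3, 0.0, w)

v = np.linspace(-25, 10, 351)                  # source velocity, mm/s
rng = np.random.default_rng(0)
spec = two_sites(v, 0.70, 0.30, 2.5) + rng.normal(0, 0.001, v.size)  # synthetic

popt, _ = curve_fit(two_sites, v, spec, p0=(0.5, 0.5, 2.0))
a2, a3, w = popt
print(f"Eu2+ area fraction ~ {a2 / (a2 + a3):.2f}")    # recovers ~0.70
```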
Quadrupole Interaction
Quadrupole splitting may arise for nuclei in states with an angular momentum quantum number I > 1/2 due to their non-spherical charge distribution, so that they have an electric quadrupole moment Q. This causes a splitting of the nuclear energy levels in the presence of an anisotropic electric field (anisotropic electronic charge distribution or ligand arrangement) when the lattice has a non-cubic structure. The charge distribution is characterized by the electric field gradient (EFG). From the quadrupole splitting, information on oxidation state, spin state, and site symmetry can be obtained. In a compound with cubic symmetry, the electric field gradient (and therefore the quadrupole interaction parameter) is zero and a single transition is observed [11], which is the case for the majority of the papers summarized in this review article. If a threefold or fourfold axis is present, the electronic charge distribution will be symmetrical and eight transitions are allowed (see Figure 5). If there is no threefold or fourfold symmetry axis passing through the nucleus, the components of the electric field gradient along the principal axes are different, the asymmetry parameter is non-zero, and there are 12 allowed transitions [86].
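The transition counts quoted above follow from the M1 selection rule Δm = 0, ±1 between the I = 5/2 and I = 7/2 sublevels; a small enumeration sketch (illustrative, using only these quantum numbers) reproduces them:

```python
from fractions import Fraction as F

# Axially symmetric EFG (threefold/fourfold axis): sublevels are |m| doublets
# and the selection rule Delta m = 0, +-1 applies.
ground  = [F(m, 2) for m in range(-5, 6, 2)]    # I = 5/2 -> m = -5/2 ... 5/2
excited = [F(m, 2) for m in range(-7, 8, 2)]    # I = 7/2 -> m = -7/2 ... 7/2

pairs = {(abs(mg), abs(me)) for mg in ground for me in excited
         if abs(me - mg) <= 1}
print(len(pairs))    # 8 allowed transitions for eta = 0

# Without such an axis (eta != 0) the m states mix, so every pair of |m|
# doublets can connect: 3 ground x 4 excited = 12 allowed transitions.
print(3 * 4)
```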
The quadrupole splitting is usually only partly resolved owing to the linewidth of about 2.5 mm/s. This gives an asymmetrically broadened line profile (see Figure 5), which for small interactions, has to be distinguished from overlapping contributions from more than one inequivalent site.
Magnetic Hyperfine Interaction
Magnetic hyperfine splitting is caused by the interaction between the nuclear magnetic moments and the magnetic field of the electrons on the atom. It is only observed for Eu 2+ , and observing it requires magnetic dilution so that electron spin relaxation rates are slow. The paramagnetic hyperfine fields are typically of the order of 30 T.
The nuclear spin I splits into 2I + 1 sublevels, i.e., six levels for the ground state of 151 Eu (I = 5/2) and eight for the excited state (I = 7/2). The selection rules ∆m = 0, ±1 give rise to a symmetric 18-line spectrum, where, because of the smaller g-value of the excited state, it has the appearance of a six-line spectrum with broadened lines (see Figure 6).
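The 18-line structure can likewise be enumerated directly. In the sketch below the nuclear g-factors are assumed illustrative values (chosen only so that the excited-state g-value is the smaller, as stated above), the field enters in arbitrary units, and the sign convention of the splitting is ignored:

```python
import numpy as np

g_g, g_e = 1.39, 0.74            # assumed ground/excited g-factors (g_e < g_g)

m_g = np.arange(-2.5, 3.0, 1.0)  # I = 5/2: six sublevels
m_e = np.arange(-3.5, 4.0, 1.0)  # I = 7/2: eight sublevels

# Transition positions in units of mu_N * B, selection rule Delta m = 0, +-1:
lines = sorted(g_e * me - g_g * mg
               for mg in m_g for me in m_e if abs(me - mg) <= 1)
print(len(lines))                # 18 allowed transitions, as stated above
```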
Linewidth (FWHM)
The linewidth of the absorption line is dependent on the lifetime of the nuclear excited state [89]. In the case of an ideally thin absorber, the linewidth is twice the natural linewidth [89]. Usually, the linewidth of an amorphous material is broader than that of the corresponding crystalline material, e.g., Eu 3+ in metaphosphate has a linewidth of 1.96 mm/s and 1.76 mm/s for the amorphous and crystalline material, respectively [90].
Materials Overview
An overview of different luminescent materials, containing Eu 2+ and Eu 3+ with the corresponding isomer shifts at room temperature, is given in Tables 2 and 3, respectively. The isomer shift is in the range −9.7 to −14.3 mm/s for Eu 2+ and −0.93 to +1.47 mm/s for Eu 3+ .
The isomer shifts listed in Tables 2 and 3 are visualized in Figure 7, sorted by material system. Fluorides show isomer shifts from −14.3 to −13 mm/s and oxide glasses from −13.5 to −13 mm/s. Eu 2+ in different kinds of aluminates has the largest range of isomer shifts, from −9.7 to −15 mm/s. For materials such as vanadates, sulphides, nitrides, and titanium dioxide, only a few papers were found; these are therefore depicted as "other", with isomer shifts ranging from −10.5 to −12.5 mm/s. For Eu 3+ , the isomer shifts vary from −1 to +1.4 mm/s, with fluorides having isomer shifts at approximately 0 mm/s, because the isomer shifts in this paper are given relative to EuF 3 . Eu 3+ has isomer shifts from 0 to 1 mm/s in oxide glasses and −0.6 to +0.6 mm/s in vanadates. In aluminates and other materials, the Eu 3+ isomer shifts range from approximately −0.9 to +1 mm/s.
Fluorides
CaF 2 with Eu 2+ impurities occurs in nature as the mineral fluorite (or fluorspar) and the term fluorescence originates from its luminescent properties. Eu 2+ in CaF 2 shows strong luminescence, while that of Eu 3+ is much weaker [128]. Implanted Eu 2+ substituting for Ca 2+ luminesces in the violet spectral range at 420 nm, while a second emission band at 680 nm arises from interstitial sites; the latter is eliminated on heating [129].
The 151 Eu Mössbauer spectra of highly diluted (0.1 mol %) Eu 2+ ions in CaF 2 showed an almost temperature-independent asymmetrically split pattern, arising from the paramagnetic hyperfine interaction A S·I in a cubic crystal field with slow electron spin relaxation. In a small external magnetic field B of 0.2 T, such that gµ B B > A, an almost symmetrical pattern was observed. Both the spectra with and without an external field are well described using the spin Hamiltonian and previous electron paramagnetic resonance data. A more concentrated (2 mol % Eu 2+ ) sample exhibited a strongly broadened symmetrical resonance line due to an increased Eu-Eu spin relaxation rate in an external magnetic field of 0.2 T. The Mössbauer spectra exhibited further broadening and additional magnetic structures due to the reduced relaxation rate. When a large field of 6 T was applied, such that gµ B B is much larger than the crystal field splitting, a fully resolved hyperfine pattern was observed at 2.5 K, with an effective field at the Eu nuclei of −33.7 T; at higher temperatures, superimposed patterns originating from excited electronic states were observed in the spectra [92].
Fluorochlorozirconate (FCZ) Glasses
Much work has been done on fluorochlorozirconate (FCZ) glasses and glass ceramics. Coey et al. [93] made measurements on 61ZrF 6 ·12BaF 2 ·7ThF 4 (values in mol %) doped with 20EuF 2 . Most of the europium was Eu 2+ , with an isomer shift of −14.18 mm/s corresponding to a large coordination number ranging between 8 and 12, which is typical for glasses. While the Eu 2+ resonance line in the glass is extremely broad compared with EuF 2 , the Eu 3+ line is at least as narrow as that of EuF 3 . Measurements of the variation of the absorption at different temperatures enabled the relative binding strengths of Eu 2+ and Eu 3+ to be determined: the binding of Eu 2+ (Θ Debye = 145 K) was weaker than that of Eu 3+ (Θ Debye = 261 K).
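The connection between the fitted Debye temperatures and the relative absorption strengths can be sketched with the standard Debye-model recoil-free fraction; the following illustration uses the Θ Debye values quoted above at an assumed temperature of 295 K (a sketch of the principle, not the fitting procedure of [93]):

```python
import numpy as np
from scipy.integrate import quad

kB = 8.617e-5                            # Boltzmann constant, eV/K
E_gamma, M = 21.54e3, 151 * 931.494e6    # gamma energy (eV), nuclear mass (eV/c^2)
E_R = E_gamma**2 / (2 * M)               # recoil energy, ~1.65e-3 eV

def f_debye(T, theta):
    # Debye-model recoil-free fraction f = exp(-6*E_R/(kB*theta) * [...])
    integral = quad(lambda x: x / np.expm1(x), 0, theta / T)[0]
    return np.exp(-6 * E_R / (kB * theta) * (0.25 + (T / theta)**2 * integral))

for theta, ion in ((145, "Eu2+"), (261, "Eu3+")):
    print(ion, round(f_debye(295, theta), 3))
# The weaker-bound Eu2+ (lower Debye temperature) has the smaller recoil-free
# fraction, i.e., the weaker room-temperature absorption per ion.
```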
An important material in computed radiography is ZBLAN, which contains 51ZrF 4 ·17BaF 2 ·3.5LaF 3 ·3AlF 3 ·20NaF (values in mol %) and which, when doped with Eu 2+ , has applications as a storage phosphor [33]. Weber et al. [91] showed that EuCl 3 heated at 710 °C for 10 min resulted in almost equal amounts of EuCl 2 and EuCl 3 . This could be important for the production of EuCl 2 storage phosphors, since EuCl 2 is more expensive than EuCl 3 and the cheaper EuCl 3 could thus serve as the starting material. They also showed that samples made with 5 mol % EuCl 2 contained both EuCl 2 and EuCl 3 in the ratio 78:22 (Figure 8a). Samples made with a mixture of 2.5 mol % EuCl 2 and 2.5 mol % EuCl 3 had a ratio of 37:63 (Figure 8b), i.e., 13% of the total europium had been oxidized from Eu 2+ to Eu 3+ . Johnson et al. [28] showed that EuCl 2 as a raw material oxidizes to Eu 2 O 3 , while EuCl 2 in the glass oxidizes to EuCl 3 . Pfau et al. [94] studied ZBLAN doped with 0.5 mol % InF 3 (to keep the ZrF 4 from being reduced) and with BaCl 2 and 5 mol % EuCl 2 substituted for BaF 2 . About 88% of the europium was in the divalent state. When 5 mol % EuF 2 was used, only 70% of the europium was in the divalent state. The isomer shift is not influenced by the thermal treatment of the glasses and amounts to approximately −14 mm/s. The Debye temperatures are Θ Debye = 147 K for Eu 2+ and Θ Debye = 186 K for Eu 3+ . A further study [101] also revealed a small amount of Eu 3+ (<10%). The linewidth decreased with increasing Eu concentration from 11 mm/s to 3.6 mm/s, which is presumably due to unresolved paramagnetic hyperfine structure resulting from increased spin-spin relaxation. At a concentration of 25%, the emission spectrum shows an additional band at 490 nm as a consequence of very small traces of a second phase. No influence of temperature on the spectra was observed in the range from 10 K to 300 K.
Aluminates
Arakawa et al. [95] reported an isomer shift of −9.7 mm/s, which decreases with increasing size of the alkaline earth atom, a result of the decreased electron density at the Eu nuclei. The isomer shift also decreases for increasing Eu concentration. Amorphous Al 2 O 3 doped with Eu 3+ shows a broad line, which was decomposed (not necessarily uniquely) into contributions from eight different quadrupole-split sites, as would be expected for an amorphous material [108]. The average isomer shift of 1.107 mm/s suggests strong covalent bonds and leads to the conclusion that Eu 3+ ions replace Al 3+ ions in the Al 2 O 3 matrix.
Hölsä et al. [42] investigated CaAl 2 O 4 with a low Eu concentration due to the segregation of the Eu ion from the CaAl 2 O 4 phase. With Eu 2+ doping, the compounds exhibit phosphorescence (persistent luminescence). Their Mössbauer spectra show a very broad line resulting from Eu 2+ but also a small amount of Eu 3+ . EuAl 2 O 4 has a green emission with the maximum at 515 nm [97]. Mössbauer spectra show two crystallographic sites for Eu 2+ with different isomer shifts of −13.4 mm/s and −12.83 mm/s. The photoluminescence excitation spectrum shows two broad bands at 380 nm and 430 nm.
Tronc et al. [99] made measurements on LaMgAl 11 O 19 , which has the magnetoplumbite structure for compositions with 100% and 30% Eu replacing La using different methods of preparation. The spectra showed line broadening arising from paramagnetic hyperfine splitting, which was partially resolved in the most dilute (30% Eu) sample.
Oxide Glasses
Tanabe et al. [114] found that in silicate and aluminate glasses the isomer shift of Eu 3+ increases with increasing optical basicity of the glass. The optical basicity measures the electron donation by the oxygen anions to the metal ion used as a probe (Eu); the correlation indicates that the charge transferred to the Eu 3+ ion occupies mainly the 6s shell, and hence the phosphates have the smallest charge in the 6s shell compared to the other oxide glasses.
In silicate glasses M 3 Na 8 Si 13 O 33 (M = Mg, Ca, Ba), the Eu 3+ isomer shift decreases with decreasing modifier cation size, i.e., increasing electronegativity (Figure 9) [112]. The shift is close to that of Eu 2 O 3 [110], suggesting that Eu 3+ ions occupy sites similar to those in Eu 2 O 3 . Musić et al. [111] obtained similar values for the isomer shift of Eu 3+ in sodium borosilicate glasses. In the related aluminosilicate crystal Ba 0.95 Eu 0.05 Al 2 Si 2 O 8 , the europium is mostly Eu 2+ , with an isomer shift of −14 mm/s. About 5 to 10% of the europium is Eu 3+ , with an isomer shift close to 0 mm/s [45].
In metaphosphate glasses M 5 Al 4 P 25 O 76 (M = Mg, Ca, Ba), the Eu 3+ isomer shift increases with decreasing modifier cation size, i.e., increasing electronegativity. This is in contrast to the behaviour in silicate glasses [112] and is ascribed to the effect of the short π-bonds between the modifier atoms and the non-bridging phosphate chains. Metaphosphate glasses containing Zn, Sr, and Pb were investigated by Concas et al. [115]; the glass containing Pb has a lower isomer shift compared to the other two samples. A sodium phosphate glass containing Eu 3+ and Ce 3+ was irradiated by multipulse excimer-UV-laser irradiation, leading to an extinction of the luminescence [113]. The as-made glass showed an isomer shift of approximately 0.79 mm/s, which decreased to 0.37 mm/s after irradiation. It is believed that the electronic traps created by irradiation expand the wave function of the 6s electrons, thus decreasing the electronic density at the nucleus.
The borate glasses B 2 O 3 :Eu and B 2 O 3 -A 2 O 3 :Eu (A = Li, Na, K) were studied by Winterer et al. [102] for Eu concentrations between 0.1% and 33%. The isomer shifts of −13.0 mm/s corresponded to Eu 2+ . The more dilute glasses showed paramagnetic hyperfine splitting with fields of 35 T. The more concentrated ones showed quadrupole broadened Eu 2+ lines, and also some Eu 3+ . Fujita et al. [36] obtained similar results for B 2 O 3 -Na 2 O:Eu glasses. In sodium borate glass prepared under a reducing atmosphere, Fujita [131] observed the Eu 2+ and Eu 3+ absorption bands in the Mössbauer spectra and investigated spectral hole burning in Eu 3+ emission spectra. The relative hole area increases with the absorption area ratio, R, of Eu 2+ /Eu 3+ in the Mössbauer spectrum up to a value of R = 0.3 and decreases slightly for larger ratios.
Fujita et al. [36] studied 15EuO·85((1 − x)B 2 O 3 ·xNa 2 O) and found that the Eu 2+ isomer shifts increase as the concentration of Na 2 O increases and decrease as the concentration of EuO increases. In borate glasses with increasing network modifier content, the amount of three-coordinated boron decreases and that of four-coordinated boron increases up to about 33 mol % Na 2 O content, beyond which the four-coordinated boron starts to decrease (boron anomaly) due to the appearance of non-bridging oxygen. Hence, the electron density at the Eu nucleus drastically increases in this compositional region, since the non-bridging oxygen has more electron donation ability than the bridging oxygen and increases the 6s electron density effectively.
Nemov et al. [126] investigated germanate glasses (BaGeO 3 ) 1−x−y (Al 2 O 3 ) x (0.45CaF 2 ·0.55MgF 2 ) y with Eu 2 O 3 concentrations varying from 5 mol % to 20 mol %. Europium was observed only in the trivalent state. The influence of the Eu content was minimal, while an increase of the fluorine content led to a larger isomer shift as well as to a higher frequency (shorter wavelength) of the PL emission band. The linewidths of both the Mössbauer resonance and the emission band decreased with increasing fluorine content. Y 2−x Eu x WO 6 (x = 0.05-0.4) was investigated by van Noort and Pompa [123]. The Eu ions were in the trivalent state and occupied three different sites, two sites with symmetry C 2 and one site with symmetry C 1 . Small differences in covalency produce the isomer shifts obtained of 1.25 mm/s, 0.65 mm/s, and 0.05 mm/s, respectively, which increased with Eu 3+ concentration due to an increase of the lattice parameter a (Figure 11).
Lanthanides and Related Compounds
Monoclinic and cubic Gd 1.9 Eu 0.1 O 3 showed different average isomer shifts of 1.08 and 1.05 mm/s, respectively, resulting from the different oxygen surroundings of the Eu 3+ ions in the two crystallographic structures [66,86].
Titanates
TiO 2 :Eu amorphous powder showed only Eu 3+ sites, with isomer shifts of approximately 0.5 mm/s independent of Eu concentration, while the linewidth increased with increasing Eu concentration [125]. Calcination at temperatures above 500 °C led to the formation of brookite and anatase phases, while for temperatures higher than 1000 °C the pure rutile phase was formed. The Mössbauer spectra for all temperatures were similar, while the luminescence intensity decreased with increasing calcination temperature.
However, Ningthoujam et al. [106] reported on a TiO 2 :Eu anatase phase which showed two Mössbauer absorption bands, corresponding to Eu 3+ and Eu 2+ , with isomer shifts of −0.62 mm/s and −13.04 mm/s, respectively. After the samples were annealed at 500 °C and 900 °C, the spectra showed one peak corresponding to Eu 3+ , with isomer shifts of −0.48 mm/s and −0.64 mm/s, respectively. In the 900 °C annealed sample, the Eu 2 Ti 2 O 7 phase was formed, which did not show any luminescence. Annealing of the sample at temperatures from 300 °C to 1000 °C did not change the isomer shift significantly.
Nitrides
Europium nitridosilicate Eu 2 SiN 3 has mixed valence, and its Mössbauer spectrum contains lines from both Eu 2+ and Eu 3+ [50]. For GaN:Eu, Mössbauer spectra showed that the Eu was preferentially in the trivalent state [135]. EuSi 2 O 2 N 2 showed an emission in the yellow spectral range, and its Mössbauer spectra showed only Eu 2+ at 78 K [52].
Sulfides
CaS has a rock-salt structure and Eu 2+ is supposed to replace Ca 2+ in the lattice. For CaS:Eu luminophors, Danilkin et al. [136] observed only one Mössbauer absorption peak, corresponding to Eu 3+ ions. Pham-Thi et al. [103] synthesized CaS:Eu using the flux method with Na and K. The Mössbauer spectrum of the sample prepared with Na polysulphide flux showed only Eu 3+ absorption while with K flux both ions were detected.
ZrO 2
Mössbauer spectra of ZrO 2 :Eu doped with 1 and 2 mol % Eu 2 O 3 showed only Eu 3+ , with an isomer shift independent of Eu concentration, while the linewidth decreases with increasing Eu concentration, indicating that the europium environment is changing [127]. It was concluded that at low concentrations europium ions occupy sites in both the tetragonal and monoclinic structures, while with increasing Eu concentration the incorporated ions mainly substitute for Zr 4+ in the tetragonal structure.
Yttrium Aluminum Garnet (YAG:Eu)
Constantinescu et al. [81] used Mössbauer spectroscopy to investigate structural changes during the phase transition from amorphous to crystalline yttrium aluminum garnet (Y 3 Al 5 O 12 ). Annealing the YAG at temperatures below the phase transition temperature, which lies between 900 °C and 915 °C [137], revealed two peaks in the Mössbauer spectra, while annealing at higher temperatures gave one transmission peak. For annealing at temperatures from 930 °C to 1400 °C, an increase in crystallite size was obtained from X-ray diffraction measurements, and a decrease in photoluminescence linewidths resulting from the higher crystallinity, as well as a slight increase in Mössbauer absorption area, was observed for all investigated temperatures [81].
Correlation of Isomer Shift with Bond Length and Covalency
An increase in isomer shift corresponds to an increase in covalency (see Table 1). For luminescent materials there is a correlation between the isomer shift and the Eu–X bond length (Figure 12), since an increase in the bond length decreases the local density at the europium sites and hence the electron density at the Eu nuclei. Such a correlation has been found for Eu 3+ in Y 2−x Eu x O 3 (Hintzen et al. [66]) and in Y 2−x Eu x WO 6 (Figure 11). Related correlations also exist between the isomer shift and the europium concentration, which result from the difference in size between the europium and host ions, e.g., for Eu 2+ and Eu 3+ in La 1−x Eu x MgAl 11 O 19 [99] and for Eu 3+ in fluorogermanate glasses (Nemov et al. [126]). In glasses, correlations are found between the isomer shift and the electronegativity (or size) of the modifying cations in sodium silicate M 3 Na 8 Si 13 O 33 and aluminophosphate M 5 Al 4 P 25 O 76 glasses (M = Mg, Ca, Ba), as shown in Figure 9 [112].
Determination of Site Occupancies
When there are several inequivalent europium sites in a material, the occupancy of each site may be determined by computer fitting of the spectrum, since there are small differences between their isomer shifts. The unresolved and overlapping contributions from the individual sites lead to a broadened spectrum.
The lanthanide sesquioxides with the general formula Ln 2 O 3 have complicated crystal structures. In the cubic system, there are two different sites for the lanthanide ions: 25% in a more symmetric site C 3i (S 6 ) and 75% in a less symmetric site C 2 , which shows quadrupole broadening [66]. For Sc 2 O 3 :Eu and In 2 O 3 :Eu the two sites could be resolved [66,86]. Their average isomer shifts are 1.27 mm/s and 1.38 mm/s, respectively. For Sc 1.9 Eu 0.1 O 3 the shifts for the sites C 3i and C 2 are 0.70 mm/s and 2.59 mm/s, respectively [66]. In other compounds, the difference in shift is less and is determined from the (slightly asymmetrical) line broadening. Hintzen et al. [66] showed that the difference in isomer shift between the C 3i and C 2 sites increases linearly with decreasing lattice parameter a. The decrease of the Eu-O distance increases the electron density, which affects the C 3i site more than the C 2 site, as shown in Figure 10 [66,117]. Concas et al. [86,119] obtained in cubic nanocrystalline Y 2 O 3 an occupational probability of the C 3i and C 2 sites of 27% and 73%, respectively, compared with bulk material values of 22% (C 3i ) and 78% (C 2 ) [119]. Concas et al. [118] also investigated Lu 1.8 Eu 0.2 O 3 . They prepared the samples in four different ways: (1) combustion with urea; (2) combustion with urea and sintered; (3) combustion with glycine; and (4) combustion with glycine and sintered. All samples show an isomer shift of 1.25 mm/s averaged over the two different sites C 3i and C 2 . The samples show a variation in site occupancy with the different preparation methods. For the nanocrystalline powders (1) and (3), approximately 20-23% of the Eu ions are in C 3i symmetry, while for the ceramics only 15-16% of the Eu ions are in C 3i symmetry. Thus, there is preferential occupation of the C 2 site, especially for the (spherical) ceramic samples, which causes a higher fluorescence intensity, since the 5 D 0 to 7 F 2 transition is forbidden at the centrosymmetric C 3i site. In monoclinic Gd 2 O 3 , the Eu 3+ ions are equally distributed over the three different crystallographic sites [66].
In Y2−xEuxWO6, the Eu3+ ions occupy three different sites, two with C2 and one with C1 symmetry. Small differences in covalency produce the obtained isomer shifts of 1.25 mm/s, 0.65 mm/s, and 0.05 mm/s, respectively, which increase with Eu3+ concentration [123]. BaCa2Y6O12:Eu3+ has two different sites with isomer shifts of 0.2 and 1.5 mm/s. At the site with the lower shift, Eu3+ replaces Y3+, while at the site with the higher shift, Eu2+ replaces Ca2+. The Y3+ site is preferred over the Ca2+ site by about a factor of 2 [53].
Conclusions
We have reviewed the use of Mössbauer spectroscopy for characterizing luminescent materials activated by europium. Mössbauer spectroscopy is a powerful probe of europium as it is element-specific and can provide knowledge of the valence state, covalency, site symmetry, occupation, and coordination number, all of which may be important for the study and development of luminescent materials. The large difference in isomer shift between Eu2+ and Eu3+ enables the ionic state to be identified and the relative amounts of each to be determined in a host material. The spectra confirm that europium usually substitutes as Eu2+ for a divalent ion like Ca2+ and as Eu3+ for a trivalent ion like Al3+ or Y3+, but that both oxidation states may sometimes occur together since Eu2+ easily oxidizes to Eu3+. The shift also depends upon the ligand X, since it affects the electron density at the europium nuclei. Roughly, it scales with the local density, and so increases with increasing coordination number and decreasing Eu–X bond length or ligand diameter. This means it increases for strong covalent bonding (or low electronegativity) and for higher europium concentrations. The quadrupole splitting gives information about local symmetry and can be important for identifying or confirming the crystal site. Mössbauer spectra can also distinguish between crystallographic sites with the same ionic state and measure site occupancy in unusual circumstances, for example, in nanoparticles. Photoluminescence spectroscopy can clearly distinguish between Eu2+ and Eu3+: Eu2+ is characterized by a broad emission, while Eu3+ shows narrow emission lines; but, as the spectra overlap, it cannot give a quantitative estimate of the relative amounts of each. In summary, Mössbauer spectroscopy is a unique and leading technique in the characterization of materials containing multi-valent elements with distinctly different isomer shifts. | 10,925.6 | 2018-05-01T00:00:00.000 | [
"Physics"
] |
High-temperature processing of municipal solid waste
. At present, the processing and recycling of municipal solid waste (MSW) has become increasingly relevant both nationally and worldwide. The problem concerns large towns and cities, where millions of tons of household waste of all fractions are produced every year. Disposal or recycling of solid waste is an environmental issue, but it is also associated with solving complex technical, energy, and economic challenges. The purpose of the study is to identify the advantages and disadvantages of modern methods of processing and disposal of MSW, with the prospect of developing and creating a device for recycling MSW that takes into account modern approaches to energy saving and environmental protection. The main result of the study is a simple, reliable, and technically sound method of MSW destruction that yields additional energy. The significance of the results for the construction industry lies in a device for the disposal of solid waste that produces solid combustion products for further use as building materials and products for various purposes. The technological process of processing MSW makes it possible to return an additional amount of energy for reuse.
Introduction
The increasing volume of municipal solid waste demands urgent measures for its removal and elimination from the territories of settlements. Each resident throws out, on average, 400–500 kg of garbage a year [1][2][3][4].
Processing MSW is a complex problem demanding substantial funding and new technologies. If the removal of MSW from city territories, especially megalopolises, is delayed, the emergence of large-scale epidemics becomes possible [5][6][7][8][9].
The main problem for the coming years is the search for effective ways of processing MSW. Today, burial remains the main method of MSW utilization. At many enterprises and institutions, old technologies prevail, and therefore waste posing an increased danger to the population accumulates in settlements. Burial of this waste does not solve the problem of its neutralization; it only shifts the solution of this problem to the near future [10][11][12][13][14].
The currently existing approaches to the utilization of MSW (burial, transformation to biogas, and processing into organic fertilizer) are not always acceptable, as they are usually not preceded by even preliminary sorting of the ever-growing garbage stream. Unfortunately, sorting garbage is a very expensive operation and, in modern economic realities, unrealizable.
Therefore, at present the most expedient method of MSW utilization is incineration with preliminary sorting just before destruction [15].
MSW consists of fractions that vary in heat of combustion, and the average heat of combustion depends on external parameters: humidity, temperature, and pressure. The average heat of combustion of MSW is about 8000 kJ/kg. This released thermal energy must certainly be used; therefore, in the near future, incineration with effective use of the released energy will become the main method of processing MSW.
Already in the near future, waste incineration installations that generate additional thermal and electric energy will successfully compete with traditional power generation systems [16][17][18][19][20].
Methods of Utilization of MSW
A promising method for processing MSW is the separation and recycling of all waste components and their subsequent use in the national economy. There are two approaches to implementing this plan [1].
In the first approach, the unsorted mass of solid waste is fed to the inlet of the processing complex and separated downstream. Although this approach is considered ideal, it cannot be realized under present conditions [2,3].
Certainly, it is possible to divide the waste stream with the help of robotics, but such a process would not pay off and is therefore unacceptable.
The second approach is most often used in developed countries. Here, containers of different colors, specially marked for plastics, paper, glass, metal, organics, etc., are installed in residential areas. Ideally, the population or the employees of the institutions that produce the waste should separate it themselves.
The average composition of MSW [4][5][6][7] is presented in Table 1. Among waste management methods, first place currently belongs to landfills, which receive the main part of the waste. However, the toxicological effects of this waste are acute and extremely dangerous. The sites where it is stored therefore cause enormous damage to the environment: the result is severe contamination of the ground surface, of the ground to a depth of 20 meters, and of underground water [8][9][10][11].
In some European countries, landfills are used to convert the biogas formed during the rotting of waste into a renewable energy source. The stages of MSW conversion into biogas are shown in Table 2. In this process, microorganisms decompose the organic component of MSW without air access into components such as methane (CH4) and carbon dioxide (CO2). The heat of combustion of biogas enables its application in the energy sector. The decomposition of one ton of MSW releases ~260 m3 of biogas. The resulting combustible gas mixture consists of approximately 60% methane (CH4), 35% carbon dioxide (CO2), and 5% nitrogen (N2).
Particularly dangerous from an environmental point of view are unorganized dumps, in which the combustible gas mixture enters the atmosphere from the soil, displacing oxygen (O2) and preventing the growth of plants. Unorganized dumps are also prone to fire.
To implement the second method of processing MSW, a pit must be dug at a selected site. The pit must then be isolated from the soil, and pipelines must be laid to extract the biogas from the garbage mass for its further use in heat supply (heating, power generation).
A third approach is the processing of solid waste into organic fertilizer (compost). The process of neutralization and refining is driven by the self-heating of MSW, resulting from the development of aerobic thermophilic microorganisms in the presence of a sufficient amount of oxygen (O2).
In the course of chemical and biological reactions, the solid waste self-heats to a temperature of T = 60–70 °C. This temperature is lethal to pathogenic bacteria, thereby enabling neutralization. With stirring, better contact between the organic matter and the microorganisms is obtained. Complex organic compounds are decomposed into forms that are easily assimilated by plants (compost).
Further enzymatic action reduces the mass of the biodegradable material by about half, and a stable solid product is obtained. However, direct composting of solid waste is impractical because the resulting fertilizers are contaminated with heavy metals and glass (from electronic waste such as computers, televisions, and mobile phones, as well as light bulbs and used galvanic cells). Avoiding this requires careful sorting of the waste, which is not always economically feasible.
The fourth approach is the burning of MSW. In most cases, it is the most suitable method of solid waste disposal.
Waste consists of particles of different calorific values and sizes, and the average calorific value depends on external parameters: temperature, pressure, and humidity. The average heat of combustion of MSW is ~8000 kJ/kg [12][13][14][15][16]. The combustion of MSW consumes a large amount of oxygen (O2), which increases considerably with the content of plastic materials in the waste. The advantages and disadvantages of MSW incineration are shown in Table 3. It should be noted that burning MSW releases hydrogen chloride and hydrogen fluoride, sulfur dioxide, nitrogen oxides, and metals and their compounds, mainly in aerosol form, into the atmosphere [17,18]. Incineration of waste containing synthetic polymeric materials forms dioxins and furans. Dioxins are a group of substances whose molecules are based on hexagonal carbon rings. If they contain no chlorine atoms, these substances are no more toxic than gasoline, but the substitution of chlorine atoms for hydrogen atoms in the rings forms dangerous dioxins: about twenty compounds of varying degrees of toxicity [19].
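As a rough cross-check of the energy figures quoted above, the short sketch below compares the energy available per ton of MSW from direct incineration (8000 kJ/kg) with that from landfill biogas (~260 m3 per ton at ~60% CH4). The lower heating value of methane (~35.8 MJ/m3) is an assumed literature value, not a figure from this paper.

```python
# Energy per ton of MSW: direct incineration vs. landfill biogas.
HEAT_MSW_KJ_PER_KG = 8000        # average heat of combustion (from the text)
BIOGAS_M3_PER_TON = 260          # biogas yield per ton of MSW (from the text)
CH4_FRACTION = 0.60              # methane share of the biogas (from the text)
LHV_CH4_MJ_PER_M3 = 35.8         # assumed lower heating value of methane

incineration_mj = HEAT_MSW_KJ_PER_KG * 1000 / 1000   # kJ/kg * 1000 kg -> MJ
biogas_mj = BIOGAS_M3_PER_TON * CH4_FRACTION * LHV_CH4_MJ_PER_M3

print(f"incineration: {incineration_mj:.0f} MJ/ton")  # ~8000 MJ/ton
print(f"biogas:       {biogas_mj:.0f} MJ/ton")        # ~5585 MJ/ton
```

On these figures, direct incineration recovers roughly 40% more energy per ton than landfill biogas, before accounting for conversion losses in either route.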
The furan group is less toxic than the dioxin group, but both are carcinogenic. There are plenty of sources of uncontrolled emissions of dioxins and furans where the combustion temperature is low (T < 600 °C). In this mode, ten times more dioxins and furans are formed than in incineration plants where a high-temperature process (above 1000 °C) is used.
Dioxins are toxic and carcinogenic man-made substances, so pre-sorting of waste prior to incineration is needed. One variant of thermal processing is pyrolysis, the thermal decomposition of solid waste without air access.
This process produces combustible gas, liquid products, and a solid carbonaceous residue.
Depending on the temperature, the following types of pyrolysis are distinguished: low-temperature pyrolysis (T < 500 °C), in which the yield of liquid products and solid residue is maximal and the yield of fuel gas is minimal; medium-temperature pyrolysis (T = 500–1000 °C), in which the gas yield increases while the yield of liquid products and solid residue decreases; and high-temperature pyrolysis (above 1000 °C), in which the yield of liquid products and solid residue is minimal and the output of fuel gas is maximal. Pyrolysis provides an opportunity not only to dispose of household waste but also to obtain valuable hydrocarbons of the petroleum series, thus reducing the processing costs of solid waste.
The disadvantages of pyrolysis include: the complicated construction and high cost of the furnaces; large staff requirements; incomplete destruction of dioxins at the end of the process; and heavy metals that do not melt and instead precipitate together with the sludge.
Currently, the emphasis is on technology not only for MSW incineration but also for converting the resulting heat into usable energy.
It is believed [20] that in the near future, combustion with generation of electricity and heat will be the main method of processing waste. Waste incineration power plants will be included in integrated waste management systems, together with enterprises for recycling and reuse of certain materials (metal, glass, plastic, paper, etc.). Along with this, improvement of the methods for cleaning the resulting flue gases is required. Various schemes of MSW incineration exist; the main disadvantages of these devices are the low degree of purification of the harmful emissions produced and the low economic efficiency of the process.
The installation for high-temperature processing of solid waste operates as follows. Waste on the conveyor 1 passes through the metal separator 2, where ferrous metal is separated and falls into the containers 3 and 4. The waste, separated from metal, is fed by the screw 5 through the loading hatch 6, closed by the flap 7. In the furnace 8 is disposed the electrode unit 9, which is supplied with power from a three-phase power source 10. The electrode assembly 9 itself includes a cylindrical body 22, electrodes 23, and a central electrode 24.
Each of the electrodes 23 is disposed along the periphery of the cylindrical housing 22 and connected to a phase of the power supply 10, while the central electrode 24 is connected to the neutral conductor and is longer than the remaining electrodes by one diameter, thereby increasing the combustion zone of the electric arcs. All electrodes have the same diameter.
When three-phase alternating current is used, the pressure effect of the electric arcs on the combustible waste at temperatures T = 1500–1800 °C causes rotation and vigorous stirring of the meltable mineral components, more complete reactions, and the release of the mineral residue from the gases.
The use of a three-phase arc instead of a single-phase supply provides energy savings of up to 35–40%. The electric arcs are excited by the activator 11.
Solid combustion products are unloaded through the manhole 12 into the container 13. These products can be successfully processed in several directions: from the ash, plasma technology can produce artificial sand for paving filling, and the ash can also be used to produce ceramic and concrete products for construction applications.
Gaseous products from the combustion chamber 8 enter the afterburner 14, where they are neutralized after ignition by the spark plug 16, powered by the high-voltage unit 15.
To intensify the neutralization of the gases in the afterburner 14, ozone is supplied from the ozonizer 17 through the nozzle 18. The neutralized gaseous combustion products then enter the heat recovery boiler 19. The steam produced there is sent to the turbine generator 20 to produce electricity. The neutralized and cooled product gases are released to the atmosphere through the stack 21.
Results and Discussion
A patent for the installation has been obtained as an invention. The installation allows municipal solid waste to be utilized with minimal emission of toxic substances and has no analogs.
During the operation of similar units, certain requirements on the composition of the initial raw materials must be met, and the technological processes depend rigidly on the quality of the MSW.
In addition, an essential shortcoming of these devices is the low efficiency of processing municipal solid waste. Moreover, processing MSW at temperatures below 1100 °C, with residence times of the decomposition products in the combustion chamber of less than 2 s, leads to the formation of products of incomplete combustion and of supertoxic, thermodynamically stable dioxins and furans. Eliminating these substances requires processing at temperatures of T = 1500–1800 °C, as achieved in the proposed installation.
A numerical experiment with a computer model of this installation was performed. The results of the computational experiment, which show the change in the mole fraction of chlorobenzene as a function of temperature and the mass fractions of the products after thermal utilization of MSW, are given in Fig. 2. For better neutralization of the dioxins and furans formed during the combustion of municipal solid waste, they are neutralized in the afterburner in an atmosphere of gaseous ozone.
In comparison with a system of flow-through non-stationary reactors [21], this installation emits ~100 times fewer dioxin precursors under the same physical conditions.
The neutralized gaseous combustion products enter the exhaust heat boiler, which produces steam that drives the turbine generator to produce an additional quantity of electric power, giving a considerable economic effect.
The use of a three-phase electric arc instead of a single-phase one saves up to 40% of electrical energy. The gaseous component, which as a rule contains dangerous and toxic substances, including compounds of chlorine and fluorine, dioxins, furans, hydrocarbons, and others, is subjected to chemical heat treatment.
In this treatment, these substances are destroyed by heating to high temperatures, at which their stability sharply decreases, and by chemical reactions that form new non-toxic or much less toxic substances. | 3,283 | 2019-10-20T00:00:00.000 | [
"Materials Science"
] |
Evaluation of Post-Quantum Distributed Ledger Cryptography
This paper evaluates the current cybersecurity vulnerability created by the prolific use of the Elliptic Curve Digital Signature Algorithm (ECDSA) in Bitcoin Core, Ethereum, Bitcoin Cash, and enterprise blockchains such as Multi-Chain and the Hyperledger projects Fabric and Sawtooth Lake. These blockchains are being used in media, health, finance, transportation, and government with little understanding or acknowledgment of the risk and no known plans for mitigation and migration to safer public-key cryptography. The second aim is to evaluate ECDSA against the threat of quantum computing and propose the most practical National Institute of Standards and Technology (NIST) Post-Quantum Cryptography candidate algorithm, a lattice-based cryptography countermeasure, that can be implemented in the near term and provide a basis for a coordinated, industry-wide lattice-based public-key implementation. Commercial quantum computing research and development is rapid and unpredictable, and it is difficult to predict the arrival of fault-tolerant quantum computing. The current state of covert and classified quantum computing research is unknown, and it would therefore be a significant risk to blockchain and Internet technologies to delay or wait for the publication of draft standards. Since there are many hurdles Post-Quantum Cryptography (PQC) must overcome for standardisation, coordinated large-scale testing and evaluation should commence promptly.
Introduction
Rapid advances on a global scale in quantum computing technologies, and the threat they pose to most standardized encryption, prompted NIST to put out an international call for candidate quantum-resistant public-key cryptographic algorithms to evaluate for standardization. NIST will conduct efficiency analyses on the reference platform delineated in the Call for Proposals; NIST also invites the public to perform similar tests, compare results on additional platforms (e.g., 8-bit processors, digital signal processors, dedicated complementary metal oxide semiconductor (CMOS) hardware, etc.), and provide comments regarding the efficiency of the submitted algorithms when implemented in hardware.
This research has two goals. The first is to examine the vulnerabilities, in the PQC era, of the current Asymmetric Digital Signature Cryptography (ADSC) used for private key generation in Bitcoin blockchain technology. The second goal is to independently test and evaluate candidate NIST algorithms to assist in the selection of acceptable candidate cryptosystems for standardisation, and to propose a potential replacement for ADSC in private key generation in blockchain and distributed ledger technology. Most blockchain and distributed ledger technologies use an asymmetric digital signature scheme for private key generation, such as ECDSA, which has often been cloned from the Bitcoin blockchain. These digital signature schemes are being implemented in critical sectors of government and the economy. Evaluations will include the cryptographic strengths and weaknesses of NIST's candidate pool of submitted algorithms. It is expected that the analysis will cover required performance parameters, including:
public key, ciphertext, and signature size; computational efficiency of public and private key operations; computational efficiency of key generation; and decryption failures against NIST-provided Known Answer Test (KAT) values.
Blockchain and distributed ledger private key generation and its cybersecurity concepts are poorly understood and often misrepresented.
There is a misconception that blockchain technology can't "be hacked," resulting in a general endorsement for critical sectors and industries [1]. The author believes that the technology offers excellent cybersecurity promise for many areas, but its limitations and strengths must be defined. This work examines the weakness of ECDSA and its current vulnerability and use in the Bitcoin Blockchain and Distributed Ledger Technology (DLT). Many industries are rapidly adopting versions or mutations of the original Bitcoin blockchain technology in essential sectors such as information technology, financial services, government facilities, healthcare, and public health, seemingly without cybersecurity due diligence, a proper comprehension of the cryptography vulnerabilities, or plans for addressing quantum computing threats [2]. ECDSA is a foundation of the Public Key Infrastructure (PKI) for many Internet applications and open-source projects, and it is a primary workhorse of public-key cryptography. The second part of this paper offers the most practical and near-term first-round candidate NIST lattice-based Post-Quantum Cryptography solution, with a recommendation for immediate, coordinated (academia, the private sector, government) independent testing, verification, and validation (IV&V) and a test framework for sharing results [3]. This framework aids in speeding the approval of PQC standards that are vital to global cybersecurity. The scope of this work covers the lattice-based digital signature scheme qTESLA, based on the hardness of the decisional Ring Learning With Errors (R-LWE) problem [4]. Quantum computing's threat adversely affects the cybersecurity of financial services such as payment systems, general network communications systems, business functions including cloud computing, the Internet of Things (IoT), and critical infrastructure. Further, the author believes that currently estimated timelines for the availability of large-scale fault-tolerant quantum computers are underestimated due to unpredicted global progress and the veil of secrecy surrounding classified research programs led by organizations and governments around the globe. It is, therefore, essential to begin work on testing the most likely candidate algorithms for standardisation.
Implications of this work
Current encryption systems and standards, such as Ron Rivest, Adi Shamir, and Leonard Adleman's RSA, the Digital Signature Algorithm (DSA), and ECDSA, underpin everything from defense, banking, healthcare, energy, telecommunications, and intelligence to the Internet and the blockchain. The compromise, disruption, or non-availability of one of these sectors would severely impact U.S. national security, public health and safety, or the economy.
Blockchain is a revolutionary technology with great potential in many applications. It has gained global interest in all industry sectors, yet it is based on cryptographic algorithms that are considered vulnerable today and will be increasingly threatened by accelerated advances in quantum computing.
Significance of the findings
The time to test and validate new post-quantum cryptography is now, given that it takes at least ten years to build and deliver a new public key infrastructure. The pace of quantum computing advancement is uncertain. The transition to post-quantum cryptography appears to be very complicated, and there are many unknowns concerning establishing, standardizing, and deploying post-quantum cryptography systems. All of this must be completed before the arrival of large-scale quantum computers, because otherwise the cybersecurity of many vital services will be severely degraded.
Bitcoin and Distributed Ledger Technology
The Bitcoin Cryptocurrency (BTC) is the first widespread application of blockchain technology. The critical elements of blockchain and DLT have existed for decades; they include fault tolerance, distributed computing, and cryptography. Succinctly, the first iteration of this technology is a decentralized, distributed database that keeps records of transactions relatively secure in an append-only mode, where all peers eventually come to a consensus regarding the state of a transaction. The Bitcoin blockchain, like others, operates on an open peer-to-peer (P2P) network, where each node can function as a client and a server at the same time. The nodes in the system are connected over TCP/IP, and once a new node is connected, that node broadcasts peer IP addresses via Bitcoin address messages. Each address maps to a unique public and private key; these keys are used to exchange ownership of BTC among addresses. A Bitcoin address is an identifier of 26 to 35 alphanumeric characters [5]. Since the advent of BTC with its choice of data structure, called a block, modified blockchain technologies have made use of different data structures, such as Directed Acyclic Graphs (DAGs). Therefore, recent versions of the newest blockchains can no longer accurately be called blockchains, and it is more appropriate to use the term Distributed Ledger (DL), which applies to all versions of the blockchain. Presently, according to Crypto-Currency Market Capitalizations [6], there are more than 2000 alternative cryptocurrencies, and most make use of the Bitcoin blockchain or are clones with minor differences in the private key generation cryptography and structure. The primary configuration changes include the underlying hash function, block generation times, data structures, and method of distributed consensus. However, the critical task of generating private keys in blockchains remains unchanged across most blockchain adaptations, and this work asserts that the foundation of the current cryptocurrency markets and all the private and public sectors using this technology are vulnerable to the same cybersecurity weaknesses.
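As an illustration of the address scheme described above, the sketch below derives a legacy Base58Check Bitcoin-style address from a public key. This is a simplified sketch of the well-known derivation (SHA-256, then RIPEMD-160, then Base58Check with a version byte); the example key bytes are arbitrary placeholders, and `ripemd160` availability in `hashlib` depends on the local OpenSSL build.

```python
import hashlib

BASE58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58check(payload: bytes) -> str:
    """Base58Check: append a 4-byte double-SHA256 checksum, then Base58-encode."""
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    full = payload + checksum
    n = int.from_bytes(full, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = BASE58[r] + out
    # Each leading zero byte is encoded as a leading '1'.
    pad = len(full) - len(full.lstrip(b"\x00"))
    return "1" * pad + out

def address_from_pubkey(pubkey: bytes) -> str:
    """Legacy P2PKH address: version byte 0x00 + RIPEMD160(SHA256(pubkey))."""
    sha = hashlib.sha256(pubkey).digest()
    ripemd = hashlib.new("ripemd160", sha).digest()   # needs OpenSSL support
    return base58check(b"\x00" + ripemd)

# Arbitrary placeholder bytes shaped like a compressed public key:
print(address_from_pubkey(bytes.fromhex("02" + "11" * 32)))
```

The resulting string falls in the 26-to-35-character range noted above, with the leading '1' coming from the 0x00 version byte of legacy addresses.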
ECDSA, libsecp256k1 and OpenSSL
The ECDSA algorithm is part of public-key cryptography and is the cryptography the Bitcoin blockchain uses to generate public and private keys. ECDSA is used in critical infrastructure, in secure communications over the Internet, cellular networks, and Wi-Fi, and in many blockchain forks in use today. Specifically, the Bitcoin blockchain uses ECDSA with the Koblitz curve secp256k1 [7], which has significant weaknesses that include its general algorithm structure, side-channel attacks, and threats from quantum computers. The Koblitz curve was not adopted for standardisation by NIST due to its non-random structure. The Bitcoin creator selected a curve other than the NIST-approved P-256, and key generation relies on a source of entropy. Entropy is defined in this case as the randomness inserted by an operating system or application for use in cryptography that requires random data. OpenSSL is an open-source software library used in BTC technology and ECDSA applications to secure communications and many critical infrastructures. OpenSSL [8] provides a software Pseudo-Random Number Generator (PRNG) based on a variety of hardware and software sources. Its core library is written in the C programming language. The process starts once the Bitcoin Core client is installed and the user receives a set of ECDSA key pairs, called addresses. The PRNG starts in the unseeded state; in this state, it has zero entropy. A call to RAND_bytes is made, and the PRNG transfers automatically into the seeded state with a presumed entropy of 256 bits, fed to the PRNG through a call to RAND_add. The keys generated from this process are necessary to transfer BTC from one address to another. Next, the client needs to sign a specific message (called a transaction) with the private key of the user. The public key is used to check whether the given user has rights to the BTC [9].
The ECDSA algorithm relies on generating a random private key used for signing messages and a corresponding public key used for checking the signature. The bit security of this algorithm depends on the ability to compute a point multiplication and the infeasibility of calculating the multiplicand given the original and product points.
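A minimal sketch of this keypair/sign/verify flow is shown below, using the third-party pure-Python `ecdsa` package (an assumption of this example; Bitcoin Core itself uses libsecp256k1, and the message here is a stand-in for a serialized transaction):

```python
import hashlib
from ecdsa import SigningKey, SECP256k1   # pip install ecdsa

# Private key: a random scalar on the secp256k1 curve.
sk = SigningKey.generate(curve=SECP256k1)
vk = sk.get_verifying_key()               # corresponding public key

# Stand-in for a transaction: Bitcoin signs a double-SHA256
# digest of the serialized transaction.
tx = b"toy transaction bytes"
digest = hashlib.sha256(hashlib.sha256(tx).digest()).digest()

signature = sk.sign_digest(digest)        # ECDSA signature (r, s)
assert vk.verify_digest(signature, digest)
print("signature verified")
```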
The Koblitz curve secp256k1 is not verifiably random and is defined by the Standards for Efficient Cryptography Group (SECG), rather than by the NIST FIPS 186-3 DSS standard, which uses the elliptic curve secp256r1. The security of the ECDSA algorithm and its protocols relies on a source of well-distributed random bits.
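The fixed, non-random structure of secp256k1 is visible directly in its parameters. The sketch below checks that the published generator point satisfies the short Weierstrass equation y^2 = x^3 + 7 over the field prime p = 2^256 - 2^32 - 977; the constants are the standard SECG values, which are easy to verify independently.

```python
# secp256k1 domain parameters (SECG): y^2 = x^3 + a*x + b over F_p.
p = 2**256 - 2**32 - 977
a, b = 0, 7
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

# The generator must lie on the curve:
assert (Gy * Gy - (Gx**3 + a * Gx + b)) % p == 0
print("generator point is on secp256k1")
```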
Fault Attack on Bitcoin's Elliptic Curve with Montgomery Ladder Implementation
The Montgomery ladder fault attack is a fault attack on elliptic curve scalar multiplication algorithms and can be used when the y-coordinate is not used. The bit security of the elliptic curve parameters can in most cases be significantly reduced. The fault attack is a robust side-channel technique used to break ECDSA cryptographic schemes: the idea is to inject a fault during the computation of an implementation and to use the faulty outputs to deduce information about the secret key stored in the secure component [10]. Table 1 gives the resulting bit security after the Montgomery ladder fault attack.
The bold font indicates that the secp256k1 security is below 2^60, since such computations can easily be performed with classical computers. The mention 'r' denotes parameters explicitly recommended in the standard, while the mention 'c' denotes parameters in conformance with the standard. The column "Strength" refers to the standard. Clearly, for implementations without protections, the attacker can compute the discrete logarithm in the twist with a cost of 2^50 operations and retrieve the secret scalar for n = 256.
Algorithm Security Strength
Breaking a cryptographic algorithm can be defined as defeating some aspect of the protection that the algorithm is intended to provide. For example, a block cipher encryption algorithm that is used to protect the confidentiality of data is broken if, with an acceptable amount of work, it is possible to determine the value of its key or to recover the plaintext from the ciphertext without knowledge of the key.
The approved security strengths for federal applications are 128, 192, and 256 bits. Note that a security strength of fewer than 128 bits is no longer approved, because quantum algorithms reduce the bit security to 64 bits; see NIST Special Publication 800-57 Part 1 Revision 4, Recommendation for Key Management, as shown in Table 2 [11]. The fault attack on Bitcoin's elliptic curve with Montgomery ladder implementation yields a security strength of only 50 bits, as shown in Table 1.
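The halving of symmetric bit security mentioned above (Grover's algorithm gives a quadratic search speedup, so a k-bit key offers roughly k/2 bits of quantum security) can be tabulated directly; the sketch below applies this rough rule to the key sizes discussed in this paper. Note that this rule covers symmetric ciphers only; RSA and ECC are broken outright by Shor's algorithm.

```python
# Rough rule: Grover's quadratic speedup halves symmetric bit security.
def grover_security(classical_bits: int) -> int:
    return classical_bits // 2

for key_bits in (128, 192, 256):
    print(f"AES-{key_bits}: ~{grover_security(key_bits)}-bit quantum security")
# AES-128: ~64-bit  -> below the approved floor cited in this paper
# AES-256: ~128-bit -> at the lowest approved strength
```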
NIST and Post-Quantum Cryptography
In December 2016, NIST formally announced its Call for Proposals (Request for Nominations for Public-Key Post-Quantum Cryptographic Algorithms) [12]. This call solicited candidate algorithms [4]. Public-key systems based on R-LWE are computationally superior to LWE systems because of reduced overhead, greater capacity for message space, and smaller public key sizes.
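To illustrate the R-LWE structure that such systems, and the qTESLA signature scheme evaluated below, build on, the sketch implements toy R-LWE key generation and bit encryption in the ring Z_q[x]/(x^n + 1). The parameters (n = 256, q = 12289) and the ternary error distribution are illustrative assumptions only; this is not qTESLA and offers no real security.

```python
import numpy as np

n, q = 256, 12289                     # toy parameters (assumed)
rng = np.random.default_rng(1)

def ring_mul(f, g):
    """Multiply in Z_q[x]/(x^n + 1): convolve, then fold with x^n = -1."""
    c = np.convolve(f, g)
    res = c[:n].copy()
    res[: len(c) - n] -= c[n:]
    return res % q

def small():
    """'Small' polynomial with coefficients in {-1, 0, 1} (toy errors)."""
    return rng.integers(-1, 2, n)

# Key generation: secret s; public key is the R-LWE sample (a, t = a*s + e).
a = rng.integers(0, q, n)
s, e = small(), small()
t = (ring_mul(a, s) + e) % q

def encrypt(bits):
    """Encode each bit as 0 or q/2, then mask it with R-LWE noise."""
    r, e1, e2 = small(), small(), small()
    u = (ring_mul(a, r) + e1) % q
    v = (ring_mul(t, r) + e2 + (q // 2) * np.asarray(bits)) % q
    return u, v

def decrypt(u, v):
    """v - u*s = q/2 * bits + small noise; round each coefficient."""
    d = (v - ring_mul(u, s)) % q
    return ((d > q // 4) & (d < 3 * q // 4)).astype(int)

msg = rng.integers(0, 2, n)
u, v = encrypt(msg)
assert np.array_equal(decrypt(u, v), msg)
print("toy R-LWE round-trip ok")
```

Decryption succeeds because the accumulated noise term (e*r - e1*s + e2) stays far below q/4 for these parameters, so rounding recovers each encoded bit.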
Selected algorithm for test and evaluation: qTESLA
The author's considerations for the selection of qTESLA are "reasonable" key and ciphertext sizes and, to a lesser extent, the number of CPU cycles required for encryption, decryption, and verification, along with the potential for incorporation into constrained devices such as smartphones and emerging IoT devices. Additional considerations included trust, metrics, parameters, migration, compatibility, and efficient and secure implementation. This submission utilizes two approaches to parameter generation. The first approach is called "heuristic qTESLA" and uses heuristic parameter generation; the second is called "provably-secure qTESLA", and its parameter generation is provably secure. qTESLA includes five parameter sets that correspond to two security levels, listed in Table 3.
The security of lattice-based systems is provable under worst-case hardness assumptions. In the author's view, it is not likely that current PQC schemes will be direct replacements for current standards; they will likely impact the entire category of Internet protocols, such as Transport Layer Security (TLS) and Internet Key Exchange (IKE).
System parameters can be viewed in Table 4 and Table 5.
Informal Signature Scheme
Informal descriptions of the algorithms that give rise to the signature scheme qTESLA are shown in Algorithms 1, 2, and 3. These algorithms require two basic terms, namely B-short and well-rounded, which are defined below. Let q, LE, LS, and d be system parameters that denote the modulus, the bound constant for error polynomials, the bound constant for the secret polynomial, and the rounding value, respectively. The verification algorithm takes as input a message m, a public key pk = (a1, …, ak, t1, …, tk), and a signature (z, c), and outputs "accept" or "reject" [4].
Performance analysis of the post-quantum qTESLA algorithms
To evaluate the performance of the provided implementations written in portable C, the author ran a benchmarking suite on three machines, the first powered by an Intel® Core™ i7-6500 CPU @ 2.50 GHz x 4 (Skylake) processor (see
Analysis
The author argues that the uncertainties have not been appropriately addressed. For example, there is the possibility that additional quantum algorithms or techniques will be developed, leading to new and unanticipated attacks. Also, it is difficult to assess the impact of programs that are highly classified and whose performance characteristics are not public. Rapid and unpredictable advancements in quantum computing are endangering current encryption schemes or making them obsolete. It has been established that the most significant threat posed by quantum computers is directed towards the current RSA and ECC digital signature systems on which Bitcoin, distributed ledgers, and much Internet-based technology rely.
It has been settled that current RSA- and ECC-based public key cryptography is broken in the quantum computing era, and that the bit security of AES cryptography is adversely reduced. It is the author's view that recommendations such as doubling the AES key size need to be examined while considering the constraints of present systems. Current AES-128 is reduced to 64-bit security, and AES-256 would have 128-bit security.
The impact of doubling the key size from AES-256 to a hypothetical AES-512 is not well documented and verified. Such an algorithm (AES-512) would most likely use an input block size and a key size of 512 bits. The increased number of rounds and larger key schedule would adversely impact performance, especially for constrained devices. The larger the key size, the more secure the ciphered data, but also the more rounds needed. From a hardware perspective, a larger key size also means greater area and power consumption due to the additional operations required. More focus and examination of AES in the PQC era is needed, especially for constrained devices.
The author specifically examined the ECDSA variants in use in Bitcoin and distributed ledger technologies, and secondly evaluated NIST candidate PQC algorithms for standardisation and possible replacement in blockchain and other public-key Internet-based technologies. Table 6 gives the ECDSA (P-256) parameters used as the benchmark for comparison regarding the number of quantum security bits and the sizes of the public key, secret key, and signature as independently controlled variables. According to NIST, the use of schemes with less than 112-bit security is deprecated and will eventually be disallowed for U.S. government institutions handling sensitive data. It is noted that the speed at which encryption and decryption occur is also an important parameter. The following results cannot be compared directly with the vendor's submitted qTESLA results, but specific observations can be made across alternative applications and platforms. It is the author's view that if the key sizes are not manageable and practical for use in conventional and constrained devices, then time or speed becomes a less critical metric than key size. Table 7, Table 8, and Table 9 give the results of the independent tests on the respective platforms; performance is measured (in thousands of cycles) for the reference implementation. Results for the median and average (in the first and second table, respectively) are rounded to the nearest 10^3 cycles. Signing is performed on a message of 59 bytes.
Recommendations
The PQC standardisation process is complex and arduous, requires coordinated involvement (academia, private and public sector), and requires significant IV&V before formalization. A successful PQC scheme must be resistant to both classical and quantum attacks. Multiple trade-offs will have to be considered, such as security, performance, key size, signature size, and side-channel resistance countermeasures. Other important considerations are the capability to migrate into new and existing applications such as TLS, IKE, code signing, and PKI infrastructure.
It is necessary to begin a coordinated international campaign to mitigate the uncertainties of breakthroughs and the unknowns regarding classified programs. The aim should include information sharing between the academic, public, and private sectors toward the common goal.
It is critical to devise and initiate the incorporation of cutting-edge yet practical PQC to prevent a disastrous impact on global privacy, security, and the economy before the arrival of large-scale fault-tolerant quantum computing.
Conclusion
qTESLA's submissions for NIST security categories I and III, as tested on the platforms described in this work, have public keys more than two orders of magnitude larger than ECDSA's for qTESLA-p-I (128-bit security) and qTESLA-p-III (192-bit security). The qTESLA-p-I secret key is 56 times the size of ECDSA's secret key, and qTESLA-p-III's is two orders of magnitude larger.
It is essential to come to a consensus on how to assess quantum security; currently, there is no clear agreement on the best way to measure quantum attacks. It is, nevertheless, fundamental that work continue on alternatives that will produce smaller key sizes, comparable to the current ECDSA algorithms. The major drawback of qTESLA is its large key sizes, which make it unlikely to be accepted in its current configuration. However, ongoing research may make it a more viable candidate, both by reducing the key sizes and by providing more efficient implementations (see Tables 7, 8, and 10).
The qTESLA's "Heuristic" submission for NIST Security Categories I and III are qTESLA-I, qTESLA-III-space, and qTESLA-III-size. The vendor claims that their heuristic approach is the security level of an instantiation of a scheme by the hardness level of the instance of the underlying lattice problem. Also, the claim is that it corresponds to these parameters regardless of the tightness gap of the provided security reduction if the corresponding R-LWE instance is intractable.
These claims and the necessary proofs are beyond the scope of this work, cannot be independently verified and validated here, and are not the author's aim. It is important to note that the results of qTESLA's heuristic algorithms were captured and analyzed against its provably-secure submissions; the heuristic algorithms were tested on the same platforms identified for the provably-secure submissions. qTESLA-I's public-key size versus qTESLA-p-I's is a reduction of 90%. The secret key size at the same bit-security level is reduced by 60%, and the signature size is reduced by 52%. For qTESLA-III-size versus qTESLA-p-III: the public key is reduced by 92%, the secret key by 66%, and the signature by 56% (see Table 10).
The heuristic key sizes are dramatically reduced and compare more favorably to the ECDSA (P-256) parameters. While the heuristic values are dramatically smaller than the provably-secure values, the key sizes are still large compared to the current standard ECDSA (P-256) sizes. For example, the best result for the secret key size, qTESLA-III-size (4160), versus the ECDSA (P-256) secret key size (96) is a 4233% increase and would prove problematic in existing systems.
Future Work
The author selected qTESLA's submission, which is one of five NIST candidate PQC digital signature schemes. Additional work needs to be done in verifying, validating, and testing vendors' results. Concrete PQC parameters for testing and validation need to be created to establish a baseline, and the parameters should be varied to determine the best trade-offs while maintaining the required security. Moreover, organized guidelines and standards are necessary for the wider cryptography community to aid PQC standardisation and to create efficient, high-quality implementations.
Continued measurement of current PQC scheme implementations should be performed, including performance and memory usage on ARM and CMOS platforms; many embedded devices have ARM and CMOS architectures and limited computational and memory resources. NIST currently plans a Post-Quantum Cryptography Round 2 call, tentatively scheduled for 2019, which will offer additional opportunities for IV&V and research. | 5,164.6 | 2019-03-16T00:00:00.000 | [
"Computer Science"
] |
How Do You Know That I Do Not Know How It Feels to be a Bat?
: This paper provides an objection to Nagel's bat argument, in defense of physicalism. Thomas Nagel used the bat example to point out the contradiction between scientific theory and phenomenal consciousness. In this work, I argue that what we know as phenomenal consciousness may just be our memory system, which can be explained by science.
Introduction
In this paper, we will present Nagel's consciousness objection to materialism's theory of the 'mind and body problem'. First, we shall explain materialism's mind-body theory. Then we reconstruct Nagel's consciousness objection. Lastly, we will defend materialism with the argument that memory can explain consciousness and subjectivity.
Before discussing Nagel's bat argument, a question: what exactly is the mind and body problem? According to the Stanford Encyclopedia, ontologically, the mind and body problem asks what the mind is (mental states, consciousness, and so on). Beyond the ontological problem, the causal relation is also widely discussed in philosophy [1]. To give an example, when you open your internet browser to enjoy a live-streamed cooking show, two kinds of events, mind events and physical events, have happened. "Opening your browser" is a physical activity, an action you do with your body. Your enjoyment, by contrast, is a mind event: an event that happens in your mind and does not affect the external world. Then a problem arises: how do your mind and physical events interact?
Dualism and Materialism's Approach
A dualist like Descartes may argue that the mind and body are separate. For instance, in his sixth meditation, Descartes, as a dualist, holds that mind and body are separate since we can grasp them separately [2]. While Descartes introduces God into his argument to help justify his theory, he only demonstrates the relation between mind and body; he does not provide his readers with an explanation of how the body and mind interact. In fact, no dualist can explain how the body and mind interact, because such a connection becomes necessary once the two are treated as distinct existences. This obstacle does not apply to materialists, however, because they hold that the mind is actually identical to the brain, which is a part of the body. According to the Stanford Encyclopedia, "this holds that every property (or at least every property instantiated in the actual world) is identical with some physical property" [3]. Thus, type physicalism, applied to the mind and body problem, holds that the mental states of our mind can be reduced to some theory in science, such as physics. In other words, while we may describe a mental state differently from physical terminology, they are actually the same thing. For instance, an article from McGill University states that "…when you stub your toe on a rock, you feel pain at that specific spot on your body. The pain is often so sharp and so localized that you might be tempted to believe that it's your toe itself that's experiencing the pain. But that's not really the case at all." [4]. The argument of this excerpt applies here: the pain, which we used to consider a purely mental state, is actually triggered by a physical process. Additionally, this physical process does not stay localized at one's perception spot but travels through one's nervous system. The pain, which dualism attributes to activity of the "soul", actually arises from a series of physical changes. Thus, unlike dualists, materialists do not make any metaphysical assumptions about the mind. Instead, they believe that it is an illusion to treat body and mind events separately, because both belong to brain states, which can be explained by science. Another example, 'happiness', can help the reader understand. The feeling of happiness is actually a physical event known as sympathetic excitation. We, as everyday people without a scientific background, just call this excitation 'being happy'. To give an anecdotal example: a boy may be called Benjamin by his classmates but Ben by his family; the two names refer to the same person. Likewise, due to a lack of professional knowledge, we may name some mental states, such as emotions, differently, yet they can ultimately be paired with scientific terminology.
Nagel's Bat Theory
Thomas Nagel points out that a problem may exist if we think our minds can be fully explained by science. Specifically, Nagel thinks that mental states have a feature called 'phenomenal consciousness' which contradicts the character of science. Phenomenal consciousness refers to the unique perspective from which you process events, such as your sensory experiences and emotions. At the heart of Nagel's theory is the claim that everyone feels what it is to be "me" in a unique way. Two elements, the objects perceived and the subjective perceiving agent (us), are necessary for us to learn about the world. Even when perceiving the same object, we each use our own subjective perspective to process it, which makes our consciousness different and unique. For instance, one person may find a frozen yogurt delicious and share it with a friend, but the friend comments that it was too sour for them. The sensation presented by the frozen yogurt should be the same, yet how each of us processes it gives each of us a different experience of the frozen yogurt.
Based on the subjective feature of phenomenal consciousness, Nagel provides the example of a bat. Readers may wonder why Nagel chose this specific animal, a bat, instead of a human. "Bats are not blind, but…, they do not use their visual system very much…This appears to create difficulties for the notion of understanding what it is like to be a bat" [5]. Human beings usually perceive the world through five senses, including vision. Thus, the claim that one faces obstacles in conceiving what it is like to be another human does not sound so convincing. Distinctively, bats perceive the world through echolocation, a unique sensation that helps them locate or avoid surrounding objects, and a kind of phenomenal consciousness that humans do not have. Based on this, Nagel thinks the bat's feeling is impossible for the human reader to conceive, as humans do not share the same phenomenal consciousness with bats. This obstacle reinforces the belief that consciousness, or the feeling of being oneself, is subjective and cannot be explained in neutral scientific language. This is because science is objective: it can always be stated neutrally, with no human interpretation. Consider a scientific fact: water boils at 100 degrees Celsius. There is no subjective character to this; no matter who boils the water, the boiling point will not change. Phenomenal consciousness, by contrast, does possess a subjective character, such as 'only bats can know what it feels like to be a bat'. Hence, Nagel thinks this contradictory element, subjective phenomenal consciousness, challenges materialism's claim that every mental state can be explained scientifically [6].
Some Objections in the Philosophy Field
Nagel's bat argument has stimulated many philosophical discussions. One very popular objection claims that stimulations other than imagination may be possible. Nagasawa puts forth: "Surely, we cannot know what it is like to be a bat just by reading textbooks on physics or biology. However… we can know it by carefully imagining or simulating how a bat… flies and detects the location of its target" (Nagasawa, 2003). This objection states that reading textbooks or descriptions of what it is like to be a bat may not help humans grasp the supposedly inaccessible subjective phenomenal consciousness of being a bat; yet the impossibility lies not in the transferability of consciousness between minds but in the means of perceiving. Thus, Nagasawa proposes the possibility that we may be able to learn what it is like to be a bat by imagining and simulating. Readers may still doubt whether the feeling of being a bat can be learned even by imagining or simulating, but at the least, Nagasawa's objection demonstrates that Nagel's conclusion cannot be set in stone until this possibility is settled. Also, the difference between our own species and other species is usually an element doubted by skeptics. Because of their unique echolocation system, Nagel chose bats rather than other animals or humans. This choice of the bat indeed helps us feel the difficulty of transmitting consciousness, but it also raises the problem that cross-species comparisons are not fully equivalent to comparisons within the same species. In other words, Nagel has only demonstrated that humans cannot imagine the feeling of being a bat; he has not provided evidence that humans cannot imagine the feeling of being another human. For instance, we cannot use Microsoft Word to open a .pdf document because the two programs use different encodings, while one can open .pdf documents with a .pdf reader. Thus, it is still possible for us to experience each other's consciousness within the human species. Since our major focus is on the body and mind theory of humans rather than of all natural species, whether consciousness is transmissible between members of the same species still needs to be discussed.
In addition, Nagasawa mentions that a distinction may exist between "…knowing a feeling of a being and the physical characterization of a being." In detail, Nagasawa states, "Nagel's ultimate goal is to undermine physicalism by showing the difficulty of giving a purely physical characterization of…a bat. However, is not the same as knowing a physical characterization of what it is like to be a bat" [7]. Nagasawa's point is that, thinking of ourselves, we do know the feeling of being ourselves, yet we may find it hard to use physical terms to fully describe it. The fact that we cannot even use physical terms to describe to others what we do know of our own consciousness demonstrates that describing something in physical terms differs from having the phenomenal consciousness itself. In fact, even if Nagel successfully justifies his belief that subjective consciousness is not transmissible, this feature still faces a challenge when used to object to materialism, because materialism's claim is about reducing mental states to physical descriptions.
Possible Eastern Approach Inspired by Zhuang Tzu
As an alternative to Western approaches and solutions, a dialogue between Zhuang Tzu and his friend Hui Tzu in Chinese philosophy may provide the reader with a different objection: how could Nagel speak for other humans as to whether they can know what it feels like to be a bat? Specifically, in the chapter "Qiu Shui" of the "Zhuang Tzu", Zhuang Tzu and his friend Huizi see a fish jump out of the water while they are spending time together. After Zhuang Tzu comments on the pleasures of the fish, Huizi asks him, "You, sir, are not a fish, how do you know what the joy of fish is?" [8]. Zhuang Tzu replies, "You, sir, are not me, how do you know that I do not know what the joy of fish is?" [8].
In fact, the problem Zhuang Tzu and Huizi are concerned with in this story is not a mind and body problem, because at that time body and mind theory was not a concern; their focus is on their debating skills. Nevertheless, this excerpt provides us with a new angle on, and inspiration for, Nagel's argument. In the text, Zhuang Tzu raises an important question: when Huizi questions whether he knows the feeling of being a fish, how can Huizi know whether Zhuang Tzu knows? This question points out that Huizi assumes consciousness is transmissible when he presupposes that Zhuang Tzu, a fellow human, could not know what it is like to be a fish, based on his knowledge that he himself, a human, does not know. Yet this presupposition itself entails that consciousness is transmissible, at least within the same species. Applied to the case of the bat: Nagel knows that he cannot conceive what it is like to be a bat, because he is not one, and he applies this thought to all human beings. This move itself demonstrates that Nagel also has the intuition that consciousness is transmissible. Specifically, there are two reasonable possibilities: 1) Nagel thinks consciousness is untransmissible between individuals, including animals. In that case, he does not know what we, as other individuals, are capable of feeling, and his argument cannot prove that every human being is unable to know the consciousness of a bat. Thus, it remains theoretically possible for a human to feel what it is like to be a bat, unless every human being denies it.
2) Nagel thinks consciousness is transmissible between members of the same species, because he presupposes that what he cannot feel, other human beings cannot feel. Thus, he presupposes that consciousness is transmissible within the same species.
Obviously, Nagel needs to provide further justification for either of the above. His premise and argument contain a contradiction: if consciousness is untransmissible, then his premise also cannot be transmitted to others.
More Objections to Nagel
We may still consider further the subjective phenomenal consciousness within humans. However, we can put forth the view that there is no phenomenal consciousness: what accounts for the feeling of being 'me' is just accumulated memory. Instead of using phenomenal consciousness to perceive the world, you are just pairing your sensations with your 'database' of accumulated memory. First, we will explain the problem in Nagel's theory and then explain the memory proposal. The two models can be contrasted schematically:

You -> use your phenomenal consciousness to perceive the object -> idea generated

You -> perceive frozen yogurt -> pair it with your previous experiences -> idea generated

Thinking about your response to the question of whether you like frozen yogurt, you are able to answer it immediately, without pause. However, you will probably need more time to think when asked, "Do you like the lady sitting next to you?" This difference leads to a question: why is the response time different if phenomenal consciousness decides what we see? To explain this with a metaphor, phenomenal consciousness is like perception through sunglasses: the world you see is already colored, and you should be able to describe it instantaneously. However, the fact is that you need different lengths of time to process different objects. For example, if phenomenal consciousness existed, you would have some preference (say, you like tall girls), so you should be able to make a judgment according to your preference the moment you see the lady next to you. However, the hesitation proves that you have no instant answer, which runs counter to Nagel's thinking that we always have the feeling of being 'me'. Moreover, one may argue that this comparison is problematic because feelings surrounding attraction to someone are much more complex than frozen yogurt, which we would all agree with. However, there is a difference between describing and thinking. When wearing 'consciousness glasses' to see the girl, what you see through the glasses is your answer. But the fact that you may need a longer time to come to your opinion demonstrates that what you see is not your opinion. Instead, you perceive the object and then use some brain function to process it. Hence, Nagel's objection remains problematic.
We can potentially fix the consciousness problem, for materialists, by pointing out that Nagel mistakenly takes scientifically explainable memory for consciousness. Every sensory experience, everything you have perceived through your senses, has been stored in your memory. You pair the novel sensory experience with previous sensory experiences. Then, you give positive feedback to familiar things and negative feedback to unfamiliar things. Yet, one may argue that memory cannot explain everything. Thinking again about our frozen yogurt case, you and your friend have different comments because of your different memories. This is not because of a single piece of memory but accumulated memory; pH strips can help to visualize this.
You have been raised in a different environment and have eaten different food in the past. The first time you had something sweet, you probably just found the taste novel. However, from then on, every sweet thing you taste will be stored in your brain in the form of a memory, and you will develop a sequence of degrees of sweetness, just like the color sequence of pH tests. Every time you have a new sensory experience, you will pair it with your sequence and reach a conclusion. Therefore, your friend thinks this frozen yogurt is sour because this taste experience sits at a very low degree in their sequence of sweetness. In other words, everything you have experienced becomes part of your memory and constitutes your individuality. Individuality, or subjective phenomenal consciousness, is the sum of all of one's previous experiences rather than a metaphysical soul. Although this is a philosophical topic, some scientific results seem to support this assumption. As Arnold, a professor of biological sciences and biomedical engineering at the University of Northern California, states: "We found that the process of forming new memories changes how brain cells are connected to one another. While some areas of the brain create more connections, others lose them" [10]. As the quote illustrates, our new experiences change our brain cells as they form new memories. Then, these new memories actively change the arrangement of the brain cells. The close connection between brain rearrangement activity and the formation of new memory demonstrates the importance of memory in what has usually been called 'consciousness'.
One may argue that the limitation of memory makes it hard to accept that a new sensory experience is paired with every memory we have had before. The answer proposed here is that humans have different layers of memory and cannot fully control them. Memory is like a library with many bookshelves, and what you experience is the books sitting on the shelves. You take notes after you read a book, and then you place the book back on the bookshelf. The further you progress in life, the more "books" you have stored. Hence it becomes hard to find a book instantly because there are too many shelves to search. Under certain conditions, such as hypnosis or re-experiencing, you can still locate it. However, going through the notes, the surface level of memory, is much easier. Freud, in his "Five Lectures about Psychoanalysis", also states that when conducting hypnosis with patients who have hysteria symptoms, the patients are able to give the doctor some clues about these 'stuck' thoughts. Then, the patients may be able to recall the memory they think they have forgotten when the doctor discusses those clues with them after they wake up [11]. In this sense, memory is similar to a huge supermarket, Costco for example. While this may seem strange at first, Costco has very organized racks where you can find products easily without help. The stuck thoughts can be thought of as the things in the warehouse. Sometimes we can reason confidently that there are more things in the back, but we cannot list those items easily without 'a special moment'. For instance, we have all had the experience of fleeting contact with someone, so short that we have already forgotten that person moments later. However, you can still just about recognize that person the next time you pass them in the street; much like the warehouse, we know things exist in the back without entering it, but we can never know exactly what is there. A customer must ask staff for help to access the items hidden away. For their part, the doctor is like the salesperson in Costco who possesses the key to the warehouse. Hypnosis seems to be the key to one's "memory warehouse". There is, however, no assurance that you can find what you want, even if you get access to the warehouse. Freud's observation is very interesting because, by thinking in reverse, we may be able to reflect on brain cognition. Hence, we do not possess an 'inventory list' of our memory stock, which makes reflection on our consciousness more difficult.
Moreover, the statement that we give negative feedback to unfamiliar things and positive feedback to familiar things may lead to another doubt: sometimes we have a positive feeling towards an unfamiliar thing and vice versa. The answer is that your judgment of familiarity, based on your surface memory, is not accurate, because we only know what may be recorded by memory; we do not know how our brain records things. We have neglected most of our sensory memory, making it impossible to judge the familiarity of the things within our deep memory. We can again turn to the internet recommendation function for an explanation. Sometimes when browsing the internet, ads pop up about the shoes you have desired for a while. You think this is a coincidence because you never used this device to search for this shoe style. However, a while later, you accidentally find that your friend mentioned it once and it was recorded through targeted marketing. This is similar to the memory recording process. In other words, by performing actions, you always leave an imprint on your memory surface. However, the environment, i.e., other people passing by, or the small talk from the next table, has all been recorded in your memory in some way that we do not fully realize. For instance, you may feel joyful when you taste unfamiliar Chinese Kung Pao chicken, but most Westerners will not find white fungus soup delicious. Both are unfamiliar things, yet the major elements of Kung Pao chicken, sweet and sour, can be paired as familiar within your sensory sequence, while those of white fungus cannot.
The same goes for assumptions invoked to explain the feeling of pain or other emotions: they are all due to different degrees of familiarity, but our language has made them more romantic. For example, self-abasement and modesty are actually the same thing, the unfamiliarity of being accomplished; we simply use a different word for it depending on the context. More examples can be provided: when someone compliments himself, two comments, pride or confidence, can be made. This is because we evaluate people's self-compliments based on our memory. Hence, I believe every mental state can be paired to the memory system, which science can explain.
Conclusion
In conclusion, materialism may contain many unresolved questions because of the limits of our technology and neuroscience. It is, however, too early to invite the metaphysical soul into the mind and body field when we still lack an understanding of how our brain works. In his essay, Nagel takes subjective consciousness to be an obstacle for materialism in justifying that all mental states can be explained by science. Yet, I suggest that the consciousness mentioned by Nagel is just a misunderstanding of memory. The functions of memory, however, will need further technological effort to examine.
"Philosophy"
] |
Dataset Denoising Based on Manifold Assumption
Learning the knowledge hidden in the manifold-geometric distribution of a dataset is essential for many machine learning algorithms. However, the geometric distribution is usually corrupted by noise, especially in high-dimensional datasets. In this paper, we propose a denoising method to capture the "true" geometric structure of a high-dimensional nonrigid point cloud dataset by a variational approach. Firstly, we improve the Tikhonov model by adding a local structure term so that variational diffusion takes place on the tangent space of the manifold. Then, we define the discrete Laplacian operator by graph theory and obtain an optimal solution via the Euler–Lagrange equation. Experiments show that our method can remove noise effectively on both a synthetic scatter point cloud dataset and real image datasets. Furthermore, as a preprocessing step, our method improves the robustness of manifold learning and increases the accuracy rate in classification problems.
Introduction
Since objects vary gradually in the real world, the manifold assumption indicates that the data points depicting the states of an object should be distributed on a smooth low-dimensional manifold embedded in a high-dimensional observation space [1]. The dimensions of the manifold correspond to the key factors that control the variation of the object state. For example, in Figure 1, the images of the rotating duck toy are distributed on a one-dimensional manifold (a curve) embedded in high-dimensional pixel space. Each image depicts a particular state of the duck. Although the pixel values change dramatically across these images, humans can easily discover that they are controlled by one key factor: the rotation of the duck.
Learning the knowledge hidden in the manifold-geometric distribution of a high-dimensional dataset is essential in many machine learning algorithms. For example, manifold learning algorithms aim to discover the nonlinear geometric structure of a dataset by preserving different local geometric properties [3][4][5][6][7][8]. The embedding results can be further used in data visualization, motion analysis, and classification [9,10]. Moreover, much research takes the manifold assumption as a constraint condition in its objective function [11,12]. It is worth noting that the manifold assumption has recently been applied to explain why deep learning works well [13][14][15]. This research indicates that deep learning can capture the manifold structure of one kind of knowledge through powerful nonlinear mapping.
However, noise is inevitable in data acquisition. For example, in Figure 1, the noiseless images of the rotating duck toy (red points) should lie on a curve embedded in the pixel space. However, due to a long exposure time and camera shake, the duck becomes "brighter" and "smaller" in the image. The corresponding noisy data point, marked by "N" and colored green in Figure 1, does not lie on the curve because the pixel values change dramatically in the noisy image.
Noise makes machine learning models fragile and hard to train. For example, outlier points are difficult to handle in classification and clustering tasks. Machine learning models need to become more complex to get proper results [13]. In manifold learning algorithms, noise points make the recovered embeddings fail to capture the true manifold-geometric distribution of the dataset. The reason is that the "short circuit" phenomenon arises easily in a noisy dataset, which destroys the local linear structure of the manifold [16].
In this paper, we propose a novel denoising method based on the manifold assumption. Our aim is to obtain the data points that lie on the noiseless manifold from the noisy data points. Compared with existing denoising methods, our method has two contributions worth highlighting: (1) Our method makes use of the manifold-geometric distribution information of the dataset. Therefore, this method works for a dataset rather than a single data point.
(2) Our method improves the Tikhonov model to perform variational diffusion on the tangent space of the manifold for a high-dimensional nonrigid point cloud dataset.
Our method can capture the "true" geometric structure of a noisy dataset. After denoising, the key factors that control the geometric distribution of the dataset are maintained, and the characteristics of individual points are removed as noise. As a preprocessing step, our method improves the robustness of manifold learning and increases the accuracy rate in classification problems. The rest of the paper is organized as follows: a brief review of the research on the manifold assumption is outlined in Section 1. Section 2 describes the motivation and details of the proposed method. In Section 3, experiments are conducted on both synthetic and real data to evaluate our method. Section 4 concludes with remarks and a discussion of future work.
Related Work
Existing denoising methods usually address the noise within a single data point, such as "Gaussian noise" or "pepper noise" [17,18] in an image. However, these methods cannot deal with noise that distorts the geometric distribution of the dataset, such as the noisy duck toy image (green point) caused by a longer exposure time and camera shake in Figure 1.
Only a few studies exist that deal with this problem. Gong et al. [19] proposed a local linear denoising method. This method removes noise by projecting noisy data points onto the tangent space of the manifold, which is first estimated by principal component analysis. Then, local denoised patches are aligned to obtain the globally denoised dataset. However, the principal components may be distorted because they are calculated from the neighborhoods of noisy data points, which can lead to a wrong denoising result. Hao et al. [16] also utilized principal component analysis and a projection method to find the noiseless data points; therefore, it has the same problem. Moreover, many machine learning methods propose noise-resistant models for outliers but do not discuss denoising as an independent problem [7,20]. For example, Zhang et al. [7] proposed an adaptive neighborhood selection method with a shrink-and-expand strategy to resist noise in the neighborhoods of the manifold.
In this paper, we propose a denoising method for the dataset. This method improves the Tikhonov model by adding a local structure term. The optimal solution is obtained by minimizing the objective function through a variational diffusion approach.
Proposed Approach
Let F = {f(1), f(2), ..., f(m)} be the noisy dataset, where f(x) ∈ R^D is the x-th data point in F and D is the dimension of f(x). Let U = {u(1), u(2), ..., u(m)} be the noiseless dataset we want to obtain, where u(x) ∈ R^D is the x-th data point in U. Then f(x) = u(x) + ξ(x), where ξ(x) ∈ R^D is the noise on f(x). The goal is to recover U from F. We illustrate our method in three steps: first, we introduce the inspiration and motivation; then, we construct the objective function by improving the Tikhonov model; finally, we optimize the objective function and obtain the solution by means of discrete operators.
Inspiration and Motivation.
The manifold assumption claims that the noiseless data points u(x) that depict the object state (the blue points in Figure 2) should lie on a smooth manifold U (blue surface in Figure 2) embedded in the observation space. However, the noisy points f(x) (red points) are distributed on the noise manifold F. The denoising problem is how to obtain u(x) on U from f(x) on F.
Objective Function.
The objective function is formulated in this part. Firstly, we briefly illustrate the Tikhonov model in image denoising, which is similar to our problem. Then, the challenge of our problem is shown. Finally, we improve the Tikhonov model and construct the objective function for our problem. Here f(x, y) and u(x, y) are the pixel values at row x and column y in the noisy and noiseless image, respectively, and ξ(x, y) is the noise. In Figure 2, if we regard the x, y, and z coordinates of f(x) as row number, column number, and pixel value, then the red manifold F depicts the pattern of the noisy image. Therefore, the image denoising problem is to find a noiseless image U from F. The Tikhonov model is one of the most classical variational models for this problem [21]:

\min_u \int_\Omega (u - f)^2 \, dx + \alpha \int_\Omega |\nabla u|^2 \, dx, \quad (1)

where Ω is the image domain, dx is the area element (pixel) in Ω, and ∇u is the gradient of u(x). The first term, \int_\Omega (u - f)^2 \, dx, is the "data term" that measures the Euclidean distance between F and U. The second term, \int_\Omega |\nabla u|^2 \, dx, is the "smooth term" that measures the noise strength of U. Since these two terms have opposite effects, the parameter α balances them. If α is small, U is close to F but the noise strength is large. On the other hand, if α is large, the noise becomes small but the image pattern of U is "unlike" F.
The Challenge of Our Problem.
In the image denoising problem, the gradient operator is defined as the finite difference between adjacent pixels [21],

\nabla u = \big( u(x+1, y) - u(x, y), \; u(x, y+1) - u(x, y) \big). \quad (2)

When minimizing the "smooth term" \int_\Omega |\nabla u|^2 \, dx in (1), the pixel values in the image become the same, whereas the image domain does not change since x and y are fixed.
However, in our problem, the dataset consists of nonrigid, high-dimensional cloud points. Let u(x) = [u(x)_1, u(x)_2, ..., u(x)_D] ∈ R^D be a data point, where D is the dimension of u(x). Suppose N_u(x) = {u(y_1), u(y_2), ..., u(y_k)} is the neighborhood of u(x), determined by the KNN method. Naturally, the gradient operator is defined as

\nabla u = \big[ u(x) - u(y_1), \; u(x) - u(y_2), \; \ldots, \; u(x) - u(y_k) \big]^T. \quad (4)
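To make these definitions concrete, the following is a minimal Python/NumPy sketch (an illustration under the definitions above, not the authors' code; the dataset X, the point index x, and the parameter k are placeholders) of computing the KNN neighborhood N_u(x) and the gradient vector of Equation (4):

import numpy as np

def knn_neighborhood(X, x, k):
    # Indices of the k nearest Euclidean neighbors of point X[x].
    d = np.linalg.norm(X - X[x], axis=1)
    d[x] = np.inf  # exclude the point itself
    return np.argsort(d)[:k]

def graph_gradient(X, x, k):
    # k x D matrix whose i-th row is u(x) - u(y_i), as in Equation (4).
    idx = knn_neighborhood(X, x, k)
    return X[x] - X[idx]

# Toy usage on 100 random 3-D points:
X = np.random.rand(100, 3)
grad = graph_gradient(X, 0, k=12)  # shape (12, 3)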
Therefore, the "smooth term" in (1) becomes

\sum_x \sum_{i=1}^{k} \| u(x) - u(y_i) \|^2. \quad (5)

When minimizing the objective function, this "smooth term" makes u(x) and u(y_i) become the same point. Therefore, the "cluster" phenomenon arises in the dataset: some points are brought close together and the other points are pushed away. Consequently, the geometric structure of the manifold U (blue surface in Figure 2) will shrink to a few point clusters rather than becoming smooth.
Therefore, the Tikhonov model could not be applied directly to solve our problem.
Our Objective Function.
To deal with this problem, we maintain the geometric distribution of U by keeping the tangent linear structure when minimizing the objective function. Since the neighborhood of the manifold could be regarded as tangent space (the blue plane in Figure 3), we make the neighborhood structure of U the same as that of F. The weight of the local linear representation is utilized to depict the geometric structure of the neighborhood.
The weight W_f of a data point f(x) is defined as the local linear representation weight

W_f = \arg\min_{W} \Big\| f(x) - \sum_{i=1}^{k} W_{fi} \, f(y_i) \Big\|^2, \quad \text{s.t.} \; \sum_{i=1}^{k} W_{fi} = 1, \quad (6)

where f(y_i) ∈ N_f(x) and W_fi is the i-th component of W_f, between f(x) and f(y_i). Similarly, the linear representation weight of u(x) is defined as W_u. The local linear structure can be maintained if we set W_u the same as W_f. Then, f(x) can only move along the normal space of the manifold when minimizing the "smooth term" in the objective function, because the tangent geometric structure is fixed by W_u. Therefore, we add a "local structure term" to the Tikhonov model:

\int_\Omega \Big( u - \int_{N_{u(x)}} W_{fi} \, u(y) \, dy \Big)^2 dx, \quad (7)

where \int_{N_{u(x)}} W_{fi} \, u(y) \, dy is the linear reconstruction of u(x). Thus, our objective function is

E(u) = \int_\Omega (u - f)^2 \, dx + \alpha \int_\Omega |\nabla u|^2 \, dx + \beta \int_\Omega \Big( u - \int_{N_{u(x)}} W_{fi} \, u(y) \, dy \Big)^2 dx, \quad (8)

where α and β are balance parameters.
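As an illustration of the weight computation, here is a small Python/NumPy sketch of the standard closed-form solution for LLE-style local linear representation weights (a sketch assuming the usual sum-to-one constraint and a small regularizer reg for ill-conditioned Gram matrices; these details are assumptions, since the paper's exact formula is not reproduced above):

import numpy as np

def llr_weights(fx, neighbors, reg=1e-3):
    # Local linear representation weights W_f: minimize
    # ||fx - sum_i W_i * neighbors[i]||^2 subject to sum_i W_i = 1.
    Z = neighbors - fx               # shift neighbors so fx is the origin, (k, D)
    G = Z @ Z.T                      # local Gram matrix, (k, k)
    G = G + reg * np.eye(len(G)) * max(np.trace(G), 1.0)  # regularize
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()               # enforce the sum-to-one constraint

# Usage: weights of a point given its k neighbors (rows of a (k, D) array).
fx = np.array([0.0, 0.0, 0.0])
nb = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]])
print(llr_weights(fx, nb))           # three equal weights of 1/3 by symmetry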
Optimal Solution.
In this part, we obtain the optimal u by minimizing the objective function (8). The solution in continuous form is calculated first. Then, the discrete operator is defined and plugged in to obtain the discrete solution.
Figure 2: Illustration of the idea of our method: obtain the noiseless blue points that lie on the smooth manifold (blue surface) from the noisy red points distributed on an irregular surface (the noise manifold).
Solution in Continuous Form.
To obtain the optimal u, we calculate the derivative of (8) with respect to u by a variational approach and set it to zero (Equation (9)). From this, the Euler–Lagrange equation of u follows (Equations (10) and (11)), together with its boundary condition (Equation (12)).
Solution in Discrete Form.
To obtain the discrete solution, we define the discrete Laplacian operator in (11) by spectral graph theory [22]. Firstly, the gradient of u(x) on the weighted graph is defined as

\nabla_{wG} \, u(x, y_i) = \big( u(y_i) - u(x) \big) \sqrt{w(x, y_i)}. \quad (13)

This gradient is a k-dimensional vector because there are k data points in N_u(x). The subscript "wG" is an abbreviation of "weighted graph". W_d(x, y) is a weight vector; the component W_d(x, y_i) should be important if u(x) and u(y_i) are near, and unimportant if they are far away. Therefore, we define W_d(x, y) as a decreasing function of d(x, y), where d(x, y) is the vector of Euclidean distances between u(x) and u(y) (Equations (14) and (15)). Consequently, the gradient of a vector v(x, y) is obtained analogously (the derivation procedure is listed under "Notice" at the end of this section). Let v(x, y) = \nabla_{wG} \, u(x, y) = (u(y) - u(x)) \sqrt{w(x, y)}; then the discrete Laplace operator of u(x) can be defined accordingly (Equation (17)). We plug the discrete Laplace operator into (11). The solution of our objective energy function (8) is an iterative update (Equation (18)), where the superscripts k and k+1 index the iteration step. The initial value of u is set to f. The optimal u is obtained by iteration, which stops when E(u) < ε, where E(u) is the objective function value and ε is a small error we set. The boundary condition (12) can be ignored because the dataset consists of scattered, nonrigid cloud points.
Notice: the gradient of a vector v is derived in the same way as (13), by weighting the difference terms with \sqrt{w(x, y)}.
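Since the update formula (18) is not reproduced above, the following Python/NumPy sketch illustrates one plausible realization of the iteration: a Jacobi-style gradient step on a discrete version of (8), treating neighbor values as fixed within each sweep and reusing the knn_neighborhood and llr_weights helpers sketched earlier. The step size tau, the stopping rule, and the uniform graph weights are assumptions, not the paper's exact scheme:

import numpy as np

def denoise(F, k=12, alpha=1.0, beta=1.0, tau=0.1, n_iter=100):
    # F: (m, D) noisy dataset; returns the denoised dataset U.
    m = len(F)
    nbrs = [knn_neighborhood(F, x, k) for x in range(m)]
    W = [llr_weights(F[x], F[nbrs[x]]) for x in range(m)]  # fixed on noisy data
    U = F.astype(float).copy()
    for _ in range(n_iter):
        U_new = U.copy()
        for x in range(m):
            idx = nbrs[x]
            smooth = (U[x] - U[idx]).sum(axis=0)   # gradient of the smooth term
            recon = U[x] - W[x] @ U[idx]           # local structure residual
            grad = (U[x] - F[x]) + alpha * smooth + beta * recon
            U_new[x] = U[x] - tau * grad
        U = U_new
    return U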
Experiments
In this section, we evaluate our algorithm on both a synthetic scatter point cloud dataset and real image datasets. Then, the method is utilized as a preprocessing step for manifold learning and a classification task. The major parameters of our algorithm are (1) the neighborhood size k; (2) the smooth term weight α; and (3) the local structure term weight β.
Experiments on Synthetic 3D Scatter Cloud Data.
In this part, we test our algorithm on the classical "swiss roll" dataset.
The data points are randomly sampled from a 2D manifold embedded in 3D space like a swiss roll cake. Figures 4(a) and 4(b) in the first row are the noiseless and noisy datasets at the [−8, 10] and [0, 0] viewpoints, respectively. It is obvious that the noisy data points are distributed around the "swiss roll" manifold but do not lie on it exactly. Our goal is to recover the noiseless dataset in Figure 4(a) from the noisy dataset in Figure 4(b). In this experiment, we set the number of data points n = 1300, the KNN parameter k = 12, and the noise parameter NI = 1. The MATLAB code of the swiss roll dataset is listed in Table 1. The second, third, and fourth rows in Figure 4 are denoising results by our method with α and β equal to (1, 1), (3, 1), and (0.3, 1), respectively. For ease of viewing, we show the denoised datasets at the [−8, 10] and [0, 0] viewpoints in the left and right columns. In the right column, it is easy to see that the denoised data points are close to the tangent space of the manifold compared with (b), which shows that our method is effective. Among them, (f) seems to be the best result because the denoised points are the nearest to the manifold compared with (d) and (h). However, the "cluster" phenomenon arises in the denoised dataset; some points are close together and the other points are pushed away, which is easy to see in (e). The reason is that the large smooth parameter (α = 3) distorts the geometric distribution when minimizing the objective function. Conversely, the "cluster" phenomenon in (g) is not serious when we set a small parameter α = 0.3, but the noise remains large.
To conduct a quantitative comparison between the noisy and denoised datasets, we assess the quality of the denoised datasets by the mean square error (MSE) and the tangent distance error (TE). MSE is a widely used index which measures the average squared Euclidean distance between two datasets:

\mathrm{MSE} = \frac{1}{N} \sum_{i=1}^{N} \| u_i - u_i^* \|^2,

where N is the number of points in the dataset, and u_i and u_i^* are a noisy data point and the corresponding noiseless data point.
The tangent distance error (TE) measures the distance of u_i to the tangent space of the manifold. A small TE indicates that u_i lies on the manifold and the noise is weak. On the contrary, the noise strength is large if TE is big. For convenience of calculation, we approximate TE as the Euclidean distance between u_i and its nearest data point in the noiseless dataset:

\mathrm{TE} = \frac{1}{N} \sum_{i=1}^{N} \min_{u^* \in U^*} \| u_i - u^* \|,

where N is the number of data points, u_i and u_i^* represent the denoised data point and the noiseless data point, respectively, and U^* is the noiseless dataset.
Table 1: MATLAB code of the swiss roll dataset. Input: the number of data points n and the noise parameter NI; output: the noiseless and noisy swiss roll datasets.
t = (3 * pi / 2) * (1 + 2 * rand(n, 1));
height = 30 * rand(n, 1);
noiseless_data = [t .* cos(t) height t .* sin(t)];
noise_data = [t .* cos(t) height t .* sin(t)] + NI * randn(n, 3);
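For reference, the two error indexes can be computed with a few lines of Python/NumPy (a direct transcription of the definitions above; the O(N^2) pairwise distance used in the TE approximation is acceptable at this dataset size):

import numpy as np

def mse(U, U_star):
    # Average squared Euclidean distance between paired points.
    return np.mean(np.sum((U - U_star) ** 2, axis=1))

def tangent_error(U, U_star):
    # TE approximated by the distance of each denoised point to its
    # nearest point in the noiseless dataset U_star.
    d = np.linalg.norm(U[:, None, :] - U_star[None, :, :], axis=2)
    return np.mean(d.min(axis=1))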
To evaluate our algorithm, we test seven sets of α and β ranging from 0 to 10. MSE and TE are listed in Tables 2 and 3. When α and β equal 0, the "data term" is the only term remaining in the objective function (8).
Therefore, the denoised dataset is the same as the noisy dataset, and the value at (α = 0, β = 0) gives the errors of the noisy dataset. When α is small and β is large, the "data term" and "local structure term" maintain the geometric structure of the noisy dataset; therefore, the errors at the upper right of the table are close to the errors of the noisy dataset. When α is large and β is small, the "smooth term" plays the major role. It can lead to a "cluster" phenomenon which distorts the geometric structure of the dataset and makes the errors at the bottom left of the table large. It can be seen that the errors near the diagonals of the tables are much smaller than the others.
Experiments on the Image Dataset.
In this part, we test our method on two real image datasets: the MNIST handwritten number dataset [23] and the "LLE face" dataset. An image is regarded as a point in pixel space. For example, an image in the MNIST dataset can be regarded as a point in 784-dimensional space because it has 784 pixels. Therefore, the only difference between this part and the experiment in Section 3.1 is that the dimensionality of an image-point is much higher than that of a synthetic scatter point in 3D space.
We analyze the denoised images from both subjective and objective aspects. Firstly, our method is applied to the raw image datasets. Ideally, the key factors that control the geometric distribution of the dataset are maintained, and the characteristics of individual images are removed as noise. Since there is no ground truth for the raw image dataset, we can only evaluate the results subjectively, by eye. Secondly, we add several types of noise to an image and use MSE to objectively measure the images denoised by our method and by classical image denoising methods.
Experiments on the Raw Image Dataset.
We select the "number 3" and "number 4" datasets in MNIST, which contain 1010 and 982 images, respectively. The size of each image is 28 * 28 pixels. The "LLE face" dataset contains 1965 face images with different expressions and shooting angles; the size of each image is 28 * 20 pixels. Figure 5 shows 110 images in the "handwritten number 3" dataset. The left side shows the original images and the right side the corresponding images denoised by our method. In this experiment, k = 15, α = 0.8, and β = 1. Four typical images are marked with a box and listed in Figure 5. It can be seen that blurred strokes become clear and the posture of the number in each image is maintained. Figure 6 shows 110 images in the "handwritten number 4" dataset. The left and right sides are the original images and the corresponding denoised images, respectively. In this experiment, k = 15, α = 8, and β = 1. It can be seen that the denoised images maintain the main factors, such as the angularity of the number "4", while the individual characteristics are removed after denoising; for example, the differences in stroke width become small. Four typical images are marked with a box and listed in Figure 6. It is obvious that the margin of the "head" of the number "4" becomes large in the first two images after denoising. In the third image, the stroke becomes broad. In the fourth image, the "bend" at the top of the stroke is removed. Figure 7 shows the denoising result for the LLE face dataset. This dataset contains 1965 face images, and the size of each image is 28 * 20 pixels. In this experiment, k = 15, α = 3, and β = 0.8. Reference [4] shows that this dataset is distributed on a manifold spanned by two key factors: head pose and expression, where the expression is reflected by the lip shape in the images.
It can be seen that these two factors are maintained after denoising and the characteristics of individual images are removed as noise. Four typical images are marked with a box and listed in Figure 7. In the first two images, the head twists slightly to the left and right in the original dataset, whereas the head pose is fixed after denoising. In the third image, the original head seems smaller than in the other images, which may be caused by camera shake; the corresponding denoised image enlarges the face, and the cheek and chin become "fat". In the fourth image, the eyes are "open" after denoising.
Experiments on Noisy Images.
In this part, we add several different types of noise to an LLE face image. Then, our method and three classical image denoising methods are applied to these noisy images. Finally, MSE is utilized to evaluate the denoised images. Figure 8 shows the images denoised by the four methods for five types of noise. The first column is the raw LLE face image. Brightness noise, Gaussian noise, salt-and-pepper noise, rotation noise, and scaling noise are added to the raw image, shown in the second column from top to bottom. The MATLAB code of the noise models is listed in Table 4. Three classical denoising methods, mean filtering, median filtering, and the Tikhonov method, are utilized to deal with these noisy images. The corresponding denoised images are listed in the third, fourth, and fifth columns of Figure 8. The images in the last column are the denoising results of our method. The MSE is listed below each image. In this experiment, the size of the raw LLE face image is 28 * 20 pixels. In mean filtering, the size of the filter is 2 * 2 pixels; in median filtering, the size of the filter is 3 * 3 pixels. In the Tikhonov method, the smooth parameter is 0.3. The parameters of our method are set to k = 15, α = 3, and β = 3.
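Since Table 4 itself is not reproduced in the text, the following Python sketch gives illustrative stand-ins for the five noise models (the amplitudes, the 10-degree rotation angle, and the 0.9 scaling factor are hypothetical choices, not the paper's settings):

import numpy as np
from scipy import ndimage

def add_noise(img, kind, rng=None):
    # img: 2-D array with values in [0, 1]; kind selects the noise model.
    rng = rng or np.random.default_rng(0)
    if kind == "brightness":
        return np.clip(img + 0.3, 0.0, 1.0)
    if kind == "gaussian":
        return np.clip(img + 0.1 * rng.standard_normal(img.shape), 0.0, 1.0)
    if kind == "salt_pepper":
        out = img.copy()
        mask = rng.random(img.shape)
        out[mask < 0.05] = 0.0           # pepper
        out[mask > 0.95] = 1.0           # salt
        return out
    if kind == "rotation":
        return ndimage.rotate(img, angle=10, reshape=False, mode="nearest")
    if kind == "scaling":
        return ndimage.zoom(img, 0.9)    # note: the output size shrinks
    raise ValueError(kind)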
It can be seen that the three classical denoising methods have no effect on brightness noise, rotation noise, or scaling noise; these noises still exist in the denoised images. The MSE even becomes larger after denoising in contrast to the noisy image, whereas our method has a good effect. For example, the rotated face is fixed in the fourth row and sixth column, and the MSE becomes smaller. The reason is that classical image denoising methods make use of the pattern information in a single image; they cannot "see" the geometric distribution information of the whole image dataset, whereas our method removes noise by drawing noisy data points back to the noiseless manifold-geometric distribution of the image dataset.
Denoising Dataset for Manifold Learning.
In this part, we utilize our method as a preprocessing step and compare the recovered low-dimensional embeddings of the noisy and denoised datasets for several manifold learning algorithms. In this experiment, α, β, and k are 1, 0.8, and 13. Other panels in Figure 9 show the embeddings of the noisy and denoised dataset by LTSA, and Figures 9(g) and 9(h) are the embeddings of the noisy and denoised dataset by HLLE. It is obvious that the embeddings of the noisy dataset cannot reflect the geometric distribution of the manifold, since the neighborhoods easily produce the "short circuit" phenomenon. By taking the denoised dataset, all three manifold methods obtain proper embeddings. The results of Isomap show the "hole" phenomenon because the calculated geodesic distances are always larger than they really are. To conduct a quantitative comparison, we assess the quality of the embeddings by three indexes: embedding error, trustworthiness error, and continuity error [8]. The embedding error E measures the squared distance from the recovered low-dimensional embeddings to the ground-truth coordinates, defined as

E = \frac{1}{N} \sum_{n=1}^{N} \| y_n - y_n^* \|^2,

where N is the number of data points and y_n and y_n^* represent the embedding coordinates and the ground-truth coordinates, respectively. This index tends to measure the global structure distortion of the manifold. The trustworthiness error T and continuity error C measure the local geometric structure distortion. The trustworthiness error measures the proportion of points that are too close together in the low-dimensional embedding, and the continuity error measures the proportion of points that are pushed away. In these indexes, k is the number of points in the neighborhood, r(n, m) is the rank of the point u_m in the ordering according to the pairwise distance from point u_n in the high-dimensional space, and r̂(n, m) is the rank of the point y_m in the ordering according to the pairwise distance from point y_n in the low-dimensional embedding. The variables U_n^(k) and V_n^(k) denote the neighborhood points of u_n in the low-dimensional embedding and the high-dimensional space, respectively.
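In practice these local indexes need not be implemented by hand; scikit-learn ships a trustworthiness function, and the continuity index is commonly obtained by swapping the roles of the two spaces. A minimal Python sketch follows (the swiss-roll parameters mirror this experiment, but the calls are illustrative, not the authors' evaluation code; note that sklearn's trustworthiness returns the index itself, where higher is better, rather than an error):

import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap, trustworthiness

X, _ = make_swiss_roll(n_samples=1300, noise=1.0, random_state=0)
Y = Isomap(n_neighbors=13, n_components=2).fit_transform(X)

T = trustworthiness(X, Y, n_neighbors=13)  # trustworthiness index
C = trustworthiness(Y, X, n_neighbors=13)  # continuity via swapped spaces
print(T, C)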
We test our method with several dimension reduction methods. The noisy swiss roll dataset contains 1300 points. Here, we set α, β, and k to 1, 0.8, and 13. The best embedding results among several trials are selected in this experiment. The embedding error, trustworthiness error, and continuity error are listed in Tables 5-7, respectively. To show the effectiveness of our method, the errors of the noisy dataset, the denoised dataset, and the noiseless dataset are listed in three rows. It can be seen that the errors become small by taking the denoised dataset in Isomap, LLE, HLLE, LTSA, and AML. However, LE and LPP perform poorly on the denoised dataset.
Classification Experiment.
In this part, we utilize our method as a preprocessing step and compare the accuracy rates of the original dataset and the denoised dataset in a classification task. The MNIST handwritten number dataset is selected, which contains 60,000 images in ten classes, from number 0 to 9. Each class has about 6000 images, and the size of each image is 28 * 28 pixels. To obtain the denoised dataset, we apply our denoising method to each of these ten classes.
In this experiment, we specify different numbers of images in each class as training data and utilize the remaining images as test data, both for the original dataset and the denoised dataset. A simple one-hidden-layer neural network is adopted as the classifier. The input layer has 784 units corresponding to the pixels in an image. The output layer has 10 units corresponding to the ten categories from number zero to nine. We set 25 units in the hidden layer, including a bias unit. The parameters of the network are trained by the BP method.
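A network of this shape can be sketched in a few lines with scikit-learn (the random placeholder data below stands in for the flattened 28 * 28 images; the iteration count is an illustrative choice, and sklearn handles bias units implicitly rather than as an explicit 25th unit):

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_train = rng.random((100, 784))          # placeholder flattened images
y_train = rng.integers(0, 10, size=100)   # placeholder digit labels 0-9

# 784 -> 25 -> 10 architecture, trained by backpropagation.
clf = MLPClassifier(hidden_layer_sizes=(25,), max_iter=200)
clf.fit(X_train, y_train)
print(clf.score(X_train, y_train))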
For each classification task, we repeat the experiment 10 times and list the mean accuracy rate in Figure 10. The labels "original dataset" and "denoising dataset" refer to the raw MNIST dataset and the dataset denoised by our method. The x-coordinate is the number of training images in each class and the y-coordinate is the accuracy rate. The blue and red lines are the accuracy rates of the original dataset and the denoised dataset, respectively. It is obvious that the accuracy rate goes down as the number of training images in each class decreases. The performance of the denoised dataset is much better than that of the original dataset, especially when the number of training images is less than 50 per class. For the denoised dataset, the accuracy is above 96% even when there are only 10 training images in each class. The reason is that the individual characteristics are removed in the denoised dataset, as shown in Figures 5-7 in Section 3.2.1. The denoised datasets, which are distributed on a "clean" manifold spanned by the key factors of the dataset, make it easy for a machine learning algorithm to learn the geometric distribution knowledge of the dataset. This also illustrates that our method captures some kind of feature that is essential to the classifier.
Conclusion and Future Work
We propose a denoising method for the dataset rather than for a single data point. This method is inspired by the manifold assumption. A local structure term is added to the Tikhonov model to make the noisy points diffuse along the tangent space of the manifold. Our method can make prominent the major factors hidden in the dataset and remove the characteristics of individual data points. Experiments show that our method can eliminate noise effectively on both a synthetic scatter point cloud dataset and real image datasets. Moreover, as a preprocessing step, our method improves the robustness of manifold learning and increases the accuracy rate in the classification problem. However, the parameters of this model are sensitive because the optimal solution is calculated by iteration. The geometric distribution of the dataset is distorted when the smooth term parameter is large; on the contrary, when it is small, the noise intensity remains large after denoising. Our future work will focus on this problem.
Data Availability
Some or all data, models, or codes generated or used during the study are available from the first author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
"Computer Science",
"Mathematics"
] |
Spin and Charge Transport in the X-ray Irradiated Quasi-2D Layered Compound κ-(BEDT-TTF)2Cu[N(CN)2]Cl
The interplane spin cross-relaxation time Tx measured by high-frequency ESR in X-ray irradiated κ-(BEDT-TTF)2Cu[N(CN)2]Cl is compared to the interplane resistivity ρ⊥ and the in-plane resistivity ρ‖ between 50 K and 250 K. The irradiation transforms the semiconducting behavior of the non-irradiated crystal into metallic behavior. Irradiation decreases Tx, ρ⊥, and ρ‖, but the ratios Tx/ρ⊥ and ρ⊥/ρ‖ remain unchanged between 50 and 250 K. Models describing the unusual defect concentration dependence in κ-(BEDT-TTF)2Cu[N(CN)2]Cl are discussed.
Introduction
κ-(BEDT-TTF)2Cu[N(CN)2]Cl (κ-ET2-Cl hereafter) is a layered organic conductor on the borderline of a Mott metal-insulator transition [1,2]. The electronic band is effectively half-filled, with one hole per two ET molecules. The electronic structure is two dimensional; layers of ET molecules are separated by polymeric anion layers (Figure 1). Judging from the electrical and magnetic properties of non-irradiated κ-ET2-Cl, the following temperature ranges are distinguished: (i) below TN = 23 K: a weakly ferromagnetic insulator [3]; (ii) between 23 and about 50 K: a smooth insulator-to-semiconductor transition with an anomalous, magnetic-field-dependent magnetism; (iii) above 50 K: a semiconductor with a very small gap and a temperature-independent paramagnetic susceptibility [4]. In all three temperature ranges the coupling between layers is extremely weak: in ranges (i) and (ii) the magnetic oscillations of adjacent layers are independent in general-direction magnetic fields; in range (iii) electron hopping between layers is extremely rare. At ambient temperatures electrons diffuse several tenths of a micrometer confined to a single molecular layer [5]. X-ray irradiation at relatively small doses drastically changes the physical properties in all three temperature ranges. Sasaki et al. [6] proposed that a deviation from half filling is the underlying reason for the sensitivity to irradiation. The results and discussion of the irradiation-induced changes of magnetism in ranges (i) and (ii) will be presented in a forthcoming paper. This paper is restricted to the discussion of the conducting high-temperature range, where the irradiation-induced change of the resistivity from a semiconducting to a metallic temperature dependence is the most remarkable phenomenon. Section 2 is a brief description of the experimental details and of the high-frequency ESR method used to determine the interlayer hopping frequency ν⊥. The relation between interlayer hopping and resistivity (Section 3) is most important for the interpretation. Details of the method and the high-frequency ESR results in non-irradiated κ-ET2-Cl and κ-ET2-Br were published in References [7,8]. Section 4 presents the experimental results. Irradiation has little effect on either the interlayer coupling or the intralayer spin relaxation at 250 K, and we argue in Section 4.1 that the concentration of irradiation-induced defects is low. In Section 5 we discuss possible mechanisms for the qualitative changes in the electronic transport.
Experimental
This study follows previous optical, resistivity, static magnetization, and low-frequency (9 GHz) ESR measurements on irradiated κ-ET2-Cl [6,9] and κ-ET2-Br [10] crystals at Tohoku University (Sendai, Japan). The method of irradiation has been described earlier [6]. Irradiation intensities were about 1/2 MGy per hour. ESR was measured on the same crystal with increasing doses. The total doses of 90 h, 180 h, 360 h, and 720 h resulted from incremental irradiation after each ESR measurement. The ESR data on the non-irradiated crystal are taken from measurements on a different crystal. We found that the ESR of non-irradiated crystals is well reproducible: although the non-irradiated crystal in this study is different from the two crystals in the previous ESR study [7], the results agree well. ESR spectra between 3 K and 250 K were recorded for all irradiation doses in magnetic fields along the crystallographic b and c axes and along ϕab = 45° from the a axis in the (a, b) plane. In this paper we present only the ϕab = 45° data, from which the interlayer hopping rate is determined. The ESR spectra were taken at 111.2 GHz and 222.4 GHz in the BUTE ESR Laboratory [11]. The in-plane resistivity ρ‖ and interplane resistivity ρ⊥ were measured on different crystals and with different irradiation doses but under the same conditions as used for ESR. The difficulty of making contacts renders the absolute values of ρ⊥ and, in particular, of ρ‖ uncertain.
Method of Measuring ν ⊥
The measurement of ν⊥ by ESR is based on the assumption that the spin and charge hopping rates are equal. We use the convention 1/Tx = 2ν⊥, where 1/Tx is the spin cross-relaxation rate due to hopping from one layer to the two adjacent layers. In κ-ET2-Cl the intrinsic spin relaxation times T2A and T2B of conduction electrons in layers A and B are long (i.e., in the ns range); consequently, the ESR spectrum depends sensitively on interlayer hopping.
ν⊥ can be measured by electron spin resonance (ESR) in materials like κ-ET2-Cl, where chemically equivalent but structurally different layers (denoted by A and B in Figure 1) alternate [8]. If the A and B layers do not interact, the ESR spectrum is a superposition of two lines at the Larmor frequencies νA and νB. Since the g-factor tensors of the A and B layers are differently oriented, the A and B lines are resolved in sufficiently high magnetic fields. In κ-ET2-Cl the splitting is largest in fields oriented along ϕab = 45° in the (a, b) plane, while it is zero in the (a, c) plane and in the principal crystallographic directions. For a finite interaction between adjacent layers, the ESR spectrum depends on the strength of the interlayer interaction, i.e., on the magnitude of ν⊥: for weak interaction the two lines are resolved, and the lineshapes differ only slightly from the non-interacting case.
The three cases are demonstrated in Figure 2 for κ-ET2-Cl. Here ν⊥ was increased by applying pressure [7].
To extract ν⊥ and T2⁻¹, the lineshapes are fitted to spectra calculated from two coupled Bloch equations. The electronic exchange between layers, represented by an effective magnetic field, is also included in the fit. In this paper all data were obtained by fitting ESR spectra recorded at two frequencies, 222.4 GHz and 111.2 GHz. gA and gB were also free parameters; they change little with temperature.
Relation between Interlayer Hopping ν ⊥ and Resistivity ρ ⊥
ν⊥ and ρ⊥ are closely related [12]. The resistivity is proportional to the hopping time and inversely proportional to the density of available states, D. The electrical current j is given by Equation (1), where ∆µ = eEd is the potential difference between adjacent layers due to the electric field E. The relation is particularly simple if the electrons of the molecular layers form a Fermi liquid. Then D = D(EF) is the density of states (DOS) per ET dimer for both spin directions of the metallic layers at the Fermi energy EF. The perpendicular conductivity calculated from the hopping rate is given by Equation (2), where 1/F is the two-dimensional charge carrier density. Assuming one hole per dimer in κ-ET2-Cl, F = (ac)/2, where a and c are the in-plane lattice constants and b = 2d is the out-of-plane lattice constant. The significance of Equation (2) lies in the possibility of determining the DOS from measurements of the perpendicular resistivity and the spin cross-relaxation time without knowing details of the barrier between layers. In a simple semiconductor with a gap U and phonon-assisted hopping with attempt frequency ν0, the hopping rate follows an activated form (Equation (3)). Interlayer hopping measured by ESR in non-irradiated κ-ET2-Cl and κ-ET2-Br crystals has been discussed in detail in Reference [7]. At 250 K the interlayer hopping time is Tx = 2.6 ± 0.5 ns for the Cl compound, and the same for the Br compound within experimental accuracy. The conduction is quasi-two-dimensional: at 250 K electrons are confined to single layers for times several orders of magnitude longer than the momentum relaxation time. The density of states estimated from Equation (2) is about a factor of 5 larger than that calculated from the band structure. Between 250 K and 50 K, Tx increases rapidly with decreasing temperature in the Cl compound, while it is roughly constant in the Br compound.
Interlayer Hopping Rate and Resistivity above the Metal Insulator Transition
The experimental data of the cross-relaxation time Tx = 1/(2ν⊥) and the perpendicular resistivity ρ⊥ above the metal-insulator transition at various irradiation doses are summarized in Figure 3. The irradiation doses are in the same range in the two experiments. The scales of Tx and ρ⊥ are chosen in the figure so that the experimental points of Tx and ρ⊥ coincide at 250 K for the non-irradiated sample. Figure 4 displays the in-plane and interplane resistivities ρ‖ and ρ⊥, respectively, of non-irradiated and 300 h irradiated samples. The resistivities in Figures 3 and 4 are similar to the ones measured on a different crystal in Reference [6]. The resistivity ρ⊥ = 60 Ω cm in this work agrees reasonably well with the measurements of Reference [13].
We note from Figure 3 that the interlayer hopping time and the perpendicular resistivity are, to a good approximation, proportional for all irradiation doses and in the full temperature range above 50 K. The non-irradiated sample has a semiconducting behavior, with Tx and ρ⊥ smoothly increasing as the temperature is decreased. On the contrary, the samples irradiated for 180 h and more have a metallic-like behavior: Tx and ρ⊥ decrease with decreasing temperature. In the irradiated samples the abrupt metal-insulator transition is qualitatively different from the smooth transition in the non-irradiated sample. Furthermore, irradiation shifts the transition to lower temperatures.
Figure 4 shows that ρ⊥ and ρ‖ have similar temperature and irradiation dependence. The ratio ρ⊥/ρ‖ is about 600, independent of temperature for both the irradiated and non-irradiated samples. This ratio, even if uncertain, is typical of measurements in the literature. The 250 K ESR spectra change little under irradiation (Figure 5a). Even at the highest irradiation dose the interlayer hopping rate remains low, since the A and B layer lines are well resolved. Figure 5b shows the cross relaxation Tx and the intrinsic relaxations T2A and T2B calculated from the spectra. Tx decreases by less than a factor of 2 for the largest dose, while T2A and T2B are not changed significantly by the irradiation.
The mean free path within the ET layers is shorter than the molecular separation, implying a very short in-plane electronic momentum lifetime. In contrast, out-of-plane hopping events are rare: ν⊥ in κ-ET2-Cl is about 10⁹ s⁻¹, and electrons diffuse over long distances without hopping to adjacent layers. Although irradiation somewhat increases ν⊥, electrons remain confined to single molecular layers for long times. In addition, T2, the electronic spin relaxation time within the conducting ET layers, is independent of irradiation. Usually charged defects effectively increase the spin relaxation rate in conductors. The insensitivity of T2 to irradiation and the relatively small decrease in Tx are in agreement with earlier findings [9] that the concentration of induced defects is small even at the largest dose. The interlayer hopping time Tx decreases rapidly with irradiation, especially at low doses. On the other hand, the intrinsic spin relaxation is not sensitive to the irradiation. The single ESR line at 720 h irradiation has an ESR-frequency-dependent width and illustrates the "motional narrowing" case described in Section 2.1.
Discussion
We first list the main findings of the present and earlier works on the conducting and magnetic properties in the temperature range between 50 K and 250 K (below this range the properties change).
(1) The perpendicular resistivity ρ⊥ is semiconducting-like in non-irradiated κ-ET2-Cl. Irradiation decreases ρ⊥ in the full temperature range; the decrease is non-linear with dose. At higher doses the resistivity is metallic, i.e., it increases linearly with temperature.
(2) The interlayer spin hopping time Tx and ρ⊥ have the same temperature and irradiation dose dependence. The ratio Tx/ρ⊥ is independent of temperature and irradiation dose.
(3) The resistivity anisotropy ρ⊥/ρ‖ is typically between 100 and 1000. It is independent of temperature and irradiation dose.
(4) The magnetic spin susceptibility is approximately temperature independent and does not change with irradiation [9].
(5) The intralayer spin relaxation time T2 is independent of temperature and dose.
The interpretation of the qualitative change in transport properties by irradiation in κ-ET2-Cl poses a difficult problem. Matthiessen's rule describes the change of resistivity by defects in usual metals: impurity scattering in metals increases the resistivity by a temperature-independent quantity, the increase is linear with defect concentration, and the temperature-dependent phonon resistivity is not affected. Clearly, none of this applies to κ-ET2-Cl: irradiation decreases the resistivity, the decrease is non-linear with defect concentration, and the temperature dependence is drastically changed from semiconducting-like to metallic.
At the same time, other quantities that depend sensitively on the electronic structure are independent of temperature and defect concentration. In Fermi liquids the magnetic spin susceptibility χ and the perpendicular hopping-time-to-resistivity ratio Tx/ρ⊥ are both proportional to the density of states. Although κ-ET2-Cl is a strongly correlated system and at high temperatures there are no long-lifetime quasi-particles, the insensitivity of the susceptibility and of the Tx/ρ⊥ ratio indicates that the electronic structure is not strongly affected by the irradiation. The dose independence of T2, the relatively small decrease of Tx at 250 K with dose, and earlier magnetic measurements [9] indicate that the defect concentration is small. Sasaki et al. [6] proposed that irradiation of organic layered compounds has two effects: it increases the electron momentum scattering rate and at the same time dopes the material. By creating localized electrons at defects in the anion layer and delocalized holes in the conducting molecular layers, a charge imbalance from half filling takes place. In metallic compounds far from the Mott transition, like κ-(BETS)2FeCl4, defects enhance the electron scattering as in other metals. In materials close to the Mott metal-insulator transition, the doping of holes changes the electronic properties drastically, as the band is no longer half filled. At high temperatures the extra holes change the semiconducting behavior to metallic; at low temperatures the metal-insulator and the magnetic ordering transitions are suppressed. To understand the measured electric and magnetic properties one has to assume that the small deviation from half band filling changes the resistivity from semiconducting to metallic-like but has little effect on the band structure in general.
It is not simple to understand within this model the independence of the resistivity anisotropy from the irradiation-induced defect concentration. Why would adding high-mobility carriers increase the in-plane conductivity and the interlayer hopping rate in the same way? Kumar and Jayannavar [12] proposed a mechanism for a temperature-independent anisotropy ρ⊥/ρ‖ in materials where the perpendicular conductivity is incoherent. They showed that if the in-plane scattering time τ is much shorter than the interlayer hopping time 1/ν⊥, then the ratio ρ⊥/ρ‖ is independent of τ. To understand the doping independence of ρ⊥/ρ‖ one has to assume that irradiation introduces carriers with an increased τ, while at the same time the parallel and perpendicular electronic overlap integrals remain unchanged.
A different approach has been proposed by Analytis et al. [14]. They assumed that irradiation creates two kinds of defects. As in the model of Sasaki et al., some defects increase the electronic scattering rate. There is a further defect-assisted interlayer channel which decreases the interlayer resistivity. This model gives a good description of the irradiation dose dependence of ρ⊥ in κ-(BEDT-TTF)2Cu(SCN)2, an organic conductor close to the Mott transition. The in-plane resistivity is assumed to be small and without any effect on ρ⊥. A defect-assisted interplane conductivity has also been proposed [15] to explain the metallic zero-frequency interplane conductivity and the absence of a Drude peak in the optical conductivity of κ-ET2-Br. However, it is difficult to understand within this model the irradiation and temperature independence of the anisotropy between 50 and 250 K in κ-ET2-Cl. Due to difficulties in contacting and/or macroscopic sample imperfections, the current is usually inhomogeneous in strongly anisotropic conductors, and in general the interlayer resistivity affects in-plane measurements. If ρ‖ is very small, the contribution of ρ⊥ might dominate the apparent in-plane resistivity. This would explain why ρ⊥ and ρ‖ have (apparently) the same temperature dependence for both the non-irradiated and irradiated crystals. However, this model does not explain why the anisotropy is independent of irradiation dose.
Conclusions
In κ-ET2-Cl, the interplane spin cross-relaxation time measured by high-frequency ESR is proportional to the interplane resistivity in the temperature range between 50 and 250 K. The ratio Tx/ρ⊥ is unchanged in the range of irradiation doses where the resistivity changes from a semiconductor-like behavior to metallic. Since κ-ET2-Cl is a strongly correlated conductor close to the metal-insulator and magnetic ordering transitions, the spin and charge hopping rates could have different temperature or doping dependence; the insensitivity of Tx/ρ⊥ to doping and temperature makes this unlikely. The qualitative change of the temperature dependence of the resistivity with irradiation, together with the insensitivity of other quantities depending on the density of states, poses a difficult problem which is not yet satisfactorily resolved.
Figure 1. Structure of κ-ET2-Cl projected onto the (a, b) plane. Only one type of ET molecule per plane is shown for clarity. In the experiments the external field B is along ϕab = 45°, where ϕab denotes the angle from a in the (a, b) plane.
Figure 2. Motional narrowing of the ESR lines of adjacent A and B ET layers in κ-ET2-Cl under pressure (after [7]). B is along ϕab = 45°. The spectra were measured at 420 GHz, T = 250 K, and pressures of 0, 0.32, and 1.04 GPa. KC60 is an ESR reference.
Figure 5. (a) ESR spectra of κ-ET2-Cl at 222.4 GHz and 250 K after different doses of X-ray irradiation. There is a small impurity line at 7.935 T in the spectrum of the sample irradiated for 180 h, not present in the other spectra. The signal at 7.943 T is the KC60 reference. (b) Irradiation dose dependence of Tx and T2 at 250 K.
Figure 6. (a) ESR spectra of κ-ET2-Cl at 222.4 GHz and 50 K as a function of X-ray irradiation dose. The line at 7.935 T for 180 h irradiation, not present in the other spectra, is an impurity line. The signal at 7.943 T is a KC60 reference. (b) Irradiation dose dependence of Tx and T2 at 50 K.
"Physics"
] |
Astragalus polysaccharide (APS) exerts a protective effect against acute ischemic stroke (AIS) through enhancing M2 microglia polarization by regulating the adenosine triphosphate (ATP)/purinergic receptor (P2X7R) axis
ABSTRACT Clinically, effective treatments for patients with acute ischemic stroke (AIS) are very limited. Therefore, this paper aims to investigate the mechanism by which astragalus polysaccharide (APS) exerts a protective effect against AIS and to provide a new method for the treatment of AIS. Cell surface antigen flow cytometry and immunofluorescence were used to identify M1 and M2 microglia. Western blot was used to evaluate the expression of associated proteins. Oxygen-glucose deprivation (OGD) was used to simulate the effect of AIS on rat microglia, and a middle cerebral artery occlusion (MCAO) model was established to simulate the effect of AIS in vivo. Evans blue dye (EBD) was used to evaluate the permeability of the blood–brain barrier (BBB). Western blot and cell surface antigen flow cytometry results showed that APS promoted the M2 polarization of rat microglia by inhibiting the expression of the purinergic receptor P2X7R. APS reversed the effect of OGD on rat microglia M1/M2 polarization by regulating P2X7R, and reversed the effect of MCAO on rat microglia M1/M2 polarization in vivo. Furthermore, APS inhibited the expression of P2X7R by promoting the degradation of adenosine triphosphate (ATP) in the cerebral cortex of MCAO rats. In addition, APS contributed to maintaining the integrity of the BBB. In summary, APS can reduce brain injury after AIS by promoting the degradation of ATP in microglia and inhibiting the expression of P2X7R.
Introduction
China is one of the countries with the highest incidence of stroke [1]. At present, the disease is the main cause of death in China, with 2.5 million new cases and 1.6 million deaths due to stroke each year. Acute ischemic stroke (AIS) accounts for about 70% of strokes in China, and the mortality within one month is about 2.3-3.2% [2]. Therefore, effective prevention and treatment of AIS is of great significance to patients and their families.
Studies have shown that the inflammatory/immune response runs through the whole process of AIS. The main causes of nerve injury after AIS are the failure of neurons to complete mitochondrial aerobic respiration, the decrease of intracellular pH, the change in the ion gradient of the cell membrane, and cytotoxic edema caused by apoptotic swelling [3]. Damaged neurons and glial cells release high levels of adenosine triphosphate (ATP), which activates the purinergic receptor P2X7R and triggers the release of a large number of inflammatory mediators, inducing neuroimmune disorders and inflammation [4]. Currently, the treatment of AIS focuses mainly on reperfusion and brain protection, and the clinical problems have not been solved: intravenous thrombolysis and mechanical thrombectomy within the time window can restore cerebral vascular flow in only very few patients and cannot interfere with the cascade of brain injury events, so the treatment of most patients still depends on brain protection measures [5]. Unfortunately, there are still few effective neuroprotective measures in the clinic, which highlights the importance of targeted inflammatory/immune interventions in current studies of neuroprotection after AIS.
At present, pharmacological studies have found that astragalus polysaccharide (APS) has a wide range of activities, including immune regulation, anti-tumor, antioxidant, antihypertensive, hypoglycemic, and liver- and kidney-protective effects, and has broad application prospects in anti-atherosclerosis (AS) therapy [6]. Inflammatory mediators play an important role in the progression of ischemic penumbra injury; neuroprotection after stroke can be achieved by reducing the neuro-inflammatory response [7], and microglia are the most effective regulatory target for brain repair and regeneration [8]. Under different cell microenvironment conditions, microglia can differentiate into two phenotypes, namely the M1 type with pro-inflammatory effects and the M2 type with anti-inflammatory effects [9]. In the early stage after ischemic injury, the locally activated microglia exhibit the M2 phenotype, but this M2 polarization response is transient and is replaced, within a few days after ischemic injury, by an inflammatory and harmful reaction dominated by M1-polarized cells, involving phagocytosis and the release of neurotoxic mediators such as TNF (tumor necrosis factor), IL-1 (interleukin-1), IL-6, MCP-1 (monocyte chemotactic protein-1), MIP-1 (macrophage inflammatory protein-1), ROS (reactive oxygen species), NO (nitric oxide), and matrix metalloproteinases (MMPs). In the later stage, to counter this inflammatory and harmful process, damaged neurons in the penumbra produce IL-4, an effective promoter of M2 polarization [10]. At present, M1 and M2 microglia are distinguished by their characteristic surface markers (M1 markers: HLA-DR (human leukocyte antigen), CD16 (cluster of differentiation), CD32, CD86, etc.; M2 markers: CD209, CD206, CD301, CD163, Arg-1, and Ym-1) [11,12]. Studies have found that antibacterial drugs (minocycline and azithromycin), eplerenone and spironolactone [13,14], metformin [15], rosiglitazone [6], etc., which can reduce the M1/M2 ratio after AIS [16][17][18], play a protective role in the brain. This suggests that inhibiting M1 differentiation and promoting M2 differentiation of microglia to achieve neuroprotection could be a new strategy for the treatment of AIS. However, the effect of APS on M2 microglia polarization remains unknown.
In AIS, blood-brain barrier (BBB) damage leads to neurological dysfunction [19]. Therefore, the preservation of BBB integrity could ameliorate AIS-induced brain injury [20,21]. Moreover, numerous studies have indicated that M2 microglia polarization plays a protective role in BBB integrity [22,23]. Whereas, the role of APS in BBB integrity is not clear.
This study aimed to investigate the mechanism by which APS exerts a protective effect against AIS and to verify whether APS enhances the M2 polarization of microglia by suppressing ATP-mediated P2X7R expression, thereby maintaining the integrity of the BBB. The results of the present study may provide a new method for the treatment of AIS.
Ethics statement
All animals were housed in a pathogen-free environment at the Model Animal Research Center of Nanjing University. All protocols were approved by the Institutional Committee for Animal Care and Use at the Model Animal Research Center of Nanjing University. All animal work was carried out in accordance with the approved protocol.
Reagent
Penicillin, fetal bovine serum (FBS), DMEM/F12 medium, and PBS were purchased from GE™ Hyclone. APS was purchased from Beijing Solarbio Science & Technology Co., Ltd. ATP was purchased from Shanghai Zeye Biological Technology Co., Ltd. The rat ATP ELISA kit and rat ADP ELISA kit were purchased from Beijing winter song Boye Biotechnology Co., Ltd. EBD was purchased from Real-Times (Beijing) Biotechnology Co., Ltd. EDTA antigen repair solution was purchased from Servicebio.

Antibodies
Western blot
Western blot was used to detect target proteins extracted from HAPI cells. Whole-cell lysates were extracted with lysis buffer (50 mM Tris pH 7.4, 150 mM NaCl, 1 mM EDTA, 1% Triton, and 10% glycerol) containing a mixture of protease and phosphatase inhibitor cocktails (Roche, Basel, Switzerland), and protein concentrations were determined by the Bradford assay. Soluble protein (30-40 μg) was subjected to SDS-polyacrylamide gel electrophoresis. Separated proteins were electrophoretically transferred to polyvinylidene difluoride (PVDF) membranes (Millipore, Billerica, MA, USA). Primary antibodies used in this study were diluted in 5% nonfat milk at a ratio of 1:500 [24].
Cell surface antigen flow cytometry
HAPI cells in the logarithmic growth phase were seeded on a 6-well plate at a density of 2 × 10⁵ cells/well. They were treated with PBS (0.01 mol/L) or ATP (3 mmol/L) for 24 h, or with ATP (3 mmol/L) for 24 h followed by APS (100 mg/L) for 48 h. The cells in each group were collected, digested with 0.05% trypsin, and resuspended into a single-cell suspension. The cells were washed twice with PBS (centrifugation at 800 rpm for 5 min), and the cell concentration was adjusted to 1 × 10⁶ cells/ml with medium. A 500 μl cell suspension was added to each measuring tube, and then PE- or APC-labeled HLA-DR (1.5 μg/mL), CD86 (50 μg/mL), CD206 (12 μg/mL), and CD163 (50 μg/mL) antibodies were added. The cells were then washed twice with PBS (centrifugation at 800 rpm for 5 min). Flow cytometry was used to obtain the results, and a logical gating strategy was applied: the M1 microglia subpopulation refers to the HLA-DR versus CD86 dot plot, while the M2 microglia subpopulation refers to CD163 versus CD206.
Oxygen-glucose deprivation (OGD)
HAPI cells in the logarithmic growth phase were seeded on a 6-well plate at a density of 2 × 10⁵ cells/well, cultured in glucose-free, serum-free DMEM medium at 37°C in 95% N₂ and 5% CO₂ for 6 hours, and then cultured at 37°C under normal conditions (95% O₂, 5% CO₂, DMEM medium containing 10% FBS) for 72 h before samples were collected for detection.
In vitro model of brain-blood barrier (BBB)
The inserter was precoated with 2% gelatin. Next, HAPI cells were evenly seeded in a 24-well cell inserter at a density of 200,000 cells/cm² and cultured in a CO₂ incubator (5% CO₂, saturated humidity, 37°C) until confluence. After the confluence of HAPI cells was confirmed under the microscope, cell culture medium was added to the donor pool of the inserter to make the liquid level difference between the donor pool and the recipient pool >0.5 cm. Subsequently, a leakage test was used to determine whether the BBB had been established: if an obvious liquid level difference remained between the two cisterns of the inserter after 4 h, the HAPI cells were considered to be completely confluent and the BBB established.
Construction and Longa score of MCAO rat model
Male SD rats aged 4-6 weeks (Model Animal Research Center of Nanjing University) were fed adaptively for 1 week. The rats were divided into five groups with three rats in each group, labeled as the control group, MCAO group, MCAO + normal saline group, MCAO + APS low-dose group, and MCAO + APS high-dose group. The standard MCAO model construction steps [25] were followed, and the Longa scoring method was used for neurobehavioral scoring. Scoring criteria: normal, no neurological deficit, 0 points; the forelimb contralateral to the brain lesion cannot be fully extended when the tail is lifted, mild neurological deficit, 1 point; the rat turns to the side contralateral to the brain lesion when walking, moderate neurological deficit, 2 points; the rat's body falls to the side contralateral to the brain lesion while walking, severe neurological deficit, 3 points; unable to walk independently, with loss of consciousness, 4 points. Modeling was considered successful for neurobehavioral scores of 1-3; rats scoring 0 or 4 were eliminated. Longa scores are shown in Table 1.
Of a total of 20 rat models, one rat died of asphyxia due to accidental injury of the vagus nerve and two rats were removed because of bleeding, so 17 models were completed. According to the Longa scoring method, the success rate of the model was 94.1% (success rate = number of successful models/total number of models × 100%).
Administration
After the MCAO rats were successfully established, they were intraperitoneally injected with APS (low-dose group: 22.5 mg/kg; high-dose group: 45 mg/kg), while the control group was given an equal volume of normal saline. The injections were given once a day, and according to group the rats were administered for 1 day, 3 days, or 5 days. After the administration, the rats were decapitated, and brain tissue and serum were collected for subsequent experimental detection.
ELISA detection
Cortical brain tissues of SD rats from the control group, MCAO group, MCAO + normal saline group, MCAO + APS low-dose group, and MCAO + APS high-dose group were collected and preserved. Each sample was cut, weighed, frozen rapidly in liquid nitrogen, and homogenized fully with a homogenizer. An appropriate amount of PBS was added, the homogenate was centrifuged for 20 min (2,000-3,000 rpm), and the supernatant was carefully collected. The rat ATP ELISA kit (DG20151D, Beijing Dongge Biology) and rat ADP ELISA kit (DG20962, Beijing Dongge Biology) were used for detection. The absorbance (OD value) of each well was measured at a wavelength of 450 nm.
Immunofluorescence
Paraffin sections were dewaxed sequentially: 15 min in xylene I, 15 min in xylene II, 5 min in absolute ethanol I, 5 min in absolute ethanol II, 5 min in 90% alcohol, 5 min in 80% alcohol, and 5 min in 70% alcohol. Finally, the sections were washed with distilled water. After the sections were repaired with EDTA antigen retrieval solution, they were blocked in 5% BSA for 30 min at room temperature. The blocking solution was gently shaken off, and anti-CD163 (ab182422, Abcam, 1:100), anti-CD206 (sc-58,986, Santa Cruz, 1:50), anti-CD86 (NBP2-25,208, Novus, 1:100), and anti-HLA-DR (MA5-32,232, Invitrogen, 1:100) were added, followed by incubation overnight in a wet box at 4°C. The slices were placed in PBS (pH 7.4) and washed on a decolorizing shaker 3 times, 5 min each time. After the slices were slightly dried, the secondary antibody (dilution ratio 1:200) corresponding to the primary antibody was dropped into the circle to cover the tissue and incubated in the dark at room temperature for 50 min. For DAPI counterstaining of nuclei, the slices were again placed in PBS (pH 7.4) and washed on a decolorizing shaker 3 times, 5 min each time. After the slices were shaken dry, DAPI dye solution was added dropwise to the circle and incubated in the dark for 10 min at room temperature. The slices were then sealed and photographed under the microscope.
Detection of blood-brain barrier (BBB) integrity in rats by Evans blue dye (EBD)
EBD (EB001, Real-Times (Beijing) Biotechnology Co., Ltd.) was injected intravenously 2 h before sacrifice. Normal saline was perfused into the heart to remove residual blood from the cerebral vessels. The brain tissue was quickly removed, weighed, and placed in formamide (1 ml/100 mg) at 37°C for 48 h. After centrifugation, the supernatant was collected and its absorbance was measured at 620 nm with a spectrophotometer. The absorbance was converted into EBD content according to the standard curve to evaluate the permeability of the BBB [26,27].
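The paper does not show the standard-curve computation itself; the following is a minimal sketch of how such an absorbance-to-content conversion is typically done, assuming a linear standard curve. The calibration concentrations, absorbance readings, and sample values below are illustrative placeholders, not data from the study.

```python
import numpy as np

# Hypothetical EBD calibration standards (µg/ml) and their A620 readings.
# These numbers are illustrative placeholders, not values from the study.
std_conc = np.array([0.0, 1.0, 2.5, 5.0, 10.0, 20.0])
std_a620 = np.array([0.01, 0.06, 0.14, 0.27, 0.53, 1.05])

# Fit a linear standard curve: A620 = slope * conc + intercept.
slope, intercept = np.polyfit(std_conc, std_a620, 1)

def ebd_content(a620, brain_mass_mg, extract_volume_ml):
    """Convert a sample absorbance to EBD content (µg per g of brain tissue)."""
    conc = (a620 - intercept) / slope          # µg/ml in the formamide extract
    total_ug = conc * extract_volume_ml        # total EBD extracted
    return total_ug / (brain_mass_mg / 1000)   # normalize per gram of tissue

# Example: one brain sample (180 mg tissue extracted in 1.8 ml formamide).
print(ebd_content(a620=0.32, brain_mass_mg=180, extract_volume_ml=1.8))
```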
Statistical analysis
In this study, all experiments were repeated at least three times, and the average value of the experiments is presented as the mean ± standard deviation (SD), calculated with the STDEV formula in Excel. The Shapiro-Wilk test was used to assess the data distribution, and data that did not exhibit a normal distribution were analyzed via the rank sum test. The significance of all data was estimated by Tukey's multiple-comparison test in an ANOVA analysis using SigmaStat 3.5 software. Statistical significance was accepted when P < 0.05.
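As a rough illustration of the workflow described above (normality check, then rank sum test or ANOVA with Tukey's post hoc test), here is a minimal Python sketch using SciPy and statsmodels in place of Excel and SigmaStat; the group names and triplicate values are made-up placeholders.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Made-up triplicate measurements for three groups (placeholders only).
groups = {
    "control":  np.array([1.00, 0.95, 1.05]),
    "MCAO":     np.array([2.10, 2.30, 2.20]),
    "MCAO+APS": np.array([1.40, 1.50, 1.45]),
}

# Shapiro-Wilk normality check per group.
normal = all(stats.shapiro(v).pvalue > 0.05 for v in groups.values())

if normal:
    # One-way ANOVA followed by Tukey's multiple-comparison test.
    f, p = stats.f_oneway(*groups.values())
    values = np.concatenate(list(groups.values()))
    labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
    print(f"ANOVA p = {p:.4f}")
    print(pairwise_tukeyhsd(values, labels, alpha=0.05))
else:
    # Non-normal data: rank sum (Mann-Whitney U) test between two groups.
    u, p = stats.mannwhitneyu(groups["control"], groups["MCAO"])
    print(f"rank-sum p = {p:.4f}")
```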
Results
This paper aims to investigate the mechanism by which APS exerts a protective effect against AIS and to provide a new method for the treatment of AIS. We hypothesized that APS enhances the M2 polarization of microglia by suppressing ATP-mediated P2X7R expression, thereby maintaining the integrity of the BBB.
APS promotes the M2 polarization of rat microglia by inhibiting P2X7R expression
After AIS, damaged neurons and glial cells release high concentrations of ATP, which activates P2X7R, triggers the release of a large number of inflammatory mediators, and induces neuroimmune disorder and inflammation [4]. We therefore used different concentrations of ATP to examine the expression of P2X7R in the rat microglial cell line HAPI at different time points. As shown in Figure 1(a), compared with the control group, ATP significantly up-regulated the expression level of P2X7R in HAPI cells, and treatment with 3 mmol/L ATP for 24 h had the most significant effect (Figure 1(a)). This treatment condition was used in subsequent experiments. We then examined the effect of APS on P2X7R expression under ATP stimulation and found that APS significantly reduced the increase in P2X7R expression induced by ATP (Figure 1(b)). Previous studies have shown that P2X7R promotes the activation of M1 microglia [28,29]. Therefore, we tested the effect of APS on the M1/M2 polarization of HAPI cells. The results showed that APS significantly inhibited the promoting effect of ATP on HAPI M1 polarization, thereby increasing the proportion of M2-polarized HAPI cells (Figure 1(c-d)). Taken together, these results indicate that APS promotes the M2 polarization of rat microglia by inhibiting the expression of P2X7R.
APS reverses the effect of OGD on rat microglia M1/M2 polarization by regulating P2X7R
We used an OGD model in HAPI cells to simulate the effect of AIS on rat microglia. Western blot results showed that, compared with the control group, OGD treatment significantly increased the expression level of P2X7R in HAPI cells (Figure 2(a)), and OGD treatment for 72 h had the most significant effect (Figure 2(a)). This treatment condition was used in subsequent experiments. We found that APS significantly reduced the increase in P2X7R expression induced by OGD treatment (Figure 2(b)). In addition, OGD treatment significantly promoted M1 polarization of HAPI cells, similar to the effect of ATP stimulation on HAPI M1/M2 polarization (Figure 2(c-d)). However, the effect of OGD treatment on HAPI M1 polarization was reversed by APS, which significantly increased the proportion of M2-polarized HAPI cells (Figure 2(c-d)). These data suggest that APS reverses the effect of OGD treatment on the M1/M2 polarization of rat microglia by regulating P2X7R.
APS ameliorates ATP- or OGD-impaired BBB integrity
To further identify the effect of APS on ATP- or OGD-impaired BBB integrity, the leakage test was used to evaluate the permeability of the in vitro BBB model. The results showed that an obvious liquid level difference remained between the two cisterns of the inserter after 4 h in the control group, indicating that the in vitro BBB model was established (Figure 3(a)). Both ATP treatment and OGD reduced the liquid level difference between the two cisterns of the inserter (Figure 3(a)), suggesting that ATP treatment and OGD might increase the permeability of the in vitro BBB model by impairing its integrity. However, APS abolished the effects of ATP treatment and OGD on the permeability of the in vitro BBB model (Figure 3(a)). In addition, LC-MS results indicated that APS could permeate through the in vitro BBB model (Figure 3(b)). Therefore, these results suggest that APS can attenuate ATP- or OGD-induced impairment of BBB integrity.
APS inhibits P2X7R expression by promoting ATP degradation in the cerebral cortex of MCAO rats
An MCAO model in male SD rats was used to simulate the effect of AIS in vivo. As shown in Figure 4(a), compared with the control group, the expression level of P2X7R in the cortex of MCAO model rats was significantly increased. APS significantly reversed this increase in P2X7R expression in the cerebral cortex of MCAO model rats (Figure 4(a-b)), and 45 mg/kg APS treatment for 3 days had the most significant effect (Figure 4(a-b)). This treatment condition was used in subsequent experiments. ELISA results showed that APS significantly reduced the ATP concentration in the cortex of MCAO model rats (Figure 4(c)), while significantly increasing the concentration of ADP (Figure 4(d)). Taken together, these data indicate that APS inhibits the expression of P2X7R by promoting the degradation of ATP in the cerebral cortex of MCAO model rats.
APS reverses the effect of MCAO on M1/ M2 polarization of rat microglia in vivo
Immunofluorescence analysis was performed on paraffin sections of the cerebral cortex of MCAO model rats under different treatment conditions, using specific cell surface markers to distinguish M1 from M2 microglia. As shown in Figure 5(a), MCAO treatment significantly promoted the polarization of M1 rat microglia, and APS reversed this promoting effect. In addition, APS treatment significantly increased the proportion of M2 microglia in MCAO model rats (Figure 5(b)). Overall, MCAO treatment significantly increased the polarization of M1 microglia, and APS reversed this effect; furthermore, APS treatment promoted the polarization of M2 microglia, thus increasing the proportion of M2 microglia (Figure 5(c)).
APS contributes to maintaining the integrity of the BBB
P2X7R is activated after AIS and releases proinflammatory mediators such as TNF-α, IL-1β, ROS, and MMPs [30,31]. These mediators promote the recruitment of white blood cells and degrade the extracellular matrix, resulting in destruction of the BBB [32,33]. Therefore, we speculated that APS might participate in maintaining the integrity of the BBB. To verify this hypothesis, we performed EBD assays to determine the integrity of the BBB in vivo. The EBD data showed that, compared with the control group, the BBB permeability of MCAO model rats was significantly increased, and this increase was reversed by APS (Figure 6(a-b)). As shown in Figure 6(c-d), the expression level of MMP-9 protein in MCAO model rats was significantly higher than that in the control group, and APS reversed this increase in MMP-9 expression. Taken together, these data indicate that APS maintains the integrity of the BBB by inhibiting the expression of MMP proteins.
Discussion
Although the clinical treatment of AIS has made great progress in recent years, effective treatments for AIS are still lacking [34][35][36][37]. The difficulty of treating AIS stems from our limited understanding, at the molecular level, of the mechanisms of brain protection after AIS. Thus, it is of great significance to elucidate targeted inflammation/immune interventions for neuroprotection after AIS [34,38]. Here, we report the mechanism by which APS promotes M2 polarization of rat microglia, which could represent a therapeutic opportunity for AIS patients. Traditional Chinese Medicine (TCM) has a history of thousands of years; it has accumulated rich experience and information in the prevention and treatment of stroke and has played an active and important role in stroke therapy. 'Buyang Huanwu decoction' is a classic TCM treatment for stroke, and its reliable clinical effect has been widely accepted [39,40]. The characteristic of this prescription is that APS is the absolute main drug (original prescription composition: APS 120 g; Angelica 6 g; Red peony 5 g; Earthworm, Ligusticum chuanxiong, Safflower and Peach kernel 3 g each; astragalus accounts for 84%), and the therapeutic effect shows a dose-effect relationship with the amount of astragalus [39,40]. This suggests that APS plays an important role in the clinical treatment of AIS, which is consistent with our finding that APS promotes the M2 polarization of rat microglia by inhibiting the expression of P2X7R.
Studies have found that, during the inflammatory response, inflammatory cells mediated by high extracellular ATP participate in the immune response mainly through the over-activation of P2X7R [41], and ATP is the only natural agonist of P2X7R; this is consistent with our finding that APS inhibited P2X7R expression by promoting the degradation of ATP in the cerebral cortex of MCAO model rats. Studies have also found that antibacterial drugs (minocycline and azithromycin) can reduce the M1/M2 ratio after AIS, indicating that inhibiting microglial M1 differentiation and promoting M2 differentiation to achieve neuroprotection may be a new strategy for the future treatment of AIS. Our data demonstrate that APS promotes M2 polarization of rat microglia and decreases the M1/M2 ratio after AIS in both in vitro and in vivo models. In addition, APS also upregulates the expression of CD36, IL-12, and IL-27 on the surface of the DC membrane and downregulates the expression of IFI16, indicating that APS promotes the maturation and differentiation of DCs and has a positive intervention effect on the occurrence and development of AS [6].
The main components of the basement membrane are extracellular matrix (ECM) molecules (mainly type IV collagen and laminin), which form an important part of the BBB and maintain its integrity [42][43][44][45][46]. MMPs are the main degradation enzymes of the ECM and are closely related to the destruction and reconstruction of the ECM in the vascular wall [43,47]. In addition, P2X7R, activated by AIS, releases proinflammatory mediators such as TNF-α, IL-1β, ROS, and MMPs [30,31]. Our data show that APS reverses the MCAO-induced increase in MMP-9 protein expression by inhibiting P2X7R expression. Furthermore, the EBD data showed that APS maintained the integrity of the BBB by inhibiting the expression of MMP proteins. These results suggest the importance of APS in maintaining the integrity of the BBB and in the clinical treatment of AIS.
Conclusions
Our current study demonstrates that APS promotes the M2 polarization of rat microglia and reduces the M1/M2 ratio after AIS. Given that the M1/M2 ratio after AIS is closely related to neuroprotection, our work may offer a therapeutic opportunity for AIS patients.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Funding
This work was supported by the Natural Science Foundation of Guangdong Province, China (Grant No. 2019A1515011668). | 5,649 | 2022-02-01T00:00:00.000 | [
"Biology"
] |
Numerical study of instability of nanofluids: the coagulation effect and sedimentation effect
This study is a numerical investigation of the coagulation and sedimentation effects in nanofluids using the Brownian dynamics method. Three cases are simulated, focusing on the effects of the size, volume fraction, and ζ potential of nano-particles on the formation of coagulation and sedimentation in nanofluids. The rms fluctuation of the particle number concentration, as well as its flatness factor, is employed to study the formation and variation of the coagulation process. The results indicate a superposition of the coagulation and sedimentation effects for small nano-particles. Moreover, nanofluids are stable when the volume fraction of particles is below the limit of "resolution" of the fluid. In addition, the ζ potential acts against the formation of coagulation and contributes positively to the stability of nanofluids.
Introduction
The nanofluid is characterized as a fluid with nanometer-sized solid particles dispersed in solution [1], which can increase the heat transfer coefficient [2][3][4][5][6], enhance the critical heat flux in boiling heat transfer [7][8][9], reduce the wall friction force [10], improve the optical characteristics [11], etc. Nano-sized particles are utilized because of their better stability compared to suspensions of micro-sized particles. In a poorly stable suspension, sedimentation or coagulation (agglomeration) may occur, which compromises the above-mentioned advantages of the nano-suspension.
As is well known [12], the occurrence of coagulation and sedimentation are the two main causes of the instability of nanofluids. The phenomenon of coagulation is characterized by the formation of particle clusters, i.e., particles come into contact with each other and cohesion takes place; the clusters then grow. Many researchers have investigated the coagulation of particles by Brownian dynamics simulation, focusing on the formation of gelation [13], coagulation rates [14], particle networks [15], etc. For example, Hütter [14] identified the characteristic coagulation time scales in colloidal suspensions and measured their dependence on the solid content and potential interaction parameters; he also deduced different cluster-cluster bonding mechanisms in the presence of an energy barrier. Sedimentation, on the other hand, usually occurs after a large particle cluster has been established, i.e., the particles within the cluster flow downward because the gravity of the cluster increasingly exceeds its buoyancy force, while the effect of Brownian motion on the large cluster is reduced. Many studies have also been devoted to sedimentation [16][17][18] using Brownian dynamics simulation. For example, Soppe and Jannsen [17] studied sediment formation by colloidal particles through a process of irreversible single-particle accretion. They used the algorithm of Ermak and McCammon, incorporating inter-particle forces and hydrodynamic interactions at the two-particle level, and analyzed the effect of two-particle hydrodynamic interactions on the sediment structure. They found that the process of sediment formation by colloidal particles is the result of a delicate balance of sediment field strength, DLVO interactions, and hydrodynamic interactions.
However, there is an important issue that few studies have addressed: the interaction between coagulation and sedimentation in the instability of nanofluids. For example, the coagulation process causes particle clusters to grow, and large clusters are then more prone to sedimentation than small clusters because of the stronger gravity effect. In other words, the coagulation effect is able to augment the sedimentation effect. This study is therefore intended to investigate this issue, exploring the complex interaction and the close relation between the coagulation and sedimentation phenomena.
Governing equation
In this study, the Brownian dynamics technique is employed to investigate the motion of nanoparticles. The governing equation is the so-called Langevin equation, which, in the position form, is formulated as follows [19]:

$$r_i = r_i^0 + \sum_j \frac{\partial D_{ij}^0}{\partial r_j}\,\Delta t + \sum_j \frac{D_{ij}^0 F_j^0}{k_B T}\,\Delta t + R_i(\Delta t) \qquad (1)$$

where the superscript 0 indicates that the variable corresponds to the beginning of the time step Δt; r_i is the i-th component of the position vector of a particle; D_ij is the element of the diffusion tensor indexed by (i, j); F_j is the force experienced by the j-th particle; k_B is the Boltzmann constant; and T denotes the temperature. The displacement R_i(Δt) is a random displacement drawn from a Gaussian distribution with zero mean and covariance

$$\langle R_i(\Delta t)\, R_j(\Delta t)\rangle = 2 D_{ij}^0\,\Delta t \qquad (2)$$

The diffusion tensor is taken in the Rotne-Prager form,

$$D_{ij} = \frac{k_B T}{6\pi\eta a}\,\delta_{ij}\, I + \left(1-\delta_{ij}\right) \frac{k_B T}{8\pi\eta\,|r_{ij}|}\left[\left(I + \hat{r}_{ij}\hat{r}_{ij}\right) + \frac{2a^2}{|r_{ij}|^2}\left(\frac{1}{3} I - \hat{r}_{ij}\hat{r}_{ij}\right)\right] \qquad (3)$$

where η is the viscosity, a is the particle radius, δ_ij is the Kronecker delta, r_ij is the vector from the center of particle i to the center of particle j, and I is the unit tensor.
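As a concrete illustration of one time step of this scheme, the sketch below implements a simplified Ermak-McCammon update in Python, assuming a diagonal (free-draining) diffusion tensor D = k_B T/(3πηd) for each particle; with a constant scalar D the divergence term in Equation (1) vanishes, so only the drift and random terms remain. The full Rotne-Prager coupling of Equation (3) would replace the scalar D. All numerical values in the example are illustrative.

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def brownian_step(pos, forces, d, eta, temp, dt, rng):
    """One Ermak-McCammon step with a diagonal diffusion tensor.

    pos    : (N, 3) particle positions (m)
    forces : (N, 3) deterministic forces on each particle (N)
    d      : particle diameter (m); eta: viscosity (Pa s); temp: K
    """
    diff = KB * temp / (3.0 * np.pi * eta * d)      # Stokes-Einstein diffusivity
    drift = diff * forces / (KB * temp) * dt        # deterministic drift term
    # Random displacement: zero mean, variance 2 D dt per component (Eq. 2).
    noise = rng.normal(0.0, np.sqrt(2.0 * diff * dt), size=pos.shape)
    return pos + drift + noise

# Example: 400 particles of 25 nm diameter in water at 300 K, dt = 1 ns.
rng = np.random.default_rng(0)
pos = rng.uniform(0, 1e-6, size=(400, 3))
pos = brownian_step(pos, np.zeros_like(pos), 25e-9, 1e-3, 300.0, 1e-9, rng)
```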
Moreover, three forces are considered to act on each particle: the attractive van der Waals force f_v, the repulsive electrostatic force of the electric double layer f_e, and the gravity force f_g. For two identical spheres, these take the standard DLVO forms [12]:

$$f_v = -\frac{A\,d}{24\,(r_{ij}-d)^2} \qquad (4)$$

$$f_e = \pi\,\varepsilon\,\kappa\,d\,\zeta^2\, e^{-\kappa (r_{ij}-d)} \qquad (5)$$

$$f_g = \frac{\pi}{6}\, d^3 \left(\rho - \rho_f\right) g \qquad (6)$$

where A, d, ε, κ, ζ, ρ_f, ρ, and g are the Hamaker constant, the particle diameter, the electric permittivity of the fluid, the inverse of the double-layer thickness, the zeta potential of the suspension, the density of the fluid, the density of the particle, and the gravity acceleration, respectively.
It is noted that Equation (4) becomes meaningless at r_ij - d = 0, i.e., when contact between two particles occurs and they either adhere to each other or rebound. Thus, we treat the condition r_ij - d > 0 as the situation in which the two particles are separated, so that Equation (4) applies; otherwise, the result is coalescence of the two particles. Once coalescence between colliding particles takes place, clusters start to grow.
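A minimal sketch of the pairwise force evaluation follows, using the sphere-sphere expressions written above; note that those forms are a reconstruction of standard DLVO forces, so the exact prefactors of [12] may differ. Contact (r_ij - d ≤ 0) is flagged so the caller can apply the coalescence rule described in the text.

```python
import numpy as np

def pair_force(r_vec, d, A, eps, kappa, zeta, rho_p, rho_f, g=9.81):
    """Net force on particle i from particle j plus gravity (sketch).

    Uses standard DLVO sphere-sphere expressions (Eqs. 4-6 as
    reconstructed in the text); contact is flagged for coalescence.
    """
    r = np.linalg.norm(r_vec)
    h = r - d                      # surface-to-surface separation
    if h <= 0.0:
        return None                # contact: handle as coalescence
    n = r_vec / r                  # unit vector from j to i
    f_v = -A * d / (24.0 * h**2)                                  # van der Waals
    f_e = np.pi * eps * kappa * d * zeta**2 * np.exp(-kappa * h)  # double layer
    f_pair = (f_v + f_e) * n
    f_grav = np.array([0.0, 0.0, -np.pi / 6.0 * d**3 * (rho_p - rho_f) * g])
    return f_pair + f_grav
```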
Simulation conditions
In this study, three cases with different particle diameters (Case 1), different volume fractions (Case 2), and different zeta potentials (Case 3) are simulated (Table 1). For these cases, the material parameters and other relevant parameters are listed in Table 2.
In this simulation, the boundary conditions in the x and y directions (Table 2) of the horizontal plane are both periodic, whereas the top and bottom walls of the simulation domain in the z-direction are treated as adhesive walls, to which particles adhere immediately upon contact. This treatment is reasonable, since agglomerated particle clusters always adhere to the bottom walls or the top interfaces. Initially, the particles are given a random distribution. As time advances, the movements of the particles are computed by solving the governing equations.
Simulation results
Case 1: effect of particle sizes

This section deals with the effect of particle size on coagulation and sedimentation. Figure 1 shows the simulation results at t = 0, 5, 10, and 50 μs for Case 1 under different particle sizes. Figure 1a, b, c, d shows the results for d = 10 nm at t = 0, 5, 10, and 50 μs, respectively. Similarly, Figure 1e, f, g, h shows the results for d = 25 nm, whereas Figure 1i, j, k, l shows them for d = 50 nm. It is seen that coagulation takes place most intensively and rapidly for the smallest particles (Figure 1a, b, c, d), moderately for the intermediate particles (Figure 1e, f, g, h), and weakly and slowly for the largest particles (Figure 1i, j, k, l). More importantly, the results for the intermediate size are due to coagulation with only weak sedimentation, whereas the results for the smallest size are due to both coagulation and sedimentation; for the largest particles, neither coagulation nor sedimentation is prominent. This appears complicated. As is known, larger particles bear a stronger gravity effect and are the most prone to sedimentation; however, this is only true for single particles without coagulation. With the superposition of the coagulation effect, coagulation can amplify or augment the trend of sedimentation: owing to the increasing agglomeration of particles, the gravity effect may play an important and even dominant role, causing possible sedimentation of the whole agglomeration (the upper part of the agglomeration in Figure 1d is due to the adhesive boundary on the upper wall). In other words, there exists a balance between the sedimentation effect of large individual particles and that of small aggregated particles: the former is caused solely by gravity, whereas the latter is caused by the superposition of the coagulation and gravity effects, i.e., the amplification of the gravity effect of the aggregated particles due to coagulation. It should be mentioned that Figure 1i, j, k, l does not indicate absolute stability of the nanofluid; rather, it indicates a relatively stable behavior compared to Figure 1a, b, c, d. After evolution over a long time, coagulation or sedimentation may still occur.
In order to quantify the degree of coagulation, we define the following functions. Let us divide the simulation domain L_x × L_y × L_z into N_x × N_y × N_z cubic meshes with cell volume (δ_x × δ_y × δ_z). The mean number concentration c̄ of particles is the mean number of particles within each mesh volume (δ_x × δ_y × δ_z). Then, the rms value of the particle concentration R_1 and the flatness factor of the number concentration R_4 are formulated as follows:

$$R_1 = \sqrt{\langle (c - \bar{c})^2 \rangle}, \qquad R_4 = \frac{\langle (c - \bar{c})^4 \rangle}{\langle (c - \bar{c})^2 \rangle^2}$$

The rms of the concentration measures the fluctuation of the particle number concentration and is closely related to the formation of particle clusters due to coagulation. The flatness factor measures the intensity of the fluctuation of the number concentration, thereby indicating the intensity of coagulation. These two functions therefore enable the quantification of particle coagulation. Figure 2a, b shows R_1 and R_4 for Case 1. It is seen from Figure 2a, b that the coagulation of the small particles is the fastest: they become almost totally coagulated immediately, even at the beginning. Comparatively, the coagulation of the larger particles proceeds slowly and increases steadily; however, the final level of coagulation of the larger particles is greater than that of the smaller particles. The degree and rapidity of coagulation of the intermediate particles lie between those of the smaller and larger particles.
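The two statistics are straightforward to compute by binning particle positions into the mesh; below is a minimal numpy sketch, assuming the centered-moment definitions written above (which are a reconstruction of the standard rms and flatness definitions). The mesh resolution and example values are illustrative.

```python
import numpy as np

def coagulation_stats(pos, box, n_cells=(10, 10, 10)):
    """rms fluctuation R1 and flatness factor R4 of the number concentration.

    pos: (N, 3) particle positions; box: (Lx, Ly, Lz) domain size.
    """
    edges = [np.linspace(0.0, L, n + 1) for L, n in zip(box, n_cells)]
    counts, _ = np.histogramdd(pos, bins=edges)   # particles per mesh cell
    c = counts.ravel()
    fluct = c - c.mean()
    r1 = np.sqrt(np.mean(fluct**2))               # rms of concentration
    r4 = np.mean(fluct**4) / np.mean(fluct**2)**2 # flatness factor
    return r1, r4

# Example: uniformly random (uncoagulated) positions give a low R1 and
# an R4 close to the Gaussian/Poisson-like value; clustering raises both.
rng = np.random.default_rng(1)
print(coagulation_stats(rng.uniform(0, 1e-6, (1200, 3)), (1e-6, 1e-6, 1e-6)))
```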
Case 2: effects of volume fractions
In this section, the effect of the volume fraction, i.e., the concentration of particles, is studied. As mentioned above, smaller particles are more prone to coagulate than larger particles under the same volume fraction. However, the coagulation process is also closely related to the number of particles involved.
For example, Figure 3a, b, c, d shows one of the results of Case 2, in which only n_p = 400 particles are simulated. Compared to Figure 1a, b, c, d, it is seen that coagulation does not take place by t = 50 μs. This suggests that coagulation should be regarded as a process requiring an excess of particle content: when the particle number goes beyond the upper limit that the solvent can hold, coagulation will certainly take place. Thus, when n_p = 4200 particles are simulated, correspondingly more intensive coagulation is observed (Figure 3e, f, g, h).
In addition, Figure 4a, b shows R_1 and R_4 for Case 2. It is seen from Figure 4 that R_1 and R_4 for n_p = 400 always remain relatively small, indicating a stable state almost without coagulation and sedimentation, although R_4 fluctuates slightly when t < 0.06. Moreover, compared to n_p = 1200, the concentration fluctuation R_1 and flatness factor R_4 for n_p = 400 are relatively low. This validates the conclusions drawn from the observation of Figure 3a, b, c, d in comparison with Figure 1e, f, g, h.
With an increased number of particles, R_1 and R_4 increase as well (n_p = 1200 and 2100, respectively, Figure 4). However, when the particle number is extremely large, almost all the space is filled with particles, leading to a homogeneous distribution and a low fluctuation in the number concentration (n_p = 4200, Figure 4).
Case 3: effects of ζ potentials
The previous sections showed results with ζ = 0 eV. As seen from Equation (5), no repulsive effect between the particles was considered in those cases, since f_e = 0. Thus, this section focuses on the repulsive effect by varying the ζ potential.
Compared with Figure 1e, f, g, h, it is seen that the degree of coagulation with ζ = 0.01 eV is attenuated (Figure 5a, b, c, d), and coagulation almost disappears with ζ = 0.05 eV (Figure 5e, f, g, h). This indicates that the repulsive effect induced by the ζ potential is beneficial to the stability of nanofluids, since it acts against the coagulation process.
This conclusion is also validated by the variations of R_1 and R_4 (Figure 6a, b, respectively). The fluctuation of the particle number concentration with ζ = 0.05 eV increases much more slowly than in the cases with smaller ζ. Although R_1 and R_4 for ζ = 0.05 eV still increase with time, indicating that coagulation may eventually take place after a fairly long time, the effect against the formation of coagulation is very clear. In other words, the ζ potential is positively beneficial for the stability of nanofluids.
Conclusions
The findings of this study are briefly summarized as follows:
1. A complicated superposition of the coagulation and sedimentation effects is observed for small particles. The mechanisms of sedimentation for larger and smaller particles are different: the former is caused mainly by the strong gravity effect on each individual particle, whereas the latter is mainly due to the coagulation process, with the superposition of coagulation causing the sedimentation of the whole agglomeration of particles.
2. There exists an upper limit of particle content for the fluid. When the volume fraction is below this limit, it is hard for coagulation to occur; in contrast, coagulation will certainly take place when the concentration of nanoparticles exceeds the capacity of "resolution" of the fluid.
3. The ζ potential is beneficial for the stability of nanofluids, since it resists the formation of coagulation. In other words, increasing the ζ potential helps make the nanofluid more stable.
"Engineering",
"Materials Science",
"Physics"
] |
Identification and characterization of circRNAs in Pyrus betulifolia Bunge under drought stress
Circular RNAs (circRNAs) play important roles in miRNA function and transcriptional control. However, little is known about circRNAs in pear. In this study, we identified circRNAs using deep sequencing and analyzed their expression under drought stress. We identified 889 circRNAs in total, among which 33 (23 upregulated, 10 downregulated) were shown to be dehydration-responsive. We performed GO and KEGG enrichment analyses to predict the functions of the differentially expressed circRNAs. In total, 309 circRNAs were predicted to act as sponges for 180 miRNAs. A circRNA-miRNA co-expression network was constructed based on correlation analysis between the differentially expressed circRNAs and their miRNA binding sites. Our study provides a rich genetic resource for the discovery of genes related to drought stress, and its approach can readily be applied to other fruit trees.
Introduction
According to recent research, eukaryotic genomes encode a large number of ncRNAs (non-coding RNAs) [1,2]. ncRNAs have little or no protein-coding potential but play roles in various biological processes [3]. Circular RNA (circRNA) is an ncRNA molecule that lacks the 5' cap and poly(A) tail. circRNA is a ring structure formed by a covalent bond; circRNAs were first discovered in yeast [4] and humans [5] in 1980 and 1993, respectively. Circular RNAs are generated during splicing through various mechanisms [6]: most are created from back-spliced exons, while others originate from introns [7]. circRNAs are more stable than linear RNA molecules in cells, have a longer half-life, and can resist RNase R degradation [8]. According to their location in the genome, there are five types of circRNAs: exonic circRNAs, intronic circRNAs, sense overlapping circRNAs, antisense circRNAs, and intergenic circRNAs [9].
CircRNAs are ubiquitous among all domains of life and can fulfill a diverse range of biological functions. They can serve as competing endogenous RNAs that bind micro RNAs (miRNAs) [10,11], as regulators of gene transcription and expression [12], and as sponges for RNA-binding proteins [13]. For example, more than 70 selectively conserved miRNA target sites are present in ciRS-7, and ciRS-7 is strongly associated with Argonaute protein in an miR-7-dependent manner in humans [10]. circRNAs also play vital roles in the stress response: circRNAs were significantly differentially expressed in tomatoes exposed to cold stress compared to a control [14]. Aided by the development of high-throughput sequencing platforms and bioinformatics methods, an abundance of circRNAs have recently been identified in Archaea [15], humans, mice [16], Arabidopsis [17,18], and rice [12].
Pear (Pyrus spp.) is among the most important fruits in the world, and its production is strongly affected by drought. As one of the main varieties of pear, the birch-leaf pear (Pyrus betulifolia Bunge) exhibits high disease resistance and has a high tolerance of abiotic stresses such as drought and salinity [19]. These qualities make it an important source of valuable drought tolerance genes for improving both fruit quality and tree resistance to drought. Therefore, identifying the drought resistance genes of P. betulifolia holds great value for molecular breeding efforts.
Several studies have shown that multiple circRNAs are induced by stress. In this study, we used RNA sequencing (RNA-seq) to analyze differentially expressed circRNAs in birch-leaf pear leaves under drought stress. Our results will facilitate the development of pear breeding programs and provide a foundation for the identification of circRNAs in other fruit trees.
Birch-leaf pear samples
Birch-leaf pear (P. betulifolia Bunge) seedlings were grown in seedling beds at the national germplasm orchard of the Institute of Horticulture of Jiangsu Academy of Agricultural Sciences, Nanjing, Jiangsu, China. Seedlings were placed in a growth chamber under a 24-h cycle (14 h at 25˚C in light and 10 h at 20˚C in the dark), as per our previous report [20]. Six-leaf-stage seedlings were placed in a beaker containing distilled water for 2 d before the dehydration treatment. Seedlings were then transferred into a 1/2 Murashige and Skoog Basal (MS) solution containing 15% polyethylene glycol (PEG) to simulate drought stress. Leaves of control and treated seedlings were collected in triplicate at 48 h after treatment, rinsed with distilled water, frozen in liquid nitrogen, and stored at -80˚C until further use.
RNA preparation
Total RNA from each of the six samples was extracted using a Takara Mini BEST Plant RNA Extraction Kit following the manufacturer's protocol. RNA quality was verified using formaldehyde agarose gel electrophoresis, and RNA quantity was determined using a NanoDrop ND-1000 spectrophotometer. A total of 5 μg of RNA per sample was used for the experiment. To obtain ribosomal RNA (rRNA)-depleted RNA, rRNAs were depleted using the Epicentre Ribo-zero rRNA Removal Kit (Epicentre, USA). Subsequently, sequencing libraries were generated from the rRNA-depleted and RNase R-digested RNAs using the NEBNext Ultra Directional RNA Library Prep Kit for Illumina (NEB, USA) following the manufacturer's recommendations. Finally, the library was purified using the AMPure XP system and qualified using the Agilent Bioanalyzer 2100 system.
Clustering and sequencing
The clustering of index-coded samples was performed on a cBot Cluster Generation System using the HiSeq PE Cluster Kit v4 cBot (Illumina) according to the manufacturer's instructions. After cluster generation, the libraries were sequenced on an Illumina Hiseq 4000 platform and 150-bp paired-end reads were generated.
Quality control
Raw data (raw reads in fastq format) was first processed using a custom Perl script. Clean data (clean reads) were obtained after removing adapter-containing reads, poly-N-containing reads, and low quality reads from the raw data. The Q20, Q30, and GC content of the clean data were calculated. All downstream analysis was based on the clean, high-quality data generated in this step.
Mapping to reference genome
The reference genome and gene model annotation files were downloaded from the pear genome website [21] (http://peargenome.njau.edu.cn/). The reference genome index was built using Bowtie v2.0.6 software, and paired-end clean reads were aligned to the reference genome using TopHat v2.0.9 software.
CircRNA identification
We used find_circ version 1.1 [11] and CIRI version 2.0.5 [22] to identify circRNAs. For find_circ, unmapped reads were retained, and 20-mers from the 5' and 3' ends of these reads were extracted and aligned independently to the reference sequences using Bowtie v2.0.6. Anchor sequences were extended by the find_circ algorithm such that the complete read aligned and the breakpoints were flanked by GU/AG splice sites; back-spliced reads with at least two supporting reads were then identified as circRNAs. The CIRI algorithm scans SAM files twice and collects sufficient information to identify and characterize circRNAs. CIRI was run with default options; counts of identified circRNA reads were normalized by read length, and the number of mapping reads was determined after CIRI prediction. Finally, the sequences identified by both approaches (the intersection) were taken as circRNAs.
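The final intersection step can be expressed as a simple set operation on back-splice junction coordinates. Below is a minimal sketch, assuming each caller's output has already been parsed into (chromosome, start, end, strand) tuples; the file names and the four-column layout are hypothetical simplifications, since the real find_circ and CIRI output formats differ in detail.

```python
def load_junctions(path):
    """Parse one caller's output into a set of back-splice junctions.

    Assumes a tab-separated file with chrom, start, end, strand in the
    first four columns (actual find_circ/CIRI formats differ in detail).
    """
    junctions = set()
    with open(path) as fh:
        for line in fh:
            if line.startswith("#"):
                continue
            chrom, start, end, strand = line.rstrip("\n").split("\t")[:4]
            junctions.add((chrom, int(start), int(end), strand))
    return junctions

# Keep only circRNAs reported by both find_circ and CIRI.
find_circ_calls = load_junctions("find_circ_candidates.txt")  # hypothetical path
ciri_calls = load_junctions("ciri_candidates.txt")            # hypothetical path
consensus = find_circ_calls & ciri_calls
print(f"{len(consensus)} circRNAs supported by both callers")
```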
Differential expression
Differential expression analysis between the two groups (0 h and 48 h) was performed using DESeq2 version 1.6.3 software [24]. P-values were adjusted using the Benjamini and Hochberg method. By default, the corrected p-value threshold for differential expression was set to 0.05.
GO and KEGG enrichment analysis
Gene Ontology (GO, http://www.geneontology.org/) enrichment analysis of the source genes of differential circRNAs was performed using GOseq software (version 1.18.0). KEGG [25] is a database for understanding the high-level functions of biological systems (http://www.genome.jp/kegg/). The KOBAS web server [26] was used for Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis.
MiRNA binding site prediction
CircRNAs can inhibit the function of miRNAs by binding to them. To further investigate the function of circRNAs, we analyzed the miRNA binding sites of the identified circRNAs using psRobot software. This web-based, easy-to-use tool uses miRNA and circRNA sequences (including the junction site) for target prediction, and it performs fast analysis to identify miRNAs with stem-loop shaped precursors among batch input data using a modified Smith-Waterman algorithm [27].
RT-qPCR validation
The expression profiles of drought-responsive circRNAs were validated through quantitative PCR. Total RNA for use as a template was extracted from leaves using the Total RNA Kit (Tiangen, Beijing, China) according to the manufacturer's instructions. The first cDNA strand was synthesized from 1,000 ng of total RNA in a volume of 20 μl using the PrimeScript™ RT reagent kit with gDNA eraser (Perfect Real Time) (Clontech, Shiga, Japan), according to the manufacturer's protocol. Eight circRNAs were randomly selected from the differentially expressed circRNAs and analyzed using RT-qPCR. Primers were designed using Primer5 software [28], and RT-qPCR was performed on a 7500 Real-Time PCR System (Applied Biosystems, CA, USA). The total reaction volume was 20 μl, containing 10 μl 2X SYBR Premix Ex Taq™ (TaKaRa Bio Inc., Japan), 1 μl complementary DNA (cDNA) reaction mixture, 0.5 μl of each primer, 0.5 μl ROX Reference Dye II, and 7.5 μl ddH₂O. The glyceraldehyde-3-phosphate dehydrogenase (GAPDH) gene was used as the housekeeping gene for normalization [29]. The primer sequences used in our PCR experiments are described in S1 File. PCR was performed as follows: pre-denaturation at 95˚C for 30 s; denaturation at 95˚C for 3 s and annealing at 60˚C for 30 s; and 55-95˚C for melting curve analysis. All reactions were performed in biological triplicate. The 2^-ΔΔCT method was used to calculate relative changes in gene expression between control and treated plants [30].
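The 2^-ΔΔCT calculation itself is simple arithmetic; here is a minimal sketch with illustrative Ct values (not the study's data), using GAPDH as the reference gene as described above:

```python
def relative_expression(ct_target_treat, ct_ref_treat, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene by the 2^-ddCt method (GAPDH as reference)."""
    d_ct_treat = ct_target_treat - ct_ref_treat   # normalize to reference, treated
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl      # normalize to reference, control
    dd_ct = d_ct_treat - d_ct_ctrl
    return 2 ** (-dd_ct)

# Illustrative Ct values only: an upregulation pattern like circRNA527's.
print(relative_expression(24.1, 18.0, 26.3, 18.1))  # > 1 means upregulated
```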
Identification of circRNAs in pear
To investigate the circRNAs involved in drought stress, we conducted an RNA-seq experiment. Six cDNA libraries were constructed from leaves of birch-leaf pear plants exposed to drought stress and from control plants. After removing adaptors and primer sequences, as well as short low-quality sequences, we obtained 854,138,996 clean reads in total. The Q20 and Q30 scores were all greater than 90% (Table 1), indicating that the sequencing quality was high. After sequence analysis, 889 non-redundant circRNAs with an average length of 5,375 bp were obtained (S2 File), of which 614 (69.07%) ranged in length from 150 to 5,000 bp, 124 (13.95%) ranged from 5,000 to 10,000 bp, and the remainder were longer than 10,000 bp (Fig 1). Clean reads have been deposited in the National Center for Biotechnology Information (NCBI) database and are accessible through accession number SRP150567.
CircRNAs differentially expressed in response to drought stress treatment
The expression pattern of a gene can be used as an indicator of its putative biological function.
To determine which of the circRNAs were differentially expressed between the control and treatment groups (three biological replicates per condition), circRNAs were filtered based on statistical thresholds (FDR ≤ 0.05 and |log2(ratio)| ≥ 1). Thirty-three circRNAs were found to be expressed at significantly different levels in the treatment group compared to the control group: 10 were downregulated under drought stress, while 23 were upregulated (S3 File). The differentially expressed genes were visualized using a heatmap (Fig 3).
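Applying the stated thresholds to a DESeq2 results table is a one-line filter; below is a minimal pandas sketch, assuming the results have been exported to CSV with 'log2FoldChange' and 'padj' columns (the file name and column names are hypothetical, following common DESeq2 export conventions):

```python
import pandas as pd

# Hypothetical export of the DESeq2 results table (one row per circRNA).
res = pd.read_csv("deseq2_results.csv", index_col=0)

# FDR <= 0.05 and |log2(ratio)| >= 1, as in the text.
de = res[(res["padj"] <= 0.05) & (res["log2FoldChange"].abs() >= 1)]
up = de[de["log2FoldChange"] > 0]
down = de[de["log2FoldChange"] < 0]
print(f"{len(de)} differential circRNAs: {len(up)} up, {len(down)} down")
```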
Functional analysis of circRNAs involved in drought stress
To further understand the potential functions of circRNAs, we predicted and analyzed their host genes. The host genes of the 33 differentially expressed circRNAs were categorized into 171 functional groups (S4 File), clustered into the three main GO categories ("biological process", "cellular component", and "molecular function"), which contained 100, 18, and 53 functional groups, respectively. Oxidation-reduction process (GO: 0055114), comprising 11 genes, was dominant in the biological process category, and plasma membrane (GO: 0005886), with 6 genes, was dominant in the cellular component category. The top GO enrichment results are shown in Fig 4. We further analyzed the host genes of the differentially expressed circRNAs for KEGG pathway enrichment and annotated them based on their involvement in 10 pathways (S5 File). Among these pathways, three host genes were assigned to "metabolic process", while the other pathways were associated with one or two genes (Fig 5). The most common pathways included "biosynthesis of secondary metabolites", "ribosome", "protein processing in endoplasmic reticulum", "ubiquitin mediated proteolysis", and "RNA transport".
The target miRNAs of circRNAs
Recent studies have demonstrated that circRNAs can bind miRNAs to prevent them from targeting mRNAs, thereby regulating gene expression [13]. To identify the pear circRNAs that target miRNAs, we used psRobot software to predict potential miRNA binding sites. In total, 309 circRNAs were predicted to act as sponges for a corresponding 180 miRNAs (S6 File). Among these 309 circRNAs, 146 had more than two miRNA binding sites, implying that these circRNAs could serve as miRNA sponges. circRNA322 had 15 miRNA binding sites: miR5658, miR169a-5p, miR169b-5p, miR169c, miR169d, miR169e, miR169f-5p, miR169g-5p, miR169h, miR169i, miR169j, miR169k, miR169l, miR169m, and miR169n. This demonstrates that a single circRNA can target various miRNAs, and that a single miRNA can be targeted by different circRNAs; for example, miR414 could be targeted by 20 circRNAs, and miR156 could be targeted by 52 circRNAs. The potential target miRNAs of the 33 differentially expressed circRNAs are partially shown in Fig 6.
RT-qPCR validation
To validate the reliability of the transcriptome expression profiles, 8 differentially expressed circRNAs were randomly selected for expression analysis by RT-qPCR (Fig 7). The expression patterns shown in the RT-qPCR results were consistent with the RNA-seq results. For example, the relative expression of circRNA527 increased after drought stress, while the expression of circRNA822 decreased, consistent with the RNA-seq results.
This suggests that the results of our data analysis were reliable.
Discussion
In the past, circRNAs were considered to be RNA splicing errors [31]. Recent studies have identified a large number of circRNAs in mammals and demonstrated that natural circRNAs play important roles in biological and developmental processes in animals [10,32]. However, compared with animal circRNAs, little attention has been given to plant circRNAs [17,33]. Here, we report the first identification and characterization of circRNAs in Pyrus betulifolia, obtaining 889 circRNAs with an average length of 5,375 bp. Among other plants, 6,012, 12,037, 854, 113, and 496 circRNAs have previously been identified in Arabidopsis [18], Oryza sativa [17], tomato [14], Setaria italica, and Zea mays [34], respectively. The number of circRNAs we identified in pear was greater than in tomato, S. italica, and Z. mays, but fewer than in Arabidopsis and O. sativa. In Arabidopsis and O. sativa, the cDNA libraries were generated from multiple tissue types, whereas the cDNA libraries in the present study were generated only from leaves. Therefore, we likely identified only a subset of the total circRNAs in birch-leaf pear.
As circRNAs have no free 5' and 3' ends, their inherent stability makes them strong candidates for maintaining homeostasis in the face of environmental challenges [35]. circRNAs also often follow tissue- and stage-specific expression patterns [11,36]. In humans, all circular transcripts identified to date are expressed at low levels compared to the dominant canonical linear isoform [37]. A subset of circRNAs was significantly upregulated in the brain tissue of old versus young mice, whereas others were downregulated [38]. circRNAs have also been shown to be highly abundant and dynamically expressed in animal brains [39], nonalcoholic steatohepatitis samples [40], vascular cells [41], Arabidopsis under various stresses [42], and various rice tissues [12]. In this study, we showed that 33 circRNAs were differentially expressed under dehydration stress and may therefore play important roles in drought-stress tolerance in pear. The apparent regulation of circRNAs appears to be a general phenomenon.
The response to drought stress in plants is a complicated process involving numerous genes and metabolic networks, and hormone synthesis is one important factor [34]. Two host genes were associated with the response to stress. The oxidation-reduction process is another important pathway in drought stress; in this study, a total of 11 host genes were associated with this pathway. We also identified one host gene linked with ubiquitin, which may take part in signal transduction and protein degradation in response to stress [43]. "Metabolic process" was the dominant subcategory among the 100 subgroups related to "biological process". These results are consistent with those of previous research [44,45]. This provides further insight into the role of circRNAs in the drought response.
CircRNAs can serve as competing endogenous RNAs that bind miRNAs. In mammals, ciRS-7 has over 70 potential binding sites for miR-7. In tomato, 61 circRNAs functioned as sponges for 47 miRNAs [46]. In our study, 309 circRNAs were predicted to act as sponges for 180 miRNAs. We can therefore infer that various circRNAs containing common miRNA binding sites might act as miRNA sponges to regulate the response to drought stress in pear.
In summary, we identified 889 circRNAs in birch-leaf pear, among which 33 were shown to be dehydration-responsive. Functional analysis showed that the differentially expressed circRNAs were involved in many dehydration-responsive processes, such as metabolic pathways, protein processing in the endoplasmic reticulum, and biosynthesis of amino acids. A circRNA-miRNA co-expression network indicated that the circRNAs were involved in drought-responsive processes. Our results provide a rich genetic resource for the discovery of genes related to drought stress, and the approach can readily be applied to other fruit tree species. | 3,915.8 | 2018-07-17T00:00:00.000 | [
"Environmental Science",
"Biology"
] |
A survey of beetles (Coleoptera) from the tundra surrounding the Nunalleq archaeological site, Quinhagak, southwestern Alaska
Abstract This paper presents the results of a survey of beetles conducted in the vicinity of the archaeological site of Nunalleq, a pre-contact (16th-17th century AD) indigenous forager settlement located near the modern Yup’ik village of Quinhagak, in the Yukon-Kuskokwim delta, southwestern Alaska. Records and habitat data are reported for 74 beetle taxa collected in tundra, riparian, aquatic and anthropogenic environments from a region of Alaska that has been poorly studied by entomologists. This includes the first mainland Alaskan record for the byrrhid Simplocaria metallica (Sturm). Beyond improving our knowledge of the local beetle fauna’s diversity and ecology, this survey provides the basis for comparisons between modern and sub-fossil beetle assemblages from Nunalleq and Quinhagak.
Introduction
Until recently, the arthropod fauna of Alaska was comparatively less well studied than that of other states and provinces of Canada and the USA. In part this is because some regions are particularly difficult to access due to their topography, hydrology or remoteness from urban agglomerations. This is the case for the Yukon-Kuskokwim (Y-K) delta, a flat, treeless area of south-western Alaska where the tundra environment is dissected by numerous rivers, streams, lakes and ponds and underlain by discontinuous permafrost. Travel by motorised vehicle is impractical over most of the delta's expanse, making small boats and planes the most reliable means of transport within the area. Carl Lindroth and Georges E. Ball are two of the few entomologists known to have visited the region when, as part of Lindroth's seminal study of the ground beetles of Canada and Alaska (Lindroth 1961, Lindroth 1963b, Lindroth 1966, Lindroth 1968, Lindroth 1969a, Lindroth 1969b), they conducted a survey of the carabids of the western Alaskan tundra. Little entomological work has been done in the area since then (although see Hayford et al. 2014 for a survey of chironomids (Diptera: Chironomidae) from the region).
From 2013, one of the team (VF) became engaged in a scientific, community and heritage project involving the excavation of Nunalleq, a pre-contact Thule-era (16th-17th century AD) site located on the Bering coast of the Yukon-Kuskokwim delta, approximately 20 km south of the Yup'ik village of Quinhagak. One of the objectives of the project was to reconstruct past climatic conditions and human-environment interactions on the basis of ecological information derived from beetle remains preserved in the archaeology (Forbes et al. 2015). Similar palaeo-environmental reconstructions based on sub-fossil insect fauna have been conducted in Alaska, although on older deposits, most of which were not directly connected to human occupations (e.g. Bigelow et al. 2014, Elias 2000, Elias 2001, Elias and Matthews 2002, Elias et al. 1996, Elias et al. 1997, Elias et al. 1999, Elias et al. 2000, Matthews et al. 2003, Reyes et al. 2010, Reyes et al. 2011, Wilson and Elias 1986, Wooller et al. 2011). Sound knowledge of the insect fauna of the study locale is required both to successfully identify disarticulated sub-fossil remains and to derive ecological information from them (Elias 2010, Forbes et al. 2016). For this reason, it was decided to conduct a small-scale entomological survey concurrently with the archaeological excavations at Nunalleq. The objective of the survey, beyond familiarisation with the local fauna, was to obtain a sample of the modern beetle fauna from the coastal tundra and anthropogenic habitats with which the beetle sub-fossils from Nunalleq could be compared. This paper reports the list of taxa obtained and discusses their significance for palaeo-entomological and archaeo-entomological research.
Material and methods
Fieldwork was conducted during two consecutive field seasons within a 5 km radius of the Nunalleq archaeological site (59°42.559' N 161°53.510' W WGS84, Fig. 1). The first season ran from the 28th of July to the 28th of August 2014 and the second from the 2nd of July to the 10th of August 2015. In order to obtain a sample as representative as possible of the local beetle fauna's diversity (within the constraints imposed by logistics and timing), different techniques were selected (Table 1). Pitfall traps were used to capture beetles crawling on the tundra, the seashore and disturbed ground around the archaeological excavation. They were made of plastic cups (7 cm diameter and 9 cm deep) half-filled with seawater, to which a few drops of dishwashing liquid were added (Fig. 2a). To capture flying beetles, flight interception traps were used, made up of a black mosquito mesh (1.2 m high and 1.4 m wide) stretched vertically between two posts. A set of plastic food containers half-filled with seawater was placed beneath the mesh to collect the insects (Fig. 2b). The pitfall and interception traps were emptied twice per week. Additional sampling techniques included beating vegetation, dipping and sifting and/or separation using a mini-Winkler extractor (Fig. 2c) and hand collecting. Identifications of the beetle taxa were achieved through anatomical comparisons with specimens from the University of Alaska Museum Insect Collection (UAM) in Fairbanks, the Canadian National Collection of Insects, Arachnids and Nematodes in Ottawa (CNC) and the Laurentian Forestry Centre's René-Martineau Insectarium in Quebec City (LFC). For some specimens, such as those belonging to the Staphylinidae subfamilies Aleocharinae, Omaliinae and Staphylininae, as well as the Pterostichus subgenus Cryobius Chaudoir, this was facilitated by microdissections to allow observation of the genitalia. Identifications were aided by consultation of identification keys and descriptions in entomology publications (Anderson and Peck 1985, Arnett and Thomas 2001, Arnett et al. 2002, Askevold 1991, Bousquet 1990, Bousquet 2010, Bright 1987, Buchanan 1927, Campbell 1973, Campbell 1978, Campbell 1982, Campbell 1983, Campbell 1984, Campbell 1991, Casey 1884, Downie and Arnett 1994, Downie and Arnett 1996, Gusarov 2004, Hatch 1971, Johnson 1991, Larson et al. 2000, Lindroth 1961, Lindroth 1963b, Lindroth 1966, Lindroth 1968, Lindroth 1969a, Lindroth 1969b, Klimaszewski et al. 2008, Klimaszewski et al. 2011, Lohse et al. 1990, O'Brien 1970, Ratcliffe 1996, Shavrin 2016, Smetana 1971, Wells 1991). Many specimens were identified, or had their identification confirmed, by taxonomic specialists. The taxonomic classification of Bousquet et al. (2013) was used.
Data resources
Voucher specimens were donated to UAM, CNC and LFC, and the remaining specimens are currently in the care of the first author. Data for specimens that were donated to UAM (accession: UAM-2014.20-Forbes-Ento) can be accessed through the Arctos database using the following link: http://arctos.database.museum/saved/QuinhagakColeoptera. The full dataset is archived online and can be accessed at: doi.org/10.6084/m9.figshare.5630296.v1.
Results and discussion
This survey recovered a total of 500 beetle specimens belonging to 74 different taxa and spanning 15 families (Suppl. material 1, Figs 3, 4, 5). In total, 61 of the 74 taxa collected were successfully identified to species level. Of those, 50 are Holarctic in distribution, with the remainder consisting of Nearctic species. One species, Simplocaria metallica (Sturm), is of Holarctic distribution but considered adventive in North America (Bousquet et al. 2013), where it has been recorded from Labrador, Newfoundland, Atlantic Canada and Greenland (Majka and Langor 2011). In Alaska, it has previously been collected from St. Matthew Island, but the record from Quinhagak is the first for mainland Alaska.
Thirty-four of the identified taxa may be first records for the Y-K delta region. Many of these were collected in regions adjacent to the Y-K delta (e.g. the Seward, Alaska and Kenai peninsulas, as well as central Alaska). This applies to Notiophilus borealis Harris, Elaphrus lapponicus Gyllenhal, Hydroporus lapponum Gyllenhal, H. morio Aubé, H. striola (Gyllenhal), Acidota quadrata (Zetterstedt) and Eucnecosum cf. tenue LeConte. They have probably been established in the Y-K delta for a long time, but were perhaps never collected before simply due to geographical sampling bias.
This survey also produced several records of Amara alpina (Paykull). This species is generally considered an indicator of cold climates in palaeo-entomological studies (Elias 2010). Seventeen specimens were collected at Quinhagak, which is characterised by a subarctic coastal climate. The nearest observational data come from Bethel Airport (approximately 115 km northeast of Quinhagak), where mean winter (January) and summer (July) temperatures for the period 1987-2016 are -14.2 and 13.4°C, respectively (NOAA 2017). Additional records in Alaska include those from St. George Island, ca. 50 km off the coast of mainland Alaska, and Round Island, just 15 km south of the Y-K delta. Several records from the coast of the Y-K delta also appear in a distribution map for the species provided in Lindroth (1963a). The subfossil record suggests that the species occupied unglaciated regions of Alaska and the Yukon throughout the Quaternary (Reiss et al. 1999). Palaeo-entomological and genetic data identify this region as the principal centre of origin for A. alpina and other arctic and subarctic species that dispersed throughout northern areas of North America following the last glacial maximum (Ashworth 1988, Ashworth 1996, Reiss et al. 1999). Three dytiscid specimens were identified as belonging to the Ilybius angustior (Gyllenhal) complex. These appear to be closely related to I. angustior, a Holarctic species occurring in still water with abundant vegetation (Larson et al. 2000). However, the Quinhagak specimens differ in size and colour, as well as in the shape of the male metatarsal claws and aedeagus (Larson, personal communication, 2016).
Ecological grouping of taxa
Each identified taxon has been classified into an ecological group (Fig. 6). The 'Xeric' group contains ground beetle species that prefer dry conditions and live in the open, on ground with little to no vegetation cover. Taxa that are typical of mesic tundra habitats, which encompass the shrub tundra but also moderately moist areas of the open tundra, have been attributed to the 'Mesic' group. This includes members of the subgenus Cryobius and rove beetles such as Eucnecosum spp. This group is dominant in this assemblage, totalling about half the beetle specimens captured (Fig. 7). Beetles preferring wet habitats and the banks of lakes or rivers were placed in the 'Hygroriparian' group, which includes several carabids and rove beetles, but also the scirtid Cyphon variabilis (Thunberg) and the brachycerid Notaris aethiops (Fabricius). The elaterid Hypolithus littoralis Eschscholtz occurs on ocean beaches and is the only species attributed to the 'Seashore-associated' group. The 'Aquatic' group contains the eight predacious and scavenging aquatic beetle taxa identified in this study. Most aleocharines have been placed in the 'In decomposing matter' group, alongside other rove beetles as well as mycetophagous beetles (e.g. Atomaria sp. and Corticaria sp.) and carrion beetles (Colon politum Peck & Stephan, Catops alpinus Gyllenhal, Thanatophilus lapponicus (Herbst) and T. sagax (Mannerheim)). The click beetle Neohypdonus restrictulus (Mannerheim) was also placed in this group, as it is believed to be omnivorous, feeding on decaying animal and plant matter (DSS, unpublished data). The 'Plant-associated' group includes taxa that feed directly on plants, but also the predator Coccinella trifasciata perplexa Mulsant, which preys on arthropods closely associated with plants (cf. Matthews 1983).
Figure 6.
Grouping of identified taxa according to their habitat/ecology. In red font are those taxa that belong to mesic, hygrophilous and riparian environments, but that are known to be associated with microhabitats available in decaying plant matter (e.g. leaf litter, flood debris).
Figure 7.
Relative proportion of each ecological group (as defined in Figure 6) represented in the sample. Percentages were calculated on the basis of the total number of Coleoptera within the sample.
Taxa that are typical of mesic to wet tundra habitats and which occur on both sides of the Bering and Chukchi Seas are the most represented in this survey (Fig. 7). About a third of the species identified have been recorded from other late Quaternary sites also located in south-western Alaska, both inland and just west of the Alaska Peninsula. The proportion occupied by hygrophilous and riparian beetles is, however, surprisingly limited in view of the abundance of water in the area. This is no doubt in part due to the fact that sampling took place during the driest period of the year, although sampling bias probably also played a role, given that mesic tundra habitats were more easily accessible and sampled than wetter ones. Notable absentees from this study include several Stenus species, which are common in other subfossil assemblages of Alaska and Siberia (e.g. Elias et al. 1996, Elias et al. 2000). Here, only two specimens were collected. Many Stenus species are riparian, hunting prey at the muddy banks of ponds or streams, and are better retrieved through hand collection, by lifting rocks or streamside washing, for example. Diverse Stenus species probably occur around the Nunalleq archaeological site, but unfortunately their niche(s) seem to have escaped the authors' attention. It is also interesting that the only Bembidion species identified in this survey is described by Lindroth as 'not at all riparian' (Lindroth 1963b), given that the majority of species from this genus live close to water.
Many of the taxa included in the 'Xeric' and 'Mesic' groups are typical of tundra environments, but appear to exploit niches provided by decomposing organic matter, for example rotting wood, leaf litter and flood debris (Fig. 6). This poses an interesting problem for archaeo-entomological interpretations. The most common aim of such studies is to produce high (spatial and temporal) resolution reconstructions of ecological conditions and activity areas within settlements. In this context, the importance of a species' microhabitat preference (e.g. decaying vegetation) may outweigh that of its macrohabitat (e.g. tundra). Human settlements have been shown to generate an abundance of nutrient-rich ecological niches that is unmatched in natural situations (Forbes et al. 2014, Forbes et al. 2017). Indeed, subfossil insect faunas extracted from floors and middens on archaeological sites are typically dominated by predators and mould-feeders in decomposing vegetation, many of which are known to occupy similar niches in forest litter, mammal and bird nests and burrows, or tree hollows (Kenward and Allison 1994). This is the case not only for permanent urban and rural settlements, but also for the seasonally occupied houses of arctic and subarctic foragers, which are strongly dominated by taxa such as Aleocharinae indet., Eucnecosum spp. and Pycnoglypta spp. (Forbes et al. 2017). It is therefore likely that tundra species known to inhabit decomposing matter in natural situations were able to colonise the nutrient-rich niches available inside and around sod dwellings in the past. It is worth noting that this survey collected several species typical of tundra environments (e.g. Carabus truncaticollis Eschscholtz, Diacheila polita (Faldermann), Pterostichus (Cryobius) similis Mannerheim and Pterostichus agonus Horn) in synanthropic situations. Future archaeo-entomological analyses at Nunalleq will hopefully clarify the significance of these beetles in the reconstruction of past foraging lifeways and ecology. | 3,388 | 2018-02-03T00:00:00.000 | [
"Biology",
"Environmental Science",
"History"
] |
Intrinsic dimensionality of human behavioral activity data
Patterns of spatial behavior dictate how we use our infrastructure, encounter other people, or are exposed to services and opportunities. Understanding these patterns through the analysis of data commonly available through commodity smartphones has become an important arena for innovation in both academia and industry. The resulting datasets can quickly become massive, indicating the need for concise understanding of the scope of the data collected. Some data is obviously correlated (for example GPS location and which WiFi routers are seen). Codifying the extent of these correlations could identify potential new models, provide guidance on the amount of data to collect, and even provide actionable features. However, identifying correlations, or even the extent of correlation, is difficult because the form of the correlation must be specified. Fractal-based intrinsic dimensionality directly calculates the minimum number of dimensions required to represent a dataset. We provide an intrinsic dimensionality analysis of four smartphone datasets over seven input dimensions, and empirically demonstrate an intrinsic dimension of approximately two.
Introduction
How people move through and make use of space is fundamental to disciplines as disparate as Architecture, Public Health, Civil Engineering and City Planning [1], and informs outcomes from the spread of contagious disease to the role of transit in moving people from place to place [2,3]. Models of how space is accessed and used have traditionally been derived from survey [4] or observational data [5], but in the last 20 years, electronically mediated data acquisition has become the norm [6,7], enabled by advances in GPS positioning [8], indoor positioning [9] and distributed sensing devices through body sensor networks and the Internet of Things [10,11].
The mass adoption of the smartphone over the last decade has opened new research horizons for researchers interested in studying spatial behaviour, given the diverse array of sensors available on the phone and the ability of the phone to issue short surveys [6], often known as ecological momentary assessments. The radios from the GPS, cellular and WiFi sensors can be leveraged through trilateration or related techniques to provide location [12,13]; accelerometers, gyroscopes and magnetometers can be used to infer types of activity [14,15] and modes of transportation [16]. Screen state, camera and app usage data can support inference of aspects of a person's affect [17] and personality [18]. Even charging behaviour can be linked with important metrics, such as those related to sleep patterns [19].
The change in data volume and quality has changed the way that analysis must unfold. While traditional survey- and observation-based methods could be treated with frequency-based statistical tests [4,5], and simple analysis of location traces gathered through technologies such as GPS can be manipulated using spatial statistics [12], diverse datasets employing data from a number of high-velocity sensors can be more difficult to analyze at scale [20]. A myriad of statistical and machine learning techniques have been used to map smartphone-derived sensor data to actionable outcomes [1], but most make the assumption that each channel offers incremental information beyond the last. While some measures are almost certainly correlated (for example, GPS location and the visibility of specific cellular towers or WiFi hotspots), other co-variations are less obvious and likely to be more textured (for example, app use and activity level). As described by Camastra [21], the use of unnecessary dimensions can result in many problems, such as the space needed to store the data, the speed of algorithms, and difficulty building good classifiers due to the curse of dimensionality.
Some initial studies on the incremental information value of different spatio-temporally correlated measures have been conducted. In particular, linear methods such as principal component analysis (PCA) have been employed to determine which dimensions of behaviour account for most of the variance within different measured dimensions of spatial behaviour [14]. However, these methods do not paint an accurate picture of the fundamental correlation between dimensions, as they are limited by their underlying assumptions, often linearity, normality or stationarity [22,23]. Neural networks are a common alternative approach for non-linear PCA, but they have also shown performance limits [24]. Approaches applying PCA locally on non-linear manifolds can be found in algorithms such as local-PCA [25] and OTPMs PCA [26]; however, these algorithms do not guarantee coverage of the whole dataset. The Isomap [27] and C-PCA [23] algorithms, based on some PCA features and nearest neighbor (NN) distances, do guarantee a global estimation by preserving the distances of the original data. While these methods provide algorithmically actionable measures of dimensionality, they explicitly do not provide insight into the minimum possible number of dimensions required to represent a function, stochastic variable or dataset [21,28], also called the intrinsic dimensionality (ID).
More reliable global ID estimators have been proposed. Costa and Hero [29] extended the ISOMAP algorithm by creating a graph of the data sample and then pruning it to the geodesic minimal spanning tree (GMST); the ID is estimated from the GMST length. The maximum likelihood estimator [28] is another global estimator based on probabilistic assumptions. Among these ID estimators, fractal-based methods are extensively explored in the literature and have been shown to be good ID estimators [30,31]. As shown by Lee and Verleysen [31], among PCA, local-PCA, a "trial and error" method, and a fractal-based method, the latter clearly gave the best estimation of ID. Fractal-based methods also have a well-established record of utility outside of GIS and have been used to inform both state space and time series models of behaviour [32], particularly in the health sciences [33]. The Box Counting dimension is one of the most popular fractal-based methods [21]. Camastra and Vinciarelli [30] proposed a fractal-based method to estimate the ID of a dataset using the correlation dimension definition [34] as a substitute for the Box Counting dimension, due to its computational simplicity. Traina Jr. et al. [35], on the other hand, addressed the Box Counting dimension by developing a multi-level grid structure: each cell of the grid records the number of data points that fall within it, and if a cell has at least one data point, a new grid is generated for the next level with cells of half the size. Their algorithm showed a computational cost linear in the number of dataset points (N), that is, O(N). However, storage remained a problem, with a complexity of O(N); Wong et al. [36] therefore proposed a fractal-based algorithm, Tug-of-War, with the same computational cost, O(N), but a storage cost of O(1).
When narrowing the use of ID to human behavior, previous studies have usually focused on visual tracking using body sensors or cameras [37][38][39]. There is a lack of studies applying intrinsic dimensionality to smartphone sensors to understand human behavioral activity more broadly. Investigating the ID of datasets is the province of complexity theory [23]. A single metric for analyzing the complexity of human behavioral and spatial data is desirable, as it can describe the overall complexity of a dataset and inform model design, data collection and interpretation. For instance, large dimensionalities indicate low predictability and a low likelihood of model-building success, while lower-dimensional spaces are more likely to be amenable to modelling and analysis. The ID of human behavior can also be used as a metric representing relative information content between populations: populations which exhibit little covariance across measured dimensions are represented by larger dimensions, and conversely, populations which exhibit strong correlation between measured features exhibit lower dimensionality.
In this paper, we establish the viability of analyzing smartphone mobility and activity data using dimensionality, and provide baseline insight into its value and form over seven features and four datasets. To calculate their ID, we apply a Box Counting dimension implementation similar to the one described by Traina Jr., Traina, and Wu [35] but, unlike Tug-of-War [36], we do not improve the storage cost, because our datasets are not large enough to encounter memory issues on modern computers. Using a tree decomposition, we are able to probe the structure of the dimensionality, a novel step in the dimensionality analysis of datasets. We demonstrate that, for the datasets under consideration, between 1.82 and 1.90 dimensions are fundamentally required to represent the data. Post-hoc PCA analysis leads to some additional insight, but is inconclusive on the structure of the dimensions, indicating that the relationships between the human behavioral activity dimensions are, in the limit, non-linear. With the post-hoc PCA analysis, we show that researchers need to exercise care when assuming that human behavioral data is linearly correlated, and that a non-linear method is preferable. We do not propose such a method here, because Box Counting provides an estimate of the ID but not the parameters which contribute to that minimum representation of the data; instead, we use PCA to contrast the number of dimensions computed under linear and non-linear assumptions for the data in question.
Experimental setup
Our investigation sought to determine the intrinsic dimensionality (ID) of four different datasets containing smartphone sensor metrics using the Box Counting dimension (this algorithm is explained in the section Intrinsic Dimensionality). We selected 7 dimensions that were present across the four datasets and described human behavioral activity: time (hour only), latitude, longitude, acceleration, standard deviation of acceleration, battery status, and WiFi connectivity. We aggregated, merged, filtered, and normalized the datasets; the final number of records per dataset varied from 109,986 to 231,382. We then inserted these records into a dataset-specific n-D Tree structure to calculate the required parameters for the Box Counting dimension. The slope of the log-log plot of these data points was calculated to estimate the ID, as prescribed in [21,35,40].
Datasets
Several human behavioral studies [3,6,7,9] have used the Saskatchewan Human Ethology Datasets (SHEDs), which contain various smartphone-sourced sensor records. In this paper, we used the four most recent SHEDs: 7, 8, 9, and 10, which were collected via the Ethica mobile app [41]. The study was approved by the University of Saskatchewan Behavioural Ethics Board under file number BEH 14-293, and consent was obtained in written form. Each participant had a unique code assigned in place of his/her name for the sake of anonymity, and the data were structured in a MySQL (version 14.14, distribution 5.5.24) database. The collection duration and the counts of participants and records for each SHED are provided in Table 1.
Ethica [41] collects sensor data using a duty cycle, typically for one minute every five, over a number of smartphone measures including but not limited to location, activity and phone use. Additional data streams, such as phone orientation, activity type, and GIS measures such as convex hull, are either directly available or computable through post-processing. Because this work is meant to establish the viability of the Box Counting approach and to study its utility, we selected seven relatively low-level features available across all datasets. Because geographers have long established the importance of time and place [42,43], we selected hour of the day, latitude and longitude as key indicators. We additionally report the number of WiFi nodes visible in a duty cycle. While this is not directly a measure of location, it is likely to correlate with location, as participants are likely to see the same WiFi routers in the locations which they regularly visit. Phones are also often used to measure activity [1,14,15]. Raw measurements from the accelerometer and gyroscope are seldom useful for direct conclusions about activity, but the mean and variance of the norm of the acceleration have been shown to loosely correlate with whether or not a participant is physically active. Finally, we record the battery state (charging or discharging). This loosely corresponds to both time and location (people tend to plug phones in at the same time and place, for example, at the bedside before sleeping), as well as phone use (greater use leading to faster depletion and more frequent charging) [19]. Between these columns we have captured, at least tangentially, the three most common uses of smartphone telemetry: mapping people through time and space, recording and reporting on activity, and tracking use of the phone itself. On the surface, these would appear to be distinct classes of measurement, only tangentially related, which can be appropriately probed with a dimensionality analysis, as opposed to a set of parameters that are clearly correlated, or impossible to correlate. Latitude and longitude are measurements of continuous variables, at least down to the noise floor of the sensor, as are the norm and variance of the accelerometer, further indicating the potential utility of ID as a measure of dataset complexity.
Data preprocessing. We chose to use the five-minute Ethica duty cycle noted above as the time quantum for analysis. However, most data streams report multiple values over the minute that the duty cycle is active. We therefore aggregated each data stream for each participant to the duty cycle level for each metric, such that for each duty cycle there was at most a single entry per participant. By aggregating, we achieved a consistent number of records per duty cycle and summarized the information. Consequently, noise in signals such as GPS was suppressed, which is likely beneficial as we are interested in analyzing human mobility, not GPS precision. In contrast to the other data sources, the GPS table is not guaranteed to contain an entry for every participant duty cycle, because the participant could have been indoors and unable to receive a GPS satellite signal. In cases where no GPS data was reported, the entire record was ignored, biasing our analysis towards outdoor behavior. Data was then merged across participants and duty cycles such that an entire dataset with all seven features was contained in a single table. The resulting datasets were filtered and normalized. Normalization bounds were determined within a dataset, but across participants. Python 3.6 with pandas 0.20.3 was used for the filtering process, and Java 1.8 for the aggregation and normalization. The criteria for aggregation, filtering, and normalization (feature scaling) for each of the 7 features are described below:
Timestamp
For aggregation, we took the most recent timestamp and mapped it to the hour of the day, which was normalized to the range hour ∈ [0, 1].
Longitude and Latitude
We took the last (most recent) record when aggregating to the duty cycle level. Due to limitations of GPS localization caused by the nominal array of satellite connectivity [3], we decided to filter our data to the bounds of Saskatoon, Saskatchewan, Canada, where the SHED datasets were predominantly collected. Therefore, any longitude values less than -106.7649138128 or greater than -106.52225318, and any latitude values less than 52.058367 or greater than 52.214608, were removed. After the filtering, the values of lon and lat were normalized: lon, lat ∈ [0, 1].
Acceleration Norm
Since this metric is composed of acceleration in the x, y and z directions with respect to the phone, acceleration was averaged over each duty cycle and combined across spatial dimensions using the L2 norm. Outliers greater than three standard deviations from the mean were removed. The acceleration norm was then normalized between 0 and 1 for each dataset.
Standard deviation of acceleration
The standard deviation of the acceleration over a time window can be used as a simple measure of whether a person is active. We calculated the L2 norm for each accelerometer reading during each timestep as above, and then calculated their standard deviation over a single duty cycle. Outliers greater than three standard deviations from the mean were removed. Acceleration standard deviation was then normalized between 0 and 1 for each dataset.
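As a concrete illustration of the two accelerometer features just described, the following is a minimal pandas/NumPy sketch (not the authors' Java pipeline; the column names participant, duty_cycle, ax, ay and az are hypothetical):

```python
import numpy as np
import pandas as pd

def summarize_acceleration(raw: pd.DataFrame) -> pd.DataFrame:
    """Aggregate raw accelerometer readings to one row per duty cycle."""
    # L2 norm of each raw (ax, ay, az) reading
    raw = raw.assign(norm=np.sqrt(raw.ax**2 + raw.ay**2 + raw.az**2))
    agg = (raw.groupby(["participant", "duty_cycle"])["norm"]
              .agg(["mean", "std"])
              .rename(columns={"mean": "accel_mean", "std": "accel_std"})
              .reset_index())
    # Remove outliers beyond three standard deviations, then min-max scale
    for col in ("accel_mean", "accel_std"):
        mu, sigma = agg[col].mean(), agg[col].std()
        agg = agg[(agg[col] - mu).abs() <= 3 * sigma]
        agg[col] = (agg[col] - agg[col].min()) / (agg[col].max() - agg[col].min())
    return agg
```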
Battery Status
The most recent record was considered when aggregating the data. The battery status records whether a mobile device is "Charging AC", "Charging USB", "Charging Wireless", or "Not Charging", and is a useful proxy for whether the phone is being carried by the participant. For normalization, 0 was assigned to "Not Charging", 0.25 to "Charging AC", 0.5 to "Charging USB", and 1 to "Charging Wireless".
WiFi connectivity
For aggregation, we counted the number of unique WiFi router MAC addresses (wifi) that a participant observed in a given duty cycle. Any entry that had a wifi count of 0 was removed. Then, the value of wifi was normalized: wifi ∈ [0, 1].
The final step was to remove duplicates, since there were rare cases where two or more data points presented the same values for the 7 selected dimensions, which can confuse the Box Counting algorithm. Table 2 shows the number of participants and records for each dataset before and after selecting the sensor metrics of interest, filtering and normalizing the data.
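The per-feature rules above can be collected into a single pass. The sketch below is a hedged pandas reconstruction of the filtering and normalization stage (the authors used Java for this step); the input column names and the datetime dtype of timestamp are assumptions, and the accel_mean/accel_std columns come from the earlier aggregation sketch:

```python
import pandas as pd

BATTERY_CODES = {"Not Charging": 0.0, "Charging AC": 0.25,
                 "Charging USB": 0.5, "Charging Wireless": 1.0}
LON_MIN, LON_MAX = -106.7649138128, -106.52225318  # Saskatoon bounds from the text
LAT_MIN, LAT_MAX = 52.058367, 52.214608

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    """Filter, scale and de-duplicate the merged duty-cycle table."""
    df = df[df.lon.between(LON_MIN, LON_MAX) & df.lat.between(LAT_MIN, LAT_MAX)]
    df = df[df.wifi > 0].copy()               # drop cycles with no visible routers
    df["hour"] = df.timestamp.dt.hour / 23.0  # hour of day scaled into [0, 1]
    df["battery"] = df.battery_status.map(BATTERY_CODES)
    for col in ("lon", "lat", "wifi"):        # min-max scale within the dataset
        df[col] = (df[col] - df[col].min()) / (df[col].max() - df[col].min())
    keep = ["hour", "lat", "lon", "accel_mean", "accel_std", "battery", "wifi"]
    return df[keep].drop_duplicates()         # identical 7-D points confuse Box Counting
```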
Intrinsic dimensionality
Intrinsic dimensionality (ID) is the minimum number of dimensions required to represent a dataset without losing information [30]. Based on the definition provided by [25,30], the ID of a dataset with d nominal dimensions is equal to M only if all of the dataset elements lie within an M-dimensional subspace. A popular approach to obtain the ID is the Box Counting dimension, a fractal-based method, which is a simplified version of the Hausdorff dimension [21]. For the rest of the document, we refer to the Box Counting dimension as dimensionality, for brevity. To calculate the dimensionality of a set F, we draw a number of hypercubes of side length ε and count the number of hypercubes, N(ε), that cover the set F for decreasing values of ε. The dimensionality is defined as follows:

$$D = \lim_{\varepsilon \to 0} \frac{\log N(\varepsilon)}{\log(1/\varepsilon)} \quad (1)$$

where ε represents the side length of the hypercube and N(ε) is the number of hypercubes of side length ε containing data. Since the dimensionality is applied for different sizes of ε, the ID is estimated from how N(ε) changes as ε becomes finer. In addition, as pointed out by Camastra [21], it has been proven that the following should hold to obtain an accurate ID estimation:

$$d < 2 \log_{10} n \quad (2)$$

where d is the nominal number of dimensions and n is the number of records. Inequality 2 is satisfied for all of the datasets considered in our work, SHED7-10, with d = 7 and n as shown in Table 2, where 2 log10 n is always greater than 10. Based on Eq 1, the data needs to be structured in hypercubes for different ε values. Therefore, we developed an n-Dimensional (n-D) Tree, which allowed us to regularly specify a hypercube in more concrete terms for different values of ε. The following subsections explain in detail the n-D Tree decomposition and how we estimated the ID. Our algorithm is presented in Algorithm 1, which was developed in Python 3 (in outline: while the hypercube H with side length ε that contains a new record t is not a leaf, descend and insert t into H). Following [35], we represented a d-dimensional hypercube as a node of a tree structure that we termed an n-Dimensional (n-D) Tree, which allowed the insertion of participant records (data points) according to the coordinate ranges of the hypercubes to which they belonged. However, while Traina Jr. et al. [35] assume that the hypercubes' positions are always known, we calculated their positions for each ε. In their methodology, each hypercube only stores the count of the data points it possesses; in contrast, we chose to insert the data points into the hypercubes so that we retain the structure of our data. Therefore, our participants' records and hypercubes' coordinates had the general forms (r_1, r_2, ..., r_d) and [(a_1, b_1), ..., (a_d, b_d)], respectively, where d is the maximum number of dimensions (in our case 7), r_i is the participant record value at dimension i, and a_i and b_i are the minimum and maximum coordinates for the respective dimension i. A data point belongs to a hypercube if it is greater than a_i and less than or equal to b_i, which handles the special case of a data point lying on one of the hypercube's axes. For the sake of efficiency, we constrained each coordinate axis to the range [0, 1], motivating the normalization of our data.
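To make Eq 1 concrete, here is a minimal flat-grid estimator of the Box Counting dimension for points normalized to [0, 1]^d. It counts occupied boxes at dyadic scales ε = 2^-l rather than building the authors' n-D Tree, so it is an illustrative sketch, not their Algorithm 1:

```python
import numpy as np

def box_counting_id(X: np.ndarray, max_level: int = 9) -> float:
    """Estimate the Box Counting dimension of points in [0, 1]^d.

    At level l the unit hypercube is cut into boxes of side eps = 2**-l;
    N(eps) is the number of occupied boxes, and the ID is the slope of
    log N(eps) against log(1/eps) over the (roughly) linear levels.
    """
    log_inv_eps, log_n = [], []
    for level in range(1, max_level + 1):
        cells = np.floor(X * 2**level).astype(np.int64)  # box index along each axis
        cells = np.minimum(cells, 2**level - 1)          # points sitting exactly at 1.0
        n_occupied = len({tuple(row) for row in cells})
        log_inv_eps.append(level * np.log(2))            # log(1/eps) = l * log 2
        log_n.append(np.log(n_occupied))
    slope, _intercept = np.polyfit(log_inv_eps, log_n, 1)
    return float(slope)
```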
All data points are encompassed by the root node, as it spans [0, 1] on all dimensions. The root node of the n-D Tree represented the spanning hypercube, which was then subdivided into smaller hypercubes. A hypercube containing more than one data point is split into 2^d child hypercubes of half the side length. Therefore, a new set of coordinate ranges had to be calculated for each child, based on the parent hypercube's coordinates. For every axis representing a range, we used the lower and upper bounds to calculate the new coordinates: the bounds were added and divided by 2 to obtain the midpoint, so the new coordinates form a tuple set of the form [(lower, middle), (middle, upper)]. Once we created a list of all the new possible coordinates, we used the Python itertools package to generate the Cartesian product of the new coordinates. Finally, the points inserted into the parent hypercube were re-inserted into the newly allocated, corresponding child hypercubes. This method of continuously partitioning the initial hypercube kept the originally inserted data points in the leaves, since leaves are the nodes containing only one data point. It is worth noting that this method has a larger memory footprint than other methods, but provides the benefit of post-hoc analysis of the tree structure. With the n-D Tree, it was possible to calculate the parameters in Eq 1. The proportion of hypercubes containing data is critical for calculating the N(ε) parameter, reflecting the fact that the total number of hypercubes at level l is (2^d)^l, where l is the current level and d is the nominal dimension of the dataset. Since our algorithm does not expand nodes with only one data point (leaves), such data is not re-inserted at subsequent levels. Therefore, N(ε) was calculated by summing the number of nodes with data at the corresponding level plus the number of leaves from previous levels, so that all data points were considered for each ε. Determining ID. Among the proposed methods, the most widely used in the literature to estimate the Box Counting dimension is the slope of the linear part of log(N(ε)) vs. log(1/ε) [21,35,40]. To select the linear part of the log-log plot, we found the level of the tree where the number of nodes containing data begins to decrease, because after this level, N(ε) increases slowly as ε decreases, resulting in an asymptotic rather than linear curve. Because the asymptotic curve starts between the points where the tree switches from an increasing to a decreasing number of nodes with data, we fitted a line for both cases and took the average of their slopes as the final ID value. We used the numpy (version 1.13.1) function polyfit to fit the lines.
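The child-coordinate computation described above can be sketched in a few lines; this hedged reconstruction uses itertools.product exactly as the paper describes, halving each axis range at its midpoint:

```python
import itertools

def child_coordinates(bounds):
    """Split a hypercube [(a1, b1), ..., (ad, bd)] into its 2**d children by
    halving every axis at its midpoint and taking the Cartesian product."""
    halves = [[(a, (a + b) / 2), ((a + b) / 2, b)] for a, b in bounds]
    return list(itertools.product(*halves))

# The 2-D root cube [(0, 1), (0, 1)] yields its four quadrants:
# [((0, 0.5), (0, 0.5)), ((0, 0.5), (0.5, 1)),
#  ((0.5, 1), (0, 0.5)), ((0.5, 1), (0.5, 1))]
```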
Results
We calculated the intrinsic dimensionality over the four datasets, SHED7, SHED8, SHED9, and SHED10, on a MacBook Pro 2.7 GHz Intel Core i7 with 16 GB 1600 MHz DDR3 RAM. Table 3 shows the runtime and memory usage of our algorithm for each dataset, not including data download time from the database. Run time accounts for the building of the tree and the estimation of the ID.
n-D Tree structure
Analyzing the n-D Tree's sparsity is important to identify how the data points are concentrated and, therefore, how the dataset can best be described. We therefore first analyzed how our 7-D Tree partitioned the data points for each dataset. Fig 2 shows how the latitudes and longitudes were partitioned for the first four levels of SHED10, the most recent of the SHEDs.
Each cell represents a node of the tree, and those in bold are the nodes containing data at the respective level. We can see that the tree becomes sparser as it becomes deeper. All datasets exhibit the following trend: in levels 0 and 1, 100% of the nodes contain data, while in levels 2 and 3, this is no longer true. We then further investigated this trend for all 7 dimensions by calculating the proportion of nodes containing data (n_data) relative to the total number of nodes (n_cells) at each level. As shown in Fig 3, all of the original data points were inserted into the leaves within no more than 64 levels (ε = 7.74E-21). The proportion of nodes with data decays abruptly until the 8th level (ε = 0.0005), indicating that the rate of insertion is declining. Some of these data points were nearly identical, as at least two data points remained in the same hypercube between the 24th and 48th levels (ε = 8.51E-09 and ε = 5.07E-16, respectively). Only after the 48th level was the range sufficiently fine to separate these data points into different hypercubes, producing an increase in the proportion of nodes with data. We believe that these near-duplicate data points were the result of GPS and accelerometer noise. Because dimensionality is only valid for data above the noise floor, we ignore tree levels greater than 24 in subsequent trend analysis. The trend in Fig 3 can be described as $n_{data}/n_{cells} = c^{\,level}$, where 0.14 < c < 0.3, according to Eureqa [44], with an R² goodness of fit of around 0.99. However, we note that the R² value must be regarded with caution, as the equations involved are not linear.
Knowing at which level our 7-D Tree held the greatest number of data points is important for the algorithm, because this level indicates the break between the linear and asymptotic portions of the log(N(ε)) vs. log(1/ε) plot. Fig 4 shows the total number of nodes containing data per level as a TreeMap. We only considered levels 3 through 14, as smaller or greater levels were marginal contributors in the top right corner of the diagram and cluttered the presentation. Fig 4 shows that, regardless of dataset, the 8th level (ε = 0.0005) contains the plurality of the data points. In addition, we can see in Fig 5 that the tree reached its maximum expansion around the 8th level, which indicates that the tree starts to shrink after this level. As described above, in the log(N(ε)) vs. log(1/ε) plots, an asymptotic curve forms after the 8th level. Since the asymptotic curve starts between the 8th and 9th levels, we took the first 8 points and the first 9 points as the linear part of the graph, fitted a line to each, and used the average of their slopes as the ID, which proved to be between 1.82 and 1.90. The ratio of the total points from each dataset inserted into the leaves up to the 8th and 9th levels, as well as the ID for each dataset, can be seen in Table 4.
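The slope-averaging step can be expressed compactly. This is a small sketch using numpy.polyfit, assuming the per-level log values have already been collected (for example by the estimator sketched earlier):

```python
import numpy as np

def average_slope_id(log_inv_eps, log_n):
    """Average the slopes fitted to the first 8 and first 9 points, bracketing
    the break between the linear and asymptotic parts of the log-log curve."""
    s8 = np.polyfit(log_inv_eps[:8], log_n[:8], 1)[0]
    s9 = np.polyfit(log_inv_eps[:9], log_n[:9], 1)[0]
    return (s8 + s9) / 2
```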
Intrinsic dimensionality
We ran a PCA analysis using Python 3.6.1 with sklearn to determine the extent to which a linear correspondence could be found in the data. The PCA method showed that approximately 90% of the data from the SHEDs datasets can be described with 3 or 4 dimensions. Only the acceleration norm and its standard deviation (stddev) presented a strong linear correlation across all the datasets; the majority of the correlations were close to 0, though some pairs, such as battery and hour, had weak correlations. PCA was not consistent in its selection of the dimensions that best describe the datasets for the second and third principal components, PC2 and PC3, as shown in Table 5. For SHED7, battery is the primary dimension for PC2, while for the other datasets, longitude showed greater variance; for PC3, the opposite occurred. Due to the lack of correlation and the inconsistency of the eigenvectors, we conclude that the ID of human behavioral activity is based on non-linear mappings between dimensions. These results do not invalidate the utility of PCA or similar algorithms, which provide the new dimensions as well as an estimate of dimensionality, but instead serve to illustrate the difference between what the linear estimate returns and the intrinsic dimensionality of the data.
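For comparison, the linear dimension count can be reproduced with a few lines of scikit-learn. This sketch reports how many principal components are needed to reach a target explained-variance fraction, the figure quoted above:

```python
import numpy as np
from sklearn.decomposition import PCA

def linear_dimensionality(X: np.ndarray, var_target: float = 0.90) -> int:
    """Number of principal components needed to explain var_target of the
    variance -- the linear analogue of the intrinsic dimensionality."""
    cumulative = np.cumsum(PCA().fit(X).explained_variance_ratio_)
    return int(np.searchsorted(cumulative, var_target)) + 1
```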
Discussion
In this paper, we have demonstrated the utility of the Box Counting dimension for the characterization of diverse spatial datasets. Results indicated that, for the datasets considered, the fractal dimension was remarkably consistent, with values between 1.8 and 1.9. A continuous (non-integer) dimension may appear a strange result, but it indeed describes the minimum dimension of the data. A famous example is the Sierpinski triangle, which has an ID between 1 and 2 because it is not a 1-D object, due to its infinite perimeter, nor a 2-D one, due to its lack of area [35,36]. In a human movement context, a person moving in a building would present an embedding dimension of 3 (x, y, and z coordinates), where z can be treated as discrete because it is attached to a floor; as a result, the full z dimension is not required. The fractal dimension attempts to quantify exactly how many dimensions are needed. As the data described here is spatial, one would initially expect a dimension greater than two, representing longitude, latitude and the time of day. However, because both location and time are bounded and discrete (time explicitly by hour, and space implicitly by the resolution of GPS), they can each be represented as finite arrays (time explicitly as a 24-bin array, and space as the total area divided by the resolution squared). Because all dimensions in the dataset can be regarded in this manner, as limited by intent or sensor resolution, all dimensions in the dataset are representable as single-dimensional entities, although the tables would be quite large for the continuously sensed phenomena (position and activity). Because all measured dimensions (distinct from the continuous dimensions they represent) are finite and countable, they can be represented in the limit using less than a full dimension. However, we suspect that this does not explain the entirety of the dimensionality. Logical correlations between space-time and other human activities (for example, one is unlikely to engage in calisthenics in the washroom) lead to mappings and constraints between dimensions, further reducing the dimension due to behavior. While some of the dimensionality can be attributed to the structure of the data, a significant portion must be attributed to the non-linear correspondences between features. The dimensionality reflects that human activity is inherently bounded (for example, by the surface of the earth for most people), that sensors have a fundamental limit to their resolution, and that, as established by Song et al. [45], human spatial behavior has a high degree of predictability in the limit. What this dimensionality expresses is the extent of those intrinsic constraints for these particular datasets.
Analysis of the structure of the tree indicated that the maximum proportion of the data was contained in leaves at the 8th expansion, or 2^56 discrete bins (the total number of hypercubes at this level if all nodes were divided, (2^d)^l). After this level, the Box Counting algorithm did not locate a substantial number of new data points per expansion. This indicates the minimum countable set required to reasonably represent the data, because aggregation to this level provides the most efficient and diverse representation of the data. While this approach does not describe the structure of the correlation, it can still be employed when evaluating features, as the ID with and without a given feature can be calculated to establish the incremental impact of that feature [35].
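One way to operationalize this feature-evaluation idea is a simple ablation loop. The sketch below assumes the box_counting_id estimator sketched earlier and a feature matrix X whose columns are the seven normalized features; it is an illustration, not a procedure reported in the paper:

```python
import numpy as np

def feature_ablation(X: np.ndarray, names: list) -> None:
    """Re-estimate the ID with each feature removed to gauge its impact."""
    full = box_counting_id(X)
    for j, name in enumerate(names):
        reduced = box_counting_id(np.delete(X, j, axis=1))
        print(f"without {name}: ID = {reduced:.2f} (full set: {full:.2f})")
```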
A correlation analysis of the data indicated no obvious linear correlations, except between the mean and variance of the accelerometer norm, as expected. A PCA analysis was similarly inconclusive, with different datasets ranking different combinations of parameters as important in successive eigenvectors. The inconsistency of the PCA analysis across datasets was at odds with the consistency of the dimensionality. Box Counting is by its nature more accurate in determining the ID, as it does not make the linear assumptions of PCA. However, Box Counting only returns the ID in the limit, and provides no information on what those dimensions are. If the IDs from Box Counting and PCA were the same, then the linear assumption would be valid, and the dimensions with the most variance returned by PCA could be used with confidence. Because they were not the same for our datasets, the number of dimensions from PCA did not represent the smallest possible number. We recommend that researchers employing PCA or similar techniques to provide a more concise representation of their data also include an estimate of ID, to establish the degree to which the representation approaches the theoretical optimum.
As a result of this analysis, we have made the following contributions to the study of human behavioral activity data:
Dimensional Analysis
We are the first, to the best of our knowledge, to propose and describe the use of the Box Counting dimension to characterize the structure of the spatial behavior datasets that it is now possible to collect from smartphones. This dimensionality is low considering the complexity of the variables involved, but sensible given the bounded, countable and predictable nature of sensed human behavior. These results additionally imply that much of human behavior is correlated, even if those correlations are not linear or accessible to traditional statistical modeling.
Tree Analysis If the Box Counting dimension is generated using an n-D Tree decomposition, then the structure of the tree can be probed to generate insight into the structure of the data. This is a novel use of the Box Counting dimension and constitutes a methodological contribution in and of itself. The analysis of the datasets using this technique showed that the maximum ratio of data-containing boxes occurred at 32 to 58% of the total number of data points.
Non-linearity Both correlation and PCA were unable to consistently account for the low dimension across datasets, providing strong evidence that any correlations are non-linear. Furthermore, the fractal dimension obtained through Box Counting is indicative of systems typified by feedback models. This result is sensible given the nature of the data studied, where future actions are strongly contingent on past states.
The technique we have described here is generic and could be employed for similar datasets. The code required for this analysis can be found at [46]. We would anticipate different results for intrinsic dimensionality, and a different tree structure, for different input dimensions and participant demographics. The similarity of the intrinsic dimensionality between datasets was striking, but these datasets were all built from similar demographics (students from a Canadian Prairie university). While we expect that the technique will generalize, and suspect that the intrinsic dimensionality of students at similar-sized universities in developed countries will likely be similar, we do not believe that the empirical results are universally true for all of human endeavour. The datasets we chose were of modest size. While the Box Counting dimension itself scales relatively well, holding the tree in memory does not; blindly using the code described here on large datasets could result in memory issues. This is trivially solved by not building the tree, or by using more elegant coding techniques to manage tree size and expansion.
Limitations and future work
While our paper constitutes an important contribution to the methodology of studying human spatial behavior, several limitations should be noted which point the way towards future inquiries. The four similar datasets studied demonstrate that the approach is technically sound, proved to be consistent, and is computable in a reasonable amount of time at the scale of the datasets in question. However, the scope of human endeavour encompasses more than the habits of North American university students, and the results presented here must be regarded as a baseline. Further analysis of this method to determine its consistency across a wider variety of human (or non-human) populations, to establish the variability of the measure, and on larger datasets, to establish the extent of computable solutions, constitutes a potentially impactful body of future work. Because the IDs over the four similar datasets were consistent, an opportunity stemming from this work lies in studying dimensionality as a feature capable of meaningfully distinguishing between populations or behaviors. If the ID is constant across all demographics, that would be an interesting finding regarding human behavior. On the other hand, if the ID varies with demographics or behavior, then ID becomes a plausible feature for measuring human mobility complexity.
We treated datasets in their entirety and did not investigate the distribution of dimensionality at the participant level. This could be a fruitful avenue for future research, as dimensionality may be diagnostic of individual or population differences in the same manner as entropy rate [45]. We selected what we believed to be interesting and distinct features from the dataset to examine, but this set is illustrative rather than exhaustive or authoritative. Subsequent analyses across different or larger input vectors of human behavior could yield additional insight. Finally, this work ignores the likely impact of the scale of analysis, often referred to in geography as the modifiable areal unit problem (MAUP) [47]. Because the dimensionality is related to the countability of the measurements, elements which change countability, like increases in spatio-temporal resolution or changes to the bounds of time and space considered, could have an impact. The relationship between scale and dimension is a promising future research area.
Conclusion
In this paper, we provided the first calculation of intrinsic dimensionality for human behavioral activity by analyzing seven smartphone sensor metrics over four datasets. We applied the Box Counting dimension to calculate the intrinsic dimensionality. By using a tree structure, the data was organized in a meaningful way that allowed us to compute the required parameters for the Box Counting dimension. Our methodology showed that human behavioral activity can be described with a low dimension, between 1.8 and 1.9, while the linear dimensionality reduction technique PCA required between 3 and 4 dimensions to describe 90% of the data, indicating that the correspondence between dimensions is non-linear. Further work considering datasets including diverse occupations and locations, as well as analyzing human activity at the individual level, could provide more insight into the intrinsic dimensionality of human behavior. | 8,930.6 | 2019-06-27T00:00:00.000 | [
"Computer Science",
"Psychology"
] |
Toxic, antimicrobial and hemagglutinating activities of the purple fluid of the sea hare Aplysia dactylomela Rang, 1828
The antimicrobial, hemagglutinating and toxic activities of the purple fluid of the sea hare Aplysia dactylomela are described. Intact or dialyzed purple fluid inhibited the growth of species of Gram-positive and Gram-negative bacteria and the action was not bactericidal but bacteriostatic. The active factor or factors were heat labile and sensitive to extreme pH values. The fluid preferentially agglutinated rabbit erythrocytes and, to a lesser extent, human blood cells, and this activity was inhibited by the glycoprotein fetuin, a fact suggesting the presence of a lectin. The fluid was also toxic to brine shrimp nauplii (LD 50 141.25 μg protein/ml) and to mice injected intraperitoneally (LD 50 201.8 ± 8.6 mg protein/kg), in a dose-dependent fashion. These toxic activities were abolished when the fluid was heated. Taken together, the data suggest that the activities of the purple fluid are due primarily to substance(s) of a protein nature which may be involved in the chemical defense mechanism of this sea hare.
Introduction
Many sea hares, which are opisthobranch molluscs, discharge a fluid from the purple gland when disturbed. This reaction suggests that this fluid contains bioactive factors which may act against potential enemies, since the defense mechanisms of the sea hare differ from those of highly developed vertebrates (1). Sea hare species have attracted the interest of many workers investigating the chemical compounds secreted by the purple gland or present in different tissues, possibly involved in the defense of these invertebrates. Thus, some sea hare species have been shown to contain low molecular mass substances with antimicrobial (2)(3)(4)(5)(6) and antitumor activities (7)(8)(9)(10), and also high molecular mass compounds such as those from Aplysia kurodai, Aplysia juliana and Dolabella auricularia with similar activities, which were named aplysianins (11,12), julianins (13,14) and dolabellanins (15,16), respectively. This study describes some biological properties (antibacterial, antifungal, hemagglutinating and toxic activities) of the purple fluid of the sea hare Aplysia dactylomela, which is an opisthobranch widespread along the Brazilian coast and also occurring in the intertidal zones from Southern Florida in the United States to Eastern India (17).
Collection of the purple fluid
Specimens of Aplysia dactylomela Rang, 1828 were collected at Pacheco Beach, Caucaia, State of Ceará, Brazil, in June and July. The purple fluid was obtained by irritating the hare and squeezing it gently for a few minutes outside the water. The secretion was collected into a sterile bottle and subsequently frozen at -10 °C until used.
Protein determination
A manual colorimetric procedure for measuring ammonium nitrogen in Kjeldahl digests (18) was used for the determination of total nitrogen and protein content, which was calculated using a nitrogen conversion factor of 6.25.
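The conversion itself is a single multiplication; the sketch below shows the arithmetic with an illustrative nitrogen value, not a measurement from this study.

```python
def protein_from_kjeldahl_nitrogen(total_n_mg_per_ml, factor=6.25):
    # Protein content estimated from Kjeldahl total nitrogen using the
    # standard nitrogen-to-protein conversion factor of 6.25.
    return total_n_mg_per_ml * factor

# Illustrative value only: 2.56 mg N/ml corresponds to 16.0 mg protein/ml.
print(protein_from_kjeldahl_nitrogen(2.56))
```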
Antibacterial assays
Inhibition of bacterial growth by the purple fluid samples was determined as described by Bauer et al. (19). Briefly, bacterial cultures were maintained in Müller-Hinton broth (Difco Laboratories, Detroit, MI). Sterile swabs were immersed in the microbial suspensions (10 8 cells/ml) and evenly applied to Petri dishes containing Müller-Hinton agar. Sterile Whatman AA filter paper disks (6 mm in diameter) were fully imbibed with 30 µl of the purple fluid samples and placed over the agar in the plates. Tobramycin disks (10 µg; Cecon, São Paulo, SP) were used as positive control. The plates were incubated overnight at 35 °C and then examined for zones of growth inhibition around each disk. The bacteria used were Serratia marcescens, Citrobacter freundii, Vibrio cholerae, Salmonella typhimurium and Proteus vulgaris (all from the collection of Universidade Federal do Ceará), Bacillus subtilis (ATCC 6633), Escherichia coli (ATCC 13863), Staphylococcus aureus (ATCC 6538), and Pseudomonas aeruginosa (ATCC 25619). To investigate whether the antibacterial action was bacteriostatic or bactericidal, fluid samples were serially diluted in 1% peptone broth and incubated with cells of Staphylococcus aureus and Pseudomonas aeruginosa for 18 h at 35 °C. After this period, the minimum inhibitory concentration (MIC) was determined (20) and the mechanism of growth inhibition evaluated by subculturing the cells in media without purple fluid.
Antifungal assays
Growth inhibition of Candida albicans and Saccharomyces cerevisiae (all from the UFC collection) by the purple fluid was determined as described by Roberts and Selitrennikoff (21). Briefly, agar assay plates were prepared by autoclaving Sabouraud agar medium (Difco Laboratories). After cooling to 45 °C, the yeasts were added to a final concentration of 10 7 cells/ml. Fifteen-milliliter aliquots of the suspension were dispensed into 100-mm diameter Petri dishes and allowed to solidify before placing 6-mm diameter sterile paper disks on the surface of the agar. Thirty microliters of the purple fluid was added to each disk, and plates were incubated overnight at 35 °C. Plates were examined as described for the antibacterial assay. Hyphal extension inhibition assays were done essentially as described by Mirelman et al. (22). Thus, hyphae-containing agar plugs (Aspergillus niger, Penicillium herguei and Trichophyton mentagrophytes, all from the UFC collection) were placed in the center of agar plates and the test samples were added to wells surrounding the plugs. Plates were incubated at 30 °C for 48-72 h and examined for crescents of hyphal inhibition. Nystatin (10,000 IU; Neoquímica, Anápolis, GO) was used as positive control.
Effects of dialysis, heat treatment and pH on antibacterial activity
Aliquots of the fluid were dialyzed (cutoff 12,000) thoroughly against water, at 4 °C, and subsequently tested for antibacterial activity (Pseudomonas aeruginosa and Staphylococcus aureus), as described before. Fluid samples were heated at 80 °C for 2, 5 and 15 min. After heating, the samples were cooled and centrifuged and the supernatants tested for activity. For the pH stability test, aliquots were adjusted to pH 2.0 (HCl) and 12.0 (NaOH) and kept in a refrigerator for up to 30 min. After this period, the fluid had its pH adjusted back to its original value of 6.4 and was tested for antibacterial activity.
Erythrocyte agglutination and inhibition assays
The hemagglutinating activity was assayed according to Vasconcelos et al. (23). Serial 1:2 dilutions of the fluid dialyzed against 25 mM Tris-HCl, pH 7.5, were mixed in small glass tubes with 0.25 ml of a 2% suspension of untreated or trypsin-treated erythrocytes (horse, chicken, pig, cow, rabbit or human). The enzyme-treated cells were obtained by incubation of trypsin (0.1 mg; Type I, Sigma Chemical Co., St. Louis, MO) with 25 ml of a 2% suspension of cells in 150 mM NaCl for 60 min at 4 °C. After washing six times, a 2% suspension was prepared in 150 mM NaCl. The extent of agglutination was monitored visually after the tubes had been left at 37 °C for 30 min and subsequently at room temperature for a further 30 min. The results are reported as the number of hemagglutination units (HU) per mg of fluid protein able to induce visible erythrocyte agglutination. One HU was defined as the minimum protein concentration required to produce visible agglutination. The carbohydrate-binding specificity of the protein was assessed by the ability of sugars or glycoproteins in 150 mM NaCl to inhibit agglutination of rabbit erythrocytes. The fluid was added to each tube at a concentration of 0.4 µg protein/ml, the minimum concentration required to produce visible agglutination. The lowest glycoprotein or sugar concentration giving full inhibition of agglutination was determined by two-fold serial dilution of solutions at 1 mg/ml initial concentration.
Toxicity bioassay against brine shrimp nauplii
A method using brine shrimp (Artemia sp.), proposed as a simple bioassay for research on natural products, was employed (24). Brine shrimp eggs (5 mg) were hatched in a rectangular dish (32 x 22 x 10 cm) filled with 5 l of sea water. A plastic sieve was clamped to the dish to form two unequal compartments. The eggs were sprinkled into the larger one, which was darkened, while the smaller one was illuminated. After 48 h, the phototrophic larvae (nauplii) were collected with a pipette from the lighted side and transferred (10 shrimps) to vials filled with sea water (5 ml) containing 1 drop of casein peptone solution (3 mg/5 ml) as food. Dialyzed fluid was added to the vials to final concentrations of 50, 500 and 5000 µg protein/ml. As control, a group of vials was filled with sea water containing the casein peptone solution. The vials were kept illuminated during the 24 h of contact with the substances, and survivors were counted with the aid of a magnifying glass. This assay was carried out three times with five replicates for each fluid concentration tested. To calculate the LC 50 (median lethal concentration), the results were plotted as logit % mortality vs log concentration. Logit is defined as ln (% mortality/% survival) (25).
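A minimal sketch of the logit-versus-log-concentration fit described above, assuming hypothetical mortality fractions rather than the raw counts from this study.

```python
import numpy as np

conc = np.array([50.0, 500.0, 5000.0])     # µg protein/ml (tested doses)
mortality = np.array([0.20, 0.75, 0.95])   # hypothetical fractions dead at 24 h

# logit = ln(% mortality / % survival), fitted against log concentration
logit = np.log(mortality / (1.0 - mortality))
slope, intercept = np.polyfit(np.log10(conc), logit, 1)

# LC50 is the concentration at which logit = 0 (50% mortality)
lc50 = 10 ** (-intercept / slope)
print(f"LC50 ~ {lc50:.0f} µg protein/ml")
```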
Mouse toxicity assay
Toxic activity was defined as mortality observed in mice within 24 h after intraperitoneal injections of the fluid exhaustively dialyzed against 25 mM Tris-HCl, pH 7.5. One LD 50 unit (26) was taken as the amount of protein (in mg protein/kg body weight) producing death of 50% of the tested animals (six doses; six mice per dose).
Results and Discussion
Both the crude (16.0 ± 0.7 mg protein/ml) and the dialyzed purple fluid (10.1 ± 0.6 mg protein/ml) inhibited the growth of all species of Gram-positive and Gram-negative bacteria tested, with Staphylococcus aureus, Pseudomonas aeruginosa and Proteus vulgaris being the most sensitive (Table 1). The antibacterial activity of the fluid against P. aeruginosa was reduced by 40% after dialysis, suggesting that low molecular mass components are involved in this activity. The fluids stopped the growth of S. aureus and P. aeruginosa, but the bacteria grew again after their removal from culture, indicating a bacteriostatic and not a bactericidal action of the fluids, with a minimum inhibitory concentration of 0.625 mg protein/ml. Heat treatment of the fluid at 80 °C for 2 min eliminated the inhibitory activity against P. aeruginosa and S. aureus (the species selected as target cells). Similarly, after acid treatment at pH 2.0 the inhibitory action against the two species was completely lost. Nevertheless, after alkaline treatment (pH 12.0) the inhibitory action against P. aeruginosa was lost while that against S. aureus was only reduced (the inhibition zone of 16.8 ± 4.6 mm was reduced to 9.0 ± 0.9 mm). Taken together, these data suggest that the active factor(s) is (are) probably of a protein nature. In fact, Yamazaki et al. (29) have reported that a glycoprotein is responsible for the antibacterial activity of the purple fluid of Aplysia kurodai. Likewise, other antibacterial glycoproteins have been reported to be present in different secretions of sea hares (14,30,31).
The fluid was not active against the yeasts Candida albicans or Saccharomyces cerevisiae, nor against the filamentous fungi Aspergillus niger, Penicillium herguei and Trichophyton mentagrophytes.
The results of the hemagglutination assays are shown in Table 2. The dialyzed fluid preferentially agglutinated rabbit erythrocytes and, to a lesser extent, human erythrocytes (ABO). Treatment of the cells with trypsin revealed the agglutinating activity of the fluid against chicken erythrocytes and increased the sensitivity of human cells. Nevertheless, no activity was detected when the fluid was tested against cow, pig and horse erythrocytes, even when using enzyme-treated cells. This selective agglutination may be due to the different nature of the glycoproteins protruding on the cell surface of the erythrocytes tested. The activity of the fluid varied from 0.4 to 11.4 µg/ml depending on the cell used, thus being comparable in potency to the agglutinin purified from Aplysia kurodai eggs, which was shown to react with B cells and rabbit blood cells at concentrations as low as 0.06 µg/ml (32). These authors also reported agglutinins in the serum of Aplysia dactylomela which strongly agglutinated human erythrocytes, but reacted weakly with rabbit blood cells, contrary to what was observed in the present study. These findings suggest that the hemagglutinating activity may be due to different proteins or that other constituents of these fluids may interfere with this activity. Various simple sugars were reported to be potent hemagglutinin inhibitors (33). In the present study, the agglutination of rabbit erythrocytes by the fluid was inhibited by the glycoprotein fetuin, but not by glucose, mannose, galactose, N-acetylglucosamine, N-acetyl-galactosamine or sialic acid (Table 3).
The purple fluid was shown to be toxic to brine shrimp nauplii, with a calculated LD 50 of 141.25 µg protein/ml. This effect was dose-dependent and, despite its unknown mechanism, may well involve the fluid lectin, as observed for lectins of plant origin such as those of Dioclea guianensis (23) and Cratylia floribunda (34).
The fluid was also highly toxic to mice when injected intraperitoneally (ip), causing death within 1 to 12 h, depending on the dose used. The LD 50 found was 201.8 ± 8.6 mg protein/kg body weight (20 ml of fluid/kg body weight). The typical effects observed invariably included dyspnea and convulsions preceding the death of the animals. These acute effects were very similar to those produced by soyatoxin (SYTX), a seed protein purified from mature commercial soybean sold in Brazil, which is a mixture of undefined cultivars (35). The toxic activity present in the fluid was susceptible to inactivation by heating at 92 °C for 5 min.
The electrophoretic profile of the crude and dialyzed fluids (Figure 1) showed a similar distribution of the protein bands. In both fluid samples there was a predominance of proteins with apparent molecular mass of 66.0 kDa and below 20.1 kDa. Nevertheless, in the dialyzed fluid proteins between 18.0 kDa and 36 kDa were not observed. This presumably could be due to the different amounts of protein applied in the electrophoresis, since it was our intention to maintain the same volume (30 µl) used in the antibacterial assay. These preliminary data do not allow us to establish that all of these interesting activities presented by the purple fluid of Aplysia dactylomela are due to the same component(s). Further studies are needed on this point. Nevertheless, the protein nature of the active component(s) is clear, as also is the presence of a lectin whose hemagglutinating activity is inhibitable by fetuin, a specific glycoprotein. This is the first time that lectins are reported to be present in the purple fluid of sea hares and also that the purple fluid is shown to be toxic to living systems such as brine shrimp nauplii and mice. This study supports the role of the fluid as part of a chemical defense mechanism since it inhibited the growth of many Gram-positive and Gram-negative bacteria and showed toxicity to other living systems.
Figure 1 -
Figure 1 - SDS-polyacrylamide gel electrophoresis of the crude and dialyzed purple fluid of the sea hare Aplysia dactylomela. Lane 1, standard protein markers; lane 2, crude fluid; lane 3, fluid dialyzed against water.
Table 1 -
Antibacterial activity of the crude (0.48 mg protein) and dialyzed (0.30 mg protein) purple fluid of Aplysia dactylomela. Data are reported as the mean ± SD for 3 experiments carried out in duplicate. a Nitrofurantoin for Vibrio cholerae and tobramycin for all the others.
Table 2 -
Agglutination of erythrocytes from various species by the purple fluid of Aplysia dactylomela. | 3,392.2 | 1998-06-01T00:00:00.000 | [
"Biology"
] |
Stochastic Electrical Detection of Single Ion‐Gated Semiconducting Polymers
Semiconducting polymer chains constitute the building blocks for a wide range of electronic materials and devices. However, most of their electrical characteristics at the single‐molecule level have received little attention. Elucidating these properties can help in understanding performance limits and enable new applications. Here, coupled ionic–electronic charge transport is exploited to measure the quasi‐1D electrical current through long single conjugated polymer chains as they form transient contacts with electrodes separated by ≈10 nm. Fluctuations between internal conformations of the individual polymers are resolved as abrupt, multilevel switches in the electrical current. This behavior is consistent with theoretical simulations based on the worm‐like‐chain (WLC) model for semiflexible polymers. In addition to probing the intrinsic properties of single semiconducting polymer chains, the results provide an unprecedented window into the dynamics of random‐coil polymers and enable the use of semiconducting polymers as electrical labels for single‐molecule (bio)sensing assays.
Introduction
Semiconducting polymers are fascinating electronic materials and key to rapidly evolving technologies including organic electronics, [1] solar cells, [2] and light-emitting diodes. [3][11] Here, we explore single-polymer-chain transport properties using coupled ionic-electronic charge transport, a technique for controlling the conductance in organic field-effect transistors (OFETs) and organic electrochemical transistors (OECTs). [12] The conductance of a semiconducting conjugated polymer thin film in contact with a liquid electrolyte can be modulated, or gated, via the electrostatic potential of the liquid. When the polymer film is impermeable to electrolyte ions, applying such a gate potential forms an electrical double layer (EDL) at the film surface. The EDL consists of a thin charged ionic sheet on the electrolyte side that is compensated by a quasi-2D layer of electrons or holes at the surface of the semiconductor (Figure 1a), resulting in conventional OFET operation. The charge carrier density achieved by liquid gating is significantly higher than in conventional FETs because of the high capacitance of the EDL (1-10 μF cm −2 ). When the semiconductor is permeable to ions, on the other hand, an accumulation-type OECT is formed. [13] Here ions infiltrate the 3D bulk of the semiconductor where they induce electrons or holes so as to maintain charge neutrality (Figure 1b). Because of this penetration, the effective gate capacitance of OECTs can be as high as 10-100 μF cm −2 . [14] The transport properties of single polymer chains in electrolyte represent the convergence between OFET and OECT modes of operation: the single polymer chain is electrostatically doped by a well-defined EDL, as in an OFET, yet it is simultaneously permeated by ions due to its open coil structure, as in an OECT (Figure 1c). [15] Contrary to both OFETs and OECTs, where hopping between polymer chains plays an important role, here transport can occur primarily along the backbone of the polymer in a quasi-1D fashion.
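To put the quoted capacitances in perspective, here is a back-of-the-envelope sketch (our own numbers, not from the paper) of the sheet carrier density n = CV/e induced by an EDL.

```python
E_CHARGE = 1.602e-19  # elementary charge, C

def edl_sheet_density(c_uF_per_cm2, gate_voltage_V):
    """Sheet carrier density (carriers per cm^2) induced by an EDL of
    capacitance C at gate voltage V: n = C*V/e."""
    return c_uF_per_cm2 * 1e-6 * abs(gate_voltage_V) / E_CHARGE

# Assuming a 0.5 V gate bias, the quoted 1-10 uF/cm^2 EDL range gives
# roughly 3e12 to 3e13 carriers per cm^2.
for c in (1.0, 10.0):
    print(f"{c:5.1f} uF/cm^2 -> {edl_sheet_density(c, 0.5):.1e} cm^-2")
```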
Our approach for interrogating single polymers is sketched in Figure 1d,e. We employ pairs of electrodes separated by a ≈10 nm-thick insulator layer to form a vertical OFET (VOFET) [16][17][18] and immerse this structure in an inert electrolyte. The electrode-insulator-electrode nanogap geometry is readily achieved in microfabricated devices by carefully controlling the thickness of an insulating layer and using the top electrode as a mask in a self-aligned process. [19] The resulting open architecture allows exposing the electrodes to a solution in which polymers undergoing Brownian motion can intermittently make contact with the drain and/or the source electrode. Through the action of the potential applied to a reference electrode immersed in the solution and acting as a gate, the molecules become p-doped and enter their conducting state upon contact. [20] The temporary conducting pathway thus created between the source and drain is detected amperometrically via a small potential difference (20 mV) symmetrically applied between the source and drain electrodes.
As a prototypical system, we used regio-regular poly(3-butylthiophene-2,5-diyl) (P3BT). The polymer was first solvated in chloroform, then mixed with tetrabutylammonium perchlorate (TBAP) in acetonitrile (ACN) as a supporting electrolyte (final salt concentration 2 mM). The individual chains had an average length of 〈N mer 〉 = 407 monomers (contour length 155 nm) and a solvated radius of gyration R hyd = 4.7 nm (Section SI, Supporting Information). This is larger than expected for a close-packed chain (2.6 nm based on bulk density), [21] indicating an open coil geometry. The size of the coil also matched the spacing between our electrodes, permitting simultaneous contact to both electrodes. P3BT is highly fluorescent, [22,23] which allows its visualization on the surface of the devices by optical microscopy (Figure 1f). Fluorescence measurements on polymers deposited at low concentration provided further confirmation that the polymers did not aggregate (Section SII, Supporting Information).
Results
Before attempting single-polymer measurements, we first characterized our devices in the classic VOFET configuration. To do so, we exposed the nanogap region to a 5 μM solution of P3BT in chloroform mixed with a 20 mM solution of TBAP in ACN in a ratio of 1:4. This high polymer concentration led to the formation of a semiconducting channel between the source and drain electrodes in ≈1 h (Figure S4c, Supporting Information). These films were stabilized by refilling the solution volume lost by evaporation with pure ACN, in which P3BT is much less soluble. The stable channel allowed us to record the output characteristics of the 10 nm short-channel VOFET (Figure 2). In each curve the drain-source voltage was scanned with respect to a fixed gate-source voltage. The drain current rose linearly at low drain-source voltages and saturated at drain-source voltages above pinch-off, as expected (Figure 2a). Saturation became more pronounced with increasing gate-source voltages (Figure 2b). Exposing the electrodes to much lower nM-level polymer concentrations still led to the gradual formation of a semiconducting channel between the electrodes (Section SIII, Supporting Information). In this case, the source-drain current was however smaller by 3 orders of magnitude, comparable to the change in concentration.
The character of the electrical response changed dramatically at lower (100 pM-level) polymer concentrations, however, as illustrated in the amperometric data of Figure 3. Here a constant baseline current was measured at the source electrode during quiescent periods that lasted anywhere from a few seconds to a few minutes. On occasion, the current quickly jumped to a new, approximately constant value of order 100 fA before switching back to the off state a few seconds later (Figure 3a). The drain current simultaneously underwent similar transients with the opposite polarity (Figure 3b). Figure 3c shows that the sum of the source and drain currents was constant apart from the summed noise, indicating that the current transients were fully anticorrelated. This implies that these transients correspond to currents flowing between the source and drain electrodes, presumably due to the presence of polymer material temporarily bridging the electrodes. The simultaneously measured gate current remained constant during the transients (Figure 3d), indicating that any current resulting from polymer oxidation or reduction remained undetectably small. The current noise at the gate electrode was somewhat higher than the summed current noise in Figure 3c; this was caused by unused nanogaps on the same chip that were also exposed to the electrolyte-polymer solution.
The temporal evolution of the stochastic signals exhibited a broad range of behaviors. In the simplest instances, single pulses were observed, as illustrated in Figure 4a. In other instances (Figures 3 and 4b), multiple consecutive telegraph-like switches between the conducting and non-conducting states were observed instead. More complex events were also regularly recorded involving multiple, well-defined current plateaus, as shown in Figure 4c,d.
We attribute these abrupt, reversible changes in the current to individual polymer molecules temporarily bridging the two electrodes. Considering the observed current levels and assuming a uniform longitudinal electric field along the single-molecule channel, we estimate the charge carrier mobility to be in the 10 −7 to 10 −5 cm 2 V −1 s −1 range, which is consistent with values reported for highly amorphous polythiophene films [25,26] (Section SVI, Supporting Information). We attribute the multilevel current fluctuations to variations in the internal conformation of a single polymer caused by Brownian motion. This can be expected since the polymer contour length (≈155 nm) is much longer than the electrode spacing (≈10 nm), allowing the formation of multiple contacts. While the participation of more than one molecule can never be excluded entirely in any given amperometric trace, the long quiet periods interspersed with bursts of complex activity are incompatible with a scenario where multilevel fluctuations are predominantly caused by multiple molecules (Section SIV, Supporting Information). This configuration-driven switching mechanism at constant potential is also conceptually distinct from the redox mechanism proposed earlier to explain voltage-dependent switching in surface-polymerized molecular bridges or short anchored molecules. [9,27] The telegraph-like current fluctuations occurred on a time scale of seconds, which is far too long to represent the conformational fluctuations of a fully solvated chain. For comparison, the Rouse time for polymer relaxation is ≈4 μs for our molecules, [28] five orders of magnitude shorter than our observed events. These slow dynamics are consistent with the molecules being reversibly adsorbed on the surface, slowing their diffusion and permitting the observation of extended, relatively stable current plateaus. Based on the fluorescence data (Figure 1f), we infer that this adsorption can take place on both the electrodes as well as the SiN surface between them. On the other hand, the observed time for switching between two plateaus is limited by our transimpedance amplifier rise time of 17 ms. We therefore do not resolve the dynamics during the establishment of contact, as discussed further in Section SVII (Supporting Information).
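The paper's mobility estimate is derived in Section SVI of its Supporting Information; the sketch below is our own rough transit-time variant for a 1-D channel, with every numerical input an assumption (plateau current, path length, carrier number, bias). It lands in the same 10 −7 to 10 −5 cm 2 V −1 s −1 window for plausible carrier numbers.

```python
E_CHARGE = 1.602e-19  # elementary charge, C

def mobility_cm2_per_Vs(current_A, length_m, n_carriers, bias_V):
    # For a 1-D channel with N carriers, transit time t = L^2/(mu*V),
    # so I = N*e/t and mu = I*L^2/(N*e*V).
    mu_m2_per_Vs = current_A * length_m**2 / (n_carriers * E_CHARGE * bias_V)
    return mu_m2_per_Vs * 1e4  # convert m^2/Vs to cm^2/Vs

# Assumed inputs: 100 fA plateau, 25 nm pathway, 20 mV bias, 10-100 holes.
for n in (10, 100):
    print(f"N = {n:3d} -> mu ~ {mobility_cm2_per_Vs(1e-13, 25e-9, n, 0.02):.1e} cm^2/Vs")
```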
The question arises as to the extent to which the observed behavior is dictated by contact resistance. Many reported OFETs with sub-micrometer channel lengths show deteriorated output characteristics referred to as short-channel behavior. This is characterized by an absent or strongly tilted saturation region in the output curves. [17,18,29] OFETs suffer from short-channel behavior when the contact resistances become comparable to the channel resistance. This is often attributed to an injection barrier (Schottky barrier, [30] or Fermi-level pinning [31,32]) at the polymer-electrode interface. An advantage of our ion-gated configuration is that contact resistance is not expected to be significant. [33,34] The mobile ions in solution provide a high degree of screening (Debye length ≈2.2 nm), making the transversal electric field induced by the gate electrode much higher than the longitudinal field along the channel. This is known to suppress the formation of injection barriers at the polymer-electrode interface. [33,35] Indeed, the output characteristics at higher gate-source voltages (Figure 2a) do not exhibit short-channel behavior. At less negative gate-source voltages, the output curves beyond pinch-off become slightly tilted despite the higher channel resistance. This is clearest in Figure 2b, which shows normalized drain currents. The transverse electric field at these low gate-source voltages is then insufficient to fully suppress short-channel effects. The short-channel effects are nonetheless greatly reduced and comparable with the behavior observed in long-channel OFETs. [29] Additionally, we regularly observed very slow fluctuations in the current of otherwise stable long plateaus, as illustrated in Figure 4d. This can be interpreted as slow variations in the length of the conducting pathway(s) as the polymer rearranges itself between the electrodes. This again suggests that transport along the backbone, as opposed to the contacts, dominates the overall device resistance.
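The screening argument can be checked against the textbook Debye length for a symmetric 1:1 electrolyte. The sketch below uses our own assumed inputs (relative permittivity of acetonitrile, the nominal 2 mM salt concentration) and yields a few nanometres, the same order as the ≈2.2 nm quoted above.

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m
KB = 1.381e-23     # Boltzmann constant, J/K
NA = 6.022e23      # Avogadro number, 1/mol
E = 1.602e-19      # elementary charge, C

def debye_length_nm(c_mol_per_L, eps_r, T=298.0):
    """Debye length for a 1:1 electrolyte,
    lambda_D = sqrt(eps_r*eps0*kB*T / (2*n*e^2)) with n the ion density."""
    n = c_mol_per_L * 1e3 * NA  # ions per m^3
    return 1e9 * math.sqrt(eps_r * EPS0 * KB * T / (2.0 * n * E**2))

# Assuming eps_r ~ 36 for acetonitrile and 2 mM TBAP:
print(f"{debye_length_nm(0.002, 36.0):.1f} nm")
```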
Discussion
To elucidate the origin of the current fluctuations, we first performed an autocorrelation analysis on long amperometric traces. As seen in Figure 5a, the autocorrelation function (ACF) (Section SVIII, Supporting Information) has the form ACF(τ) ∼ C − ln(τ), with C a constant and τ the time delay or lag. This indicates that the abrupt switching events do not have a characteristic time scale and instead exhibit a broad range of relaxation times. Analyzing the corresponding power spectrum of the current fluctuations provides further evidence for a mechanism distinct from conventional low-frequency electronic noise. Figure 5b shows the current noise power spectral density (PSD) S I (f) at different values of the average current 〈I〉 corresponding to different amounts of adsorbed polymers (Section SIX, Supporting Information). The spectrum has the form S I (f) ∝ f −γ with γ ranging from 0.83 to 1.13. Such 1/f noise, also known as flicker or pink noise, is ubiquitous in electrical conductors. [36] The logarithmic form for the ACF in Figure 5a is consistent with a stationary stochastic process with this spectrum. [37] 1/f noise is commonly described in terms of Hooge's empirical model, S I (f) = α H 〈I〉 2 /(N C f) (Equation (1)), with N C the number of charge carriers and α H an empirical constant. [38] We estimate from the measurement at the lowest current level (350 fA) that α H ≈ 0.02 (Section SX, Supporting Information), which falls within the broad range of values reported for disordered organic conductors (0.01-20). [39,40] Figure 5c (black squares) however shows that the measured PSD scales essentially linearly with 〈I〉 2 . According to Equation (1), this would imply that the number of charge carriers N C remains constant even as the current increases by four orders of magnitude due to the accumulation of additional material between the electrodes, an implausible scenario. In contrast, we also evaluated the rms noise current, I rms , within individual, stable plateaus exhibiting no switching events (Section SXI, Supporting Information). Figure 5c (blue symbols) shows that, to a very good approximation, I rms ∝ 〈I〉 1/2 within these plateaus. This behavior agrees with the Hooge model under the assumption that 〈I〉 ∝ N C and is consistent with changes in 〈I〉 being regulated by the amount of polymer material between the electrodes. Noise on each current plateau thus behaves as in conventional (semi)conductors, but the overall spectrum is dominated by excess noise with a different origin. This supports the interpretation that the plateaus correspond to configurations where the polymers behave as stable wires, as also occurs in thin films, whereas the abrupt switches and excess noise correspond to rearrangements of the polymer conformation exhibiting different conductive pathways between the electrodes (Figure 1d).
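A sketch of how a spectral exponent γ can be extracted from a current trace via a log-log fit to the Welch power spectrum. The synthetic random-walk signal is only a stand-in for the measured amperometric data (its exponent is close to 2 rather than ≈1), and the sampling rate and fit band are assumed.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs = 100.0                                        # sampling rate, Hz (assumed)
trace = np.cumsum(rng.standard_normal(100_000))   # toy signal with ~1/f^2 spectrum

# Welch estimate of the power spectral density S_I(f)
f, psd = welch(trace, fs=fs, nperseg=8192)

# Fit log S_I = -gamma * log f + const over a chosen frequency band
band = (f > 0.02) & (f < 10.0)
slope, _ = np.polyfit(np.log(f[band]), np.log(psd[band]), 1)
print(f"spectral exponent gamma ~ {-slope:.2f}")
```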
To gain further insight into these conformational fluctuations, we employed a Monte Carlo method based on the 2D worm-like chain (WLC) polymer model (Section SXII, Supporting Information). [41,42] Figure 6 shows typical examples of random adsorbed polymer configurations ranging from compact (a) to somewhat extended (b) and highly extended (c). While extended configurations can span electrodes with a larger spacing, compact configurations instead provide more conduction pathways between closely spaced electrodes.
Based on the simulations, we determined the number of conducting pathways between two electrodes for 5000 randomly generated polymer configurations. The position of the electrodes was scanned relative to each polymer configuration to account for translation along the surface (Figure 6d). Figure 6e shows histograms of the relative probability of finding a particular number of pathways as a function of nanogap size. For a 10 nm electrode spacing, the number of conducting paths is limited to 4 or fewer. The average number of pathways increases slightly from 1.4 for 10 nm gaps to 1.9 for 5 nm gaps, and the probability of finding n conducting pathways decreases exponentially with n (Figure 6e). Figure 6f shows the corresponding distributions of simulated path lengths for the different gap sizes. The conducting pathway lengths exhibit a broad distribution, consistent with the experimentally observed variations in plateau conductance (the experimental plateaus are however too short to explore the full distribution within a single plateau). Not unexpectedly, smaller gaps favor shorter pathways: the average path length decreases from 25 nm for 10 nm gaps to 10 nm for the smallest 5 nm gaps. Importantly, the model predicts that only a few conducting paths can be expected for each individual polymer molecule, consistent with the number of plateaus typically observed in the amperometric measurements. This further supports the hypothesis that the observed switching behavior is due to rearrangements of the polymer configuration driven by Brownian motion.
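A minimal 2-D worm-like-chain sketch in the spirit of these simulations (the actual model is described in Section SXII of the Supporting Information). The bond length, persistence length, angle statistics, and pathway-counting rule are all simplified assumptions of ours; the defaults give a contour length of about 156 nm, close to the chains used here.

```python
import numpy as np

rng = np.random.default_rng(1)

def wlc_chain_2d(n_bonds=400, bond_nm=0.39, persistence_nm=3.0):
    """Discrete 2-D worm-like chain: successive bond angles take Gaussian
    increments with variance bond/persistence (a common discretization)."""
    theta = np.cumsum(rng.normal(0.0, np.sqrt(bond_nm / persistence_nm), n_bonds))
    steps = bond_nm * np.column_stack([np.cos(theta), np.sin(theta)])
    return np.vstack([[0.0, 0.0], np.cumsum(steps, axis=0)])

def spanning_paths(chain, gap_nm=10.0):
    """Count chain segments that run from one electrode (x <= 0) to the
    other (x >= gap_nm): each sign change between successive electrode
    contacts marks one candidate conducting pathway."""
    side = np.where(chain[:, 0] <= 0.0, -1, np.where(chain[:, 0] >= gap_nm, 1, 0))
    contacts = side[side != 0]
    return int(np.sum(contacts[1:] != contacts[:-1]))

counts = [spanning_paths(wlc_chain_2d()) for _ in range(1000)]
print("mean number of spanning pathways:", np.mean(counts))
```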
Conclusion
Our experiments demonstrate that individual semiconducting polymer chains can be electrically addressed while in a random coil configuration using mixed ionic and electronic transport. This allows probing the electrical properties of individual molecules. Surprisingly, it also permits observing their otherwise inaccessible internal conformational fluctuations. We envision that our approach can be further extended by ligating semiconducting polymers to analytically relevant receptors such as nucleic acids and antibodies. Doing so will turn the polymers into the electrical equivalent of fluorescent labels for a new class of single-entity (bio)sensing assays based on all-electrical signal transduction.
P3BT was originally dissolved in chloroform. 20 mM TBAP in ACN was mixed with the chloroform solution in a 1:9 ratio to form the supporting electrolyte (final salt concentration 2 mM). Syringe filters with pore size 0.2 μm (Whatman SPARTAN RC 30) were used to remove most of the remaining undissolved polymers or contaminants. As a result of filtering, the actual polymer concentration may be lower than the nominal concentration.
Dynamic Light Scattering: Dynamic light scattering (DLS) measurements were carried out with a Malvern Zetasizer Nano ZS instrument equipped with a 633 nm laser set at an angle of 173°. Analysis was performed using software provided by the manufacturer (Zetasizer Software, Malvern).
Fluorescence Microscopy: Light emitted from photoexcited polymers was recorded with a reflex camera (Pentax, model K5) and a fluorescence microscope (Zeiss, Axio Scope Vario). The excitation wavelength was between 450 and 490 nm using Zeiss Filter set 09.
Nanogap Electrode Fabrication: Lithographically fabricated nanogap electrodes were employed with an open architecture. The nanogaps consisted of a pair of Pt thin-film electrodes separated by a thin, low-stress silicon nitride insulating dielectric layer, as sketched in Figure 1. The process flow for fabricating these devices is described in Section SXIII (Supporting Information).
Electrochemical Measurements: Prior to experiments, the chips were consecutively cleaned ultrasonically in acetone and 2-propanol at 50 °C. The clean chips were placed in a custom-made socket (Section SXIV, Supporting Information) and connected to transimpedance amplifiers (Femto DDCPA-300) operating as source meters. A positive current corresponded to current injected into the cell for all three electrodes. The resistance of each nanogap was measured in the dry state by applying a small voltage (20-100 mV) across the nanogap to ensure leakage currents were negligible (<100 fA). Occasionally the device resistance fluctuated, resulting in small anti-correlated currents (tens of fA). These devices were excluded to ensure that anticorrelated currents in measurements were solely due to polymers spanning the gap electrodes. A poly(dimethylsiloxane) (PDMS) reservoir was positioned on top of the chips and filled with the electrolyte solution without polymers. The source and drain electrodes were biased at ±10 mV with respect to circuit ground. A Pt wire inserted into the fluid was biased at −500 mV and served as a liquid gate electrode. This was sufficient to cause P3BT to become oxidized (p-doped) upon establishing an electrical contact with either of the electrodes since the onset of oxidation occurs at a gate potential of ca. −200 mV. While a more negative gate potential would yield a higher source-drain current, which would improve the signal-to-noise ratio, it was observed that at these potentials the events became shorter and ultimately undetectable, presumably due to desorption. It was first checked that no switching events took place over a period of at least 10 min in the presence of supporting electrolyte only. Finally, the polymer solution was added to the electrolyte in a 1:9 ratio and the subsequent current-time response was observed. The source, drain, and gate currents were monitored separately. Any gate current resulting from polymer oxidation was undetectably small at the low concentrations employed here, while the source and drain currents had the same magnitude but opposite signs, as described in the main text. We refer to this current as the source-drain current.
Control Measurements: Section SV (Supporting Information) describes control measurements for electrolyte solutions without polymers to exclude features not related to conducting polymer molecules, and measurements on P3BT, poly(3-hexylthiophene-2,5-diyl) (P3HT), and poly(3-octylthiophene-2,5-diyl) (P3OT) polymers to show that the experiment is robust enough to discriminate between slightly different molecules.
Figure 1 .
Figure 1. Experimental configuration. a) Schematic illustration of an OFET based on a thin polymer film impermeable to electrolyte ions. Applying a gate potential induces an EDL consisting of ions in the solution and compensating charge carriers (here shown as holes) in the semiconductor. b) An accumulation-type OECT permeable to electrolyte ions. Infiltrating ions accumulate in the bulk of the semiconductor, where they induce electronic charge carriers to maintain charge neutrality. c) A single polymer chain electrostatically doped by a well-defined EDL and simultaneously penetrated by ions due to its open structure. d) Sketch of our electrochemically gated experimental configuration. An electrical current flows between two electrodes separated by a thin insulator when they are connected by a polymer coil. The hole density in the polymer is controlled by the electrostatic potential of the solution, which is set via a reference (gate) electrode. We monitored the current at all three electrodes to disentangle the contributions from polymer conduction, electrochemical reactions, and any eventual parasitic leakage. e) Expanded illustration of a polymer configuration forming three distinct conductive pathways between the drain and source electrode (red, blue, and yellow). f) Photoluminescence from polymers that were drop cast from a high (μM) concentration solution. Visible are the bottom electrode (top right), the top electrode (top left), and the overlap region where a nanogap geometry is formed. Polymers adsorbed to both the Pt electrodes (red spots) and the surrounding SiO 2 substrate (yellow spots). The change in color is attributed to quenching of photoluminescence by Pt. [24] Bright yellow emission is also visible from polymers adsorbed to the insulating silicon nitride spacer between the electrodes (white region in Figure 1e). The density of spots was approximately uniform, suggesting a comparable propensity for adsorption on both electrodes and substrate.
Figure 2 .
Figure 2. Output characteristics of the VOFET with a channel length of 10 nm. a) Gate-source voltages varied between −300 and −1000 mV. b) Normalized drain currents; gate-source voltages varied between −50 and −750 mV. The channel was formed in ≈1 h (adsorption) using a 5 μM solution of P3BT in chloroform mixed with a 20 mM solution of TBAP in ACN in a ratio of 1:4. The scan rate of the drain-source voltage was 10 mV s −1 .
Figure 3 .
Figure 3. a,b,d) Amperometric responses at the source (a), drain (b), and gate (d) electrodes. For clarity, a DC baseline current has been subtracted from each trace. c) Summed amperometric responses of the source and drain electrodes.
Figure 4 .
Figure 4. Amperometric response (source current) from single polymer molecules. These typical current-time traces are organized in order of increasing complexity. a) Single-plateau event. b) Telegraph-like signal consisting of a train of similarly-sized plateaus. c,d) Events exhibiting multiple current levels.
Figure 5 .
Figure 5. Dynamical properties. a) Autocorrelation function of an amperometric trace. The correlation decays linearly with log τ, where τ is the time delay, indicating a broad distribution of relaxation times. b) PSD for different average current levels spanning four orders of magnitude. In each case, the PSD exhibits a 1/f-like behavior. c) PSD at 1 Hz for complete traces (black squares) and mean square current noise I rms 2 on current plateaus (blue symbols) vs 〈I〉 2 . The noise scales differently with current for the complete traces (red line, slope 0.94 ± 0.03) and on the plateaus (slopes 0.53 ± 0.03 and 0.60 ± 0.03 for qualitative and automated determination methods, respectively, as described in Section SXI, Supporting Information).
Figure 6 .
Figure 6. WLC model. a-c) Typical examples of polymer configurations from the 2D WLC model. Shown are: a) compact, b) somewhat extended, and c) highly extended configurations. d) 5000 random configurations were generated and for each the number and length of pathways were determined for different relative positions of the electrodes (electrodes shifted in increments of 0.38 nm). Here this process is illustrated for one particular polymer configuration exhibiting 2, 3, or 4 conducting pathways depending on the position of the electrodes. Since the polymers are adsorbed to the surface, the 3D geometry of the device was simplified to a 2D geometry in which the electrodes and the gap in between are coplanar. e) Distribution of the number of conducting paths for gap sizes of 10 nm (black), 8.3 nm (purple), 6.7 nm (blue), and 5 nm (red). The solid lines are exponential fits. f) Corresponding distributions of the path length. The solid lines are fits to the biphasic Hill equation (Section SXII, Supporting Information).
"Physics",
"Materials Science"
] |
A homology independent sequence replacement strategy in human cells using a CRISPR nuclease
Precision genomic alterations largely rely on homology directed repair (HDR), but targeting without homology using the non-homologous end-joining (NHEJ) pathway has gained attention as a promising alternative. Previous studies demonstrated precise insertions formed by the ligation of donor DNA into a targeted genomic double-strand break in both dividing and non-dividing cells. Here, we demonstrate the use of NHEJ repair to replace genomic segments with donor sequences; we name this method ‘Replace’ editing (Rational end-joining protocol delivering a targeted sequence exchange). Using CRISPR/Cas9, we create two genomic breaks and ligate a donor sequence in-between. This exchange of a genomic for a donor sequence uses neither microhomology nor homology arms. We target four loci in cell lines and show successful exchange of exons in 16–54% of human cells. Using linear amplification methods and deep sequencing, we quantify the diversity of outcomes following Replace editing and profile the ligated interfaces. The ability to replace exons or other genomic sequences in cells not efficiently modified by HDR holds promise for both basic research and medicine.
Introduction
RNA-guided nucleases [1][2][3] have rapidly become foundational tools in facilitating genomic manipulations [4,5]. These nucleases target specific genomic loci and form a double-strand break (DSB). DNA repair processes are then leveraged to produce the desired outcome of the gene editing. Conventionally, specific genomic changes are made using homology directed repair (HDR) [6,7] with exogenously introduced DNA containing flanking sequences homologous to the targeted locus. One limitation of HDR-mediated genome editing is its restriction to the S/G2 phase, reducing or abolishing efficacy in slowly or non-dividing cells [8]. HDR, when used for gene editing, can be precise, but recent reports demonstrate greater error than often assumed, as incomplete or extraneous portions of the delivery vector can be copied into the genome [9][10][11][12][13]. On the other hand, the canonical non-homologous end-joining (NHEJ) pathway is traditionally viewed as error prone and relegated to disrupting gene function by inducing small insertions and deletions (InDels) during DSB repair. However, the high-fidelity aspects of NHEJ repair are often underappreciated, as mutant InDels are easily observed whereas non-mutagenic repair is indistinguishable from the original allele [14]. Furthermore, non-mutagenic repair by NHEJ reforms the Cas9 target site, allowing for continued DSB formation. This may result in a final genomic population containing a majority of InDels despite NHEJ repair being predominantly error-free.
Targeted deletions are produced by forming two DSBs with loss of the intervening sequence during repair. The ubiquitous nature of the NHEJ pathway allows for deletions in zygotes, as well as in adult tissue, such as in vivo exon deletion in a mouse muscular dystrophy model [16,18]. Additionally, exogenously introduced dsDNA donor sequences can efficiently ligate into a single DSB by NHEJ (herein referred to as Insert targeting) [15,17,19-26]. With the NHEJ pathway broadly conserved, Insert targeting has been shown in plants [25], fish [19], cell lines [20-24,26], non-dividing neurons, and in vivo mouse tissues [15,17]. The ability to effectively integrate DNA across cell types has been used to tag genes with fluorophores [15,24,26], identify off-target CRISPR cleavage sites [27] and as a strategy for gene therapy by inserting functional coding sequences upstream of a disease causing exon [17].
Leveraging NHEJ repair to create large deletions and insert exogenous DNA posits the possibility of NHEJ-based sequence replacement; two DSBs are produced and a donor sequence without homology is ligated between the two breaks. This approach would enable the replacement of defective exons or regulatory sequences in a wide range of resting or dividing cells. NHEJ-based replacement has been demonstrated in plants, where HDR is often infeasible [28,29]. In order for NHEJ-based replacement to be considered a viable approach in human cells, demonstration of its efficiency and a thorough understanding of the editing outcomes is required. Here, we demonstrate efficient replacement of genomic sequences and exons with a donor sequence in human cells using NHEJ repair; we call this method Replace (Rational end-joining protocol delivering a targeted sequence exchange). Analysis of single-cell-derived clones provides conclusive evidence of Replace editing and efficiency. We further introduce sequencing pipelines for the precise quantification of the structural variants produced during Replace targeting and the InDels at the ligated interfaces. Together, our results and analysis strategies lay the groundwork for future applications of NHEJ-based Replace editing in gene therapy and research.
Results
Replace targeting (figure 1a) aims to exchange a genomic sequence with a double-stranded donor sequence without the use of homology. In this strategy, undesired products such as deletions or inverted donor sequences reform the Cas9 gRNA target sites and can be further targeted by Cas9, while the desired integration is captured. For initial validation, we used a fluorescence-based reporter system (figure 1b). The synthetic reporter system was created and integrated into two AAVS1 loci in a HeLa cell line. The reporter system contains a CAG promoter upstream of a BFP fluorophore. The BFP prevents the expression of a downstream Venus-pA. The cells are initially BFP + . Replace targeting exchanges the BFP cassette with a mCherry donor. Reporter HeLa cells were lipofected to deliver the donor sequence and a Cas9 plasmid containing a puromycin resistance gene. Cells were selected for 48 h to ensure construct delivery and analysed after two weeks. Replace targeting cleaved both sides of the BFP-pA cassette, with the excised sequence exchanged for the linearized mCherry donor sequence. Correct ligation of mCherry resulted in the loci expressing only mCherry. Deletion of the BFP cassette without replacement resulted in expression of the downstream Venus. Some alleles lost expression due to mutations or incorrect donor ligation.
Replace targeting of the reporter locus resulted in 34% mCherry + cells (figure 1c). We compared the effect of delivering donor sequences within a plasmid or in the form of minicircles, as a previous report showed minicircles to increase Insert efficiency [17] (figure 1d). Minicircles are minimal plasmids that contain only the donor sequence and require only a single Cas9 DSB for linearization, whereas plasmids require two DSBs to excise the donor. Donor sequences delivered as minicircles resulted in a sixfold increase in cells with mCherry expression compared to plasmid delivery. We therefore used minicircles for Replace targeting in the remainder of this work. To address whether mCherry expression was driven in part by off-target integration of the donor sequence, we Replace targeted, in an otherwise identical manner, wild-type HeLa cells. As these cells do not contain the AAVS1 integrated promoter and target site, only off-target integration could result in mCherry expression (figure 1d). Wild-type HeLa cells showed no mCherry expression, indicating that the 34% mCherry + cells in our original experiment are the result of correct integration at the target loci. mCherry + cells were single-cell sorted, expanded and genotyped to check for correct sequence replacement. Twenty-four out of 25 analysed clones (i.e. 32% of all cells) contained the anticipated exchange of BFP with mCherry, while one clone contained an allele with mCherry insertion upstream of BFP (figure 1e). As HeLa reporter cells contained two copies of the reporter locus, we quantified the frequency of homozygous knock-in by simultaneously transfecting two donor sequences (mCherry and miRFP670) (electronic supplementary material, figure S1). By measuring the mCherry + , RFP + and dual-positive populations, we calculated an average of 5% homozygous knock-in. Taken together, Replace targeting in our reporter system occurs as a major outcome, with a successful sequence exchange of at least one allele in 32% of cells.
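One plausible reading of the two-donor arithmetic, sketched with hypothetical numbers: if both alleles are edited independently and each allele receives either donor with equal probability, only half of the double-edited cells display two different colours, so the homozygous fraction is roughly twice the dual-positive fraction. This is our own illustration of the logic, not a calculation reported in the paper.

```python
# Hypothetical dual-positive fraction (cells expressing both mCherry
# and miRFP670); chosen only to illustrate the arithmetic.
dual_positive = 0.025

# With independent alleles and an equal choice between two donors,
# double-edited cells show two different colours half of the time.
homozygous_knock_in = 2.0 * dual_positive
print(f"estimated homozygous knock-in ~ {homozygous_knock_in:.0%}")
```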
During the ligation of the donor sequence into the genome, InDels may occur at the interface. To quantify short InDels, the gDNA of targeted and unsorted HeLa reporter cells was PCR amplified using primers flanking the ligated interface. The deconvolution of the Sanger traces of these amplicons provides an InDel estimate of the bulk population of Replace targeted cells (figure 1f). This analysis shows that short resection occurs in a minor (less than 16%) fraction of these small amplicons. The majority contained no InDel or a small, non-random insertion. Sanger sequencing of cloned individual alleles supports the bulk analysis (electronic supplementary material, figure S2A). The one or two nucleotide insertions were striking in that they matched the protospacer sequence downstream of the break site. It is known that SpyCas9 does not always form a canonical blunt end break three nucleotides downstream of the PAM, but can, at some frequency, form a staggered cut [30][31][32][33]. These non-random insertion InDels are probably caused by NHEJ acting on a Cas9-formed staggered cut (electronic supplementary material, figure S3). In this model, the sticky end cutting causes the PAM side of the break to contain extra nucleotides. These overhangs are filled during repair and appear as insertions when the two PAM sides are ligated in Replace targeting (figure 1f). This produces insertions in the interfaces of the PAM sides and not in ligated interfaces of two protospacer sides of the break (electronic supplementary material, figure S3B).
To test Replace targeting of an endogenous gene, we targeted three ubiquitously expressed loci in K562 cells: Polymerase Beta (POLB) exon 5, CCNA1 exon 2 and LMNA exon 2. We replaced exons with a splice acceptor-2A-mCherry-pA donor sequence (figure 2a,b). Replace targeting resulted in reporter expression stable over weeks (figure 2c). Genotyping of mCherry + single-cell derived colonies showed mCherry integration into the targeted locus in 100% of colonies. Correct replacement ranged from 60% to 93% of the colonies, but in some cells, the donor mCherry sequence inserted next to the original exon without replacing it (figure 2d). Sanger sequencing of the genome-donor sequence interface of individual PCR amplified alleles showed modest InDel formation in the correctly exchanged alleles (figure 2e; electronic supplementary material, figure S2). Replicate targeting experiments gave an average of 58%, 39% and 19% mCherry + cells for POLB exon 5, CCNA1 exon 2 and LMNA exon 2, respectively (figure 2f). All three targeted loci are triploid in K562 [34]; assuming independence in the editing events, we can estimate the corresponding diploid cells would measure 44%, 28% and 13% mCherry + for POLB, CCNA1 and LMNA, respectively. Combining FACS and single-cell genotyping data allowed an estimate of the overall frequency of correct exon replacement. It is known that large-scale deletions may follow a single Cas9-driven DSB [35], and Replace targeting further complicates analysis due to the structural variants formed by the two genomic breaks and donor sequence integration. In order to quantify large deletions and the directionality of donor integration, we performed long-read deep-sequencing on amplicons of the targeted loci from unsorted Replace targeted HeLa cells and Replace targeted K562 cells (figure 3). We used primers 800-2000 bp away from the DSBs to generate long amplicons that were sequenced with PacBio technology. A bioinformatics pipeline was built to analyse large deletion and structural outcome frequencies (figure 3a; electronic supplementary material, figure S4). While a donor sequence with no homology is expected to integrate equally in both directions, inspired by the work of Suzuki et al. [17], we designed a preferred orientation into our donor sequence without the use of homology (electronic supplementary material, figure S5). When the donor integrated in the undesired direction, the ligated interface reforms the Cas9 target site, whereas the desired orientation is unable to be further cut. Long-read deep sequencing measured the desired orientation of mCherry in 79% of reads where BFP was replaced in HeLa and 89% of alleles with POLB exon 5 replacement in K562 (figure 3b). Even alleles containing unintended donor insertion of mCherry into a DSB flanking the targeted sequence integrated preferentially in the designed orientation.
Alignment of the reads showed that alleles with large-scale deletions (greater than 500 bp) occurred (figure 3c). Notably, individual reads showed that large-scale resection was frequently asymmetric, with one side of the break undergoing dramatically larger resection. Viewing the frequency of a deletion at each base along the amplicon creates an averaged deletion profile and shows that the majority of loci experienced small-scale resection (figure 3d). Specifically, in successfully Replace targeted alleles, deletion mutations at the ligated junctions were smaller than 30 bp in greater than 90% of HeLa reporter reads, and smaller than 30 bp in greater than 95% of the POLB exon 5 reads. The ligated interfaces containing the protospacers were InDel-free in 79% of the correctly targeted reads of the HeLa reporter (electronic supplementary material, S1) and 63% InDel-free in the reads of the correctly targeted POLB K562 alleles, as measured by collapsing the long-read data (electronic supplementary material, S2).
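The triploid-to-diploid conversion stated above follows directly from the independence assumption; the short sketch below reproduces the 44%, 28% and 13% figures from the measured triploid fractions (per-allele probability p solved from f3 = 1 - (1 - p)^3).

```python
def diploid_equivalent(f_triploid):
    # Independence assumption: per-allele editing probability p satisfies
    # f3 = 1 - (1 - p)**3; the diploid fraction is f2 = 1 - (1 - p)**2.
    p = 1.0 - (1.0 - f_triploid) ** (1.0 / 3.0)
    return 1.0 - (1.0 - p) ** 2

for locus, f3 in [("POLB", 0.58), ("CCNA1", 0.39), ("LMNA", 0.19)]:
    print(locus, f"{diploid_equivalent(f3):.0%}")  # 44%, 28%, 13%
```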
Alignment of the reads revealed alleles with large-scale deletions (greater than 500 bp) (figure 3c). Notably, individual reads showed that large-scale resection was frequently asymmetric, with one side of the break undergoing dramatically larger resection. Viewing the frequency of a deletion at each base along the amplicon creates an averaged deletion profile and shows that the majority of loci experienced only small-scale resection (figure 3d). Specifically, in successfully Replace targeted alleles, deletion mutations at the ligated junctions were smaller than 30 bp in greater than 90% of HeLa reporter reads and in greater than 95% of the POLB exon 5 reads. The ligated interfaces containing the protospacers were InDel-free in 79% of the correctly targeted reads of the HeLa reporter (electronic supplementary material, S1) and in 63% of the reads of the correctly targeted POLB K562 alleles, as measured by collapsing the long-read data (electronic supplementary material, S2).
Linear PCR methods requiring only one gene-specific primer, such as UDiTaS [36] and LAM-HTGTS [37], offer more complete and quantitative measurements of DNA repair outcomes following a DSB. A gene-specific primer binds upstream of the targeted break site and a universal primer binding sequence is integrated downstream. The PCR then amplifies the region across the break regardless of the structural variant, deletion size or translocation (figure 4a; electronic supplementary material, figure S4C,D). The UDiTaS method also contains a robust computational pipeline for CRISPR analysis. We modified this pipeline to extend its capabilities for Replace targeting with two pipelines (electronic supplementary material, figure S4E,F). Pipeline 1 closely follows the published UDiTaS pipeline; it aligns reads to the in silico reconstructed expected outcomes, performs InDel analysis and quantifies these measurements. The results of Pipeline 1 showed that at the targeted POLB locus the donor sequence integrated in the preferred orientation at a 5 : 1 ratio over the inverted orientation (figure 4a,b). At 39% of all POLB alleles, integration of the donor sequence in the desired orientation was the single most frequent outcome measured. Strikingly, more than one-third of these donors were integrated without an InDel formed at the ligated interface. This highlights both the efficiency and fidelity of Replace targeting for exon replacement. As exogenously introduced DNA is known to integrate randomly into the genome [38], we developed Pipeline 2 to quantify and map the integration location of the donor sequence (electronic supplementary material, figure S4F). Using a primer that binds the donor sequence and points towards the ligated interface, we generated amplicons that contain the flanking genomic sequence. These amplicons were Illumina sequenced, and the genomic sequences beyond the end of the donor sequence were aligned to the human genome (figure 4c; electronic supplementary material, figures S4C and S5). Sequence alignment showed 55% on-target integration into the POLB locus. Thirty-four per cent of all measured donor sequences had formed concatenations; it remains to be determined where these concatenated sequences integrate within the genome, but concatenation of exogenous dsDNA is itself a known phenomenon [9,39-42]. The donor sequences were shown to be integrated into the genome at more than 28 loci (figure 4d). Interestingly, none of the off-target integrations mapped to any of the 293 predicted [43] SpyCas9 off-target sites.
Discussion
This work demonstrates that NHEJ-based genomic sequence exchanges are feasible and efficient in human cells. In the four loci tested, replacement was successful in 16-54% of cells; in one case, the desired product was the major outcome. We furthermore demonstrated targeted exon replacement via NHEJ in three widely expressed human genes. Based on the comprehensive analysis of our targeted alleles, we arrive at three design principles to guide future Replace work.
The first design aspect ensures the correct orientation of the donor sequence in the genome. Linearizing the donor sequence with the same gRNA that cuts the target locus allows incorrectly ligated donors to be re-cut and excised (electronic supplementary material, figure S6). It is crucial to add a gRNA targeting the sequence formed during a deletion. This gRNA re-opens alleles that form a deletion and also excises out incorrectly ligated donor sequences. The minimal requirement for this design is two gRNAs (electronic supplementary material, figure S6B,C). Long-read sequencing confirmed 89% of the donor sequences integrated in the designed orientation after POLB exon 5 Replace editing.
The second design principle is to avoid gRNAs that are involved in non-canonical SpyCas9 sticky end cutting. The frequency of 'InDel free' ligated interfaces measured in this work supports the idea that NHEJ repair is often not mutagenic [14].
We believe breaks introduced by Cas9 are often re-ligated to reform the original sequence, which can then be cleaved again, forming a break-ligation cycle. This cycle continues until the Cas9 is no longer active or the target site forms an InDel during repair and disrupts Cas9 binding. For efficient Replace targeting, prolongation of this cycle provides more time to acquire and ligate the donor sequence in the correct orientation. InDel mutations remove alleles from the ligation cycle and thus decrease efficiency. One avoidable driver of InDel formation is non-canonical SpyCas9 cutting in which a staggered cut is formed [30-33]. The staggered cut is filled in and then ligated, duplicating the staggered nucleotide(s). The resulting small insertions are easily identifiable as they match the nucleotides of the protospacer sequence beyond the expected break site (electronic supplementary material, figure S3). Data from large gRNA screens suggest this mechanism is the predominant driver of +1 insertions [44]. The non-canonical cutting of SpyCas9 may be sequence- or locus-dependent. Empirical testing of a gRNA by measuring InDel outcomes [45] therefore allows us to avoid sites that incur staggered cuts.
The third concept is to design sacrificial sequences around the ligated regions to buffer possible resection and sequence deletions. While the overall rate of InDels and large-scale deletion is low, detrimental effects can be further reduced. During exon Replace targeting, we cut in intronic regions outside the splice site, as short intronic InDels are less likely to be detrimental to gene function. Long-read deep sequencing showed that, in our systems, the vast majority of the InDels are less than 30 bp long. Considering this, we recommend a sacrificial buffer of 30 bp or greater be included on the flanks of the Replace construct to protect the splicing donor/acceptor and coding sequence. We currently use minicircles but recommend such buffers on AAV-delivered donor sequences as well.
NHEJ-based sequence replacement has previously been explored using PCR fragments as the donor sequence [46]. However, the genetic analysis in that study was not sufficient to distinguish successful replacement from other possible editing outcomes, such as unintended Insert targeting, structural rearrangements and off-target integration. Therefore, whether Replace editing with PCR donor templates is a viable strategy remains to be confirmed and quantified in future work.
Measuring the outcomes of Replace targeting is complicated by the various structural rearrangements formed. Additionally, a growing body of literature documents complex outcomes following even simple Cas9-formed DSBs. These can include large-scale resection [35], chromosomal fusions [36], mis-spliced mRNA [47] and unintended vector integration into the break site [16]. In working towards a full understanding of the outcomes of Replace targeting, we developed multiple deep sequencing pipelines. Long-read sequencing of PCR amplicons of the targeted loci proved useful in illuminating resection profiles and gives insight into the orientation of the structural variants produced. However, samples prepared for long-read sequencing used two gene-specific primers and so suffered from PCR bias, over-representing the shorter amplicons and making quantitative comparisons of alleles of different lengths impossible. Traditional two-primer PCR also requires both binding sites to be intact and is unable to amplify more complex repair products. To address these shortcomings, we turned to single primer amplification methods such as UDiTaS and LAM for quantitative analysis, as they amplify all outcomes approximately equally and measure more complex repair events. This allowed us to measure the frequency of deletions in POLB editing: 26% of all alleles, with only 16% of alleles maintaining their wild-type sequence. A total of 39% of alleles showed correct integration of the donor, and the rest would not produce functional protein (structural inversions or deletions). This ability to measure knock-in and knock-out rates concurrently is helpful in understanding function at the cellular level. In contrast with other studies measuring repair outcomes of a Cas9 DSB [36], we did not detect chromosomal fusions at our break points. However, this may be due to our analysis time point of three weeks post-targeting, by which alleles carrying fusions could have been selected out of the population. Beyond their utility for quantitative on-target measurements, these single gene-specific primer protocols are powerful for measuring unintended integration of introduced DNA sequences. For example, in treating a mouse model of muscular dystrophy, linear amplification measurements showed the therapeutic AAV unintentionally integrated into the Cas9 break site and throughout the genome [16]. Such unintended integrations of AAV in human cells may have carcinogenic potential [48]. Others have recently demonstrated high rates of unintended on- and off-target integration of AAVs using single primer amplification [49]. Replace donor sequences have the potential to integrate into the target site or off-target into the genome. To our knowledge, this is the first work to map and quantify off-target integration or concatenation of donor sequences following NHEJ Insert or Replace targeting. Using a primer on the donor sequence, we detected substantial off-target integration of the donor. Strikingly, none of these off-target integration loci were within 5000 bases of the top 293 predicted Cas9 off-target sites. Rates of off-target integration may be similar for double-stranded HDR templates, but to our knowledge off-target integration mapping by linear amplification has not been performed after HDR editing, making a comparison difficult.
Single-stranded donor templates are known to integrate off-target less frequently [7,50], but off-target quantification has mainly relied on the integration of large fluorescent cassettes and could benefit from single primer amplification approaches.
There are currently over 3800 genes known to cause monogenic diseases, with mutations often spread across multiple exons. Gene editing holds great potential for the treatment of such diseases, but reversing genetic defects in terminally differentiated or resting cells remains a major challenge [51]. HDR is unable to target non-dividing cells [8], but the NHEJ pathway is preserved across cell types and cell-cycle stages [52]. NHEJ-based Insert targeting has previously been shown to be efficient in a wide variety of non-dividing and dividing cells in vivo and in vitro. NHEJ Replace editing therefore holds the most potential for therapies looking to correct mutations in non-dividing cells by the replacement of exons. However, it was not clear whether NHEJ repair would allow effective genetic replacement, or would instead result mostly in deletions, insertions or InDels. Additionally, the size variation between possible repair outcomes (i.e. deletions, insertions, replacements) makes their quantitative analysis challenging. In this work, we have demonstrated that the kinetics and fidelity of the NHEJ pathway allow for efficient Replace targeting in human cells, and that a thorough understanding of the edited population can be achieved based on single primer PCRs, long PCRs and tailored analysis pipelines. While many questions, such as optimal donor delivery, remain to be addressed, our work provides the foundation for future applications of Replace editing for genome engineering.
DNA constructs
The Cas9-2A-puro targeting plasmid is Addgene ID 62988 with the F1 sequence removed. The AAVS1 targeting fluorescent reporter system was modified from Addgene ID 60431. The neomycinR sequence was modified to a more robust form [53]. The RTTA3 gene was replaced by a BFP-pA-Venus-pA cassette, in which the BFP is flanked by Rosa26 sequences, constructed by Gibson Assembly. Guide RNA target sequences were ligated into BbsI-cleaved plasmids using synthetic oligonucleotides (table 1). When more than one guide was necessary, the plasmids were combined using Gibson Assembly.
Minicircles are produced in engineered bacteria using arabinose-induced recombination to remove the plasmid backbone [54,55]. The ZYCY10P32T E. coli strain and the minicircle backbone were purchased from System Biosciences. After cloning the sequence of interest into the minicircle backbone, the plasmid was transformed into the ZYCY strain. A 200 ml culture was grown in TB medium for 16 h; then 200 µl of 20% L-arabinose was added, the pH was adjusted to 7 and 200 ml of LB was added. The culture was then shaken at 32°C for 4 h to induce minicircle formation and slow cell division. An endotoxin-free purification kit (Macherey-Nagel) was used following the protocol for low copy number plasmids. The resulting product contained plasmid and gDNA contamination. Restriction enzymes cutting the backbone and gDNA were added for 2 h, and the resulting fragmented DNA was digested with Plasmid-Safe DNase for 16 h (Epicentre).
Cell culture and targeting
HeLa cells were cultured in DMEM with 10% FBS and 1% penicillin/streptomycin and passaged with trypsin every 3-4 days. To generate the fluorescent reporter line, plasmid #208 was cloned. Successful integration into the AAVS1 locus conferred neomycin resistance, and cells were selected with 0.6 mg ml−1 G418 for one week. Single cells were FACS sorted into a 96-well plate and expanded. Colonies were checked for correct integration by genotyping, and a clone with inserts on both alleles was expanded and used. Targeting of reporter HeLa cells: 50 000 cells were reverse-transfected with 1.5 µg of Cas9_2A_puro/guide plasmid plus 1.5 µg of minicircle (MC) or plasmid donor complexed with Lipofectamine 3000. The next morning, 1.5 µg ml−1 puromycin was added for 48 h. Cells were then analysed by FACS. mCherry+ cells were single-cell sorted into a 96-well plate and expanded for genotyping. For the HDR targeting experiment, the guide RNA targeting the Insert site was used together with the donor plasmid.
Genotyping
For single-cell clones or bulk sequencing, genomic DNA (gDNA) was extracted with QuickExtract (Lucigen). PCR amplification was performed with LongAmp polymerase (NEB) or PrimeStar GXL (Takara). Primer pairs flanking the upstream or downstream cut site were used. Amplicons were verified by gel extraction and Sanger sequencing. Amplicons from bulk sequencing were cloned into the TOPO vector (Invitrogen) before Sanger sequencing.
The frequency of homozygous and heterozygous integration in HeLa cells was determined by knocking in mCherry and miRFP670 simultaneously. By measuring mCherry+, miRFP670+ and double-positive cells, the homozygous knock-in rate could be calculated [7].
We used modified ICE analysis for deconvolution of amplicon Sanger trace data derived from unsorted Replace targeted cells [45]. The amplicons were made using a primer on the donor sequence and a primer on the genomic sequence flanking the ligated site. The amplicon was cloned into the TOPO vector and individual cloned alleles were Sanger sequenced along with the mixed PCR product. A cloned colony with Replace inserts without any InDels was identified, and these Sanger trace data were used as the 'wild-type' reference in ICE analysis.
Long-read deep sequencing and analysis
Bulk gDNA of targeted and control cells was amplified with PrimeStar GXL for POLB targeting. The HeLa synthetic reporter system required PCR with OneTaq (NEB) using the high-GC additive to amplify through the very GC-rich CAG sequence. Five-minute elongation steps were used to reduce PCR bias. Amplicons were cleaned with SPRI beads and quantified by Qubit. The libraries were pooled and prepared for PacBio sequencing following the manufacturer's protocol. Data analysis was done with 'Pipeline Longread'. This pipeline uses custom Python scripts for preprocessing and bins the reads into different structural variants: original exon, replacement, insertion or deletion. Alignments were done with BBMap or minimap2 [57] and visualized with IGV (Integrative Genomics Viewer). Analysis of alignments was done in R using a modified script (GitHub/pigX). Plotting was done in R or Python, with a number of the plots included in the Jupyter notebooks.
Uni-directional targeted sequencing sample preparation
Wild-type cells and treated cells in which POLB exon 5 had been targeted (showing 50% mCherry expression) were used. Samples were prepared either as described in LAM-HTGTS [37], beginning with 500 ng of gDNA, or based on the Tn5-UDiTaS protocol [36], beginning with 50 ng of gDNA. LAM-HTGTS was done generally as published, with a few modifications. A single biotinylated gene-specific primer was used to amplify 500 ng of sonicated gDNA (1 kb peak) for 80 rounds of linear amplification. Streptavidin Dynabeads were found to inhibit PCR, so their concentration was reduced to one-tenth and used to capture the amplified sequence. The captured bead-DNA was washed and the universal primer was ligated onto the end. This adapter-ligated sequence was PCR amplified for 30 cycles with a universal primer and a nested gene-specific primer. We added Nextera adapters with 10 further cycles of amplification. The 300-500 bp smear was gel extracted, quantified by Qubit and Bioanalyzer, then sequenced with Illumina MiniSeq. For Tn5 sample preparation, we modified the UDiTaS protocol: 50 ng of gDNA was washed twice with SPRI beads. Tagmentation used hyperactive Tn5 produced by the Max Delbrueck Center protein production facility following published protocols [58]. Samples were tagmented to add the universal primer binding site, then amplified with a gene-specific primer and a universal primer for 15 cycles. A nested primer with Illumina adapter sequences was added, followed by 15 cycles of PCR, and Illumina adapters were then added with 10 cycles of PCR. Amplicons of 300-500 bp were gel extracted, quantified by Qubit and Bioanalyzer, then sequenced with Illumina MiniSeq.
Analysis of uni-directional targeted sequencing
All scripts and notebooks are on github.com/ericdanner/REPlacE_Analysis. The analysis of the linearly amplified sequences was based on the UDiTaS software (https://github.com/editasmedicine/uditas). De-multiplexed samples are run through Pipeline 1 or Pipeline 2. Pipeline 1 generates amplicons of the various expected outputs and performs a global alignment using Bowtie2 [59]. Reads that align well and cover the ligated junctions are analysed for InDels. If the samples were prepared with Tn5, they contain UMIs; unique UMIs are tallied and editing outcomes are quantified. LAM samples do not contain UMIs. In Pipeline 2, the reads are checked for correct on-target priming. The samples are then trimmed using Cutadapt [60] up to the expected break site, leaving only the sequence downstream of the break site. This sequence is aligned globally using Bowtie2 to an index file containing hg38 and the targeting vector.
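As a rough sketch (not the published pipeline), the Pipeline 2 trimming and alignment steps could be wrapped as follows, assuming the standard cutadapt and Bowtie2 command-line interfaces; the file names, donor-end sequence and index prefix are placeholders.

    # Illustrative wrapper for the Pipeline 2 steps described above:
    # trim away the donor-derived 5' sequence so only the flanking genomic
    # sequence remains, then align it to a combined hg38 + vector index.
    import subprocess

    def run_pipeline2(reads_fq, donor_end_seq, index_prefix, out_prefix):
        trimmed = f"{out_prefix}.trimmed.fq"
        # Cutadapt: -g removes a 5' adapter (here, the end of the donor sequence).
        subprocess.run(["cutadapt", "-g", donor_end_seq,
                        "-o", trimmed, reads_fq], check=True)
        # Bowtie2: global (end-to-end) alignment against hg38 plus the vector.
        subprocess.run(["bowtie2", "--end-to-end", "-x", index_prefix,
                        "-U", trimmed, "-S", f"{out_prefix}.sam"], check=True)

    # Hypothetical invocation; all arguments are placeholders.
    run_pipeline2("sample_R1.fastq", "ACGTACGTACGT", "hg38_plus_vector", "polb")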
Data accessibility. Sequencing data are available in the Sequence Read Archive.
"Biology"
] |
Color Glass Condensate and initial stages of heavy-ion collisions
We introduce the concept of the Color Glass Condensate, which describes the wave-function of a nucleon or nucleus at high energy. We then explain the relevance of this effective theory to the calculation of particle production in heavy-ion collisions, and we show how it can be used to make predictions for the initial stages of these collisions – in particular in the task of providing initial conditions for hydrodynamical simulations.
Introduction
Initial correlations and hydrodynamics
• The equations of hydrodynamics are non-linear. Therefore, solving the hydrodynamical evolution for event-averaged initial conditions is not the same as solving hydrodynamics event-by-event and averaging the observables at the end: to study hydrodynamics event by event, one needs an event generator for T^μν(τ_0, η, x_⊥).
Gluon saturation
• Consider a hadron or nucleus probed via gluon exchange.
• When the energy increases, new partons are emitted. • The emission probability is α_s dx/x ∼ α_s ln(1/x), with x the longitudinal momentum fraction of the gluon. • At small x (i.e., high energy), these logs need to be resummed.
• As long as the density of constituents remains small, the evolution is linear: the number of partons produced at a given step is proportional to the number of partons at the previous step (BFKL).
• Eventually, the partons start overlapping in phase-space, and parton recombination becomes favorable. • After this point, the evolution is non-linear: the number of partons created at a given step depends non-linearly on the number of partons present previously. Note: at a given energy, the saturation scale is larger for a nucleus, since Q_s² grows like A^(1/3) (for A = 200, A^(1/3) ≈ 6).
CGC = effective theory of small x gluons
• The fast partons (large x) are frozen by time dilation and are described as static color sources on the light-cone. • The slow partons (small x) cannot be considered static over the time-scales of the collision process; they must be treated as standard gauge fields, with an eikonal coupling to the current J^μ: A_μ J^μ. • The color sources ρ are random, and are described by a distribution functional W_Y[ρ], with Y the rapidity that separates "soft" and "hard". • The evolution equation for W_Y[ρ] resums all the powers of α_s ln(1/x) and of Q_s/p_⊥ that arise in loop corrections; it simplifies to the BFKL equation when the source ρ is small (one can expand in powers of ρ).
Power counting
• Dilute regime: one parton in each projectile interacts. • In the saturated regime, the sources are of order 1/g (because ρρ ∼ occupation number ∼ 1/α_s); this sets the order of a connected diagram. • The single inclusive spectrum has a simple diagrammatic representation: there are only connected graphs (AGK cancellation). • This defines the perturbative expansion in the saturated regime.
Expression in terms of classical fields at LO
Gluon spectrum at LO: • The field A obeys the classical equation of motion δS_YM/δA + J = 0. • The boundary conditions are very simple.
Initial classical fields
• The initial chromo-E and chromo-B fields form longitudinal "flux tubes" extending between the projectiles. • The color correlation length in the transverse plane is Q_s^{-1}.

What is factorization?
• The naive perturbative expansion of dN_1/d³p assumes that the coefficients c_n are of order one. This assumption is upset by large logarithms of 1/x_{1,2}: the leading-log terms. • Factorizability: the logarithms must be universal and resummable into functionals that depend only on the projectiles being collided. • The duration of the collision is very short, while the logarithms we want to resum arise from the radiation of soft gluons, which takes a long time; the radiation must therefore happen (long) before the collision, at a space-like interval. • The projectiles are not in causal contact before the impact, so the logarithms are intrinsic properties of the projectiles, independent of the measured observable. • The NLO gluon spectrum can be written as a perturbation of the initial value of the classical fields on the light-cone: factorization follows easily.
Leading Log factorization
• By averaging over all the configurations of the sources in the two projectiles, we get a factorized formula for the resummation of the leading-log terms to all orders: the distributions W[ρ_{1,2}] must be evolved up to the rapidity of the produced gluon. • In the saturated regime, the inclusive n-gluon spectrum at Leading Order is the product of n single-gluon spectra.
• At LO, in a given configuration of the sources ρ_{1,2}, the n gluons are not correlated. • Note: this is true for the bulk (p_⊥ ≲ Q_s), but not for the tail of the distribution.
Multigluon spectrum at NLO
• At NLO, one has again a similar structure. • Correlations appear at NLO thanks to the operator G(u, v), which can link two different gluons.
Leading Log factorization
Factorization formula for the n-gluon spectrum: • This formula tells us that (in the Leading Log approximation) all the correlations arise from the W[ρ]'s; they pre-exist in the wave-function of the projectiles. • Note: some short-range correlations will also arise from splittings in the final state (not taken into account here, because they do not come with a ln(s)).
• Immediately after the collision, the chromo-E and chromo-B fields are purely longitudinal. • There is no analytic solution of the Yang-Mills equations. • The duration of the collision is very short: τ_coll ∼ E^{-1}. • Procedure: (i) calculate the 1-loop corrections, (ii) disentangle the logarithms from the finite contributions, (iii) show that the logs can be assigned to the projectiles. • Problem: with strong fields, the analytic calculation is not feasible; take advantage of the retarded nature of the boundary conditions in order to separate the initial-state evolution (calculable analytically) from the collision itself (hopeless).
• η-independent fields lead to long-range correlations in the 2-particle spectrum. • Particles emitted by different flux tubes are not correlated; (R Q_s)^{-2} sets the strength of the correlation. • At early times, the correlation is flat in Δϕ; a collimation in Δϕ is produced later by radial flow. • The combinatorics of color source averages in a single glasma flux tube leads to ⟨N(N−1)⋯(N−p+1)⟩ − disc. terms = (p−1)! ⟨N⟩^p, a Bose-Einstein distribution. • If one superimposes k such flux tubes emitting independently, ⟨N(N−1)⋯(N−p+1)⟩ − disc. terms = ((p−1)!/k^{p−1}) ⟨N⟩^p, a negative binomial distribution with parameters ⟨N⟩ and k. • k is the number of flux tubes: k ∼ Q_s² R² ∼ number of participants. • Experimentally, it seems to work.
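As a quick numerical illustration (not part of the original talk): the single-flux-tube Bose-Einstein distribution is geometric, and summing k independent geometric emitters indeed yields negative binomial counting statistics. A minimal Python check:

    # Check: the sum of k independent geometric (Bose-Einstein) multiplicities,
    # one per flux tube, has negative binomial mean and variance.
    import numpy as np

    rng = np.random.default_rng(0)
    k, nbar, draws = 10, 3.0, 200_000         # flux tubes, mean per tube, samples
    p = 1.0 / (1.0 + nbar)                    # geometric parameter giving mean nbar
    # numpy's geometric counts trials >= 1; subtract 1 to count emitted gluons.
    single = rng.geometric(p, size=(draws, k)) - 1
    total = single.sum(axis=1)                # multiplicity from k tubes

    print(total.mean(), k * nbar)             # ~ k * nbar
    print(total.var(), k * nbar * (1 + nbar)) # ~ k * nbar * (1 + nbar), wider than Poisson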
• Number of gluons per unit area: xG(x, Q²)/(πR_A²). • Recombination cross-section: σ ∼ α_s/Q².
• Long-range correlation in Δη (rapidity) and narrow correlation in Δϕ (azimuthal angle): causality requires t_correlation ≤ t_freeze-out · e^{−|y_A − y_B|/2}. • Was there something independent of η at early times? Yes: the chromo-E and chromo-B fields produced in the collision. • The color correlation length in the transverse plane is Q_s^{-1}: flux tubes of diameter Q_s^{-1} fill up the transverse area.
EdgeFlow - Developing and Deploying Latency-Sensitive IoT Edge applications
Abstract—Demanding latency-sensitive IoT applications have stringent requirements like low latency, better privacy and security. To meet such requirements, researchers proposed a new paradigm, i.e., edge computing. Edge computing consists of distributed computational resources and enables the execution of IoT applications closer to the edge of the network. However, the distributed nature of this paradigm makes the application deployment and development process more challenging since the developer must divide the application’s functionality into multiple parts, assigning for each a set of requirements. As a result, the developer must (i) define the application’s requirements and validate them at design time and (ii) find a deployment strategy on the target edge computing platform. In this paper, we propose EdgeFlow, a new IoT framework capable of assisting the developer in the application development process. Specifically, we introduce a methodology for latency-sensitive IoT applications development and deployment, consisting of three different stages, i.e., the development, validation, and deployment. To this end, we propose an extension of the Flow-Based Programming paradigm with new timing requirements and provide a resource allocation technique to assist with the deployment and validation of latency-sensitive IoT applications. Finally, we evaluate EdgeFlow by (i) presenting the application development methodology and (ii) performing a quantitative evaluation demonstrating our resource allocation technique’s capabilities to find feasible and optimal deployment strategies. Experimental results illustrate the effectiveness of our methodology to assist the developer throughout the entire application development process.
I. INTRODUCTION
Latency-sensitive Internet of Things (IoT) applications have stringent requirements, e.g., low latency, better privacy and security. Current cloud-centric solutions fail to satisfy these requirements since high volumes of data must be transferred to the cloud [1]. Hence, to successfully meet the application's requirements, we must take advantage of the distributed computational nodes found in an IoT system. As a result, a latency-sensitive IoT application consists of multiple interconnected components; a component is capable of executing one part of the application's functionality. However, developing and deploying such an application model is not a trivial task since the developer must (i) define and validate the application's requirements at design time and (ii) find a deployment strategy such that it satisfies all application requirements.
To address the shortcomings of cloud computing, researchers have proposed edge computing [2]. Edge computing enables the utilization of available computation resources found at the edge of the network [3], [4] - a paradigm consisting of multiple geo-distributed resource-constrained devices capable of hosting deployed IoT applications. Edge computing assists cloud computing in satisfying the stringent requirements of latency-sensitive IoT applications, where components may be deployed on edge nodes. Some advantages of edge computing include low latency and data locality [5]. Nevertheless, deploying an application on an edge computing platform is challenging since heterogeneity and limited resource capabilities define an edge node. As a result, the successful deployment of latency-sensitive applications is dependent on new resource allocation techniques.
Edge computing brings many advantages for the deployment of latency-sensitive IoT applications. However, edge computing makes the application development process more challenging, since the developer must divide the application's functionality and define different requirements for each component [6]. Previously, in a cloud-centric system, a single component contained the entire application's functionality and was deployed in a single location, i.e., in the cloud. In contrast, on an edge computing platform, the application model consists of multiple components that are distributed among different edge devices. This application model is in line with the concepts of the flow-based programming (FBP) paradigm [7]: an application has a communication flow that connects different components to achieve a certain functionality. Several FBP tools like NoFlo [8], Node-RED [9], and drawFBP [10] exist to aid the developer in creating new IoT application models and defining their communication flow. However, it is still challenging to define and validate the application's timing and resource requirements during the development stage.
In this paper, we propose EdgeFlow, a new IoT framework for latency-sensitive IoT application development and deployment. Our main contribution is a methodology for aiding the developer in the process of creating and deploying applications by (i) defining new applications' timing and resource requirements, (ii) validating all requirements, and (iii) finding a deployment strategy.
Development stage. We propose an IoT application modeling paradigm for developing latency-sensitive applications at design time. The purpose of this stage is to collect as much information as possible regarding the current application - information that improves the chances of successfully deploying the application on the target edge computing platform. To this end, we employ the FBP paradigm as an application model for latency-sensitive IoT applications and extend it with new timing requirements, allowing the developer to provide timing and resource requirements. We allow for a higher granularity when defining the application's timing requirements. As a result, for a latency-sensitive IoT application, the developer can define an end-to-end (e2e) delay for many communication flows, ranging from the communication link between two components to a flow containing the entire application (if possible). To evaluate our development stage, we create a prototypical framework based on the drawFBP tool. We describe the application development methodology by creating an IoT application.
Deployment and validation stages. The deployment stage offers support for deploying latency-sensitive applications on edge computing platforms. This stage provides validation of the defined application constraints by determining eligible deployments (if any) of the designed application on the target edge computing platform. We cast our deployment technique within the constraint programming (CP) paradigm [11], where we define the deployment constraints as a constraint satisfaction problem. Consequently, the deployment stage can generate feasible or optimal deployment strategies - it provides guarantees that if a deployment strategy exists, the technique can find it: a deployment strategy that (i) satisfies each component's resource requirements while not exceeding the device's available resources and (ii) meets the communication flow constraints, i.e., ensures that the e2e delay of each communication flow does not exceed the specified one. Finally, we evaluate our deployment stage performance by assessing the execution time required to find an optimal deployment strategy.
The contributions of this paper are as follows: • EdgeFlow. A methodology for latency-sensitive IoT application development and deployment. Our proposed methodology aids the developer in defining and validating timing and resource requirements as well as finding optimal and feasible deployment strategies. The remainder of the paper is structured as follows. In Section II we summarize the related work. Section III provides an overview of EdgeFlow and defines the application model, the edge computing platform, and the communication flow constraints, given as input files to the deployment stage. In Section IV we describe the implementation details of our proposed framework. Section V presents the application development methodology, while Section VI shows the results of our deployment stage evaluation. Finally, Section VII concludes the paper and provides an outlook on future work.
II. RELATED WORK
The adoption of edge computing and the stringent application requirements have changed the application deployment and development process. Recently, the consensus in the research literature depicts an application model as a collection of components to accommodate the distributed nature of edge computing [12], [13], [14], [15]. Typically, researchers consider as given the application model and its associated timing and resource requirements when proposing new application deployment techniques. However, developing an application model and defining all requirements is not a trivial task.
Only recently have researchers proposed techniques to aid with the IoT application development process. Giang et al. [16] present a distributed dataflow programming model for fog computing that aids the developer during the application development process. In this case, the developer defines the application model as a directed graph, dividing the application's functionality between different application nodes. Wang et al. [17] propose a stream processing approach, i.e., Edge-Stream, for building new applications for edge computing systems. Edge-Stream represents data flows between the application's components as streams. Frasad [18] is another framework that helps with IoT application development and makes use of a model-driven design approach to enhance the reusability, flexibility, and maintainability of sensor software. Rafique et al. [19] develop an IoT application development framework using model-driven development and attribute-driven design. The framework transforms the application's requirements into a solution architecture using attribute-driven design and then uses model-driven development to generate models to transform the application's components into software artifacts. Other papers make use of FBP for the IoT application development process. Szydlo et al. [20] introduce a heuristic data flow transformation technique to successfully distribute flows on the target network, while Belsa et al. [21] present a solution to interconnect services from different IoT platforms. Jain et al. [22] propose a mapping technique composed of two stages: (i) the IoT application is modeled into multiple different tasks annotated with target location information and (ii) each task is deployed on an edge node based on its location. The authors extend Node-RED to allow the development of the IoT application and the deployment of defined components to their predefined location, i.e., cloud or edge. Compared to the related IoT development approaches, we focus on the deployment and validation of IoT applications on different edge computing platforms without the need to introduce predefined locations for components - we enable the deployment of applications on large-scale platforms.
The deployment problem exists in many variants in the scientific literature [23], [24], [25], [26]. The two most common scenarios where researchers propose deployment techniques are (i) service placement and (ii) service offloading. The former migrates services that reside in the cloud closer to the edge of the network, i.e., onto edge or fog nodes. In contrast, the latter moves services from resource-constrained devices, e.g., smartphones, to nearby edge nodes in an attempt to preserve the energy of the devices. Brogi et al. [27] propose a deployment technique having latency and bandwidth as objectives. As a result, the proposed solution provides Quality of Service (QoS)-aware deployments of IoT applications on a target fog computing architecture. Scoca et al. [28] propose a latency-, bandwidth-, and resource-aware scheduling algorithm that finds a mapping of services to edge nodes. The approach uses a score-based technique that evaluates the target edge nodes and communication links and computes a score mapping for each service. The main objective of this technique is to guarantee optimal service quality. In [29], Redowan et al. introduce a latency-aware technique aiming to deploy the application's modules on fog computing such that all objectives are satisfied. The approach has two objectives, i.e., (i) to satisfy the application's latency requirements and (ii) to optimize the utilization of the nodes' available resources. Liu et al. [30] propose a task offloading technique that aims to minimize the system cost, i.e., energy and latency. This technique groups the users into clusters based on their priorities and decides whether a cluster should run all its tasks locally or offload them to an edge server. Grosu et al. [31] introduce an online heuristic algorithm based on a Mixed Integer Linear Program to deploy multi-component applications on edge computing platforms. As we can see, all approaches strive to achieve at least one objective, i.e., latency. However, the deployment problem is implemented as a resource allocation optimization problem leveraging assumptions about the application model.
Note that, in the presented related work, some solutions consider as target deployment platforms either fog or edge computing paradigms. These two paradigms have the same underlying premise of migrating computational resources closer to the edge of the network [6]. Therefore, from the perspective of EdgeFlow, using one paradigm over the other has no impact on EdgeFlow's functionality - in both cases, the available resources are shared between the participating devices. We differentiate ourselves from the aforementioned related work in two major respects: we (i) extend the FBP paradigm with new timing requirements and (ii) propose a new deployment technique. With the former, we allow the developer to define new timing requirements for each application component and communication link. Moreover, we introduce a new timing constraint for multiple communication flows, i.e., the developer can define individual delays for many communication flows of different sizes. The latter technique can find feasible or optimal deployment strategies fulfilling the timing and resource requirements. Furthermore, since we are using CP, the technique can validate the application's timing and resource requirements considering the target edge computing platform.
III. EDGEFLOW: APPLICATION DEVELOPMENT AND DEPLOYMENT FRAMEWORK
New latency-sensitive IoT application models achieved through edge computing decompose the application's functionality into multiple distributed components. As a result, the developer must define new timing and resource requirements for each component and for the overall application - besides the maximum e2e delay involved in the correct application functionality, the developer must define specific requirements for each component. As such, the developer must be able to define and validate all requirements during the application's development process, since these requirements play an active role in the application deployment.
In our framework, we provide a methodology to develop and deploy IoT applications on the target edge computing platform. For the former, we offer support for creating the application model and defining the timing and performance requirements. For the latter, we propose a deployment stage capable of finding a deployment strategy at design time. Depending on the type of the target edge computing platform, i.e., static or dynamic, the deployment stage provides a different utility. In the case of static architectures, e.g., in a smart factory, the deployment stage can generate feasible or optimal deployment strategies. However, if the target platform is a dynamic edge computing architecture characterized by high uncertainty and node mobility, then the deployment stage can only validate the application requirements; a dynamic network may change while we search for the optimal deployment strategy at design time. As a result, for dynamic architectures, we can use a decentralized resource allocation technique capable of finding deployment strategies at runtime [32]. Figure 1 presents an overview of our EdgeFlow methodology consisting of three distinct stages, i.e., (i) the development stage, (ii) the deployment stage, and (iii) the validation stage.
A. Development stage
The application development stage is an extension of the FBP programming paradigm, introducing new timing and resource requirements. The application modeling paradigm offers the possibility to divide the application's functionality into different components and build the application's communication flow such that the application performs a certain functionality.
The FBP paradigm views an application as a network of processes, i.e., components, interconnected via predefined communication links. Each component runs asynchronously and communicates via streams of data chunks, i.e., Information Packets (IPs) [33]. FBP is component-oriented, allowing the developer to develop different applications using the same network of components - a practice that improves the application development process and enhances the reusability of components. FBP is not a coding language. As a result, it is ideal for using predefined components from a library.
FBP extension. The FBP paradigm does not provide the possibility to define Quality of Service (QoS) requirements, i.e., timing requirements, data locality, affinity and anti-affinity constraints between components, privacy [34], [35], and security [36], during the application's modeling stage. In this paper, we target the development and deployment of latency-sensitive IoT applications - one of the fundamental concerns of these applications is latency. To this end, we propose an extension of the current FBP paradigm with new timing requirements. In our opinion, three essential timing requirements define a latency-sensitive application, i.e., worst-case execution time (WCET), e2e delay for different flows, and worst-case communication delay (WCCD). Each application component has an associated WCET representing the time required to produce a result. Similarly, we define, for each communication link, a maximum WCCD serving as the time that an IP needs to reach its destination. Finally, we provide the means to define an e2e delay for multiple communication flows. Notice that the first two timing constraints are part of the e2e delay computation, since a communication flow consists of one or more components and communication links alike. In [37], the authors formalized the syntax and semantics of flow-based languages, and they proposed a metamodel for FBP. Figure 2 presents our extended metamodel based on their formalism.
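For concreteness, the relation between the three timing requirements can be stated compactly; this is a sketch consistent with the definitions above, not a formula quoted from the paper. For a flow f traversing components c_1, ..., c_n over links l_1, ..., l_{n-1},

    e2e(f) = \sum_{i=1}^{n} \mathrm{WCET}(c_i) + \sum_{j=1}^{n-1} \mathrm{WCCD}(l_j) \le D_f,

where D_f denotes the developer-specified maximum e2e delay for f.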
Application model. An application model is defined as an FBP network which consists of a set of components C = {c1, c2, ...} that collaborate to perform a certain goal. An application may have one or more source components (i.e., components that provide the required data) as well as at least one sink (i.e., a component that acts according to the data received). In Figure 3, we present an example of an application model having one source and two sinks, where the communication flow starts with the source component, i.e., c0, and finishes with two sink components, i.e., c4 and c6. A component ci performs a certain functionality and represents a containerized microservice or serverless function. Each component is characterized by a set of timing and resource requirements, C_req = {r1, r2, ...}, as well as a set of input and output ports, C_in = {in1, in2, ...} and C_out = {out1, out2, ...}. During the application development process, the developer defines these requirements according to the application's goals. A resource requirement represents the generic memory (i.e., RAM), computational power (i.e., CPU), and storage (i.e., HDD) requirements, while the WCET of a component is an example of a timing requirement. To better fit the application's needs, in future work we intend to extend the components' resource requirements with specific requirements, e.g., hardware requirements like GPUs for highly computational components or specific data that must be present on the host node.
B. Edge Computing platform
An edge computing platform consists of multiple distributed edge nodes having the following characteristics: (i) heterogeneity, (ii) limited computational resources, and (iii) mobility. Let E_N = {E1, E2, ...} be the set of edge nodes found in the target architecture. Each node is characterized by a set of available resources, E_res = {r1, r2, ...}, like RAM, CPU, and HDD, and a list of communication links, Link_com = {link1, link2, ...} - each link_i having an associated bandwidth.
Based on the platform's characteristics and the administrative entity's control level, we identify two types of edge computing platforms, i.e., a dynamic platform and a static platform. The dynamic platform consists of different mobile and static edge nodes owned by distinct administrative entities. As a consequence, it introduces a high level of uncertainty into the system, making the deployment of an application at design time more challenging; an example of such a platform is the typical smart city scenario. In contrast, the static platform has low uncertainty, where the developer knows the nodes' characteristics at design time, e.g., a smart factory scenario.
C. Deployment and validation stages
The deployment and validation stages use a resource allocation technique aiming to help the developer validate the defined application requirements considering the target edge platform's available resources. Consequently, we develop our resource allocation technique using Constraint Programming (CP), which produces both feasible and optimal deployment strategies. Notice that CP fits rather well with our deployment stage since our primary focus is to validate the application's requirements - CP provides guarantees that if a deployment strategy is possible, then it satisfies all requirements. To deploy an application, the deployment stage requires information regarding the application model and the target edge computing platform. The developer provides all required information as three input files, i.e., the application model file, the edge computing platform file, and the flow constraints file; the developer can generate the files using the IoT application modeling stage or can create them manually.
Application model file. The development stage provides the developer with the ability to generate the application model file - a process that stores all application resource requirements in a JSON file. In the end, the JSON file contains information about the application's components, such as input and output ports, the period and data size associated with the ports, resource requirements (RAM, CPU, HDD), and the WCET.
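A minimal sketch of how such a file could be produced; the paper does not specify the exact schema, so all field names and values below are illustrative only.

    # Hypothetical application-model JSON generation (schema is illustrative).
    import json

    app_model = {
        "components": [{
            "name": "c1",
            "in_ports":  [{"name": "in1",  "period_ms": 100, "data_size_kb": 64}],
            "out_ports": [{"name": "out1", "period_ms": 100, "data_size_kb": 16}],
            "wcet_ms": 20,                                   # worst-case execution time
            "resources": {"ram_mb": 512, "cpu_cores": 1, "hdd_mb": 100},
        }],
    }
    print(json.dumps(app_model, indent=2))   # content written to the application model file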
Edge computing platform file. Since the edge computing platform is not modeled with the FBP paradigm, we assume that the developer obtains this file from the administrative entity that owns the platform, i.e., considering the static platform scenario. For the dynamic platform, we assume that the file represents an estimation of the possible current topology determined from the history data stored in the cloud.
Flow constraints file. This file contains the application's constraints - the deployment strategy uses them as objectives. We offer the developer the possibility to add an e2e delay constraint for each flow found in the application model. The e2e delay considers both the WCET of each component found on the path and the communication latency incurred when components exchange IPs. For example, considering the application shown in Figure 3, the developer can create multiple constraint flows between its components. There are three end-to-end flows, e.g., c0 - c1 - c4. However, the developer can add a constraint even for a smaller flow consisting of a minimum of two components, e.g., c0 - c1. In this paper, we assume that the developer provides at least the number of flows required to involve all communication links and components found in the application model. If any component remains outside of a defined flow constraint, then our deployment stage will consider it as a single component with no dependencies. The proposed IoT framework utilizes a mini-language to specify the flow constraints, see Grammar 1. The developer can use this language to specify each flow's path and maximum e2e delay. In Equation 1, we present an example of a flow containing two components c1 and c2, of the form f1 : c1(IN, OUT) → c2(IN) ≤ d_e2e. In the flow declaration, the IN and OUT ports represent the names of the input and output ports used by each component. As we can observe, the colon separates the flow's path declaration from its id, while ≤ shows the relation between the path and the e2e delay and → represents the direction of the communication. Furthermore, the last component does not need an output port, highlighting that the path ends there.
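Using the assumed notation above, the flows of the application in Figure 3 could, for example, be declared as follows; the port names and delay bounds are hypothetical.

    f1 : c0(OUT0) → c1(IN1, OUT1) → c4(IN4) ≤ 120 ms
    f2 : c0(OUT0) → c1(IN1) ≤ 30 ms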
IV. EDGEFLOW STAGES
In this section, we present the two stages that represent the core of the EdgeFlow framework, i.e., the development stage and the deployment stage. For the former, to prove our concept, we develop a prototype to help the developer in creating new application models and defining their timing and performance requirements. For the latter, we propose a deployment technique capable of providing deployment strategies that satisfy all application requirements and constraints.
A. Development stage prototype
To prove the benefits of creating a new latency-sensitive application using our IoT framework, we develop an application development prototype based on drawFBP. DrawFBP uses FBP at its core and allows developers to create diagrams using blocks, i.e., components [10]. An advantage of drawFBP is that developers can generate different components that other developers can reuse - the developer can create them using Java, C#, or JSON. As a result, the developer can use existing components from the drawFBP library during the application development process. In this case, the development process reduces to creating a communication flow between the selected components such that it fulfills the application's goals. However, defining the application's communication flow and choosing the components is not enough; the developer must define specific requirements for both the communication flows and each component.
We extend drawFBP with new options, i.e., set component requirements, set flow constraints, application model: generate JSON file, and flow constraints: generate JSON file, offering developers the possibility of adding timing requirements. Using the set component requirements option, the developer can describe for each component the following characteristics, i.e., WCET, period, message size, and resource requirements (RAM, CPU, HDD). Furthermore, the developer can define different e2e delay constraints for custom communication flows using the set flow constraints option. Finally, we collect all information into two input files, i.e., the application model and the communication flow constraints, using the two generate JSON file options; files that the deployment stage uses as input.
B. Deployment stage technique
Our deployment technique helps the developer to decide if the application can be deployed on the target edge computing platform. Depending on the success of the deployment, the developer gets more clarity in defining the component's resource requirements and the application's constraints. Two cases lead to deployment failure, i.e., (i) the application has very stringent requirements and (ii) the target platform lacks the required available resources. Under these conditions, if the deployment stage does not find a deployment strategy, the developer can investigate one of the two cases and make the required adjustments accordingly. Therefore, the developer can use the deployment stage to understand if the target edge computing platform can host the application. As a result, developers can create better application models suitable for deployment on a large variety of platforms.
As mentioned in the previous section, we implement the resource allocation technique using CP. Depending on the strategy found, CP can return one of four different status values, i.e., (i) optimal, (ii) feasible, (iii) unknown, and (iv) infeasible. The deployment technique found a deployment strategy that meets the requirements if the returned status is (i) or (ii). In contrast, if the returned status is (iv), then the technique cannot find a deployment strategy that meets all requirements. An interesting state, i.e., (iii), may appear when the developer decides to limit the execution time of the deployment stage. Under these conditions, the technique is unable to decide if the current deployment strategy satisfies all application requirements - as a result, it returns unknown. To find a deployment strategy, we model the problem using decision variables, constraints, and a global objective, and solve it using a CP solver.
The procedure starts once the deployment technique receives the required input files from the developer. Using the information received as input, we create a set of decision variables for the CP model. A decision variable is a variable for which the CP solver tries to assign a value chosen from a predefined domain to satisfy the application's requirements. In our case, we identify four kinds of decision variables: component variables, latency variables, WCET variables, and resource variables. Of these, only the component variables yield an allocation; all the other variables are support variables for validating the chosen deployment strategy.
Component variables. The component variables define for each component a domain containing the list of edge nodes where that component can be mapped. For example, component c 1 can be mapped only on nodes E 1 , E 2 , and E 3 ; hence, a valid domain for the decision variable of c 1 is D={E 1 , E 2 , E 3 }. Under these conditions, the solver can only choose a node from D to allocate c 1 .
Latency variables. These variables are in charge of storing the communication latency between two components considering their mapping. Let us consider that two components c 1 and c 2 communicate with each other, with c 1 mapped on E 1 and c 2 mapped on E 2 . We can use the IP size and the bandwidth of the communication link to find the communication latency between the two components. To build the variable's domain, we use the dependencies between components described in the communication flow constraints input file and all possible locations from the domains devised for the component variables. As a result, to compute the communication latency between two dependent components c i and c j , we take all possible distinct edge node combinations from their associated domains.
WCET variables. The WCET variables have the same purpose as the latency variables, i.e., to store the WCET assigned to each component. Since the WCET of a component is strictly dependent on the host's internal status, obtaining the exact WCET of a component is challenging; the edge computing platform consists of multiple heterogeneous devices, which would require a complete analysis of the WCET of each component on every edge node. We consider such an analysis out of scope for the current paper. Therefore, to ease the task of finding a suitable WCET, we assume the developer can provide a lower and an upper bound for the WCET of each component.
Resource variables. These variables keep track of each edge node's available resources. Every node starts with a predefined set of available resources, which decrease by the resource requirements of newly mapped components. This approach ensures a correct distribution of components on nodes without exceeding the nodes' available resources.
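A minimal sketch of how the first three variable families could be encoded, again assuming OR-Tools CP-SAT; the node set, domains, WCET bounds, and latency table are hypothetical, and resource variables would be tracked analogously per node:

```python
from ortools.sat.python import cp_model

model = cp_model.CpModel()
NODES = [0, 1, 2]  # E1, E2, E3 (hypothetical numbering)

# Component variables: the domain lists the admissible edge nodes.
alloc_c1 = model.NewIntVarFromDomain(cp_model.Domain.FromValues(NODES), "c1")
alloc_c2 = model.NewIntVarFromDomain(cp_model.Domain.FromValues([1, 2]), "c2")

# WCET variables: developer-supplied lower/upper bounds, in ms.
wcet_c1 = model.NewIntVar(4, 10, "wcet_c1")
wcet_c2 = model.NewIntVar(5, 12, "wcet_c2")

# Latency variable for the dependent pair (c1, c2): a table lookup over the
# two chosen nodes, with latency[i][j] flattened as LAT[i * len(NODES) + j].
LAT = [0, 6, 8,
       6, 0, 4,
       8, 4, 0]
idx = model.NewIntVar(0, len(LAT) - 1, "idx")
model.Add(idx == alloc_c1 * len(NODES) + alloc_c2)
lat_c1_c2 = model.NewIntVar(0, max(LAT), "lat_c1_c2")
model.AddElement(idx, LAT, lat_c1_c2)
# Resource variables would track each node's remaining capacity, decreasing
# with the demands of the components mapped there (see the next sketch).
```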
Once we add all decision variables to the CP model, we continue with the introduction of our constraints. Each constraint represents an important part of our model, guiding the CP solver towards a feasible deployment strategy that respects the application's constraints. For this purpose, we define two kinds of constraints: components constraints and flows constraints.
Components constraints. The components constraints ensure that the distribution of components on edge nodes does not exceed the nodes' available resources. To achieve this, the components constraints make use of the component variables and resource variables. Equation 2, Equation 3, and Equation 4 guarantee that a deployment strategy does not exceed the nodes' available resources, where n c represents the total number of components mapped on the current node.
Flows constraints. By validating the components constraints, we can successfully deploy the application on the target edge computing platform. However, this only considers the application's resource requirements as a deployment objective. Therefore, we introduce a new set of constraints, the flows constraints, to account for the flow constraints introduced in the flow constraints file. We build these constraints from the component variables, WCET variables, and latency variables. By combining the three decision variables, we further constrain the deployment strategy; these constraints consider both WCET and communication latency. Equation 5 guarantees that a flow's e2e delay does not exceed the maximum e2e delay associated with it; the e2e delay of a flow is the sum of all participating components' WCET and their communication latency. In Equation 5, l f represents the total number of links found in a flow f, c f is the total number of components that are part of a flow f, and maxE2Edelay f represents the maximum e2e delay allowed for flow f.

Global objective. The purpose of this objective is to minimize the e2e delay of each flow. In doing so, we obtain a solution that offers an optimal deployment strategy if there is enough time to search for it. Equation 6 shows the global objective, where n f represents the total number of flow constraints defined in the communication flow constraints file and flowE2E i is the current e2e delay of flow i:
$\min\left(\sum_{i=1}^{n_f} \mathrm{flowE2E}_i\right)$  (6)
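As an illustration of how Equations 2 through 6 fit together, the following sketch encodes a two-component, two-node instance, assuming OR-Tools CP-SAT; all capacities, demands, latencies, and bounds are invented for the example:

```python
from ortools.sat.python import cp_model

model = cp_model.CpModel()
RAM_CAP, RAM_DEM = [16, 8], [6, 10]   # per node / per component (hypothetical)
LAT = [[0, 6], [6, 0]]                 # node-to-node latency table (ms)

alloc = [model.NewIntVar(0, 1, f"alloc_c{i}") for i in range(2)]
on = [[model.NewBoolVar(f"c{i}_on_n{n}") for n in range(2)] for i in range(2)]
for i in range(2):
    for n in range(2):
        model.Add(alloc[i] == n).OnlyEnforceIf(on[i][n])
        model.Add(alloc[i] != n).OnlyEnforceIf(on[i][n].Not())

# Components constraints (in the spirit of Eqs. 2-4): per-node capacity.
for n in range(2):
    model.Add(sum(RAM_DEM[i] * on[i][n] for i in range(2)) <= RAM_CAP[n])

# Flows constraints (in the spirit of Eq. 5): WCET sum + latency <= max e2e.
wcet = [model.NewIntVar(4, 10, f"wcet_c{i}") for i in range(2)]
flat = [LAT[a][b] for a in range(2) for b in range(2)]
idx = model.NewIntVar(0, 3, "idx")
model.Add(idx == alloc[0] * 2 + alloc[1])
lat01 = model.NewIntVar(0, 6, "lat01")
model.AddElement(idx, flat, lat01)
flow_e2e = model.NewIntVar(0, 1000, "flow_e2e")
model.Add(flow_e2e == wcet[0] + wcet[1] + lat01)
model.Add(flow_e2e <= 40)  # maxE2Edelay_f

# Global objective (Eq. 6): minimize the flows' e2e delay.
model.Minimize(flow_e2e)
solver = cp_model.CpSolver()
print(solver.Solve(model) == cp_model.OPTIMAL)
```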
V. APPLICATION DEVELOPMENT METHODOLOGY

EdgeFlow provides a framework that aids the developer in creating emergent latency-sensitive IoT applications and deploying them on an edge computing platform. In this section, we evaluate the applicability of our IoT framework by presenting the application development experience. We describe this as a step-by-step process using one latency-sensitive IoT application. First, we describe the development of the IoT application model and generate the input files that the deployment stage requires to validate the requirements and find a deployment strategy to map the application on an edge computing platform. The application development prototype and the deployment stage technique are available in our online appendix and our git repository.
As a running exemplar, we model a public safety IoT application deployed in a smart city scenario. The application aims to prevent possible attacks by analyzing all the images and videos from an area. The application consists of multiple components capable of analyzing both the environment and people. For example, the application sends an emergency signal to the police department if a suspicious package is found in the monitored area. Since our focus is to show the extensions and improvements we bring with our proposed IoT framework, we assume that the public safety application's components are available in the drawFBP library. In this setting, the developer must connect the components and add the timing and resource requirements.
Considering the safety implications, the smart city application must adhere to timing requirements such as a low e2e delay. To prevent a possible disaster scenario, the application must be able to provide alerts without delay. Thus, the application must execute at the edge of the network. As a consequence, a prerequisite for the developer is to validate the timing and performance requirements on the target edge computing platform before deploying the application. As we will show, our IoT framework is capable of performing such validation. In our case, the application consists of five components, each enacting a specific functionality.
Development stage.
Using the application development prototype, the developer can create all components required for the application, add their functionality, and connect them via ports to realize the application's behavior (see Figure 4). At this point, the developer has created the application model without defining the timing and resource requirements.
Once the model is complete, the developer can specify the timing and resource requirements using our FBP extension options presented in Section IV-A. To assign a component's requirements, the developer can use the set component requirements option available in the component menu; to access this menu, right-click on the target component. The process of setting the component's requirements goes through each requirement and asks the developer to provide a value or a range (in the case of WCET). To create new flow constraints, the developer can select the set flow constraints option from the file menu and define a new flow constraint using the template from Equation 1. We have added support options, display flow constraints and delete flow constraints, that help the developer display and delete all existing flow constraints.
Finally, there is one more step to perform before the developer can move to the deployment stage, i.e., generating the application model file and the communication flow constraints file. The developer can generate these files using the two options Application Model: generate JSON file and Flow Constraints: generate JSON file from the file menu. As explained in Section III-C, the developer can obtain the edge computing platform file from other sources.
Deployment stage. The contents of the three input files are presented in Table I, Table II, and Figure 5. With the three files ready, the developer can start the process of finding a satisfiable deployment strategy. Considering the target edge computing platform, the deployment technique tries to find an optimal or feasible deployment strategy. Depending on the status the CP solver returns, the developer must decide whether to change the application's requirements or to find a more suitable edge computing platform with more resources. In Figure 6, we can see the output of the deployment stage. We highlight the host node of each component by placing the node's id in the top left corner. For example, component c 1 is mapped on E 0 and component c 2 is mapped on E 1 . In this case, the deployment stage finds the optimal deployment strategy in 10 ms. The deployment stage returns a detailed report showing the communication latency between components and their WCET for each communication flow constraint. In Table III we present the flows' e2e delay and the communication latency for the deployment strategy presented in Figure 6. We can observe that for the optimal solution, the actual e2e delay of flow f 1 is 34 ms, while for f 2 it is 32 ms. Also, we can see that for f 1 the communication latency between components c 1 and c 2 is equal to 6 ms.

Validation stage. As we can observe from the results of the deployment stage, for the current running example application there is no need to redefine the timing requirements; the deployment stage has found an optimal solution that fulfills all the application's requirements. However, if the deployment stage cannot find a solution, the developer can change the requirements and employ the deployment stage again.
VI. EVALUATION
In this section, we perform a quantitative evaluation to assess the deployment stage's capabilities in terms of the time required to provide optimal and feasible deployment strategies for different scenarios. We are interested in how factors such as (i) the application size, (ii) the edge computing platform size, and (iii) the number of flow constraints impact the tool's performance. Considering our evaluation objective, we propose three distinct scenarios, each with a different application model and flow constraints input file. Furthermore, in every scenario, we deploy the application on multiple edge computing platforms, each with a different number of available edge nodes.
We proceed by generating the three input files for each scenario. We could obtain these files using the application modeling stage, as shown in Section V. However, considering our evaluation objective, it is neither feasible nor required to develop the applications and add timing and resource requirements manually. As a result, we randomly generate all input files using different procedures.
Application model file: generation. All considered applications have one source component and one sink component; this decision does not alter the evaluation objective or results, since an application with multiple sources and sinks only implies a higher initial number of flows. We choose a different number of components for each scenario, starting from 10 for the first scenario up to 30 components for the last one; in our case, we increase the application size in steps of 10. We first model a component's resource requirements as a tuple, i.e., (RAM, CPU, HDD), choosing for each resource a random value in [5, 15] units. Next, we choose the WCET range [l, u] of a component by selecting values for l and u from [4, 10] ms and [10, 12] ms, respectively. Finally, the period takes a value in [10, 30] ms, the IP's data size lies in [30, 120] bytes, and we define for each component a total of two input and two output ports.
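A sketch of such a generation procedure, assuming Python and a hypothetical JSON schema for the application model file; the value ranges mirror those stated above:

```python
import json
import random

rng = random.Random(0)

def gen_component(name: str) -> dict:
    """Randomly generate one component's requirements (hypothetical schema)."""
    lo = rng.randint(4, 10)  # WCET lower bound (ms)
    return {
        "name": name,
        "resources": {r: rng.randint(5, 15) for r in ("RAM", "CPU", "HDD")},
        "wcet": [lo, rng.randint(10, 12)],  # [l, u] in ms
        "period": rng.randint(10, 30),      # ms
        "ip_size": rng.randint(30, 120),    # bytes
        "ports": {"in": 2, "out": 2},
    }

app = {"components": [gen_component(f"c{i}") for i in range(10)]}
with open("application_model.json", "w") as f:
    json.dump(app, f, indent=2)
```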
Edge platform file: generation. We create multiple edge computing platforms with sizes between 10 and 500 nodes. In each scenario, we gradually increase the size by 10, generate the edge platform file, and employ the deployment stage to find an application deployment strategy. Similar to the components' resource requirements, we model the available resources of an edge node as a tuple and choose for each resource a value in [15, 30] units. Finally, we choose for each communication link an available bandwidth in [30, 90] bytes/ms.
Flow constraints file: generation. We randomly generate flow constraints for every application model. The procedure takes as input the total number of flow constraints defined in a file and the maximum e2e delay. We set the maximum e2e delay to a high value, i.e., 500 ms, for all flows. Choosing a smaller e2e delay does not impact the deployment stage's performance; however, it may influence its ability to find a deployment strategy if the e2e delay is set to a very stringent value. Moreover, in Section V, we have demonstrated the capability to generate deployment strategies under demanding e2e delay requirements.
After we choose the e2e delay value of a flow, we must provide the associated communication path. In our case, we define three flow constraints files for every scenario, i.e., a file containing (i) one flow constraint, (ii) three flow constraints, and (iii) five flow constraints. Remember that the developer must define flow constraints such that they involve all communication links and components at least once. As a result, in our procedure, the first flow always traverses the application from the source component to the sink component, involving all other components in between.
To create a communication path between the participating components, the procedure creates a pair of two components, i.e., (src, dest), starting from the source component and selecting the next destination component. Next, we create a new pair using as src the dest component from the previous pair and choosing a new component as dest. The procedure continues until the destination becomes the sink component.
For example, let us consider that we want to build flow f 1 from Section V. In this case, four components are involved in f 1 , i.e., C={c 0 , c 1 , c 2 , c 4 }. To build the flow constraint, the procedure starts from c 0 and chooses the destination c 1 , forming the first pair (c 0 , c 1 ). The next pair is formed by making c 1 the source and choosing c 2 as the new destination, resulting in the pair (c 1 , c 2 ). Finally, the procedure stops with the pair (c 2 , c 4 ), since c 4 is the sink component. For all other flows, we randomly select the number of participating components and restart the procedure.
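The pair-building procedure can be sketched as follows; the random subset selection is a hypothetical stand-in for the unspecified way intermediate components are chosen:

```python
import random

def build_flow_path(source, sink, middle, rng=random.Random(1)):
    """Build a flow as a chain of (src, dest) pairs from source to sink,
    mirroring the pair-building procedure described in the text."""
    hops = [c for c in middle if rng.random() < 0.7]  # random subset, order kept
    chain = [source] + hops + [sink]
    return list(zip(chain, chain[1:]))

# e.g. a flow over components c0 -> c1 -> c2 -> c4
print(build_flow_path("c0", "c4", ["c1", "c2"]))
# [('c0', 'c1'), ('c1', 'c2'), ('c2', 'c4')]  (depends on the random draw)
```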
To evaluate the performance of our deployment stage, we perform 50 deployments for each scenario. Once we find an optimal deployment strategy for the current edge computing platform, we increase the platform size and attempt to find a new deployment strategy for our IoT application. In Figure 7, we present the execution time required by the deployment stage to find an optimal deployment strategy for all three scenarios. The x-axis represents the total number of nodes in the target platform, while the y-axis represents the execution time in seconds. Figure 7 shows the total execution time required by the deployment stage to yield an optimal deployment strategy. However, the deployment technique consists of two parts: (i) building the CP model and (ii) solving the model using a CP solver. As a result, we are interested in how much execution time each part requires (see Figure 8). Figure 8b presents the execution time required to generate the CP model, while Figure 8a presents the time required by the CP solver to find an optimal deployment strategy.
In all the experiments presented above, we kept the number of flow constraints equal to 1. However, we are also interested in observing the impact of multiple flow constraints on the execution times of both building and solving the CP model. As a result, we perform the same set of experiments with an increased number of flow constraints: we perform 50 deployments using a total of 3 and 5 flow constraints, respectively. In Figure 9, we show the execution time required to solve the model for each scenario, while in Figure 10 we show the time required to build the CP model.
A. Discussion
We have demonstrated that the proposed deployment technique can successfully find an optimal deployment strategy. Contrary to how we chose the flows' maximum e2e delay in Section V, we decided to choose a less stringent maximum e2e delay, since this does not impact our evaluation results; we use the same e2e delay for all scenarios. The results in Figure 7 show that (i) the number of nodes in the target platform and (ii) the application's size impact the execution time required to find an optimal deployment strategy. Breaking down the execution time (see Figures 8a and 8b), we observe that the time required to build the CP model or to solve it depends on the number of nodes and components. Note that in scenario 1, when we deploy the IoT application on an edge computing platform of 500 nodes, building the CP model requires more time than finding an optimal deployment strategy. In contrast, in scenario 3, the CP solver is more demanding than building the CP model, requiring more time to find an optimal solution; this trend continues with an increase in both application and edge computing platform size.
In Figures 7 and 8, we have demonstrated the scalability of our resource management technique with respect to the application and platform size. However, other factors impact the technique's execution time, notably the number of flow constraints. From Figure 10, we conclude that the number of flow constraints marginally increases the time required to generate the CP model; by adding more flows to the CP model, we must create more variables for the added flow constraints. We observe that the execution time for building the CP model gradually increases with (i) the number of nodes, (ii) the number of components, and (iii) the number of flow constraints, an expected behavior, since the number of model variables and constraints grows. Compared to the execution times in Figure 8b, we conclude that the flow constraints do not have a big impact on the time required to build the CP model. In contrast, the number of flows has a great impact on the CP solver's execution time (see Figure 9): the problem to be solved becomes more challenging. Compared to the results in Figure 8a, the number of flows severely impacts the execution time required by the CP solver. On the one hand, we see that the execution time fluctuates between different deployments as the edge platform size grows, a trend that results from the default optimizations of the CP solver. On the other hand, in some scenarios (see Figure 9c), the CP solver requires up to 2.5 times more time to find a deployment strategy, as the addition of different flow constraints makes the problem harder to solve. As a result, the CP solver's execution time depends mostly on how complex the problem is. Note that the complexity of a problem depends on the nodes' available resources, the application size, the components' resource requirements, and the defined flow constraints. For example, in Figure 9c, we observe that the solver finds a deployment strategy faster with 5 flow constraints than with 3; a possible reason is that the particular construction of the three flow constraints makes that problem more complex overall. Therefore, we conclude that not only the number of flows impacts the execution time, but also the construction of each flow, i.e., its communication path, number of components, and component dependencies. However, since we built the scenarios randomly and the CP solver has its own optimizations, we cannot say with certainty why this behavior or the fluctuations in execution time appear.
Finally, one advantage of using CP for our deployment technique is the ability to let the developer limit the CP solver's execution time. For example, we can find a feasible deployment strategy for scenario 3, with one flow constraint and 500 nodes (see Figure 8a), in 120 ms. By applying a time limit we can lower the total execution time required to find a deployment strategy, and thus the time required to validate all requirements. However, it is important to mention that if the time limit is set too low, the solver may not be able to decide whether a solution exists. Hence, the solver's output will be 'UNKNOWN': the solver does not have enough knowledge to determine whether the problem is feasible or infeasible. As a consequence, the developer should pick a reasonable time limit for complex problems where finding the optimal deployment strategy takes too long.
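With OR-Tools CP-SAT (our assumed solver), such a budget can be set through the max_time_in_seconds parameter; the 0.2 s value below is illustrative:

```python
from ortools.sat.python import cp_model

# Limiting the search budget; with too tight a budget the solver may
# return UNKNOWN instead of OPTIMAL/FEASIBLE/INFEASIBLE.
solver = cp_model.CpSolver()
solver.parameters.max_time_in_seconds = 0.2
status = solver.Solve(cp_model.CpModel())  # a trivial empty model
print(solver.StatusName(status))
```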
We acknowledge the high computational demands of our deployment stage when finding optimal deployment strategies for scenarios where the problem becomes very complex. We can see in Figure 7 that the deployment stage requires around 600 seconds to find the optimal deployment strategy for scenario 3. However, we argue that this execution time is not an issue, since the deployment stage takes place at design time, when the application is not yet operational.
B. Challenges and Limitations
We identify two types of latency-sensitive applications that would benefit from edge computing: hard real-time IoT applications and soft real-time IoT applications. Both are similar in that their correct functionality relies on having a low e2e delay and meeting their deadlines. However, there is an important distinction between the two: in the case of a soft real-time IoT application, violating the deadline of a component degrades the quality of the application, whereas for hard real-time applications missing a deadline can have catastrophic consequences. Hence, in this paper, we focus on the development of soft real-time IoT applications, offering the possibility to validate only the e2e delay set for each flow, i.e., that a flow does not violate its maximum allowed e2e delay; we do not provide hard real-time guarantees. There are two main challenges for the developer during the application modeling stage: assigning the WCET and the resource requirements of each component. The former plays an important role in the overall e2e delay, while the latter is critical for the deployment technique; without knowing the resource requirements of a component, the deployment technique cannot find a deployment strategy.
Finding the WCET is not a trivial task. The WCET of a component is directly dependent on the host node, i.e., the developer must know the internal status of the node (i.e., its current load and available resources) and the location of the component. One approach to determine a component's WCET is to compute it at deployment time: we could integrate a WCET analysis into the deployment stage, similar to how we compute the communication latency, which would automate the process of finding the WCET and simplify the application developer's task. In this paper, we assume that the developer provides the WCET using an external tool; the implementation of an automatic approach is our target for future work. Similar to the WCET computation, finding a component's resource requirements is a challenging task. One option to estimate these resources (i.e., RAM, CPU, HDD) is to benchmark the application on multiple edge computing platforms and take the maximum usage as an estimate.
Finally, besides finding an allocation of components to nodes, we must map the input and output virtual ports as well. There are two approaches to achieve port mapping: manual and automatic. The former requires that the engineer manually map virtual ports to the host node's real ports following the suggested deployment strategy; this is possible only if the target edge computing platform is known and relatively small. In the latter approach, the resource allocation technique is in charge of mapping both the ports and the components without requiring the help of an engineer.
VII. CONCLUSION
In this paper, we present EdgeFlow, a new IoT framework that aims to assist the developer throughout the entire application development and deployment process. For this framework, we propose a methodology for latency-sensitive IoT applications consisting of three stages: the development stage, the validation stage, and the deployment stage. For the development stage, we propose an extension of the FBP paradigm with timing and resource requirements. These requirements are crucial to the successful deployment of the application on the target edge computing platform. Further, we enable the introduction of multiple communication flow constraints, ensuring that the e2e delay of a given communication flow does not exceed its maximum allowed value. For the deployment and validation stages, we introduce a new resource allocation technique capable of finding feasible or optimal deployment strategies. In conclusion, our methodology allows for a more detailed application description and assures that if the resource allocation technique finds a deployment strategy, then that strategy satisfies all application requirements.
For future work, we intend to extend our current work with techniques to analyze and compute a component's WCET, eliminating the need to introduce a WCET range and offering a more efficient deployment strategy. With the possibility to compute the WCET of a component, our IoT framework could assist in the development and deployment of hard real-time IoT applications as well. Furthermore, there is one more important class of applications relevant in the edge computing context, i.e., edge intelligence applications: applications with components that require machine-learning hardware (e.g., GPU and TPU) and specific data stored locally. As a result, we plan to extend EdgeFlow to (i) allow the developer to add these specific requirements to each component during the application development process and (ii) consider them during the deployment stage. Finally, we aim to provide further extensions to the FBP paradigm, i.e., to add the possibility to (i) define QoS requirements and (ii) add privacy and security requirements for each component.
Figure 5 shows the declaration of the flow's path in the communication flow constraints file, where each link in the path is described by a list of communication ports, the source component, and the destination component. For our application, we assign two different flows: f 1 with the path c 1 − c 2 − c 4 and f 2 with the path c 1 − c 3 − c 4 . For f 1 we chose an e2e delay equal to 40 ms, and for f 2 the maximum e2e delay is 33 ms. To choose the maximum e2e delay, we take the sum of the lower bounds of the WCET of all components on the communication flow and add 10 ms; this value reflects the impact of the communication latency between components.
TABLE III: Flows' e2e delay and communication latency.
Fig. 7: Execution time of the deployment stage for different scenarios over different edge computing platform sizes, considering only one flow constraint.
Fig. 8: Execution time of (a) finding a deployment strategy and (b) building the CP model over different edge computing platform sizes, considering one flow constraint.
Fig. 9: Impact of the number of flow constraints on solver execution time considering all three scenarios.
Fig. 10: Impact of the number of flow constraints on model execution time considering all three scenarios.
TABLE I: Edge computing platform.

Table I shows the target edge computing platform, where we can see each node's available resources, its connections with other nodes, and the bandwidth of each communication link. For simplicity, we choose for each available resource, i.e., RAM, CPU, HDD, a value between 15 and 30 units. The deployment stage can operate with different units, e.g., MB or GB, as long as there is consistency between the available and required resources. The application model file contains the timing and resource requirements of all components; these requirements are presented in Table II. For our public safety application, we choose for each component the following: all resource requirements have a value between 1 and 15 units, we randomly select a data size value between 30 and 115 units, and we add a custom range for the WCET considering each component's functionality.
TABLE II: Application resource and timing requirements.
"Computer Science",
"Engineering",
"Environmental Science"
] |
Single shot x-ray diffractometry in SACLA with pulsed magnetic fields up to 16 T
Single shot x-ray diffraction (XRD) experiments under pulsed high magnetic fields up to 16 T generated with a nondestructive minicoil have been performed with an x-ray free electron laser at SACLA. In a perovskite manganite, Pr$_{0.6}$Ca$_{0.4}$MnO$_{3}$, the magnetic field induces a phase transition from the charge-orbital ordered insulator phase to a ferromagnetic metallic phase above $\sim8$ T, which is observed in a series of single shot XRD via the accompanying lattice changes. It is discussed whether an XRD experiment under an ultrahigh magnetic field of 100 T is feasible using single shots of the SACLA XFEL pulse and a destructive pulse magnet.
I. INTRODUCTION
As a new light source, x-ray free electron lasers (XFELs) are characterized by ultrahigh transverse coherence, ultrashort pulses of a few femtoseconds, and a high photon flux of $10^{11-12}$ photons/pulse that enables single shot experiments. The advent of the XFEL technique has opened new areas of science through experimental techniques such as coherent diffraction from a single nano-cluster, diffraction-before-destruction imaging of a living cell, femtosecond time-resolved pump-probe experiments, and single shot x-ray diffractometry (XRD) under shock compression [1,2]. As such a new experiment, we propose x-ray experiments at extremely high magnetic fields above 100 T, utilizing the ultrashort pulse width and single shot capability of an XFEL together with a destructive pulse magnet for 100 T generation.
High field experiments have been successfully combined with synchrotron radiation (SR) based XRD and spectroscopies for condensed matter studies with DC magnets up to 15 T [3][4][5][6][7] and non-destructive pulse magnets up to 50 T [8][9][10][11][12][13][14][15][16]. The recent advent of single shot techniques with XFELs has further made possible the observation of the weak superlattice reflections of charge density wave order appearing in a cuprate when the high-$T_C$ superconductivity is suppressed at fields up to 32 T [17,18]. These high-field studies with SR and XFELs have been confined to below 50 T. Generating magnetic fields over 50 T is a challenge for a portable non-destructive pulse magnet. Non-destructive magnets generating above 50 T are available only in a few dedicated high magnetic field facilities around the world, where pulsed magnetic fields up to 100 T are available with sufficient safety measures [19].
For x-ray experiments well above 50 T, we propose the use of a single turn coil (STC), a destructive pulse magnet, instead of non-destructive pulse magnets, because an STC can be portable and generates magnetic fields over 100 T [20,21]. Generating high fields well beyond 100 T necessarily requires destructive pulse magnets, in which the magnetic pressure destroys the coil. For condensed matter experiments, flux compression and single turn coil techniques have been implemented [22][23][24]. The temporal duration of the field generated by a single turn coil is only a few microseconds, about three orders of magnitude shorter than the pulse of a non-destructive magnet. It is still long enough for an XFEL pulse to be used in a single shot experiment. Note also an exceptional single shot technique utilizing SR from storage rings [25], although the photon number of SR-based x-rays is usually too small for a single shot during a few-microsecond 100 T pulse.
For a 100 T experiment at an XFEL site, one needs to construct a portable 100 T generator. Special equipment such as the nonmetallic vacuum tubes and cryostats conventionally used in STC experiments also needs to be tested. Portable STCs have been implemented before [20,21]. A portable STC should weigh less than 200 kg. We are now constructing a portable pulsed power supply with 30 kV charging and 4.5 kJ energy using a mono-spark gap. Conventionally, two STCs are in operation at ISSP, Univ. of Tokyo, with 200 kJ at 50 kV and a total weight of 2 tons. At ISSP, a specially prepared nonmetallic cryostat and vacuum system have been implemented for destructive pulse experiments at low temperatures. This equipment is known to withstand the explosion of the STC magnet. It is most preferable to use this equipment in the 100 T experiments at the SPring-8 Angstrom Compact free electron LAser (SACLA). To check this possibility, we need to test whether this equipment is compatible with the x-ray experiments at SACLA. Thus, it is fruitful to test the feasibility of this equipment at SACLA with a minicoil mimicking the 100 T experiment with an STC.
As preceding studies for the single shot XRD at 100 T in SACLA, we have discovered that the lattices of materials actually respond to 100 T fields of a few-microsecond pulses generated in destructive pulse magnets [26]. Recently, a high-speed strain sensor utilizing fiber Bragg grating (FBG) was devised for magnetostriction measurements above 100 T [27]. Solid oxygen is a candidate for structural analysis with XRD above 100 T. A high field beyond 120 T induces a new thermodynamic phase called the θ phase [28]. A cubic lattice geometry is anticipated in the θ phase, in contrast to the monoclinic α phase at low field; this remains elusive until an XRD measurement is performed at 120 T. A magnetostriction measurement does find a first order lattice change accompanying the magnetization jump, indicating a lattice change at the α-θ transition. The perovskite cobaltite LaCoO 3 is a candidate for investigating superlattice reflections at 100 T. A peculiar spin crossover (SCO) is induced by magnetic fields beyond 100 T [29]. There are two high field phases whose origins are controversially argued to be a Bose-Einstein condensation of excitons, an excitonic insulator, or a crystallization of the spin-state degree of freedom. This is to be determined by observing a superlattice reflection from the spin-state crystallization and also by observing the d electron states with x-ray emission spectroscopy.
In this paper, we report a single shot XRD experiment at SACLA with a non-destructive room-temperature-bore magnet mimicking the single turn coil. The purpose of this work is to assess the feasibility of applying the ready-made explosion-proof equipment for the STC to XRD experiments at SACLA. We successfully detect the field induced phase transition of Pr 0.6 Ca 0.4 MnO 3 in a series of single shot XRD measurements at SACLA up to 16 T. We also comment on a design for a portable single turn coil system for SACLA.
II. EXPERIMENT
The experiment was performed at the hard x-ray beamline BL2 of SACLA [30,31]. A schematic drawing of the experimental setup is shown in Fig. 1(a). The XFEL was tuned to 16 keV with a mean pulse energy of $\sim100$ µJ/pulse. A pink beam ($\Delta E/E \sim 10^{-3}$) and a monochromatic beam ($\Delta E/E \sim 10^{-4}$) were used. The single shot diffraction signals were monitored with a multi-port charge-coupled device (MPCCD) image sensor [32]. The magnetic fields were generated with a mini bank system of 2.4 kJ at 2000 V (1.2 mF) [8,33] and a room temperature coil wound by hand. The waveforms of the pulsed magnetic fields are shown in Fig. 1(b).
The bore of the coil is 1 cm in diameter and 1 cm long in the axial direction. In the bore, a vacuum tube is suspended, with a He-flow, non-metallic cryostat located inside, as shown in Fig. 1(c). The sample is placed inside the cryostat at a position where the XFEL pulse hits it and the diffracted signal escapes through Kapton windows at the back and top of the vacuum chamber. The 2θ range of 15-25 degrees is captured with the 25.6 × 51.2 mm window of the MPCCD at a distance of 300 mm from the sample to the detector. The diffraction image is shown in Fig. 1(d).
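As a sanity check on this geometry, the angular span subtended by the detector window and the lattice spacings probed at 16 keV follow from the stated numbers (a minimal sketch; the Bragg relation and the 12.398 keV·Å conversion are standard, everything else is taken from the text):

```python
import math

distance_mm = 300.0   # sample-to-detector distance (from the text)
window_mm = 51.2      # long edge of the MPCCD window (from the text)

# Angular span subtended by the long edge of the detector window.
span_deg = math.degrees(math.atan(window_mm / distance_mm))  # ~9.7 deg

# Bragg condition: lambda = 2 d sin(theta); lambda [A] ~ 12.398 / E [keV].
wavelength_A = 12.398 / 16.0  # ~0.775 A at 16 keV

def d_spacing(two_theta_deg: float) -> float:
    """Lattice spacing probed at a given scattering angle 2-theta."""
    return wavelength_A / (2.0 * math.sin(math.radians(two_theta_deg / 2.0)))

print(f"angular span ~ {span_deg:.1f} deg")            # consistent with 15-25 deg
print(f"d at 2theta = 20 deg: {d_spacing(20.0):.3f} A")  # ~2.23 A
```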
Pr 0.6 Ca 0.4 MnO 3 was powdered from a single crystal and dispersed in a glue whose effective thickness was $\sim10$ µm. A roughly powdered sample and a more finely powdered one are called samples P and Q, and were used with the pink and the monochromatic beams, respectively. The powder was ground coarsely on purpose so that a spotty Debye ring is produced in XRD, where we expected to observe a speckle pattern around each diffraction spot reflecting the microstrain of a particle. The spotty diffraction ring is produced by the roughly powdered sample P. Each diffraction spot comes from a single domain of a micro-particle of the sample. The magnetic field effect is visible in Fig. 2. The trend of the images is similar from 0 T to 6.6 T. The images from 9.7 T to 16.3 T share a common feature distinct from the low field data. The images at 7.7 T and 8.7 T appear to be linear combinations of the low field and high field features, indicating a transient state. Two magnifications are shown in Figs. 2(b) and 2(c). In Fig. 2(b), it is clear that spot A appears above 7.7 T. In Fig. 2(c), on the contrary, spot B disappears above 9.7 T. The x-ray intensities of spots A and B are shown as a function of B in Fig. 3. The appearance of spot A means that the θ-2θ configuration is satisfied after the field induced change of the lattice parameter of the particle producing spot A. In contrast, for spot B, which vanishes after the field induced lattice parameter change, the θ-2θ configuration is no longer satisfied at high fields. With the energy dispersion of the pink beam, $10^{-3}$, the assignment of the diffraction peaks is inconclusive. Fig. 4(a) shows the result of a series of single shot XRD measurements at pulsed high magnetic fields with a monochromated XFEL beam. Fig. 4(b) shows the x-ray intensities integrated in the vertical direction over the pink colored area of Fig. 4(a). The less spotty picture compared to Fig. 2 indicates that sample Q is somewhat finer. It is clear that the 112 reflection increases above 8 T, while the 020 and 200 reflections decrease. This behavior is in good agreement with the previous result [8]. It is reported that, in Pr 0.6 Ca 0.4 MnO 3 , the charge and orbital ordered insulator (COOI) phase appears at low temperatures [34]. When the COOI phase appears, the a and b axes elongate and the c axis shrinks [35,36]. When the COOI phase collapses under an external magnetic field, the a and b axes shrink and the c axis elongates, as observed in the previous study using SR and a mini pulse magnet. According to this picture, the 020 and 200 diffractions are expected to shift to higher 2θ, overlapping the 112 peak, while the 112 diffraction stays. As a result, the diffraction peak at 16.4° decreases and the diffraction at 16.5° increases. The observation in Fig. 4(b) is in good agreement with this expectation, indicating that the field induced collapse of the COOI phase is successfully observed with single shot XRD at SACLA, with the slight discrepancy that the 020 reflection does not clearly disappear at high fields. This discrepancy may arise from the fact that the Debye ring is not smooth enough to obtain good statistics, which is evident from the XRD picture in Fig. 4(a). Smoother data would be obtained with an adequately finely powdered sample. So far it has been shown that single shot XRD is successfully conducted with a 1.4 ms pulsed magnetic field together with the non-metallic cryostat and vacuum tube that are compatible with the STC experiments.
Also, the lattice change of Pr 0.6 Ca 0.4 MnO 3 accompanying the field induced phase transition is observed in a series of single shot XRD experiments. The observed changes in XRD are consistent with the shrinkage of the a and b axes and the elongation of the c axis at high fields.
Here, we propose and discuss the feasibility of a single shot powder XRD measurement at SACLA up to 100 T for structural analysis. In the present study, a roughly powdered sample was employed with the motivation to observe speckle patterns from a micrograin under stress in the two-phase coexistence state during the phase transition. However, such speckle patterns were not observed around the diffraction spots in Fig. 2. The 100 T XRD experiment will focus on structural analysis by means of smooth powder diffraction using a well powdered sample. In the present study, a limited range of 2θ < 25° is covered, where a small number of low indexed diffraction peaks are obtained, limited by the opening angles of the fiber-reinforced plastic (FRP) based cryostat and the minicoil. The dimensions of the present minicoil are d = 10 mm in diameter and h = 10 mm in axial length. For structural analysis, it is favorable to cover a larger angle, up to 2θ ~ 60°. A larger coverage of 2θ becomes possible with an all-Kapton cryostat and a vacuum tube with a Kapton window, as shown in Fig. 5. In the 100 T XRD experiment, a double turn coil is to be used, which has a larger aspect ratio of h/d ~ 0.5. This allows us to cover 2θ up to 60°, as schematically shown in Fig. 5.
Presently, the pink beam does not provide sufficient XRD resolution for structural analysis. The use of narrower band beams such as the monochromatic beam or the seeded beam [37] is practical for the 100 T XRD experiment. Besides, using a photon energy of 10 keV increases the photon number by a factor of ~6 compared to the photon energy of 16 keV used in the present study; 16 keV was used here because of the limitation of the small 2θ range. With the larger 2θ coverage proposed above, the use of hν = 10 keV is allowed. A magnetic field of 80 T is estimated to be obtained with a double turn coil with a diameter of 6 mm and our portable bank system currently under construction, which is rated at 30 kV with an energy of 4.5 kJ, generating 200 kA. Instead of the MPCCD detector, we plan to use an imaging plate (IP), because an IP allows robust detection even if the coil fragments, which is not feasible with the MPCCD detectors.
A single crystal experiment aiming at a weak signature such as a superlattice reflection is at this moment not feasible, due to the limited volume of the magnetic field generated in an STC, where a sample rotation mechanism would somehow have to be installed. Evaluation of this method will be a future task.
IV. SUMMARY
We performed single shot XRD experiments at SACLA with pulsed magnetic fields up to 16 T. We successfully observed the field induced lattice change in Pr 0.6 Ca 0.4 MnO 3 , which originates from the collapse of the COOI phase and the appearance of a ferromagnetic metal. Based on this result, a methodology for single shot XRD experiments up to the 100 T range at SACLA using a portable 100 T generator is discussed.
"Physics",
"Materials Science"
] |
Offshore Electrical-Oil Production Coupling System Reliability Analysis
The role played by offshore oil resources in energy supply is becoming more and more important, and the normal operation of the oil production system on offshore oil platforms requires the reliable operation of the electric power system as a prerequisite; the failure of components in the oil production system may lead to a large-scale production shutdown and cause the related electrical components to shut down, rendering the power system ineffective over a certain range. In this paper, a coupled model of the electric power system and production system of an offshore oil platform based on massively parallel computing is proposed. By analysing the connection between the oil production system and the electric power system, the coupled system model is established and the load data are calculated. Meanwhile, the reliability model of the coupling components is established based on fault data, and the electricity-related and production-related reliability indices of the coupled system are proposed. The breadth-first search (BFS) algorithm is used to determine whether the coupling nodes are powered, and the depth-first search (DFS) algorithm is used to determine the connectivity of the coupled system. In addition, a minimum load-shedding model based on the integrated fault degree is proposed to reduce the load shedding of the coupled system.
Introduction
Offshore oil platforms are the base of operations for offshore oil extraction, processing, and transportation; ensuring the reliability of their power and oil coupling systems is an important guarantee for offshore oil operations. In actual production, accurately assessing the reliability of the coupled power and oil production system is important for achieving the normal operation of the offshore platform.
Most current reliability studies of electric power and other coupled multi-platform systems conduct their analysis by establishing coupled system models. [1] and [2] regionalized the generation, transmission, and distribution system and assessed the reliability of the electric power system of offshore oil platforms in a system-wide context. [3] studied the effect of extreme weather on the coupled electric-gas system to make the reliability index assessment closer to the real value under severe weather. [4] established a load-shedding model and analysed the impact of demand response on the reliability of integrated energy systems. [5] combines wind power hydrogen production with local hydrogen storage tanks and fuel cells to participate in day-ahead economic dispatch, bringing better environmental friendliness and economy to the system. In [6], a minimum cut load model acting according to load classes is proposed, together with a model simplification method for minimum cost flow, to improve the rationality of the interconnected power system structure in offshore oil fields.
This paper conducts targeted research on the coupled power-oil production system of offshore interconnected platforms. Based on fault data, a reliability model of the coupled components is established, and electricity-related and production-related reliability indices of the coupled system are proposed. The reliability of the coupled system is evaluated using an improved non-sequential Monte Carlo method. During each system state sample, BFS is used to determine whether the coupling nodes are powered, and DFS is used to determine the coupling system connectivity. In addition, a minimum load cut model based on the integrated fault degree is proposed to reduce the load cut of the coupled electric power-oil production energy system.
Structural Model of Offshore Power-Oil Coupled System
Offshore oil platforms are mainly divided into Central Platforms (CEP) and Wellhead Platforms (WHP). According to the power supply requirements, the central platform is generally equipped with a small power station and a certain number of turbine generators of the same type [7], which must meet not only its own load demand but also the load demand of the sub-platforms in the region.
In this paper, the structure of the oil production system is simplified, so that each component of the production system is directly connected to the power system. The platform power distribution system mainly supplies power to the components of the platform's coupling system; the connection between a WHP and the power distribution system is shown in Figure 1.
Stability assessment process of electro-oil coupled system
To assess the reliability of a coupled system, the power system state j must first be evaluated to determine the connectivity of the coupled nodes before further evaluation. We chose a non-sequential Monte Carlo method for sampling component states based on a state sampling approach, in which each component is assigned a uniform random number in [0,1] that classifies it into normal and fault states (or normal, fault, and repair states for three-state components). To quickly traverse each node of the resulting topology one by one, a vertical top-down traversal using DFS was chosen.
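A minimal sketch of this state sampling, assuming hypothetical per-component state probabilities:

```python
import random

def sample_state(components, rng=random.Random(42)):
    """Non-sequential Monte Carlo state sampling: one uniform draw in [0,1]
    per component, mapped to 'normal', 'fault', or 'repair'. Probabilities
    are hypothetical and assumed to sum to less than 1 per component."""
    state = {}
    for name, probs in components.items():
        u = rng.random()
        if u < probs["fault"]:
            state[name] = "fault"
        elif u < probs["fault"] + probs.get("repair", 0.0):
            state[name] = "repair"
        else:
            state[name] = "normal"
    return state

components = {"G1": {"fault": 0.02}, "T1": {"fault": 0.01, "repair": 0.005}}
print(sample_state(components))
```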
After each sampling of the oil platform and power system states, the connectivity of each node of the coupled system is judged on the premise that the end nodes of the distribution system are connected, and then it is checked whether there is a branch connection in the production process of the coupled system. First, the connectivity of the end nodes of the distribution system is judged by BFS; then the connectivity of the production process is judged by DFS. If it is connected, the production process can continue; otherwise, the search returns to the previous node and continues the traversal.
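A sketch of the two-step connectivity check, with hypothetical adjacency-list graphs for the grid and the production process:

```python
from collections import deque

def powered_nodes(grid, sources):
    """BFS from the power sources over the grid graph (adjacency lists of
    the components still in service); returns the set of energized nodes."""
    seen, queue = set(sources), deque(sources)
    while queue:
        u = queue.popleft()
        for v in grid.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def process_connected(process, start, goal, powered):
    """Iterative DFS over the production-process graph, passing only
    through powered nodes; True if production can flow start -> goal."""
    stack, seen = [start], set()
    while stack:
        u = stack.pop()
        if u in seen or u not in powered:
            continue
        seen.add(u)
        if u == goal:
            return True
        stack.extend(process.get(u, []))
    return False

grid = {"gen": ["bus1"], "bus1": ["bus2"]}      # hypothetical grid
process = {"well": ["sep"], "sep": ["export"]}  # hypothetical process chain
# In practice, map each process unit to its supplying bus before this check.
print(process_connected(process, "well", "export", {"well", "sep", "export"}))
```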
Electricity-related reliability indicators of offshore power and oil coupling system
After establishing the reliability model of the coupled components, system reliability indices are proposed from two aspects, electricity and oil production, according to the characteristics of the coupled system and its operation mode, which differs from that of a pure electric power system.
The specific evaluation indicators are as follows:

1) The probability of production load curtailments (PPLC) represents the probability that the production process will be affected by the removal of its component load in the coupled system. The calculation formula is as follows:

$\mathrm{PPLC} = \sum_{i=1}^{N_L} \frac{T_i}{T} \cdot \frac{\sum_{j \in S_i} n_j}{N_i}$  (1)

where $S_i$ is the set of system states in which load state $i$ generates a cut production load quantity; $n_j$ is the number of occurrences of state $j$; $N_i$ is the total number of samples for load state $i$; $T_i$ is the duration of load state $i$; $T$ is the total time of the load states; and $N_L$ is the number of production load states.
2) The expected number of production load curtailments (ENPLC), in times/year, refers to the number of times the system transfers from a state with (without) cut production load to a state without (with) cut production load:

$\mathrm{ENPLC} = \sum_{i=1}^{N_L} \frac{T_i}{T} \sum_{j \in S_i} \frac{n_j}{N_i} \sum_{k=1}^{M_j} \lambda_{j,k}$  (2)

where $M_j$ is the total number of transfer rates leaving the cut production load state $j$, and $\lambda_{j,k}$ is the $k$-th transfer rate of a component leaving the coupled-system cut production load state $j$.
3) Expected duration of production load curtailments (EDPLC), in h/a, indicates the duration per year during which component load is removed from the coupled system:

$\mathrm{EDPLC} = \mathrm{PPLC} \times 8760$  (3)
4) Expected probability of coupled system failure (EPCSF), which indicates the annual probability of failure of the coupled system:

$\mathrm{EPCSF} = \sum_{i=1}^{N_L} \frac{T_i}{T} \cdot \frac{\sum_{j \in F_i} n_j}{N_i}$  (4)

where $F_i$ is the set of coupled system fault states at the $i$-th load state.

5) Expected time of coupled system failure (ETCSF), in h/a, indicates the duration of failure of the coupled system per year:

$\mathrm{ETCSF} = \sum_{i=1}^{N_L} \frac{T_i}{T} \cdot \frac{\sum_{j \in F_i} n_j T_j}{N_i}$  (5)

where $T_j$ is the sustained failure time of the coupled system state $j$.
6) Expected Production Energy Not Supplied (EPENS), in MWh/a, indicates the expected amount of production energy not supplied by the coupled system:

$\mathrm{EPENS} = \sum_{i=1}^{N_L} \frac{T_i}{T} \cdot \frac{\sum_{j \in S_i} C_j n_j}{N_i} \times 8760$  (6)

where $C_j$ is the amount of cut production load corresponding to system state $j$.
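For illustration, the sample-based estimation of a few of these indices can be sketched as follows; a single aggregate load state is assumed, so the $T_i/T$ weighting collapses, and the sample data are invented:

```python
def reliability_indices(samples, hours_per_year=8760):
    """Estimate PPLC, EDPLC and EPENS from non-sequential Monte Carlo
    samples; each sample is (curtailed: bool, cut_load_mw: float)."""
    n = len(samples)
    pplc = sum(1 for curtailed, _ in samples if curtailed) / n  # probability
    edplc = pplc * hours_per_year                               # h/a, Eq. (3)
    # Average curtailed power per sample, scaled to a year -> MWh/a.
    epens = sum(p for curtailed, p in samples if curtailed) / n * hours_per_year
    return pplc, edplc, epens

samples = [(False, 0.0)] * 97 + [(True, 1.5), (True, 2.0), (True, 0.5)]
print(reliability_indices(samples))  # (0.03, 262.8, 350.4)
```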
Oil production-related reliability indicators of offshore power and oil coupling system
Such indicators are designed to evaluate the further losses caused by the impact on the entire production process.

1) Probability of Invalid Production Energy Supplied (PIPES) is used to evaluate the probability that the coupled system supplies invalid electrical energy in one year:

$\mathrm{PIPES} = \sum_{i=1}^{N_L} \frac{T_i}{T} \cdot \frac{\sum_{j \in S_i} A_j n_j}{N_i}$  (7)

where $A_j$ is the amount of invalid production load supplied corresponding to system state $j$.
2) Crude Oil Production Loss (COPL), in m³/a, is used to measure the average annual reduction in crude oil production due to coupled system failures:

$\mathrm{COPL} = \sum_{i \in S_P} P_i T_i$  (8)

where $S_P$ is the set of all coupled system failure states leading to production reduction; $P_i$ is the crude oil production reduction corresponding to state $i$; and $T_i$ is the duration of system state $i$.
3) Diesel Fuel Production Loss (DFPL), in m³/a, is used to measure the average annual reduction in diesel production caused by coupled system failures:

$\mathrm{DFPL} = \sum_{i \in S_P} D_i T_i$  (9)

where $D_i$ is the diesel reduction rate corresponding to system state $i$.
Integrated Fault Degree-Based Load Cutting Model
After each determination of the system state, a power flow calculation is required to determine whether any line is overloaded and whether a load-shedding operation is required. In this subsection, the optimal load-shedding model is proposed and a power flow calculation method suited to offshore oil platforms is selected.
Regional proximity cut load model
The components used on offshore oil platforms have three states in total; the three-state model includes the normal operation state, the fault state, and the maintenance state. The non-sequential Monte Carlo sampling method often requires simulating many system states, for each of which fault analysis and load cut evaluation are performed. In these fault states, the original system structure may have undergone minor changes, such as a busbar fault that prevents some loads from being supplied; in more severe fault states, the system may have been disconnected, forming an unpowered "island" structure.
Based on their relevance to the production process, the components are divided into 21 levels of production load and 10 levels of power system load. The importance of a load decreases as its critical level increases. The principle of load removal is that no smaller-level load shall be removed until the removal of larger-level loads is completed, as sketched below.
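A minimal sketch of this ordering, with hypothetical load names, levels, and sizes:

```python
def shed_order(loads):
    """Order candidate loads for curtailment: a higher level number means
    a less important load, so it is shed first ('loads' maps a load name
    to a (level, MW) tuple; all values here are hypothetical)."""
    return sorted(loads, key=lambda name: loads[name][0], reverse=True)

loads = {"drill_pump": (3, 1.2), "lighting": (18, 0.3), "crane": (10, 0.8)}
print(shed_order(loads))  # ['lighting', 'crane', 'drill_pump']
```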
DC sensitivity analysis
Since the non-sequential Monte Carlo method requires many system states to be simulated, and each non-repeated state requires a power flow calculation to decide whether to cut load, the DC power flow method is chosen for its low computational cost and speed. The DC-power-flow-based load-shedding model can be described as follows. The overall objective is to minimize the total amount of cut load, i.e., $\min \sum_i \Delta P_i$, where $\Delta P_i$ is the load cut at node $i$. The constraints include two equality constraints, the power flow equation constraint and the active power balance constraint, and several inequality constraints: the load cut constraint, the line flow constraint, and the generator output constraint.
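A minimal sketch of such a DC load-shedding model, using scipy's linprog on an invented three-bus example; the PTDF matrix, line limits, and demands are not from the paper:

```python
import numpy as np
from scipy.optimize import linprog

# 3 buses: slack generator at bus 0, loads at buses 1 and 2.
# Decision vector x = [Pg, cut1, cut2]; objective: minimize cut1 + cut2.
load = np.array([0.9, 0.6])         # MW demand at buses 1, 2 (hypothetical)
ptdf = np.array([[0.5, -0.25],      # line flow = PTDF @ net injection
                 [0.5,  0.25]])     # at buses 1, 2 (slack = bus 0)
fmax = np.array([0.5, 0.5])         # line flow limits
pg_max = 1.2                        # generator output limit

c = np.array([0.0, 1.0, 1.0])       # minimize total cut load
# Active power balance: Pg + cut1 + cut2 = load1 + load2
A_eq = np.array([[1.0, 1.0, 1.0]])
b_eq = np.array([load.sum()])
# Line limits: -fmax <= ptdf @ (cut - load) <= fmax, rewritten in A_ub form.
A_ub = np.vstack([np.column_stack([np.zeros(2),  ptdf]),
                  np.column_stack([np.zeros(2), -ptdf])])
b_ub = np.concatenate([fmax + ptdf @ load, fmax - ptdf @ load])
bounds = [(0.0, pg_max), (0.0, load[0]), (0.0, load[1])]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x)  # generator output and the minimal per-bus load cuts
```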
In addition, to quantitatively describe the importance of each load point in the production process and to account for the impact of faults on both the production system and the power system, this paper proposes the concept of comprehensive failure degree (CFD) and adds fault-degree constraints to the load-shedding model. In the CFD formulation, $S_n$ is the set of nodes ranked from smallest to largest comprehensive failure degree, $P_i^{\mathrm{CFD}}$ is the amount of node load with comprehensive failure degree $i$, and $\mathrm{ELL}_i$ is the load level, within the busbar power system, to which load $i$ is connected.
Example of calculation
Offshore oil platforms are now mostly supplied with power in networked form [8]. In this paper, an offshore oil platform group is used as an example for analysis; it consists of three central platforms and eight wellhead platforms, whose locations and network schematic are shown in Figure 2.
Analysis of calculation results
To make the estimates more accurate, the coupled system was evaluated with 30,000 samples; the reliability assessment indices are shown in Table 1, and the variance convergence graph is shown in Figure 3. Further analysis of the reliability indicators in the table yielded the following results. 1) Comparison of failure indicators: in terms of failure probability, the power system failure probability is 1.45 times that of the coupled system; in terms of failure time, the power system accounts for 50.48% and the coupled system for 49.52%. Power system failure time is dominated by transmission system failures. Although most transmission failures affect the coupling systems of the directly connected sub-platforms, only a very few merely decouple the interconnected system into several parts without causing a coupling system failure; the coupled system failure time is therefore only slightly smaller than the power system failure time.
2) Comparison of load-shedding indicators: in terms of the number of load-shedding events, the power system sheds load only slightly more often than the coupled system; once the transmission system sheds load, it is likely to cause overall load shedding on the wellhead platform connected to it, so the coupled system also experiences shedding. In terms of load-shedding time, the power system's is 1.34 times that of the coupled system. Coupled-system loads are cut later in the priority order, and the long repair time of the interconnecting submarine cables makes the gap between the two systems in load-shedding time more pronounced.
3) Comparison of expected power shortage: the table shows that the expected power shortage of the power system is 3.45 times that of the coupled system. Since the load-shedding model based on the comprehensive fault degree prioritizes the removal of non-production loads, the coupled system's share of power losses is about 9% smaller than its share of load volume. 4) Analysis of other coupled-system reliability indicators: the daily crude oil and diesel production data of each offshore platform were compiled, and the production-related reliability indices of each platform's coupling system were computed from the 30,000 samples, as shown in Table 2. The oil loss of each platform is positively correlated with its oil production; beyond that, it is the structure of the coupled system and the weak links of its connected distribution system that determine the final loss value.
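For readers unfamiliar with the variance-convergence check summarized in Figure 3, the following illustrative sketch (with invented sample data and an assumed stopping threshold) tracks the coefficient of variation of an index estimate as samples accumulate.

```python
# Hypothetical sketch: coefficient of variation of the sample-mean estimator;
# sampling is typically stopped once this falls below a chosen threshold
# (a 5% threshold would be an assumption, not a value from the paper).
import numpy as np

def coefficient_of_variation(samples: np.ndarray) -> float:
    mean = samples.mean()
    std_err = samples.std(ddof=1) / np.sqrt(len(samples))
    return std_err / mean if mean else float("inf")

rng = np.random.default_rng(0)
draws = rng.exponential(scale=2.0, size=30_000)     # stand-in index samples
for n in (1_000, 5_000, 30_000):
    print(n, round(coefficient_of_variation(draws[:n]), 4))
```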
Conclusion
To study the interaction between the reliability of the power system and that of the production system on an offshore oil platform, this paper establishes a coupled system model taking the platform's power system and production system as the research object, and evaluates the reliability of the coupled power-oil production system using an improved non-sequential Monte Carlo method.
Actual failure data were used to establish the coupled component reliability model. In the system state assessment, the BFS algorithm is used to determine whether the coupled components are reliably supplied, and the DFS algorithm is applied to determine production process connectivity. The production fault degree is incorporated into the regional proximity load-shedding model, and a comprehensive fault degree index is proposed to judge the importance of each load, which reduces the amount of load shed in the coupled system.
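As an illustration of the BFS supply-connectivity check used in the state assessment, the following Python sketch (ours, not the paper's implementation) marks a load node as supplied if any path of in-service branches connects it to a source; the graph and node names are hypothetical.

```python
# Hypothetical sketch: BFS check of whether load nodes remain connected
# to a power source through in-service branches after a sampled fault.
from collections import deque

def supplied_nodes(adj: dict[str, list[str]], sources: list[str]) -> set[str]:
    """Return every node reachable from any source via breadth-first search."""
    seen, queue = set(sources), deque(sources)
    while queue:
        node = queue.popleft()
        for nbr in adj.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen

# Toy network: bus3 is islanded, so it is not supplied.
adj = {"gen": ["bus1"], "bus1": ["gen", "bus2"], "bus2": ["bus1"], "bus3": []}
print("bus3 supplied:", "bus3" in supplied_nodes(adj, ["gen"]))  # False
```

The same traversal idea, applied depth-first to the process flow graph, can serve as the DFS production-connectivity check.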
Figure 1. Coupled system structure connection diagram
Figure 2. Offshore oil platform network diagram
Table 1. System electrical-related reliability index results

Figure 3. Convergence diagram of the variance of the coupled system reliability indices
Table 2. Production-related reliability indicators for each platform
Investigating the Structural Compaction of Biomolecules Upon Transition to the Gas-Phase Using ESI-TWIMS-MS
Collision cross-section (CCS) measurements obtained from ion mobility spectrometry-mass spectrometry (IMS-MS) analyses often provide useful information concerning a protein's size and shape and can be complemented by modeling procedures. However, there have been some concerns about the extent to which certain proteins maintain a native-like conformation during the gas-phase analysis, especially proteins with dynamic or extended regions. Here we have measured the CCSs of a range of biomolecules including non-globular proteins and RNAs of different sequence, size, and stability. Using traveling wave IMS-MS, we show that for the proteins studied, the measured CCS deviates significantly from predicted CCS values based upon currently available structures. The results presented indicate that these proteins collapse, to extents that depend on their elongated structures, upon transition into the gas phase. Comparing two RNAs of similar mass but different solution structures, we show that these biomolecules may also be susceptible to gas-phase compaction. Together, the results suggest that caution is needed when predicting structural models based on CCS data for RNAs as well as proteins with non-globular folds. Electronic supplementary material: The online version of this article (doi:10.1007/s13361-017-1689-9) contains supplementary material, which is available to authorized users.
Introduction
The advent of electrospray ionisation (ESI) transformed the field of mass spectrometry (MS) by providing the ability to routinely analyze not only large proteins but also noncovalently bound biomolecular complexes. In the three decades since this development, there has been a significant body of literature providing evidence of the native-like state of biomolecules measured by both ESI-MS and, more recently, ESI-ion mobility spectrometry-MS (ESI-IMS-MS) [1][2][3].
Ion mobility spectrometry (IMS) is a separation technique based on the gas-phase mobility of ions as they travel, under the influence of a weak electric field, through a drift tube filled with an inert gas [4][5][6]. Ions are separated based on their charge and shape: briefly, compact ions travel faster than extended ions carrying the same number of charges, whilst ions with a higher number of charges travel faster than ions carrying a lower number of charges derived from the same precursor molecules. When coupled with MS, the data output is a 3D array of m/z versus intensity versus IMS drift time. The IMS drift time for ions can be converted to collision cross-section (CCS) directly if the IMS drift tube is a linear one [6][7][8], or indirectly following a calibration procedure [9][10][11][12] if the drift tube is of a traveling wave (TW) [13] design. The CCS of an ion corresponds to the rotationally averaged 2D projection of the biomolecule's 3D structure. Hence, ESI-IMS-MS is a unique and powerful tool that can separate and characterize biomolecules, providing both mass and shape (via CCS) information on individual species within an ensemble in a single, rapid experiment. Indeed, ESI-IMS-MS has been employed to study the 3D architecture and conformational properties of many proteins and noncovalently bound biomolecular complexes [4][5][6][7][8][9][14][15][16][17][18][19][20][21][22].
In 1997, Joseph Loo stated that there are three camps of opinion concerning the retention of native protein structure upon transition into the gas phase: "believers, nonbelievers, and undecided" [2], and quite possibly he was correct to hint at caution because, despite the high number of successes reported, there has been a slow, low-level emergence of literature demonstrating the "collapse" of certain proteins upon transition into the gas phase [23][24][25], one key example being antibodies [26][27][28]. Here, by systematic analysis of different non-globular proteins and RNA molecules using ESI-TWIMS-MS, we provide evidence of compaction in the gas phase, highlighting a potential caveat in studying these specific biomolecules using this technique. The degree of compaction has been revealed by comparing the CCS values estimated from the ESI-IMS-MS data with CCS values calculated from the PDB structures of these biomolecules and also, in the case of the proteins, with in vacuo Molecular Dynamics (MD) simulations.
Protein Mass Spectrometry Analyses
All nanoESI-TWIMS-MS protein measurements were carried out using a Synapt HDMS mass spectrometer (Waters Corp., Wilmslow, UK). Samples were introduced to the mass spectrometer using in-house pulled borosilicate capillaries (Sutter Instrument Co., Novato, CA, USA) coated with palladium using a sputter coater (Polaron SC7620; Quorum Technologies Ltd., Kent, UK). All protein samples were analyzed in positive ESI mode. The m/z scale was calibrated using 10 mg/mL aqueous caesium iodide (CsI) clusters across the acquisition range (typically m/z 500-15,000).
All data were processed and analyzed with the MassLynx v4.1 and Driftscope software, supplied with the mass spectrometer.
ESI-TWIMS-MS CCS Calibrations for Proteins
ESI-TWIMS-MS experiments were carried out on a Synapt HDMS mass spectrometer using traveling wave IMS. Calibration of the traveling wave drift cell was carried out using a previously published method [11]. The calibrant proteins used were: beta-lactoglobulin, concanavalin A, alcohol dehydrogenase, and pyruvate kinase, taken from the Clemmer/Bush database [12]. Calibrant proteins were dissolved at a concentration of 10 μM in 200 mM ammonium acetate before being analyzed under the same conditions as the protein analytes.
Calibrant proteins were corrected for mass-dependent flight time using Equation 1 [11]:

$$t'_D = t_D - \frac{C_{EDC}\sqrt{m/z}}{1000}$$

where $t'_D$ is the corrected drift time, $t_D$ the measured drift time of the analyte, $m/z$ the mass-to-charge ratio of the ion, and $C_{EDC}$ the enhanced duty cycle (EDC) delay coefficient of the instrument (in this case 1.57). The corrected drift times were plotted against the reduced cross-sections ($\Omega'$) as outlined in [11], and the plot fitted to the power-law relationship (Equation 2), which is linear on a log-log scale:

$$\Omega' = A \, {t'_D}^{X}$$

where $A$ is a fit-determined constant and $X$ the exponential factor. The calibrations were converted to linear plots to allow straightforward extrapolation for measurements of unknown proteins and complexes. For this, a new corrected drift time was calculated using Equation 3:

$$t''_D = {t'_D}^{X} \, z \, \sqrt{1/\mu}$$

where $z$ is the ion charge state and $\mu$ is the reduced mass of the ion. The new corrected drift time ($t''_D$) was then plotted against the cross-sections of the calibrant proteins (taken from the database [12]) to generate the calibration plots (see Supporting Information, Supplementary Figures S1, S2, and S3).
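A compact Python sketch of this calibration workflow is given below. It follows the generic traveling wave calibration steps described above (Equations 1-3); the nitrogen drift-gas mass and every numerical default are assumptions for illustration, not parameters reported in this study.

```python
# Hedged sketch of TWIMS CCS calibration (Equations 1-3 above).
import numpy as np

def corrected_drift_time(t_d_ms, mz, c_edc=1.57):
    """Eq. 1: remove the mass-dependent (EDC) component of the flight time."""
    return t_d_ms - c_edc * np.sqrt(mz) / 1000.0

def reduced_ccs(ccs, charge, mass_da, gas_mass_da=28.0):
    """Omega': divide out charge and reduced-mass dependence (N2 gas assumed)."""
    mu = mass_da * gas_mass_da / (mass_da + gas_mass_da)
    return ccs / (charge * np.sqrt(1.0 / mu))

def fit_calibration(t_prime, omega_prime):
    """Eq. 2: fit ln(Omega') = ln(A) + X ln(t'), returning A and X."""
    X, ln_a = np.polyfit(np.log(t_prime), np.log(omega_prime), 1)
    return np.exp(ln_a), X

def double_corrected_drift_time(t_prime, X, charge, mass_da, gas_mass_da=28.0):
    """Eq. 3: linearize so t'' can be plotted directly against calibrant CCS."""
    mu = mass_da * gas_mass_da / (mass_da + gas_mass_da)
    return (t_prime ** X) * charge * np.sqrt(1.0 / mu)
```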
RNA Mass Spectrometry Analyses
All nanoESI-TWIMS-MS RNA measurements were carried out using a Synapt G2-S mass spectrometer (Waters Corp., Wilmslow, UK). Samples were introduced to the mass spectrometer using in-house pulled borosilicate capillaries (Sutter Instrument Company, Novato, CA, USA) coated with palladium using a sputter coater (Polaron SC7620; Quorum Technologies Ltd., Kent, UK). All RNA samples were analyzed in negative ESI mode. The m/z scale was calibrated using 10 mg/mL aqueous caesium iodide (CsI) clusters across the acquisition range (typically m/z 500-15,000).
ESI-TWIMS-MS CCS Calibration for RNAs
ESI-TWIMS-MS experiments for the RNAs were carried out on a Synapt G2-S mass spectrometer using traveling wave IMS with negative ionisation electrospray. Calibration of the traveling wave drift cell was carried out using the method described previously in this document for protein samples but with an enhanced duty cycle delay coefficient (C EDC ) of 1.41. The calibrant was a DNA polythymine of 10 nucleotides (d[T] 10 ), the CCS of which had been measured and reported by Clemmer [32] (see Supporting Information, Supplementary Figure S4).
Theoretical CCS Calculation
MOBCAL software was used to calculate the theoretical CCSs for the samples studied and was run under a Linux operating system. The MOBCAL projection approximation value [33] was used to generate the projection superposition approximation (PSA), as outlined in [34], using Equation 4 therein.

In Vacuo Molecular Dynamics (MD) Simulations

MD simulations were run using the NAMD software (NAMD 2.9) with the CHARMM force field [35]. Structures were simulated in a solvent-free system. A constant temperature of 300 K was maintained with a Langevin thermostat, and a time step of 2.0 fs with a radial cut-off distance of 12 Å was used throughout. Energy minimization in vacuo was performed for a total of 0.5 ns before a 10 ns equilibration; the cut-off distance, force field, and time step remained as described above throughout the simulation. Visual Molecular Dynamics (VMD) [36] was then used to visualize the simulation, and individual frames were saved as PDB coordinates in order to compute the CCS using MOBCAL. The VMD software was also used to calculate the root mean square deviation (RMSD) and radius of gyration (Rg). Analysis of the RMSD revealed whether a protein had equilibrated by the end of the 10 ns simulation; any sample that had not finished equilibrating was resubmitted for a further 10 ns until equilibration was reached. The NAMD and VMD software was operated under a Linux operating system.
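As a minimal illustration of the Rg bookkeeping used to monitor compaction, the sketch below computes a geometric (mass-unweighted) radius of gyration from saved PDB frames; the fixed-column parser is an assumption and this is not the VMD procedure used in the study.

```python
# Hypothetical sketch: geometric radius of gyration from PDB coordinates.
import numpy as np

def read_pdb_coords(path: str) -> np.ndarray:
    """Collect (x, y, z) from ATOM/HETATM records (PDB fixed columns 31-54)."""
    coords = []
    with open(path) as fh:
        for line in fh:
            if line.startswith(("ATOM", "HETATM")):
                coords.append([float(line[30:38]),
                               float(line[38:46]),
                               float(line[46:54])])
    return np.asarray(coords)

def radius_of_gyration(coords: np.ndarray) -> float:
    """Rg = sqrt(mean squared distance of atoms from the geometric center)."""
    center = coords.mean(axis=0)
    return float(np.sqrt(((coords - center) ** 2).sum(axis=1).mean()))
```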
Results and Discussion
Insights into the Gas-Phase Collapse of Monoclonal Antibodies

Using ESI-TWIMS-MS under non-denaturing conditions to characterise an IgG1 monoclonal antibody, mAb1, we observed that despite presenting a narrow ESI charge state distribution (21+ to 25+ ions), usually indicative of a "native-like" protein, the experimentally estimated CCS value of the lowest charge state (68.2 nm²) was significantly lower (by 32.4%) than the computationally determined CCS (101 nm²) based on the published structure (PDB: 1IGY [37]) (Figure 1a i, ii, iii). Similar behavior of monoclonal antibodies (mAbs) has been reported by others [27,28], and Pacholarz et al. carried out in vacuo MD simulations to interrogate the observed compaction of IgG molecules in the gas phase, demonstrating that the protein likely collapsed around the hinge region between the fragment antigen-binding (Fab) and fragment crystallizable (Fc) regions [28].
Molecular modeling is a useful tool to aid the study of biomolecules in the gas-phase. Although CCSs measured using ESI-TWIMS-MS methods can be compared directly to solved X-ray crystal or NMR structures from the Protein Data Bank (PDB), it is becoming clearer that this is not suitable for all proteins. For example, the conditions used to crystallize proteins can be very different to the conditions used for mass spectrometric analysis. Further, some proteins are inherently flexible or disordered, and may not have a PDB structure with which to compare the measured CCS, and additionally a subset of structures in the PDB consist only of fragments of the full protein in question. In vacuo modeling, therefore, allows us to achieve a glimpse of how such proteins may behave within the gas-phase. Adopting a similar in vacuo MD simulation approach as used by [28], we also observe a collapse around the hinge region of mAb1 such that the measured CCS of the mAb is substantially less than both the predicted CCS from its crystal structure [37] and the in vacuo MD simulation (Figure 1a iii).
To understand the role of the hinge region and determine whether this flexible linker was the main contributor to the collapse observed, we released the Fab and Fc regions of mAb1 using Lys-C proteolysis and analyzed the two fragments independently (Figure 1b, c). The CCS values determined by ESI-TWIMS-MS, estimated from the PDB coordinates, and indicated by in vacuo MD simulations are compared for both the Fab and the Fc regions (Figure 1b and c, respectively). The MD simulations indicated that both proteins collapsed to some extent in vacuo compared with their crystal structures, with the Fab region collapsing 11% compared with the 17% collapse of the Fc region. Furthermore, the CCS of the Fab region measured by ESI-TWIMS-MS was in closer agreement with the CCS predicted from its PDB structure than with its equilibrated collapsed MD structure, whereas the CCS of the Fc region measured by ESI-TWIMS-MS was closer to that of its collapsed MD structure than to its crystal structure. Although both of these fragments consist of four Ig domains, the Fc region retains the majority of the hinge region, supporting the notion that the flexible hinge plays a prominent role in the gas-phase collapse observed.
Investigating the Gas-Phase Collapse of Other Non-globular Proteins
To investigate the generality of the role of flexible hinge regions in gas-phase protein collapse, we used ESI-TWIMS-MS to analyze an I27 concatamer, (I27)5 [29] (Figure 2a), and the POTRA domains from BamA [38] (Figure 2b). (I27)5 is a mechanically robust pentamer, the folded Ig subunits of which are connected by flexible linkers of 4-6 amino acids. This construct is widely used for AFM and mechanical stability studies [29,39]. Furthermore, poly-Ig domains as well as I27 polyproteins have been shown to be flexible in solution and can adopt various conformations, as revealed by electron microscopy [39][40][41]. The POTRA domains were chosen because, similar to (I27)5, the protein consists of five subunits (POTRAs 1-5) connected by short linker regions [42,43].
The mass spectrum of (I27)5 indicated a narrow charge state distribution (13+ to 16+ ions) (Figure 2a i). As neither a crystal nor an NMR structure was available for (I27)5, a model was built based on the solution structure of the I27 monomer (1TIT [44]), building in the 4-6 residue linker regions (see Supporting Information) (Figure 2a i). This enabled a theoretical CCS for the five-domain construct to be established and formed the starting point for the in vacuo MD simulations (see Supporting Information). The measured CCS for (I27)5 (39.8 nm²) [45] is lower than both the modeled value predicted for the native structure (63.1 nm²) and the MD simulation end point (49.4 nm²) (Figure 2a ii). Upon in vacuo minimization and equilibration, the protein undergoes compaction, then collapses around the flexible linker regions between the individual subunits, which is reflected by the CCS at the end of the simulation.
ESI-TWIMS-MS analysis of the combined POTRA domains from BamA again produced a mass spectrum with a narrow charge state distribution (12+ to 16+ ions) (Figure 2b i). The ESI-TWIMS-MS data indicate that the CCS (35.1 nm²) obtained for the lowest charge state ions (12+) is closer to the predicted CCS of the in vacuo-equilibrated structure (37.1 nm²) than to the CCS value predicted from the crystal structure (5D0O; 45.3 nm²) (Figure 2b ii). The MD collapse observed for the POTRA domains is attributable to compaction around the short hinge regions between the individual domains, as well as to an overall collapse with POTRA1 moving towards POTRA5, resulting in a more ring-like structure in the equilibrated molecule (Figure 2b iii).
Together, the data presented for mAb1, (I27)5, and the POTRA domains suggest that non-globular proteins with flexible linker or hinge regions are susceptible to gas-phase collapse. To determine how linear, elongated molecules without any distinct linker regions behave upon transition to the gas phase, we studied the protein SasG (Figure 2c). SasG consists of repeats of two domains (G5 and E), in which the C-terminus of any given domain is directly connected to the N-terminus of the subsequent domain (Figure 2c). Furthermore, SasG (G51-G57) has been shown to form long, elongated fibrillar structures that maintain a highly extended conformation in solution, with no evidence of compaction [31]. The ESI-MS data indicate a native-like conformation, centered on the 20+ and 21+ charge state ions, together with a highly charged, more unfolded conformation (centered on the 48+ charge state ions) (Figure 2c i). The ESI-TWIMS-MS CCS of the compact conformation was measured at 57.7 nm² (18+ ions). In comparison, the predicted CCS based on the structure obtained from SAXS data [43] was 137.8 nm², whereas the in vacuo MD simulations indicate that the protein collapses in the absence of solvent to a species with a CCS of 80.7 nm² (Figure 2c ii, iii). Thus, an elongated linear protein, with no obvious linker or hinge regions, can also undergo significant compaction in the gas phase.
Gas-Phase Collapse of Other Biomolecules
Recent ESI-TWIMS-MS studies indicated that the DNA duplex [d(GCGAAGC)] is a dynamic ensemble in the gas phase [46], in contrast to earlier work on G-complexes of ≥20 nucleotides, which suggested that their chemical topology remained unaltered in the gas phase [47]. Here, we carried out ESI-TWIMS-MS analyses on two RNAs, each of 35 nucleotides and of very similar mass but different sequences and secondary structures (2PCV [48] and 2DRB [49]; Figure 3a), to determine whether their 3D structures were preserved in the gas phase and hence whether it was possible to differentiate between the two. An NMR solution structure has been published for 2PCV [48] and a crystal structure for 2DRB [49], and these were used to calculate CCS values (Figure 3b).
ESI-TWIMS-MS analysis of the RNAs yielded identical CCSs for all of the corresponding charge state ions (4- to 7- ions; CCS ~10-11 nm²) (Figure 3b). Comparing the TWIMS CCS values with the CCSs estimated from the PDB structures, the TWIMS data were significantly lower than the predicted values for either 2PCV (14.45 nm²) or 2DRB (11.46 nm²). For example, in the case of the 5- ions, TWIMS CCSs of 10.21 nm² for 2PCV and 10.16 nm² for 2DRB were measured, indicating that both RNAs undergo gas-phase collapse. It may be argued that the ESI-MS solution conditions (50 mM aqueous ammonium acetate) differ from the crystallography conditions used for 2DRB (50 mM HEPES, 80 mM ammonium sulfate [n.b. some crystals were detected in the absence of the sulfate ions], 0.2 M tri-lithium citrate, and 20% PEG4000 [49]) and from the NMR solution conditions used for 2PCV [48], and that this may have affected the CCS values obtained from the three biophysical techniques. Although beyond the scope of this study, a systematic analysis of the effects of counter-ions, pH, oligonucleotide length, and sequence on collapse in the gas phase, with parallel MD simulations [46,50], could cast more light on the behaviour of RNA molecules in the gas phase in general. However, the collapse of both RNAs to a similar degree in the gas phase is evident here.
Conclusion
The question remains: can the solution structure of proteins be retained upon transfer into the gas phase? For stable, globular proteins, the answer is undoubtedly "yes," backed by an impressive number of literature examples. However, here we have presented a small number of protein examples from our 14 years' experience with ESI-IMS-MS where we have found that the measured CCS values underestimate the physical size of the solution structure and modeled data of the biomolecule under scrutiny. This phenomenon has been reported elsewhere in the case of antibodies [26][27][28], but here we have shown, by studying isolated regions of an antibody, that the Fc region, which contains the majority of the flexible hinge region, is more prone to gas-phase compaction than the Fab region. Other proteins we have identified that undergo gas-phase compaction include those with flexible hinge regions between more structured domains, such as an engineered concatamer, (I27)5, in addition to the BamA complex with its extended array of POTRA domains. Other non-globular proteins such as SasG, an elongated linear protein, can also exhibit this behavior. Gas-phase compaction is not limited to proteins, as illustrated with reference to two 35-nucleotide RNA molecules of similar mass but different shape. Both RNAs appeared from the ESI-TWIMS-MS data to be significantly smaller than expected from their 3D crystal or solution structures.
We do not intend this report to be perceived as a negative message about the use of ESI-TWIMS-MS. Indeed, the advantages of this technique far outweigh any disadvantages. However, there are certain classes of biomolecules for which due caution should be employed when interpreting the results.
Open Access
This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Economic evaluations of maternal health interventions: a scoping review
Background Evidence on the affordability and cost-effectiveness of interventions is critical to decision-making for clinical practice guidelines and development of national health policies. This study aimed to develop a repository of primary economic evaluations to support global maternal health guideline development and provide insights into the body of research conducted in this field. Methods A scoping review was conducted to identify and map available economic evaluations of maternal health interventions. We searched six databases (NHS Economic Evaluation Database, EconLit, PubMed, Embase, CINAHL and PsycInfo) on 20 November 2020 with no date, setting or language restrictions. Two authors assessed eligibility and extracted data independently. Included studies were categorised by subpopulation of women, level of care, intervention type, mechanism, and period, economic evaluation type and perspective, and whether the intervention is currently recommended by the World Health Organization. Frequency analysis was used to determine prevalence of parameters. Results In total 923 studies conducted in 72 countries were included. Most studies were conducted in high-income country settings (71.8%). Over half pertained to a general population of pregnant women, with the remainder focused on specific subgroups, such as women with preterm birth (6.2%) or those undergoing caesarean section (5.5%). The most common interventions of interest related to non-obstetric infections (23.9%), labour and childbirth care (17.0%), and obstetric complications (15.7%). Few studies addressed the major causes of maternal deaths globally. Over a third (36.5%) of studies were cost-utility analyses, 1.4% were cost-benefit analyses and the remainder were cost-effectiveness analyses. Conclusions This review provides a navigable, consolidated resource of economic evaluations in maternal health. We identified a clear evidence gap regarding economic evaluations of maternal health interventions in low- and middle-income countries. Future economic research should focus on interventions to address major drivers of maternal morbidity and mortality in these settings.
Amendments from Version 1
1. The introduction is amended to highlight evidence mapping as a reason for the review's relevance, and the discussion section is amended to specify geographic regions and population subgroups that would benefit from additional focus, as suggested by reviewer 1.
2. The methods section is amended to briefly describe how the intervention categories and topics were developed and applied, as suggested by reviewer 1. The limitations of this approach for studies relating to multiple categories are noted in the amended strengths and limitations section.
3. Several inadvertent errors in the discussion section were identified regarding the number of studies relating to three key causes of maternal mortality. These figures were inconsistent with the final published dataset and tables in the supplementary material and have now been amended. No amendments to the published supplementary data were required.
Introduction
An estimated 295,000 maternal deaths occur during pregnancy, childbirth, and the immediate postpartum period each year, as well as 2 million stillbirths and 2.5 million neonatal deaths [1][2][3]. Ensuring universal access to good-quality care for all women during pregnancy, childbirth and the postpartum period would prevent the vast majority of these deaths [4][5][6]. The World Health Organization (WHO) produces evidence-based global guidelines to help health services, clinicians and communities ensure that the best care can be provided to pregnant women, regardless of where they give birth. Since 2017, the WHO Department of Sexual and Reproductive Health and Research has embarked on a "living guidelines" approach to update recommendations in maternal and perinatal health 7. Based on this approach, WHO's portfolio of over 400 maternal and perinatal health recommendations is regularly assessed by an independent international panel of experts, to identify which recommendations are in most urgent need of updating, and if new recommendations are needed.
Developing and updating WHO recommendations for global use involves explicit consideration of available evidence for a given intervention across several criteria, including: the balance of benefits and harms, how stakeholders value different health outcomes, acceptability, feasibility, equity and cost-effectiveness of the intervention 8. Even when there is clear evidence that an intervention is beneficial, acceptable and feasible, policy makers must consider the resource implications of implementation at scale. Health budgets are finite and limited, meaning that adding (or expanding access to) an intervention has an opportunity cost that may result in detrimental reduction of another health intervention. In these instances, evidence on the affordability and cost-effectiveness of the intervention is critical to inform decision-making. The effectiveness evidence for a majority of WHO maternal and perinatal health recommendations is drawn from systematic reviews of randomised trials; however, these reviews do not routinely evaluate outcomes related to resource needs or cost-effectiveness 7.
There have been previous efforts to map economic evaluations across different maternal health interventions, though these have been narrowly focused on selected interventions. For example, a 2013 scoping review identified 36 studies on economic benefits of reproductive, maternal, newborn and child health interventions, but it was limited to cost-benefit studies from low- and middle-income countries (LMICs) only, excluded studies published before 2000, and did not consider all maternal and perinatal interventions recommended by WHO 9. The 2016 Disease Control Priorities summarised cost-effectiveness evidence for selected, high-value maternal interventions, identifying 26 studies 10. More recently, systematic reviews of economic evaluations have been conducted for single interventions as part of WHO recommendation updates 11,12. Other reviews have focused on economic evaluations of certain categories of interventions in LMICs, such as health systems strengthening strategies, or programs to increase utilisation and provision of care 13,14.
A broad, contemporary synthesis of economic evaluations across a wide range of interventions would provide a critical resource for future updates of WHO maternal health recommendations. It could also provide a consolidated, navigable resource for policy makers, health managers, and clinicians to identify and consider evidence for decision-making in maternal health, including judgements around allocative efficiency and costing models for maternal health budgets 15,16. Such a synthesis needs to be amenable to regular updating to reflect future changes in the underlying literature. A review encompassing all economic evaluations of any maternal health intervention will also enable identification of gaps in the current evidence base and inform development of priorities for future health economic research in the field. Therefore, the aim of this project was to conduct a scoping review of primary economic evaluations of maternal health interventions to create such a database and to provide preliminary insights into the body of research conducted in this field.
Methods
A systematic scoping review was undertaken in this study. A scoping review is a type of research synthesis that aims to map literature on a particular topic or research area, providing an opportunity to identify types and sources of evidence to inform practice, policymaking and research 17. This methodology was selected as we were seeking to examine the extent, range and nature of evidence on maternal health interventions and identify gaps in the literature, and not to formally summarise or pool data on cost-effectiveness of any single intervention 18. This review was conducted in line with the Levac et al. scoping review framework 19, which is an extended version of the Arksey and O'Malley framework 20, and the PRISMA Extension for Scoping Reviews (PRISMA-ScR) reporting checklist (extended data E5) 18. These frameworks help to ensure a consistent, thorough approach to the methodology of the review, and promote replicability. The protocol was registered and published on the Open Science Framework (OSF) website 21.
Eligibility criteria
For this review, we considered only full economic evaluations - including cost-benefit analyses, cost-effectiveness analyses, and cost-utility analyses - to be eligible (Box 1). Studies with cost-effectiveness data within, or alongside, randomised controlled trials of effectiveness were eligible. Systematic reviews of economic evaluations were not included. As this review focused on maternal health interventions, the population of interest was women who were pregnant or recently pregnant, in any stage of labour or childbirth, or in the postpartum period (up to 42 days). This review considered any intervention primarily aimed at improving maternal and perinatal health outcomes. This included any clinical, pharmacological, procedural, educational, or behavioural intervention implemented at any level (including individual, health care provider, community, facility, subnational or national levels). Pre-conception interventions, abortion-related interventions, interventions related to management of miscarriage or ectopic pregnancies, and interventions aimed only at newborns were not included.
Cost-benefit analysis (CBA)
Economic evaluations in which the cost of the intervention is related to a value of benefits that uses a common or equal unit of measure, typically monetary.
Cost-utility analysis (CUA)
Economic evaluations in which the cost of the intervention is related to a multidimensional measure of effectiveness which considers not only the outcomes but the valuation of benefits, i.e. a measure of utility such as QALYs or DALYs.
Cost-effectiveness analysis (CEA)
Economic evaluations in which the cost of the intervention is related to a single clinical or natural measure of effectiveness, e.g. deaths, cases.
Adapted from: U.S. National Library of Medicine - Health Economics Information Resources: A Self-Study Course (Module 4)

Studies were eligible regardless of what comparator was used and considered any perspective (including societal or health system perspectives). They were eligible if they reported any quantifiable health outcome alongside costs, though the key outcomes of interest were cost-benefit outcomes (where health effects are valued in monetary terms), cost per quality-adjusted life year (QALY) or disability-adjusted life year (DALY), and cost per condition averted or life saved. Eligible studies were those published in peer-reviewed journals conducted in any country. We excluded records published as letters, editorials, or conference abstracts. No language restrictions were applied; for studies published in languages other than English, an initial translation was carried out using freely available software (Google Translate) for assessing eligibility. If the study was potentially eligible and this translation was inadequate for data collection, we sought assistance from multilingual colleagues.
Information sources and search strategy
We searched both specialist health economics databases (NHS Economic Evaluation Database and EconLit) and general medical and health databases (PubMed, Embase, CINAHL and PsycInfo) on 20 November 2020. For the period up to 2014, we limited searching to NHS EED, which provides access to over 17,000 economic evaluations of health and social care interventions. NHS EED collated results from weekly searches of MEDLINE, Embase, CINAHL, PsycInfo and PubMed until the end of December 2014. Economic evaluations added to NHS EED compare the costs and outcomes of two or more interventions using cost-benefit, cost-utility or cost-effectiveness analyses. NHS EED is available online but has not been updated since March 2015. Hence, for the period 2015 to 2020, we searched PubMed, EconLit, Embase, CINAHL and PsycInfo. The search strategies for these sources combine terms relevant to maternal health with terms related to economic evaluations (see extended data E1). Search terms for maternal health were derived from search strategies used by Cochrane Pregnancy and Childbirth to maintain and update their specialised register. Search terms for economic evaluations were derived from the search strategies used to populate NHS EED.
In consultation with an information specialist, we adopted a multi-phase approach to searching and screening records from PubMed. Phase 1 of the search was limited to records indexed with the most relevant MeSH term (Cost-Benefit Analysis). Phase 2 extended this to records indexed with other MeSH terms related to economics and costs. Phases 3a and 3b used free-text terms in the title/abstract limited to records not MeSH-indexed (i.e., the non-MEDLINE subset of PubMed). Phase 4 combined MeSH terms and free-text terms across all of PubMed. For pragmatic reasons, we adopted a sampling approach for the 16,135 unique records retrieved by phases 3b and 4 of the search, since we expected very few of these records to be relevant. We screened a 10% and a 5% sample of phase 3b and phase 4 records, respectively. We similarly screened a 10% sample of the 1025 NHS EED records obtained using non-MeSH-indexed terms. Screening these sample records resulted in less than the pre-specified threshold of 3% being included in the review. Searches of Embase, CINAHL and PsycInfo were limited to records indexed with the appropriate subject indexing terms only. We also searched the WHO Global Health Library for any economic evaluations not identified from searches of the sources listed above.
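A tiny sketch of the pragmatic sampling step (screening a fixed random fraction of a low-yield search phase) is shown below; the seed and helper name are assumptions added for reproducibility and are not part of the review's actual workflow.

```python
# Hypothetical sketch: draw a reproducible 10% screening sample from a
# search phase; the phase is screened in full only if the inclusion rate
# in the sample exceeds the pre-specified 3% threshold.
import random

def sample_for_screening(record_ids: list[str], fraction: float, seed: int = 1) -> list[str]:
    rng = random.Random(seed)
    k = max(1, round(len(record_ids) * fraction))
    return rng.sample(record_ids, k)

phase_3b = [f"rec{i}" for i in range(16_135)]
subset = sample_for_screening(phase_3b, 0.10)   # 1,614 records to screen
```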
Study selection, data extraction and analysis
Titles and abstracts of all identified citations were deduplicated in EndNote and imported into Covidence software for screening. Two review authors independently assessed unique citations against the eligibility criteria. Potentially relevant articles were included for full-text review and assessed for eligibility by two independent authors. At both stages, disagreements were resolved through discussion or by consulting a third author. Where more than one paper reported on the same study (i.e. using the same sample and methods), the papers were collated to ensure the primary study was the unit of interest 22.
Data extraction was conducted using a customised spreadsheet in Google Sheets. We extracted data on study characteristics, including: year, country, population of interest, period of intervention, context of care, intervention and comparator description, category of intervention, intervention mechanism, outcome measures, evaluation type and perspective, relation to WHO recommendation(s), cost year, currency, and data source. Country income levels were coded using World Bank data. Intervention categories and broad topics were developed inductively from the included studies through discussion with study authors. We developed operational definitions for consistent coding of the extracted data (extended data E2). When coding the intervention mechanism of included studies, we used the Cochrane Effective Practice of Care (EPOC) classifications for health systems interventions. For each study, we searched the WHO website to identify whether the intervention or comparator considered by that study had a current WHO recommendation (for or against). If only part of the intervention was related to a recommendation (for example, when the study explored a package of interventions, of which one was a WHO-recommended intervention), that study was classified as partially linked to a WHO recommendation. All data were extracted by a single author, with a 15% sample independently reviewed by a second author. We conducted a series of consistency and validation checks for additional quality assurance, including reviewing included studies within each intervention category for consistency with the operational definition for that category. As this was a scoping review, no quality assessments of individual studies were performed. We reported findings on extracted variables using descriptive analysis, with frequency tables and graphs on the characteristics and coded categories of included studies as described above.
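As an illustration of the descriptive frequency analysis, the short pandas sketch below (with invented column names and toy data, not the review's dataset) tabulates counts and percentages for a coded category.

```python
# Hypothetical sketch: frequency table of a coded study characteristic.
import pandas as pd

studies = pd.DataFrame({
    "income_level": ["high", "high", "LMIC", "multiple", "LMIC"],
    "evaluation_type": ["CEA", "CUA", "CEA", "CBA", "CUA"],
})

def frequency_table(df: pd.DataFrame, column: str) -> pd.DataFrame:
    counts = df[column].value_counts()
    return pd.DataFrame({"n": counts, "%": (100 * counts / len(df)).round(1)})

print(frequency_table(studies, "evaluation_type"))
```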
Results
We identified 923 studies for inclusion in this review (Figure 1). The number of economic evaluations of maternal health interventions has increased over time: over half of all included studies (489 studies, 53.0%) were published from 2014 onwards, compared with 434 studies (47.0%) from 1984 to 2013, and just over a quarter (239 studies, 25.9%) were published in the last three years (2018-2020) (Figure 2).
Geography and income level
The economic evaluations were conducted in 72 countries (extended data E3: Table S1 23). Ten countries (United States of America [USA], United Kingdom [UK], Canada, Australia, Netherlands, China, South Africa, India, France, and Spain) accounted for nearly 70% (642 studies) of all studies (Table 1). The highest number of studies were from the USA (313 studies), followed by the UK (119 studies), Canada (40 studies), and Australia (39 studies); 48 of the 72 countries had five or fewer studies. In total, 71.8% (663 studies) were conducted in high-income countries, with a further 21.3% (197 studies) in LMICs. The remaining 6.8% (63 studies) were conducted in multiple countries across different income levels (Table 2). The LMICs with the highest number of studies were China (24 studies), South Africa (23 studies), and India (17 studies).
Population, intervention period, and setting

Studies varied in the population of interest. We categorised studies based on the subpopulation of interest and identified 53 subgroups (extended data E3: Table S2 23). The most common were studies of women at risk of or experiencing preterm birth (57 studies), women undergoing caesarean section (51 studies), and women with HIV (48 studies) (Figure 3). Approximately half (465 studies, 50.4%) broadly considered any or all pregnant women or mothers, without specific restrictions or focus. More than half of the studies related to interventions only in the antenatal period (543 studies, 58.8%), followed by the intrapartum period only (173 studies, 18.7%) and the postpartum period only (76 studies, 8.2%); the remainder covered a combination of two or more periods (Figure 4). In terms of care setting, studies relating to outpatient services were most common (424 studies, 45.9%), followed by inpatient services (224 studies, 24.3%) and a combination of both (147 studies, 15.9%) (Table 3). Only 115 studies (12.5%) related to interventions outside of healthcare settings, including community, home-based, or telemedicine interventions.
Intervention categories and mechanisms
We identified 61 distinct categories of interventions, which we mapped to 10 broad topic areas (extended data E3 23).
Evidence from economic evaluations can be difficult to generalise across different settings, given differences in health system arrangements, payment models, and labour, equipment and medicine costs between jurisdictions 24. Global estimates of maternal and neonatal mortality rates show that the vast majority of these deaths occur in LMICs [1][2][3]. In these contexts, health budgets are likely to be more limited, with difficult decisions to be made about which interventions to prioritise when resources are scarce. Affordability is also likely to be an issue for those countries where individuals and families are often required to cover the cost of healthcare (i.e. out-of-pocket costs). Despite these public health realities, this review found most economic evaluations were conducted in high-income settings; only 21% of included studies were set in LMICs, and seven high-income countries accounted for nearly two-thirds of available economic evidence. This is consistent with a 2013 scoping review of cost-benefit analysis studies pertaining to reproductive, maternal, newborn and child health in LMICs, which identified only 36 eligible studies 9. Larger health budgets in high-income countries may be a driver for this, creating a stronger incentive to ensure value for money across higher overall health expenditure. The breadth of healthcare interventions available in high-income settings may also incentivise health economic research, since there are more options to be considered by policymakers and insurers when allocating budgets. Nevertheless, this inequity in health economic research suggests efforts need to be better targeted to settings and health systems where the mortality and morbidity burden is greatest. Barriers to implementation of effective interventions in these settings are complex and diverse, but often include economic factors 25. Greater investment in health economic evaluations for LMIC contexts - tailored specifically to the interventions used in these settings - would probably improve policy decision-making, yielding additional public health benefits. The majority of included studies were conducted in the Pan-American and European regions, consistent with the finding that most evidence is from high-income countries; of the remaining four regions, the Eastern Mediterranean (5 studies) and South-East Asian (35 studies) regions were particularly underrepresented. Of those studies from the Pan-American region, only 29 were conducted in countries other than the USA and Canada, suggesting that further economic evaluation research in Latin America is also warranted. Even within high-income countries, resource allocation trade-offs and affordability concerns may vary for certain subgroups; only 21 studies from high-income countries evaluated interventions delivered to subgroups defined by socioeconomic factors (e.g. ethnic minorities, low income, and inadequate care subgroups).
Type of economic analysis
Cost-effectiveness analyses (CEA) using condition- or intervention-specific measures of health effects accounted for more than half of all included studies (573 studies, 62.1%). Thirteen studies (1.4%) conducted a cost-benefit analysis, valuing health effects in monetary terms, and 337 studies (36.5%) conducted a cost-utility analysis (CUA), valuing health effects using quality-of-life measures. Within the CUA studies, those conducted in high-income countries primarily assessed quality-adjusted life years (QALYs) (206/209 studies), while those in LMICs primarily assessed disability-adjusted life years (DALYs) (61/90 studies); the remaining CUA studies were conducted across multiple income levels. Included studies considered seven different self-reported cost-effectiveness perspectives, with some studies reporting more than one perspective (Table 5). Of the seven perspectives, the health sector was the most reported (205 studies, 22.2%), followed by societal (176 studies, 19.1%), provider (98 studies, 10.6%), government (96 studies, 10.4%), third-party funder (41 studies, 4.4%), unspecified payer (30 studies, 3.3%), and finally the patient (20 studies, 2.2%). Nearly one-third of studies did not specify the perspective used (302 studies, 32.7%).
Key findings and interpretation
This review identified and categorised 923 economic evaluations of maternal health interventions published over a 37-year period (the earliest study identified was from 1984). To our knowledge, this is the first such broad mapping of economic evaluations of interventions used during pregnancy, childbirth, and the postpartum period. The number of maternal health economic evaluations has increased markedly in the last decade, with over half of included studies published since 2014. Included studies used diverse methods to explore a wide range of interventions, and the majority of studies presented evidence from high-income countries. Comparison with other reviews of economic evaluations in maternal health similarly found that research in this area has increased. Previous reviews typically had a narrower focus, including studies focused on a single or specific set of interventions or evidence from specific settings [9][10][11][12][13][14], and consequently identified a smaller number of eligible studies (typically fewer than 30). For example, a 2018 systematic review of health systems strengthening economic evaluations in maternal and perinatal health identified 24 eligible studies, 23 of which were published since 2000 13. A 2014 review identified 48 economic evaluation studies on utilisation and provision of maternal and newborn care in LMICs, of which 36 were published since 2000 14. These reviews, along with the upward trend of publications identified in our review, suggest increasing demand for economic evaluations in this topic area.
Studies considered a diverse range of interventions and patient sub-populations, such as women experiencing preterm birth, caesarean section, or HIV. However, a relatively small proportion of included studies related to the leading causes of maternal deaths globally 26. Specifically, only 37 studies (4.0% of all studies) focused on obstetric haemorrhage, 31 studies (3.4%) on hypertensive disorders, 31 studies (3.4%) on infections that could lead to sepsis, and 6 studies (0.7%) on embolism; these four conditions comprise the leading direct causes of global maternal deaths. In this review, the most frequently studied interventions related to genetic screening and diagnostic tests (including for cystic fibrosis, trisomy disorders, and thalassaemia traits) (109 studies, 11.8%), HIV in pregnancy (including prevention of mother-to-child transmission) (82 studies, 8.9%), preterm labour and birth (64 studies, 6.9%), and diabetes in pregnancy (38 studies, 4.1%). This may be related to the large proportion of studies conducted in high-resource settings, where maternal deaths are comparatively rare and economic research priorities may lie elsewhere 27.
When developing its recommendations, WHO prioritises interventions that are likely to have the greatest impact on reducing global maternal mortality and morbidity, as well as improving the experience and wellbeing of women, and cost-effectiveness is a key consideration in developing these recommendations 8. This review identified 258 studies that provide cost-effectiveness evidence on interventions directly linked to current WHO recommendations (extended data E3: Table S9 23). However, the majority of studies identified either did not relate, or related only partially, to a WHO recommendation. This similarly suggests a dearth of economic evaluation research on those maternal health interventions of highest global priority.
Strengths and limitations
This scoping review used a robust search across multiple databases, allowing us to identify a large number of studies across a broad range of interventions, settings and analytical designs. Adherence to the Levac et al. scoping review methodological framework 19 and the PRISMA-ScR checklist 18 maintained consistency in our approach, while quality assurance and validation checks ensured data accuracy. Despite our best efforts, it is possible that some eligible studies were not captured. For example, while effectiveness trials may report on cost outcomes, this may not be clearly documented in the study abstract or main findings, making it difficult to detect. We also relied upon the NHS EED database to identify studies published before 2015. While our search from 2015 onwards focused on the same databases indexed by NHS EED, we are unable to fully assess the veracity of their eligibility assessment process and whether the two approaches meaningfully differed. An additional challenge in this review was in systematically classifying the population, intervention, comparators, and outcomes used across studies. For example, economic evaluations may involve the same target population but report on different outcomes of interest, or consider different cost perspectives. Studies were coded to the most relevant intervention category rather than multiple categories to avoid double-counting in the analysis.
With the data extracted in this review, we were not able to explore some economic analytical questions of public health importance (e.g. any differences in study findings across private vs public contexts); however, future expansion of this scoping review may allow us to do so.
Future research and implications for practice
This review was conducted to support WHO activities on living guidelines in maternal health 7. In light of future updates to those guidelines, we intend to regularly update this review. In future updates, we anticipate incorporating quality assessments for individual studies that are generated from evidence syntheses of specific interventions, though there are acknowledged limitations in available tools for assessing the quality of health economic literature 28. The identification of studies in this review can be useful to maternal health guideline development or policy decision-making processes by providing a searchable, contemporary database of health economic evidence. This can be used to identify all available studies for specific interventions, subpopulations, and contexts. Further, the gaps in maternal economic evaluations identified in this review can provide insights into where future research needs to be targeted.
Conclusion
We identified 923 economic evaluations of maternal health interventions, covering a wide range of subpopulations of women and health conditions. While the volume of economic evaluations has increased over time, there are significant disparities between the available economic literature and the causes and settings of maternal and newborn deaths. Future health economic research needs to focus on interventions addressing the major drivers of maternal morbidity and mortality, and their implementation in limited-resource contexts. The review findings provide a comprehensive and navigable resource of economic evidence to support maternal health guideline and policy development.
Sameera Senanayake
Queensland University of Technology (QUT), Musk Ave, Kelvin Grove, QLD, Australia

The manuscript titled "Economic evaluations of maternal health interventions: a scoping review" provides an extensive review of economic evaluations focused on maternal health interventions.
The review aims to support global maternal health guideline development, particularly by the World Health Organization (WHO), through the synthesis of 923 economic evaluations conducted across 72 countries.
The manuscript is well-written and clearly structured, making it easy for readers to follow the objectives and findings. The authors have effectively articulated their methodology and results, providing valuable insights into the state of economic evaluations for maternal health interventions.
However, there are a few areas for improvement. The search was conducted in November 2020, meaning that more recent studies may not have been included. Given ongoing advancements in maternal health interventions, it would be important to update the search to include the latest evidence, if possible, to strengthen the paper's relevance.
Additionally, while the study justifies not applying a CHEERS quality assessment due to its scoping nature, including a quality assessment would have enhanced the robustness of the findings.I understand that reassessing more than 900 articles for quality may not be practically feasible at this stage, but it would have been valuable if this had been included from the outset.
Are the rationale for, and objectives of, the Systematic Review clearly stated? Yes
Are sufficient details of the methods and analysis provided to allow replication by others? Yes
Is the statistical analysis and its interpretation appropriate?
Malte Sandner
Department of Economics, Nuremberg Institute of Technology, Nürnberg, Germany
I value the effort of the authors to improve the article.
Are the rationale for, and objectives of, the Systematic Review clearly stated? Not applicable
Are sufficient details of the methods and analysis provided to allow replication by others? Not applicable
Is the statistical analysis and its interpretation appropriate? Not applicable
Are the conclusions drawn adequately supported by the results presented in the review? Not applicable
If this is a Living Systematic Review, is the 'living' method appropriate and is the search schedule clearly defined and justified? ('Living Systematic Review' or a variation of this term should be included in the title.) I think the authors should make clearer why the review is relevant. One reason for its relevance is to identify areas in which little research is conducted and where more research is necessary; I think the article has so far neglected this aspect.
Furthermore, the article should identify specific areas where more research is needed. The authors go in this direction, but I think they could identify more particular gaps by examining the interactions between setting of care, subpopulations, country, country income level, and the broad categories of maternal health interventions.
Finally, I think the broad categories of maternal health interventions could be explained better. On what basis are these broad categories chosen? How is the categorization conducted? I can imagine that many interventions could also be put into more than one broad category of maternal health interventions.
Are the rationale for, and objectives of, the Systematic Review clearly stated? Yes
Are sufficient details of the methods and analysis provided to allow replication by others? Yes
Reviewer comment 3: "Finally, I think the broad categories of maternal health interventions could be explained better. On what basis are these broad categories chosen? How is the categorization conducted? I can imagine that many interventions could also be put into more than one broad category of maternal health interventions." Response: The intervention categories and broad topic areas for maternal health interventions in this paper were developed inductively from the included studies through discussion among the study authors. This was an iterative process, during which operational definitions were developed to ensure consistent coding of the extracted data. Following coding, categories were reviewed against the operational definitions to ensure consistency.
The methods section has been amended to include further detail regarding this process.
The operational definitions of each intervention category are set out in Extended Data E2: Table S3.
Categorisation for each study was determined during the data extraction process, as described in 'Study selection, data extraction and analysis'. The first reviewer categorised each study on the basis of the intervention described within the study. A 15% sample of the data extraction (including this categorisation) was independently reviewed by a second author. A series of consistency and validation checks was conducted, which further confirmed that studies of the same intervention appeared in the same broad category.
We acknowledge that some studies were relevant to more than one category of intervention, but coding them against multiple categories would have resulted in double-counting in our analysis. Accordingly, we prioritised the most relevant category for each study for the purpose of analysis. An amendment to reflect this has been added to the limitations section.
Figure 2. Number of studies by year of publication.
Figure 4. Number of studies by time period of intervention.
Figure 5. Number and proportion of studies per identified relation to corresponding WHO recommendation for the studied intervention.
Figure 3. Top ten subpopulations of interest, excluding 'all pregnant women and mothers', by number of studies.
The most common studies were those addressing prevention, recognition, and management of infection not specific or exclusive to pregnancy, such as HIV, Group B Streptococcus (GBS), and Hepatitis B (221 studies, 23.9%); labour and childbirth care (e.g. caesarean section) (157 studies, 17.0%); prevention, diagnosis, and management of obstetric complications (145 studies, 15.7%); screening and diagnosis of genetic disorders (109 studies, 11.8%); models of care (e.g. midwifery-led care) (103 studies, 11.2%); and routine antenatal and postpartum care (77 studies, 8.3%) (Table 4; extended data E3: Table S3). In assessing interventions, we also identified 52 intervention mechanisms mapped to seven broad types (extended data E3: Table S4).
Relation to WHO recommendations
Of the 923 studies in the review, 531 (57.5%) assessed an intervention or comparator related to a published WHO recommendation. For 258 studies (27.9%) the intervention was directly linked; for 217 studies (23.6%) the intervention was only partially linked; and for 56 studies (6.1%) the comparator was linked. A total of 392 studies (42.5%) assessed interventions and comparators for which there is no current WHO recommendation (Figure 5). Within the 258 studies where the intervention was directly linked to a current WHO recommendation, the most frequent interventions related to HIV management in pregnancy (54 studies); obstetric haemorrhage (23 studies); midwifery-led care (14 studies); syphilis in pregnancy (14 studies); and induction of labour (11 studies). Of those studies exploring interventions which were not the subject of a current WHO recommendation, categories including genetic screening (58 studies); premature labour/preterm birth (48 studies); vaccination in pregnancy (26 studies); caesarean section (23 studies); and Group B streptococcal disease (17 studies) were most common (extended data E3: Table S5).
Table 4. Number and proportion of studies for broad categories of maternal health interventions.
Columns: broad category of maternal health intervention; number of studies; percentage of total studies. Listed categories include non-obstetric infection (prevention, recognition and management) and interventions such as smoking cessation and the promotion of physical activity during pregnancy.
Table 5. Number of studies within the dataset that self-report one of the seven identified cost-effectiveness perspectives.
Columns: cost-effectiveness perspective; number of studies; percentage of total studies. *Studies reporting more than one perspective are listed against each applicable perspective; as such, the percentages are not cumulative.
I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
Reviewer Report 14 July 2023 https://doi.org/10.5256/f1000research.148031.r173802 © 2023 Sandner M. This is an open access peer review report distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Changes in PCSK9 and apolipoprotein B100 in Niemann–Pick disease after enzyme replacement therapy with olipudase alfa
Background: Enzyme replacement therapy (ERT) with olipudase alfa, a recombinant human acid sphingomyelinase (rhASM), is being developed to treat patients with ASM deficiency (ASMD), commonly known as Niemann–Pick disease (NPD) types A or B. This study assessed the effect of ERT on lipid parameters and inflammatory markers. Methods: Serum and plasma samples from five adults with NPD type B (NPD-B) who received olipudase alfa ERT for 26 weeks were analysed. We also collected fasting blood samples from fifteen age- and sex-matched participants as a reference and comparison group. We measured fasting lipid profile, apolipoproteins B48 and B100 (apoB48 and apoB100), apolipoprotein A1 (apoA1), proprotein convertase subtilisin/kexin type 9 (PCSK9) mass, oxidised low-density lipoprotein (oxLDL), small dense low-density lipoprotein cholesterol (sdLDL-C) and tumour necrosis factor α (TNF-α). Results: Patients with NPD-B, compared with the age- and sex-matched reference group, had higher triglycerides, PCSK9, apoB48, oxLDL and TNF-α, and lower high-density lipoprotein cholesterol (HDL-C) and apoA1. Treatment with ERT was associated with improved lipid parameters, including total cholesterol, triglycerides, low-density lipoprotein cholesterol (LDL-C), sdLDL-C, oxLDL and apoB100. Though there was an increase in apoA1, HDL-C was slightly reduced. TNF-α showed a reduction. ApoB100 decreased in parallel with a decrease in total serum PCSK9 mass after ERT. Conclusion: This study demonstrated that patients with NPD-B had a proatherogenic lipid profile and higher circulating TNF-α compared to the reference group. There was an improvement in dyslipidaemia after olipudase alfa. It is possible that the reductions in LDL-C and apoB100 were driven by reductions in TNF-α and PCSK9 following ERT.
Background
Acid sphingomyelinase deficiency (ASMD), also known as Niemann–Pick disease (NPD) types A or B, is an extremely rare genetic disorder characterised by mutations in the SMPD1 (sphingomyelin phosphodiesterase 1) gene, leading to a deficiency of the enzyme acid sphingomyelinase (ASM). NPD type C is not considered in this study as it has a different aetiology. ASMD leads to an accumulation of sphingomyelin and of large lipid-laden foam cells within hepatocytes as well as in tissues of the lung, spleen, lymph nodes, adrenal cortex, bone marrow and central nervous system [1][2][3]. NPD type A (NPD-A) is a rapidly progressive, fatal neurodegenerative disorder leading to death by 2 to 3 years of age. NPD type B (NPD-B) has little or no neurological involvement, and most patients survive into adulthood. NPD type B is characterised by hepatosplenomegaly, thrombocytopaenia, interstitial lung disease and dyslipidaemia [1][2][3]. Lipid profiles of NPD patients are characterised by elevated LDL-C, very-low-density lipoprotein (VLDL) cholesterol and triglyceride levels, whereas HDL-C levels are significantly reduced [4]. This atherogenic lipid profile has contributed to the onset of early coronary artery disease in NPD-B patients [1,3].
While there is currently no approved therapy for NPD, enzyme replacement therapy (ERT) with recombinant acid sphingomyelinase (olipudase alfa) is being developed as a treatment option. The atherogenic lipid profiles in NPD-B patients were shown to improve with ERT in a phase 1b clinical trial (DFI13412 study, NCT01722526) [4]. Frozen serum and plasma samples of the patients, collected as part of this parent study, were made available to us by Genzyme for the conduct of the present study. Favourable changes in LDL-C and apoB100 levels were observed in the parent study. The clearance of plasma LDL-C via the LDL receptor (LDLR) pathway is attenuated by PCSK9, which, by binding to LDLR, prevents the receptors from being recycled and hastens their degradation [5].
We hypothesised that PCSK9 may be associated with the observed changes in LDL-C and apoB100 levels. To this end, we compared lipid profile, apolipoproteins, oxLDL, circulating PCSK9 and HDL functionality in five NPD-B patients with a reference group made up of age- and sex-matched healthy individuals. They served to provide reference ranges from a non-disease population for the experimental assays and were used as a comparison group for all measured parameters. We evaluated changes in the measured parameters after olipudase alfa. ERT was reported to cause transient elevations of some systemic inflammatory markers [4]; we therefore also examined whether ERT was associated with changes in TNF-α, another marker of inflammation.
Study design
This is an Investigator Initiated Study supported by a research grant from Sanofi-Genzyme for its design and conduct. The study was sponsored by Manchester University NHS Foundation Trust.
The parent study was a phase 1, open-label, within-patient, repeat-dose, dose-escalation study as previously described [4]. In brief, five adult patients (3 males and 2 females) with NPD-B were recruited to the parent study. Informed consent was obtained from all participants before the conduct of any study-related procedures. Participants consented to the storage of samples from the study for use in future ethically approved studies. All 5 patients had hepatosplenomegaly and 4 had thrombocytopenia. Two of the patients were on a stable regimen of lipid-lowering therapy (simvastatin 20 mg and 40 mg daily, respectively), and the statin doses were unchanged during the study. Patients received escalating doses (0.1 to 3.0 mg/kg) of olipudase alfa intravenously. The initial dose of 0.1 mg/kg was given on day 1. This was followed 2 weeks later by 0.3 mg/kg. After 2 consecutive doses of 0.3 mg/kg, dose escalation continued at 0.6, 1.0, 2.0 and 3.0 mg/kg; the last dose was maintained until week 26. Patients who experienced adverse events more than mild in severity either stayed on the same dose or received a reduced dose at the next infusion. All patients were successfully escalated to the target dose of 3.0 mg/kg, and all completed the study at week 26.
Samples made available to us from the parent study included pre-ERT serum samples and plasma samples pre- and post-ERT at 4 different time points: day 1, week 8, week 16 and week 26. All the blood samples at these time points were taken 24 h post-infusion, in the fasting state. The ERT doses at these time points were 0.1 mg/kg, 1.0 mg/kg, 3.0 mg/kg and 3.0 mg/kg, respectively. All samples had been stored at −80 °C and were transported to our site on dry ice. For our study, fifteen age- and sex-matched healthy participants were recruited as a reference group from Manchester University NHS Foundation Trust (Manchester, UK) and the University of Manchester (Manchester, UK). Reference subjects had no significant pre-existing medical conditions and were not on any regular medications. A Patient Information Sheet was given to all eligible subjects, and informed consent was obtained from those who agreed to participate before a single fasting blood sample of 60 ml was obtained from each reference subject.
ApoB-depleted serum was prepared and the cholesterol efflux assay was conducted as described previously [7,8]. Briefly, J774A.1 cells were pelleted and incubated for 24 h with 0.2 µCi of radiolabelled 3H-cholesterol in RPMI 1640 medium with 0.2% BSA at 37 °C in a humidified atmosphere containing 5% carbon dioxide. ABCA1 was upregulated using medium containing 0.3 mM cAMP (8-(4-chlorophenylthio)adenosine 3′,5′-cyclic monophosphate sodium salt) for 4 h. These cells were then incubated for 4 h with 2.8% apoB-depleted serum prepared using polyethylene glycol (PEG MW 8000). After incubation, the cell media were collected, and the cells were washed with PBS and dissolved in 0.5 mL of 0.2 N NaOH to determine radioactivity. Cellular cholesterol efflux was expressed as the percentage of radioactivity in the medium relative to the total radioactivity in the cells and medium, calculated using the following formula:

Cholesterol efflux (%) = (radioactivity in medium) / (radioactivity in cells + radioactivity in medium) × 100

The intra-assay and inter-assay coefficients of variation were 3.9% and 7.3%, respectively. Serum paraoxonase (PON1) activity was determined using paraoxon (O,O-diethyl O-(4-nitrophenyl) phosphate) as a substrate (Sigma-Aldrich Company Ltd) on an RX Daytona auto-analyzer (Randox Laboratories Ltd) [9]. The intra-assay and inter-assay coefficients of variation for the measurement of PON1 activity were 3.5% and 2.7%, respectively.
Statistical analyses
Statistical analyses were performed using SPSS for Mac (Version 23.0, IBM SPSS Statistics, Armonk, New York, USA), and figures were produced using GraphPad Prism for Mac (Version 7.00, GraphPad Software, La Jolla, California, USA). Data are presented as mean and standard deviation for all variables. The independent samples t-test was used for comparisons between the NPD-B and reference groups. A P value of less than 0.05 was considered statistically significant. Changes after ERT are expressed as percentages.
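For illustration, the group comparison described above can be reproduced in a few lines of Python; the values below are hypothetical placeholders, not the study data.

```python
# Independent samples t-test between NPD-B patients and the reference group,
# as described above (illustrative values only, not the study data).
from scipy import stats

npd_b_tg = [2.1, 2.8, 1.9, 2.5, 2.3]          # hypothetical fasting triglycerides, mmol/L
reference_tg = [1.0, 1.2, 0.9, 1.1, 1.3, 1.0, 1.2, 1.1,
                0.8, 1.4, 1.0, 1.1, 0.9, 1.2, 1.0]

t_stat, p_value = stats.ttest_ind(npd_b_tg, reference_tg)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # P < 0.05 considered significant
```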
Results
The baseline characteristics of the NPD-B patients and the reference group are presented in Table 1. Compared to reference subjects, NPD-B patients had higher triglycerides, apoB48, oxLDL, PCSK9 mass and TNF-α, and lower HDL-C and apoA1. Absolute values and changes in the measured parameters at baseline and following ERT are shown in Table 2 and Fig. 1, respectively. There were reductions in total cholesterol, triglycerides, LDL-C, apoB100, sdLDL-C, PCSK9 and TNF-α following ERT. PCSK9 levels showed consistent reductions after each dose of olipudase alfa, and progressive reductions in apoB100 were observed in conjunction (Fig. 2).
Discussion
In a study of the safety and tolerability of olipudase alfa in patients with NPD-B, an improvement in the total cholesterol, triglyceride and LDL-C was demonstrated [4].
Changes in biomarkers (IL-6, IL-8 and CRP) and haematology variables had already been reported in the original paper [4]. Our study looks at possible associations with changes in lipid parameters and the inflammatory marker TNF-α. Patients with NPD-B have an atherogenic lipid profile characterised by elevated triglycerides, oxLDL and apoB48, and lower HDL-C and apoA1, compared with healthy individuals. The reference group in our study acted as a comparison group not only for the experimental assays but for all measured parameters, including those that already have standard reference ranges. This is relevant because, in the UK, healthy individuals with no history of cardiovascular disease are known to have higher lipid values than standard ranges. The baseline LDL-C level in NPD-B patients was noted to be low. This may be because 2 of the 5 subjects were on stable regimens of lipid-lowering therapy [4]. It may also reflect hepatic infiltration of sphingomyelin resulting in a reduced capacity for VLDL and LDL particle production. We note a discrepancy between our baseline and week-26 lipid parameters and those presented in the parent study [4]. The quality of our samples might have been compromised by the processes of freezing and thawing [14], which might have contributed to the discrepancy. Another explanation for the difference in results, particularly noted in HDL-C, might be the different laboratory assays and analysers used in the parent study and our study.
We observed a prompt reduction in TNF-α following ERT. ERT has been postulated to cause gradual debulking of sphingomyelin, releasing ceramide and other sphingomyelin metabolites [4]. TNF-α is known to stimulate sphingomyelinase and trigger ceramide formation, and ceramide in turn is known to negatively regulate TNF-α production [15][16][17]. One credible explanation of our results could be that as ERT caused ceramide levels to increase, the latter might in turn bring about a reduction in TNF-α. Further studies are needed to confirm this. Another possible reason for the lower TNF-α levels might be the improved wellbeing of ERT recipients as a consequence of reduced lipid accumulation; this would be relevant in the later stages of the study. TNF-α has been shown to induce PCSK9 expression in a suppressor of cytokine signalling-3 (SOCS3)-dependent manner [18,19] and may therefore be a contributing factor to the reduction in PCSK9 that was also observed following ERT.

Table 1. Comparison of lipid profile, markers of LDL quality and HDL functionality, and markers of inflammation between patients with Niemann–Pick disease type B and healthy controls at baseline. Data are presented as mean ± SD; P values are for comparison with controls using the independent samples t-test. Abbreviations: apoA1, apolipoprotein A1; apoB100, apolipoprotein B100; apoB48, apolipoprotein B48; HDL-C, high-density lipoprotein cholesterol; LDL-C, low-density lipoprotein cholesterol; oxLDL, oxidised low-density lipoprotein; PCSK9, proprotein convertase subtilisin/kexin type 9; PON1, paraoxonase-1; sdLDL-C, small dense low-density lipoprotein cholesterol; TNF-α, tumour necrosis factor α.

ApoB100 is the main apolipoprotein component of LDL, VLDL and intermediate-density lipoprotein (IDL), while apoB48 is a marker specific to intestinal chylomicron particles [20][21][22]. The association between apolipoprotein B and cardiovascular disease has been well established in epidemiological studies [23,24]. Compared with healthy individuals, patients with NPD-B had higher apoB100 levels, indicating increased atherogenic particles, and higher fasting apoB48 concentrations, suggesting a delay in the processing and clearance of chylomicron particles. PCSK9 belongs to the proprotein convertase family [25], and its binding to LDL receptors (LDLR) facilitates their degradation, resulting in increased plasma LDL particle and cholesterol concentrations [26]. It has been suggested that PCSK9 also has an additional direct regulatory role in apolipoprotein B degradation independent of LDLR [27]. PCSK9 has been shown to have a correlative relationship with apoB-containing lipoproteins, including sdLDL and oxLDL [28]. It is possible that the reduction in TNF-α led to lower PCSK9 levels, which resulted in better clearance of apoB-containing particles and reductions in sdLDL and oxLDL. The lowering of oxidised LDL particles could in turn lead to further reduced secretion of cytokines. This study is limited by the small number of subjects with NPD-B receiving ERT with olipudase alfa, limiting the ability to draw strong conclusions from the associations between the changes in variables observed. We observed a change in lipid profile and a reduction in TNF-α. We noted that 2 of the 5 subjects in the parent study were on lipid-lowering medications, and it was not known whether their compliance with medications might have improved during the study as a result of regular contact with healthcare professionals. We analysed samples that had been frozen since the parent study was conducted, and it is possible that the quality of some samples might have suffered. It is important to acknowledge that NPD-B is a very rare condition, hence it is not possible to recruit a large number of patients.
Conclusion
We conclude that ERT with olipudase alfa is associated with changes in lipid profile and reduction in a systemic inflammatory marker. Whether this translates to any effects on atherosclerotic cardiovascular risk requires further study.
A novel ensemble learning-based grey model for electricity supply forecasting in China
Abstract: Electricity consumption is one of the most important indicators reflecting the industrialization of a country, and the supply of electric power plays an important role in guaranteeing its operation. However, under complex circumstances, it is often difficult to make accurate forecasts with limited reliable data sets. In order to take full advantage of the existing grey system models, ensemble learning is adopted to provide a new strategy for building forecasting models for the electricity supply of China. The nonhomogeneous grey model with different types of accumulation is first fitted with multiple settings of the accumulation degree. Then majority voting is used to select and combine the most accurate and stable models, validated by grid search cross validation. Two numerical validation cases are used to validate the proposed method in comparison with other well-known models. Results of the real-world case study of forecasting the electricity supply of China indicate that the proposed model outperforms the other 15 existing grey models, which illustrates that the proposed model can produce much more accurate and stable forecasts in such real-world applications.
Introduction
Energy is the foundation of a country's economic development. Electricity is an essential energy source, and electric energy policy guides the development of a country to a certain extent [1]. A sustainable supply of electricity is one of the biggest challenges, and effective forecasting of electric energy supply has therefore become a prerequisite for formulating energy policies. If the electricity supply is underestimated, it will not only fail to meet the regular power demand of the country but also pose a threat to the security of the electricity system. By contrast, if the electricity supply is overestimated, the national economy will suffer severe losses due to the difficulty of large-scale storage of electric energy. Therefore, it is important to have a high-precision electricity supply forecasting system. Among numerous forecasting models, the complicated calculation process of support vector machines [2] and the large data requirements of artificial neural networks [3] mean that these methods cannot always attain the desired forecast accuracy. The power supply problem is characterised by small sample sizes and incomplete information, and the grey system proposed by Deng is a tool designed to deal with exactly such problems, so grey forecasting methods are well suited to this setting [4]. The grey system model has been applied in various fields in recent years. For example, Zeng et al. applied a multivariate grey model to China's grain production [5]. Ding et al. applied an improved Simpson grey model to the prediction of electric vehicles through an adaptive method of generating dynamically weighted sequences [6]. In terms of environmental pollution, in order to predict carbon dioxide emissions in the BRICS countries, Wu et al. established a new conformable fractional-order nonhomogeneous grey model [7]. Hu et al. proposed a novel time-delayed fractional grey model, which takes the time delay effect into account, and applied it to the natural gas consumption forecast of the manufacturing industry in China [8].
Driven by the research of many scholars, the grey system has been developed vigorously, which is mainly reflected in the three aspects of accumulative order, background value optimization and expansion form. Wu et al. designed a grey model of fractional accumulation to eliminate the randomness of the original data [9]. Zhou et al. used the exponential weighted average method to define the cumulative generation of new information priority with parameters [10]. Ma et al. designed a conformable fractional grey model [12]. As more accumulation methods are proposed, the grey model can achieve stronger and more accurate predictions. Zeng et al. developed a structurally compatible multivariate grey model to improve compatibility [11]. Ma et al. applied the Simpson formula to the model and proposed a background value optimization model [13]. Wei et al. also established a new method of optimizing background value by using the integral median theorem [14].
With the discrete grey model proposed by Xie et al., there are more ways to deal with various data types [15]. In order to fit time series that approximate the nonhomogeneous exponential law, Cui et al. designed the NGM to overcome the limitations of the traditional grey model [16]. Chen et al. incorporated the Bernoulli equation to improve prediction accuracy by controlling the power exponent n to adjust the curvature of the curve [17]. To overcome the defects of the traditional Verhulst model, namely the misaligned substitution of parameters and the unreasonable selection of initial values, Zeng et al. proposed a new prediction model for tight gas production [18]. These improvements to the grey model greatly expand its application range and enrich grey system theory.
In response to the formulation of energy policies in developing countries, Li et al. applied an adaptive grey model to electricity consumption forecasting in the Asia-Pacific region and obtained effective results [1]. Xu et al. designed a grey model with an optimal time response function, used particle swarm optimization to optimize the nonlinear parameters, and verified the reliability of the model with the example of China's electricity consumption [19]. Wang et al. focused on the problem of forecast stability and developed a hybrid forecasting approach based on an improved grey forecasting model and a multi-objective ant optimization algorithm, which dynamically selects the best input training set, and evaluated it on the annual electricity consumption of various regions of China [20]. Ding et al. used the particle swarm optimization algorithm to optimize new initial conditions and, combined with a rolling prediction mechanism, obtained the future trends of China's total and industrial electricity consumption [21]. Liu et al. predicted the electricity consumption of China and India with a new grey polynomial model with a time power term [22]. Yu et al. developed a highly flexible time-delayed power-driven grey model for photovoltaic power generation [23]. The above research shows that although single improved models perform well on electric energy data, the traditional single-model prediction approach still has certain limitations across different studies.
As a branch of machine learning, ensemble learning aims to obtain better results than a single estimator [24]. Integration schemes come in many types [25], roughly divided into classification and regression. Yu et al. used a simple additive ensemble strategy to output prediction results [26]. The dynamic ensemble model proposed by Chen, applied to wind speed prediction, showed strong competitiveness [27]. Dong et al. first built a decision tree to mine energy consumption patterns and used ensemble learning methods to establish building energy consumption prediction models for the different modes [28]. Lin et al. designed an air quality monitoring tool by using multiple linear regression to integrate multiple deep learning prediction models [29]. At the classification level, Arangarajan used voting methods combined with discrete wavelet transform analysis to detect and classify different power quality disturbances [30]. The ensemble idea can not only effectively improve prediction accuracy but also leaves much room for development in the integration of multiple models.
According to the literature reviewed above, although grey system models and machine learning have been combined in many recent studies [31], ensemble learning has not yet been applied to grey system models. Given the significant performance gains that ensemble learning brings to existing machine learning models, it can also be expected to enhance the performance of grey system models. Therefore, a novel ensemble strategy is proposed in this work to integrate different forms of grey system models. The resulting ensemble learning-based grey model is then applied to the electricity supply of China, and the effectiveness of the new model is verified.
The remainder of this paper is organized as follows. Section 2 describes the accumulation methods and the diverse forms of the grey model. Section 3 introduces the construction of the novel ensemble strategy. In Section 4, the verification on numerical cases is given. The electricity supply of China is discussed in Section 5. Finally, the conclusions are presented in Section 6.
Method of grey model
The main methods used in this work are presented in this section. Section 2.1 presents the different methods of processing the raw data in the modelling process. Section 2.2 presents the establishment of the nonhomogeneous grey model and discusses the forms of the different models.
The different accumulation methods
The method of processing the raw data in a grey model determines the accuracy of the forecast. For a set of raw time-series data $Y^{(0)} = \{y^{(0)}(1), y^{(0)}(2), \ldots, y^{(0)}(n)\}$, the corresponding accumulation-generated sequence is defined as $Y^{(\aleph)} = \{y^{(\aleph)}(1), y^{(\aleph)}(2), \ldots, y^{(\aleph)}(n)\}$, where $\aleph$ denotes the accumulation generator. The following three accumulation methods — first-order accumulation, fractional-order accumulation and new-information-priority accumulation — are used as the research directions of this paper.
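The formulas for the individual operators did not survive extraction; as a hedged reference, the sketch below implements the standard literature forms of the three operators named above — 1-AGO, the fractional-order accumulation of Wu et al. [9], and the λ-weighted new-information-priority accumulation of Zhou et al. [10]. The function names and sample series are illustrative.

```python
# Hedged sketch of the three accumulation operators (standard literature forms).
import numpy as np
from scipy.special import gammaln

def ago(y):
    """Classical first-order accumulated generating operation (1-AGO)."""
    return np.cumsum(y)

def fago(y, r):
    """Fractional-order accumulation with binomial weights
    Gamma(r + k - i) / (Gamma(k - i + 1) * Gamma(r)), r > 0."""
    n = len(y)
    out = np.zeros(n)
    for k in range(1, n + 1):
        for i in range(1, k + 1):
            w = np.exp(gammaln(r + k - i) - gammaln(k - i + 1) - gammaln(r))
            out[k - 1] += w * y[i - 1]
    return out

def nipa(y, lam):
    """New-information-priority accumulation: recent points weighted more heavily."""
    n = len(y)
    return np.array([sum(lam ** (k - i) * y[i - 1] for i in range(1, k + 1))
                     for k in range(1, n + 1)])

y0 = np.array([2.0, 2.3, 2.7, 3.1, 3.6])
print(ago(y0), fago(y0, 0.5), nipa(y0, 0.8), sep="\n")  # fago(y, 1) equals ago(y)
```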
The base estimator
In this work, the nonhomogeneous grey model (NGM) is selected as the base estimator for ensemble learning because it has a simple form but is more flexible than the classic grey model. The NGM is built on the multiple accumulations above, which enriches the form of the model and allows it to capture the various characteristics of the data. Therefore, in the process of ensemble learning, it can comprehensively reflect the essence of the system and improve forecasting performance. Other common grey models are also given below as conversions of the NGM.
Relationship between the nonhomogeneous grey model and other existing grey models
• Denote the grey action quantity as $W = \beta t + \gamma$, which gives the NGM with whitening equation $\frac{dy^{(\aleph)}(t)}{dt} + a\,y^{(\aleph)}(t) = \beta t + \gamma$. When $W$ reduces to a constant $b$, the NGM degenerates into GM(1,1), whose continuous-time response function is
$\hat{y}^{(\aleph)}(t) = \left(y^{(0)}(1) - \frac{b}{a}\right) e^{-a(t-1)} + \frac{b}{a}.$ (2.14)
• The discrete form of GM(1,1) is the discrete grey model (DGM), whose recursive function is
$\hat{y}^{(\aleph)}(k+1) = \beta_1\,\hat{y}^{(\aleph)}(k) + \beta_2.$
• In the same way, the discrete form of the NGM is the nonhomogeneous discrete grey model (NDGM), whose recursive function is
$\hat{y}^{(\aleph)}(k+1) = \beta_1\,\hat{y}^{(\aleph)}(k) + \beta_2 k + \beta_3.$ (2.17)
• When the grey action quantity is $W = b\,(y^{(\aleph)}(t))^{\tau}$, where $\tau$ is the nonlinear parameter, the model becomes the nonlinear grey Bernoulli model (NGBM), with whitening equation $\frac{dy^{(\aleph)}(t)}{dt} + a\,y^{(\aleph)}(t) = b\,(y^{(\aleph)}(t))^{\tau}$; when $\tau = 2$, the model reduces to the grey Verhulst model.
The basic forms and solutions of these models are summarized in Table 1. In summary, the NGM has better generality because it can degenerate into the various basic models above; using it as the base estimator of ensemble learning therefore allows the advantages of several grey models to be exploited simultaneously.
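As a concrete illustration of the base estimator, the sketch below fits the NGM by least squares and restores the forecasts by inverse accumulation. It assumes first-order accumulation and a nonzero development coefficient $a$ for simplicity; the function names and sample series are illustrative, not the authors' code.

```python
# Minimal sketch: fit NGM(1,1,k,c), dy1/dt + a*y1 = b*t + c, with 1-AGO.
import numpy as np

def fit_ngm(y0):
    y1 = np.cumsum(y0)                         # 1-AGO sequence
    z = 0.5 * (y1[1:] + y1[:-1])               # background values z(k)
    k = np.arange(2, len(y0) + 1)
    B = np.column_stack([-z, k, np.ones_like(z)])
    a, b, c = np.linalg.lstsq(B, y0[1:], rcond=None)[0]
    return a, b, c

def predict_ngm(y0, a, b, c, horizon):
    t = np.arange(1, len(y0) + horizon + 1)
    const = y0[0] - b / a - c / a + b / a**2   # from the continuous-time solution
    y1_hat = const * np.exp(-a * (t - 1)) + (b / a) * t + c / a - b / a**2
    return np.diff(y1_hat, prepend=0.0)        # inverse 1-AGO restores y0_hat

y0 = np.array([100.0, 108.0, 118.0, 130.0, 143.0, 158.0])
a, b, c = fit_ngm(y0)
print(predict_ngm(y0, a, b, c, horizon=3))
```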
The theory of ensemble grey model
Ensemble learning is a commonly used machine learning method that applies a specific strategy to combine multiple learners to complete a learning task. Based on the conventional ensemble method, a novel ensemble strategy is constructed here to improve prediction performance. The new strategy simplifies the ensemble method and applies it to the grey system, obtaining forecasting results by combining several basic models. The first part of this section introduces the necessary elements of the conventional ensemble model; the second part gives the improved ensemble strategy and discusses its advantages; the final part introduces the construction of the ensemble grey model used in this paper.
Table 1. Basic forms and solutions of the grey model, the nonhomogeneous grey model, the nonhomogeneous discrete grey model and the nonlinear grey Bernoulli model.
Conventional ensemble learning strategy
In ensemble learning theory, the two most significant operations are the construction of the base models and the ensemble approach; this section presents ensemble learning in these two stages. In the first stage, a single data set is used by different base models to obtain trained base models. In the second stage, the forecasting results of the individual models are combined by the averaging method to improve forecasting accuracy.
Construction of the base model
The raw data sequence is assumed to be $Y^{(0)} = \{y^{(0)}(1), y^{(0)}(2), \ldots, y^{(0)}(n)\}$, the base model is denoted by $\Psi_e$, and the parameter grid is $S$. The data set $Y^{(0)}$ is divided into two parts: the modelling set $Y_M = \{y^{(0)}(1), \ldots, y^{(0)}(m)\}$, used to obtain the trained model, and the forecasting set $Y_F = \{y^{(0)}(m+1), \ldots, y^{(0)}(n)\}$, used to compare the forecasting performance of the models. A parameter value $\chi'$ is selected from the parameter grid $S$ of model $\Psi_e$ to define a single base model $\Psi_e(\chi = \chi')$. By fitting the data on $Y_M$, the trained ensemble base model $\hat{\Psi}_e(\chi = \chi')$ is obtained. The variety of base models is produced by the number of samples of $\chi$ drawn from the parameter grid $S$.
Ensemble approach
This paper adopts a simple averaging method to combine the forecasting results of the base models. After the trained base models are obtained, the set of base models is denoted by $E$, and the results on the forecasting set $Y_F$ are denoted by $\hat{Y}_F = \{\hat{y}^{(0)}(m+1), \hat{y}^{(0)}(m+2), \ldots, \hat{y}^{(0)}(m+\wp)\}$. The final ensemble forecasting result can then be expressed as
$\hat{y}_H(k) = \frac{1}{\mathrm{card}(E)} \sum_{e \in E} \hat{y}^{(0)}_e(k),$
where $\mathrm{card}(E)$ represents the cardinality of $E$, $\hat{y}^{(0)}_e(k)$ is the forecasting result of a single base model, and $\hat{y}_H(k)$ is the forecasting result of the ensemble.
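A minimal sketch of this combination step follows; the forecast values are illustrative, and the base-model names mirror those introduced later in the paper.

```python
# Point-by-point averaging of the base models' forecasts on Y_F.
import numpy as np

def ensemble_average(forecasts):
    """forecasts: list of equal-length 1-D arrays, one per base model."""
    return np.mean(np.stack(forecasts), axis=0)

print(ensemble_average([np.array([171.0, 186.0, 203.0]),    # e.g. NGM
                        np.array([169.5, 184.2, 200.8]),    # e.g. FNGM
                        np.array([172.3, 187.9, 205.1])]))  # e.g. NIPNGM
```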
Improved ensemble strategy
In the conventional ensemble operation above, since the average of the prediction results is used as the ensemble method, a model with poor generalization ability among the input base models will drag down the performance of the ensemble on the data set. Furthermore, if the number of samples drawn from the parameter grid is excessive, the number of base models to be input becomes very large. Therefore, we improved the structure of the ensemble strategy: first, different parameter values $\chi'$ are selected by sampling the parameter grid $S$ at equal intervals $s$; then the forecasting model with the optimal parameter $\chi^*$ is obtained by setting the objective function and optimization conditions. This section focuses on the search for the optimal base model.
For models with nonlinear parameters (such as the $r$-order, the $\lambda$-order or the nonlinear parameter $\tau$), the modelling set $Y_M$ is further divided into two data blocks: the training set $Y_{train}$, used to estimate the model parameters, and the validation set $Y_{valid}$, used to validate the performance of the model outside the training sample. The objective function is formulated as
$\chi^* = \arg\min_{\chi \in S} \mathrm{MAPE}\left(Y_{valid}, \hat{Y}_{valid}(\chi)\right).$ (3.4)
To obtain a relatively superior parameter value in the interval, parameter values are sampled at equal intervals $s$ and the forecasting results of the model are computed. Following Section 2, the parameter value satisfying the optimality condition Eq (3.4) within the parameter interval $S$ is taken as the approximate optimal parameter $\chi^*$ of the model, and the trained model $\hat{\Psi}_e(\chi = \chi^*)$ is taken as the parametric base model of the ensemble model. The parameter-searching process is presented in Algorithm 1.
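The search can be sketched as follows; `naive_drift` is a stand-in base model introduced purely for demonstration (the paper's base models are the grey models above), and the grid bounds and data are illustrative.

```python
# Hedged sketch of the equal-interval grid search: fit on the training block,
# score on the validation block, keep the parameter with the lowest MAPE.
import numpy as np

def mape(y_true, y_pred):
    return np.mean(np.abs((y_pred - y_true) / y_true)) * 100

def grid_search(y_train, y_valid, fit_predict, grid):
    """fit_predict(y_train, param, horizon) -> forecasts for the validation block."""
    best_param, best_err = None, np.inf
    for param in grid:
        err = mape(y_valid, fit_predict(y_train, param, len(y_valid)))
        if err < best_err:
            best_param, best_err = param, err
    return best_param, best_err

def naive_drift(y_train, param, horizon):
    step = param * (y_train[-1] - y_train[-2])   # damped/amplified last increment
    return y_train[-1] + step * np.arange(1, horizon + 1)

y = np.array([100.0, 108.0, 118.0, 130.0, 143.0, 158.0, 174.0, 191.0])
grid = np.arange(0.05, 2.0 + 1e-9, 0.05)         # equal interval s = 0.05
print(grid_search(y[:6], y[6:], naive_drift, grid))
```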
The ensemble of the nonhomogeneous grey model with different accumulation approaches
In this section, the ensemble learning-based grey model (ELGM) is presented. The three accumulation forms of the nonhomogeneous grey model are used as the base models, i.e. the NGM (first-order accumulation), the FNGM (fractional-order accumulation) and the NIPNGM (new-information-priority accumulation).
According to Section 2.1, when $e$ = NGM we have $n = m$; when $e$ = FNGM or NIPNGM we have $n = \varsigma$. When $e$ = NGM, the values on $Y_F$ are calculated directly through Eq (2.2), giving the forecasting result $\hat{y}^{(0)}_e$. When $e$ = FNGM or NIPNGM, the optimized parameter $\aleph^*$ ($\aleph = r, \lambda$) is solved for first.
The forecasting resultsŷ (0) e of E is combined to get the ultimate resultsŷ H (k) of ELGM by averaging method:ŷ And the process of building the ELGM can be represented by Figure 1.
Model validation
This section gives several numerical examples to verify the different accumulation forms of the NGM under the new ensemble strategy. In each case, the proposed ELGM is compared with the different models under the various accumulation forms presented in Section 2.2.2. In addition, the first subsection briefly presents the model evaluation criteria used in this work, and the last subsection gives a summary of the numerical examples.
Evaluation criteria for model prediction performance
To compare the generalization ability of the ELGM and the other models, the prediction performance of a model is quantified by MAPEPR (mean absolute percentage error for the prior-sample period), MAPEPO (mean absolute percentage error for the post-sample period) and MAPE (overall mean absolute percentage error) [33,34]. The specific mathematical expressions are:
$\mathrm{MAPE_{PR}} = \frac{100\%}{m}\sum_{k=1}^{m}\left|\frac{\hat{y}^{(0)}(k)-y^{(0)}(k)}{y^{(0)}(k)}\right|$
$\mathrm{MAPE_{PO}} = \frac{100\%}{\wp}\sum_{k=m+1}^{m+\wp}\left|\frac{\hat{y}^{(0)}(k)-y^{(0)}(k)}{y^{(0)}(k)}\right|$
$\mathrm{MAPE} = \frac{100\%}{m+\wp}\sum_{k=1}^{m+\wp}\left|\frac{\hat{y}^{(0)}(k)-y^{(0)}(k)}{y^{(0)}(k)}\right|$
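Under the same notation, the three criteria transcribe directly into code (a sketch; `y` and `y_hat` are full series of length $m+\wp$, with the first $m$ points in-sample):

```python
# The three evaluation criteria above, transcribed directly.
import numpy as np

def mape_pr(y, y_hat, m):
    """Prior-sample (modelling-stage) mean absolute percentage error."""
    return np.mean(np.abs((y_hat[:m] - y[:m]) / y[:m])) * 100

def mape_po(y, y_hat, m):
    """Post-sample (forecasting-stage) mean absolute percentage error."""
    return np.mean(np.abs((y_hat[m:] - y[m:]) / y[m:])) * 100

def mape_all(y, y_hat):
    """Overall mean absolute percentage error across both stages."""
    return np.mean(np.abs((y_hat - y) / y)) * 100
```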
Numerical cases
Numerical case 1 (the renewable energy consumption of the Czech Republic): We consider a sample for establishing a grey model provided in the literature [35]. The raw data are divided using the parameter values given in Table 2, and Figure 2 shows the comparison of the forecasting results of the five models with the smallest MAPE values. The MAPEPR, MAPEPO and MAPE obtained from the ELGM are 5.93%, 2.82% and 5.37%, respectively, which are better than those of the other comparison models.
The novel model has the smallest evaluation criterion values in both the modelling and forecasting stages, which shows that the ELGM outperforms the other competing models in this case. Numerical case 2 (the biodiesel production of the United States): The data on biodiesel energy in the literature [36] are used to verify the grey model again. The data are divided as defined in Table 2. Similar to numerical case 1, the forecasting results of the five models with the smallest MAPE values are plotted in Figure 3. As shown at the bottom of the figure, the new model not only achieves good results in the modelling and forecasting stages but also attains a MAPEPO value of 0.73% in the forecasting stage, much better than the other comparable models. This indicates that the ELGM can produce better predictive power than the base models under certain circumstances.
Summary of the numerical cases
The performance of the ELGM has been evaluated in this section through two real numerical cases. It is worth noting that these data contain not only monotonic growth but also gradual fluctuations over time, and the ELGM still obtains the best forecasting results among the many models. However, there are also significant differences in the results obtained when dealing with trends that change over time. In numerical case 1, apart from the ELGM, the NIPNGBM with optimal parameters can also achieve similar prediction performance; but comparing the forecasting results of the base models with those of the ELGM shows that the novel ensemble strategy optimizes the forecasting results to a certain extent, so that prediction performance is further improved. In numerical case 2, when the ELGM faced disturbed data, although the overall prediction accuracy was not as good as in the monotonic-trend case, it still achieved up to ten times the prediction performance of the other models.
In a nutshell, in the cases verified in this work, the ELGM has the lowest MAPE compared with the other fifteen models in both the modelling and forecasting stages. It can clearly be seen that the ELGM also shows high prediction accuracy in short-term prediction when processing data with different features.
Data collection and division
The raw data of electricity supply in China are collected from the website [37] as of its last update, covering the years 2000 to 2018. The 19 annual time nodes from 2000 to 2018 are divided by the method introduced in Section 2.1: for models without optimized parameters, into a modelling set and a forecasting set; for models with optimized parameters, into a training set, a validation set and a forecasting set. The parameters of the division method are shown in Table 3.
Forecasting results
In this paper, the three accumulation forms of the NGM are used as the three base models; they are trained and their forecasting results are integrated to obtain the final results. At the same time, the forecasting results of five different models under the three accumulation forms are analysed. The forecasting results, optimized parameter values and evaluation metric values are shown in Table 4. The MAPEPR, MAPEPO and MAPE of the ELGM are 2.34%, 0.20% and 2.55%, respectively, which are the minimum values among the metrics, indicating that the ELGM has the best prediction performance.
Taking the MAPE value as the benchmark, the forecasting results of the accumulation mode with the minimum MAPE value for each of the five models, together with the results of the ELGM, are plotted in Figure 4. It can be seen that the prediction ability of the ELGM is slightly better than the optimal configurations of the other models, which illustrates that the ensemble strategy can further improve the traditional prediction methods and optimize the prediction results.
Figure 4. Comparison of optimal forecasting results between ELGM and various theoretical models.
The ELGM is also compared with the original three base models. It can be concluded from Figure 5 that the ELGM not only ensures the minimum error between the forecasting set and the original data, but also compensates for the poorer prediction of the early samples through the minimal error in the later samples. The evaluation criteria values of the top ten models with the best prediction performance are visualized in Figure 6, from which it can be clearly observed that the prediction of the ELGM is better than that of the other models, further verifying the feasibility of ensemble learning-based grey prediction.
Figure 6. The MAPEPR, MAPEPO and MAPE of the models with superior forecast performance.
Brief summary and short discussion
According to the forecasting results in Section 5.2, models with the same basic form but different accumulations are referred to as a group. The GM, DGM and NGBM groups achieve a better fitting effect than the NGM group, but all of them are less effective than the NGM group in the forecasting stage, which indicates that the NGM group has high stability. The NDGM group performed worse on both the modelling and forecasting sets. It can be seen that the forecasting ability of the ELGM benefits from its base estimator.
However, compared with the forecasting results of the base models, the forecasting accuracy of the ELGM obtained by the new ensemble strategy exceeds that of all the base models, which verifies that the strategy achieves the effect of ensemble learning while theoretically simplifying the input of base models.
The above discussion shows that the proposed ELGM not only confirms that the effect of ensemble learning is significant, but also provides a new option for the selection of ensemble strategies. The application results also fully illustrate that the ELGM can serve as a reliable forecasting tool in the energy field.
Conclusions
A nonhomogeneous grey model based on a novel ensemble strategy, abbreviated as ELGM, is proposed in this work. First, grid search cross validation is used to obtain the optimal parameters of each basic model; the nonhomogeneous grey models under the three accumulation forms are then taken as the ensemble objects, and the ensemble is carried out by averaging the prediction results. In the case studies, compared with fifteen general models, the results show that the ELGM has the highest generalization ability in small-sample time-series forecasting and performs equally well when dealing with data of different trends. The applied research on electricity supply shows that the proposed ELGM has the highest prediction accuracy in short-term electricity prediction and is expected to become a reliable tool for energy prediction.
Optimal fuzzy inverse dynamics control of a parallelogram mechanism based on a new multi-objective PSO
: This work presents a multi-objective optimization method based on high exploration particle swarm optimization, called MOHEPSO, for optimization problems with multiple objectives. In order to convert the single-objective HEPSO algorithm into a multi-objective one, its fundamentals must be changed. The selection of leaders in the proposed algorithm is based on the neighborhood radius concept for the global best position and the Sigma method for the personal best position. Also, a fuzzy elimination technique is used for pruning the archive. The numerical results of the MOHEPSO algorithm on mathematical test functions are compared with those of other multi-objective optimization algorithms to evaluate the performance of the algorithm. Finally, the proposed algorithm is implemented to find the optimum values of the controller coefficients for a parallelogram five-bar linkage mechanism. The introduced control strategy is designed based on inverse dynamics concepts, improved by fuzzy systems, and optimized with regard to two objective functions. Simulation results are presented to demonstrate the efficiency and accuracy of this approach.
PUBLIC INTEREST STATEMENT
The Particle Swarm Optimization (PSO) algorithm was originally introduced by Kennedy (an American social psychologist) and Eberhart (an American electrical engineer) and is one of the modern heuristic algorithms. It was developed through the simulation of simplified social systems as a stylized representation of the movement of organisms in a bird flock or fish school, and it is robust in solving nonlinear optimization problems. In comparison with other evolutionary methods, the PSO technique can generate a high-quality solution with a shorter calculation time and a more stable convergence characteristic. On the other hand, the relative simplicity of PSO, and the fact that it is a population-based technique, have made it a natural candidate for extension to multi-objective optimization. Multi-objective optimization, which is also called multi-criteria optimization or vector optimization, has been defined as finding a vector of decision variables satisfying constraints that gives acceptable values for all objective functions.
Introduction
In engineering, the concept of optimization arises when complex problems are to be solved. Such complexity may be associated with the kind of problem to be solved (i.e. whether or not it is nonlinear) or the kind of solution sought (i.e. whether it is an exact or an approximate solution). Furthermore, optimization problems fall into two categories: single-objective and multi-objective. The latter has been gaining increasing attention in the stochastic optimization community. A great variety of multi-objective optimizers have been developed, and their performance has been tested on many problems with different characteristics. In fact, the goal is to find a series of non-dominated solutions that represent a trade-off among the objectives, because, unlike in single-objective optimization problems, there is no single solution available for these problems (Donoso & Fabregat, 2016; Hughes, 2008).
In practical multi-objective problems, the design criteria, called objective functions, may conflict with each other, so that improving one of them deteriorates another. When solving multi-objective problems, due to the presence of two or more conflicting objectives, it is often impossible to obtain a solution vector that simultaneously optimizes all of the objective functions and satisfies all of the constraints. In such cases, the concept of Pareto-optimal solutions, or equivalently the Pareto front, arises. A set of solutions is called a Pareto front if and only if there is no alternative solution that would improve one objective function without making another worse off (non-dominated solutions) (Cheng, Chen, Wai, & Wang, 2014; Liu, Chen, Deb, & Goodman, 2016; Meza, Espitia, Montenegro, Giménez, & González-Crespo, 2017; Razmi, Jafarian, & Amin, 2016).
Among the different multi-objective optimization algorithms, multi-objective evolutionary algorithms (MOEAs), which make use of population evolution, are effective methods for solving multi-objective problems. In recent years, several approaches such as multi-objective particle swarm optimization algorithms (MOPSO) (Mahmoodabadi, Bagheri, Mostaghim, & Bisheban, 2011), vortex multi-objective particle swarm optimization (MOVPSO) (Meza et al., 2017) and multi-objective artificial bee colony algorithms (MOABC) (Atashkari, Nariman-Zadeh, Ghavimi, Mahmoodabadi, & Aghaienezhad, 2011) have been proposed to improve the performance of MOEAs. Particle swarm optimization (PSO), first introduced by Kennedy and Eberhart, is one of the modern heuristic algorithms and was inspired by the natural flocking and swarming behavior of birds and fish (Bratton & Kennedy, 2007; Valle, Venayagamoorthy, Mohagheghi, Hernandez, & Harley, 2008). It was developed through the simulation of simplified social systems and has shown robust performance in solving nonlinear optimization problems. This method is able to quickly produce a proper solution and has better convergence properties than the other evolutionary approaches (Ding, 2017).
Nonetheless, solving multi-objective problems with PSO involves two issues. The first is maintaining an archive in order to balance convergence and diversity. The non-dominated solutions are maintained in an external elitist archive, which updates itself so as to preserve diversity. Several methods are used to update the archive: for example, many multi-objective algorithms adopt the crowding distance (Al Moubayed, Petrovski, & McCall, 2014) to prune the archive, and clustering mechanisms for maintaining an archive have also been applied in multi-objective algorithms (Abbasian, Nezamabadi-pour, & Amoozegar, 2015; Padhye, Branke, & Mostaghim, 2009) to keep the size of the external archive constant. The other issue in MOPSOs is updating the global best (gbest) and personal best (pbest) positions for each particle, since there is no single absolute best solution but rather a set of non-dominated solutions. In recent years, several methods have been proposed to determine gbest and pbest, such as ranking methods implemented to find global best positions (Wang & Yang, 2009) and decomposition approaches employed to select global and personal best positions for each particle (Gong, Cai, Chen, & Ma, 2014).
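For concreteness, a hedged sketch of the crowding-distance pruning criterion cited above is given below; this is the standard NSGA-II-style formula, not the fuzzy elimination technique the proposed MOHEPSO itself adopts, and the demo values are illustrative.

```python
# Crowding distance over an archive of objective vectors (minimization setting).
import numpy as np

def crowding_distance(objs):
    """objs: (n_solutions, n_objectives) array of objective values."""
    n, m = objs.shape
    dist = np.zeros(n)
    for j in range(m):
        order = np.argsort(objs[:, j])
        span = objs[order[-1], j] - objs[order[0], j]
        dist[order[0]] = dist[order[-1]] = np.inf      # boundary points always kept
        if span > 0:
            for i in range(1, n - 1):
                dist[order[i]] += (objs[order[i + 1], j] - objs[order[i - 1], j]) / span
    return dist

objs = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 2.0], [5.0, 1.0]])
print(crowding_distance(objs))   # boundary points get inf, interior points finite
```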
In order to have an effective multi-objective algorithm with good population diversity, an external archive strategy is often used. The external elitist archive stores the non-dominated solutions obtained by the algorithm, and these solutions are filtered by a quality measure such as the fuzzy elimination technique. In this work, a new multi-objective particle swarm optimization algorithm is proposed based on high exploration particle swarm optimization (HEPSO), an evolutionary method introduced by one of the authors (Mahmoodabadi, Salahshoor Mottaghi et al., 2014). HEPSO seems particularly suitable for multi-objective optimization, mainly because of its convergence speed, global optimality, solution accuracy, and algorithm reliability in comparison with well-known and recent evolutionary algorithms for single-objective optimization. It is, in fact, a combination of PSO with two different operators that increase the exploration capability of the PSO algorithm: the first operator was inspired by the multi-crossover mechanism of the genetic algorithm (Chang, 2007), and the other uses the bee colony mechanism to update the positions of the particles (Akay & Karaboga, 2012).
The introduced MOHEPSO algorithm is tested on mathematical benchmark functions and challenged by the optimal design of a fuzzy inverse dynamics controller for a parallelogram mechanism. The objective functions, which conflict with each other, are defined as the weighted normalized summation of angle errors and the weighted normalized summation of control efforts. The constant parameters of the introduced controller are regarded as the design variables of the optimization process. The obtained Pareto front is shown, and three optimum design points are selected for computer simulation to verify the effectiveness and robustness of the proposed strategy.
The inverse dynamics controller has a long history in control engineering and has been adopted in many real applications due to its efficient performance. Peng, Lin, and Su (2009) applied a computed-torque-control-based composite nonlinear feedback controller for robot manipulators with bounded torques. Zeinali and Notash (2010) introduced fuzzy-logic-based inverse dynamic modelling for robot manipulators. Wang (2011) implemented adaptive inverse dynamics for free-floating space manipulators. Chen, Mei, Ma, Lin, and Gao (2014) proposed robust adaptive inverse dynamics control for uncertain robot manipulators. Giusti, Malzahn, Tsagarakis, and Althoff (2017) presented a combined inverse-dynamics/passivity-based controller for robots with elastic joints.
The rest of this paper is organized as follows: Section 2 gives a brief review of multi-objective optimization. High exploration particle swarm optimization is presented in Section 3. In Section 4, the proposed algorithm is introduced. Experimental results and comparison studies to verify the capability of the proposed algorithm are shown in Section 5. The description of a parallelogram controllable mechanism is presented in Section 6. Its fuzzy inverse dynamics controller is given in Section 7. The optimal fuzzy inverse dynamics controller of the parallelogram robot is simulated in Section 8. Finally, Section 9 concludes the paper.
Multi-objective optimization
From a classical standpoint, optimizing a single function simply entails determining a set of stationary points, identifying a local maximum or minimum, and possibly finding the global optimum. In contrast, the process of determining a solution for a multi-objective optimization problem is more complex and less definite than for a single-objective problem.
Multi-objective optimization problems (MOPs) are problems involving more than one objective to be optimized. In these problems, several objectives or cost functions (a vector of objectives) are to be optimized (minimized or maximized) simultaneously. Unlike in single-objective optimization, no objective of a solution can be improved without sacrificing the performance of at least one other objective. Therefore, there is no single optimal solution that is best with respect to all the objective functions (Kukkonen & Coello, 2017). Instead, there is a set of optimal solutions, known as the Pareto-optimal solutions or Pareto front (Khishtandar & Zandieh, 2017). The Pareto-optimal set is defined based on Pareto dominance. Each new candidate is compared with the objective function values of the current potential solution points in order to determine whether it is dominated; if it is non-dominated, it is kept in the set of potential solution points. In this section, the main concepts related to multi-objective optimization problems are described.
A general multi-objective optimization problem is defined as follows:

$$\min_{x}\; f(x) = \big[f_1(x), f_2(x), \ldots, f_N(x)\big]^{T}$$
$$\text{subject to}\quad g_i(x) \le 0,\; i = 1,\ldots,k, \qquad h_j(x) = 0,\; j = 1,\ldots,p,$$

where N is the number of objective functions (N ≥ 2), k is the number of inequality constraints, and p is the number of equality constraints. f(x) is an N-dimensional vector of objective functions.
Definition 1 (dominance): A decision vector x₁ dominates another vector x₂ (denoted x₁ ≺ x₂) if f_i(x₁) ≤ f_i(x₂) for all i = 1, …, N and f_j(x₁) < f_j(x₂) for at least one index j. Definition 2 (Pareto-optimal): A vector of decision variables x* ∈ S ⊂ Rⁿ (S is the feasible region) is Pareto-optimal if it is non-dominated with respect to S.
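To make the dominance relation concrete, the following minimal Python sketch (illustrative, not part of the original paper) checks Pareto dominance for a minimization problem and filters a list of objective vectors down to its non-dominated subset; the function names are our own.

```python
# Minimal sketch of Pareto dominance for a minimization problem.
def dominates(f1, f2):
    """True if objective vector f1 dominates f2 (all <=, at least one <)."""
    return all(a <= b for a, b in zip(f1, f2)) and any(a < b for a, b in zip(f1, f2))

def non_dominated(points):
    """Keep only the points that no other point dominates."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

if __name__ == "__main__":
    pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0), (3.0, 3.0)]
    print(non_dominated(pts))  # (3.0, 3.0) is dominated by (2.0, 2.0)
```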
High exploration particle swarm optimization
In nature, birds seek food by considering their personal experience and the knowledge of the other birds in the flock. This idea was first used by Kennedy and Eberhart to propose the PSO method. The original version of PSO suffers from trapping in local minima and premature convergence. Several approaches, such as FIPSO (Wang, Wang, Yan, & Shen, 2017), APSO (Zhan, Zhang, Li, & Chung, 2009), ACOR-PSO (Huang, Huang, Chang, Yeh, & Tsai, 2013), and HEPSO (Mahmoodabadi, Salahshoor Mottaghi et al., 2014), have thus far been proposed to improve the performance of PSO. The HEPSO algorithm has shown greater success in comparison with well-known and recent evolutionary algorithms for single-objective optimization.
In the PSO algorithm, each candidate solution to the problem is considered a particle, and the set of solutions is called the swarm. Every candidate solution is associated with a velocity, and the position of each particle in the population is changed according to its own experience and the experience (velocity) of the other particles in the flock. HEPSO combines PSO with two additional operators that increase the exploration capability of the PSO algorithm: the first is based on the multi-crossover mechanism of the genetic algorithm (Chang, 2007), and the second performs similarly to the bee colony mechanism (Akay & Karaboga, 2012). These operators are used to improve the convergence process and to escape from the local minima of the original PSO algorithm.
In PSO, the position of each particle is changed according to its own experience and that of its neighbors. Let $\vec{x}_i(t)$ denote the position of particle i at time step t. The position of particle i is changed by adding a velocity $\vec{v}_i(t)$ to its current position, i.e.,

$$\vec{x}_i(t+1) = \vec{x}_i(t) + \vec{v}_i(t+1),$$

and the velocity vector changes in the following way:

$$\vec{v}_i(t+1) = w\,\vec{v}_i(t) + C_1\,\vec{r}_1 \circ \big(\vec{x}_{pbest_i} - \vec{x}_i(t)\big) + C_2\,\vec{r}_2 \circ \big(\vec{x}_{gbest} - \vec{x}_i(t)\big),$$

where $\vec{r}_1$ and $\vec{r}_2$ are vectors of random elements from the interval [0, 1]; C₁ is the cognitive learning factor and represents the attraction of a particle toward its own success; C₂ is the social learning factor and represents the attraction of a particle toward the success of the entire swarm; and w is the inertia weight, which is used to balance local and global searches.
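For reference, a minimal NumPy sketch of this canonical position/velocity update follows; the parameter values are illustrative defaults, not the tuned values used later in the paper.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0,
             rng=np.random.default_rng(0)):
    """One PSO iteration. x, v, pbest: (n_particles, dim); gbest: (dim,)."""
    r1 = rng.random(x.shape)  # random vectors with elements in [0, 1]
    r2 = rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # velocity update
    return x + v, v                                            # position update
```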
In HEPSO, the first added operator uses the global best position of the population ($\vec{x}_{gbest}$) as the premier parent and the personal best position ($\vec{x}_{pbest_i}$) as the second parent; it randomly generates a new velocity for the selected particle $\vec{x}_i(t)$ as a random recombination of these two parents weighted by the social learning factor c₂ and a random vector $\vec{r} \in [0, 1]$ (Equation (6)), where $\vec{x}_i(t)$ denotes the position of particle i at iteration (time step) t. Furthermore, the second operator is inspired by the foraging behavior of honey bees (Karaboga, 2005): the food-source-obtaining operator of the Artificial Bee Colony (ABC) algorithm is applied to the selected particles. The position of a randomly selected particle $\vec{x}_i(t)$ changes in one dimension as

$$x_{i,d}(t+1) = x_{i,d}(t) + \tilde{r}\,\big(x_{i,d}(t) - x_{j,d}(t)\big), \tag{7}$$

where d is a random integer in the range [1, dimension], $\tilde{r} \in [0, 1]$ is a random value, and j is a random integer in the range [1, number of particles]. After evaluating Equation (7), the superior of $\vec{x}_i(t+1)$ and $\vec{x}_i(t)$ is kept. The flowchart of this algorithm is shown in Figure 1.
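The sketch below illustrates the two operators in Python. The ABC-style move follows the dimension-wise update and greedy selection described above; the multi-crossover velocity is only a plausible stand-in, since the exact form of Equation (6) is not reproduced here.

```python
import numpy as np
rng = np.random.default_rng(1)

def multi_crossover_velocity(x_i, pbest_i, gbest, c2=2.0):
    # Hypothetical stand-in for Equation (6): a random recombination of the
    # two "parents" (gbest and pbest_i); the exact HEPSO formula differs.
    r = rng.random(x_i.shape)
    return c2 * r * (gbest - x_i) + (1.0 - r) * (pbest_i - x_i)

def bee_colony_move(swarm, i, objective):
    # ABC-style food-source operator (Equation (7)): perturb one random
    # dimension of particle i using a random partner j, then keep the move
    # only if it improves the objective (greedy selection).
    x_new = swarm[i].copy()
    d = rng.integers(swarm.shape[1])   # random dimension
    j = rng.integers(swarm.shape[0])   # random partner particle
    x_new[d] += rng.random() * (swarm[i, d] - swarm[j, d])
    return x_new if objective(x_new) < objective(swarm[i]) else swarm[i]
```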
Multi-objective high exploration particle swarm optimization
The proposed MOHEPSO algorithm is briefly described here with regard to diversity and convergence. Initially, a random population is generated, and in each iteration the learning factors (C₁ and C₂) and the inertia weight (w) are assigned. After the fitness values of all particles are calculated, the non-dominated solutions are determined and the archive is formed.
Now, it is possible to identify $\vec{x}_{gbest}$ and $\vec{x}_{pbest}$ from the Pareto front based on the strategies described in the following sections. Random values ρ₁ and ρ₂ ∈ [0, 1] are assigned to every particle. If, for a particle, ρ₁ is not greater than the standard deviation of the fitness values, or if ρ₂ is not smaller than a bee colony threshold governed by p_B and the iteration ratio t/(max iteration), the operator in Equation (7) generates a new particle. Here, p_B denotes the probability of the bee colony, t refers to the present iteration, and max iteration denotes the largest possible iteration number. For the particles that are not selected for the bee colony operation, another random number ρ₃ ∈ [0, 1] is assigned; if a particle has ρ₃ < P_c, the multi-crossover operator generates a velocity for it via Equation (6), where P_c is the multi-crossover probability. The remaining particles, which are not chosen for either of these two operations, are updated by standard PSO, and this cycle is repeated until the user-defined stopping criterion is satisfied. A sketch of one generation of this loop is given below.
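The following structural Python sketch illustrates one MOHEPSO generation. The operator-selection thresholds and schedules are simplified relative to the text, and the fitness, archive-pruning, and leader-selection strategies are passed in as callables (they are detailed in the following subsections).

```python
import numpy as np

def mohepso_generation(x, v, pbest, archive, t, max_iter, fitness,
                       select_gbest, select_pbest, prune_archive,
                       p_b=0.02, p_c=0.95, rng=np.random.default_rng(2)):
    """One simplified MOHEPSO generation; x, v, pbest: (n, dim) arrays."""
    n, dim = x.shape
    w = 0.9 - 0.5 * t / max_iter                 # inertia weight schedule
    c1 = 2.5 - 2.0 * t / max_iter                # C1: 2.5 -> 0.5
    c2 = 0.5 + 2.0 * t / max_iter                # C2: 0.5 -> 2.5
    f = fitness(x)                               # (n, n_obj) objective values
    archive = prune_archive(archive + [(x[i].copy(), f[i]) for i in range(n)], t)
    gbest = select_gbest(archive)
    for i in range(n):
        rho = rng.random(3)
        if rho[0] < p_b:                         # bee colony operator (Eq. (7))
            d, j = rng.integers(dim), rng.integers(n)
            x[i, d] += rng.random() * (x[i, d] - x[j, d])
        elif rho[1] < p_c:                       # multi-crossover operator (Eq. (6))
            r = rng.random(dim)
            v[i] = r * (gbest - x[i]) + (1 - r) * (pbest[i] - x[i])
            x[i] = x[i] + v[i]
        else:                                    # standard PSO update
            pb = select_pbest(archive, f[i])
            r1, r2 = rng.random(dim), rng.random(dim)
            v[i] = w * v[i] + c1 * r1 * (pb - x[i]) + c2 * r2 * (gbest - x[i])
            x[i] = x[i] + v[i]
    return x, v, archive
```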
In the following subsections, the strategies to restrain the archive size and select ⃗ x pbest and ⃗ x gbest are discussed.
Archive size and fuzzy elimination
In most multi-objective optimization methods, the archive must be limited to a specified number of solutions while maintaining a good spread over the obtained Pareto front. If all non-dominated solutions were kept in the archive, its size would grow very quickly.
This is an important issue because the archive has to be updated in each iteration, and with a growing archive size the update may become computationally demanding. Herein, a fuzzy elimination technique (Mahmoodabadi, Taherkhorsandi et al., 2014) is used to prune the archive. It involves a fuzzy parameter ε_Fuzzy, which sets the extent of fuzziness desired in a problem. In this approach, every particle in the archive has a neighborhood radius equal to ε_Fuzzy, and if the Euclidean distance (in the objective function space) of a particle from a certain particle is smaller than ε_Fuzzy, it is simply eliminated from the objective function space. Figure 2 illustrates this technique with an example in a bi-objective space. The elimination radius ε_Fuzzy is obtained by the fuzzy inference of Equation (8), whose rules and membership functions are given in Table 1 and Figure 3, respectively. The E_variable, as the inference outcome of the consequent variables, can be evaluated by applying any of the simplified inference, product-sum-gravity, and min-max-gravity approaches (Rao & Kumar, 2017).
The input of the membership functions is evaluated as in Equation (9), where fix(t/7) is the rounding function of t/7 to the closest integer toward zero.
In fact, Equation (9) normalizes the input of the membership functions for the fuzzy variables. Equation (8), in turn, makes the archive retain more non-dominated solutions in the first iterations due to the low elimination radius, and consequently accelerates the algorithm's convergence. As the iteration number increases, the elimination radius rises and more close solutions are omitted. Therefore, the spread of the algorithm becomes broader and the non-dominated solutions are more uniformly diversified.
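A minimal Python sketch of the ε-neighborhood pruning follows. Note that the schedule for the elimination radius is a simple monotone placeholder; in the paper, ε_Fuzzy comes from the fuzzy inference of Equation (8) with the rules of Table 1.

```python
import numpy as np

def prune_archive(objs, eps):
    """epsilon-neighbourhood pruning: scan the archive and drop any member
    whose Euclidean distance (in objective space) to an already kept
    member is smaller than eps."""
    kept = []
    for f in objs:
        if all(np.linalg.norm(np.asarray(f) - np.asarray(g)) >= eps for g in kept):
            kept.append(f)
    return kept

def eps_schedule(t, max_iter, eps_max=0.1):
    # Placeholder for the fuzzy inference of Equation (8): the elimination
    # radius grows with the iteration count, so early archives keep more
    # solutions and late archives are spread more uniformly.
    return eps_max * t / max_iter
```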
The strategy for the global best position (gbest)
To update the positions of particles in MOPs, the whole swarm's best particle ($\vec{x}_{gbest}$) is employed as a leader. Every particle can only have a single leader, which must be suitably selected so as to raise the diversity and convergence of the solutions. To this end, $\vec{x}_{gbest}$ is selected by assigning a neighborhood radius (R_neighborhood) to all of the non-dominated solutions as a density measure. When the Euclidean distance between two non-dominated solutions is smaller than this radius, they are said to be neighbors. Hence, after counting the neighbors of every non-dominated solution at every iteration, the one with the fewest neighbors is taken as the leader.
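A minimal Python sketch of this leader-selection rule follows; the archive is assumed to be given as matching lists of objective vectors and decision vectors.

```python
import numpy as np

def select_gbest(archive_objs, archive_pos, r_neighborhood=0.1):
    """Return the archive member with the fewest neighbours (the least
    crowded region of the front) to serve as the swarm leader."""
    objs = np.asarray(archive_objs)
    dists = np.linalg.norm(objs[:, None, :] - objs[None, :, :], axis=-1)
    counts = (dists < r_neighborhood).sum(axis=1) - 1  # exclude self
    return archive_pos[int(np.argmin(counts))]
```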
The strategy for the personal best position (pbest)
By selecting a proper technique to find $\vec{x}_{pbest_i}$ for particle i, the diversity within the swarm is maintained. In the proposed algorithm, each particle of the population selects a personal best position from the archive during each iteration via the Sigma method (Mostaghim & Teich, 2003). In the first step, the value σᵢ is calculated for each particle i of the archive; in the second step, σⱼ is computed for each particle j of the population. Then, the distance between σᵢ and σⱼ is evaluated for all the particles of both the population and the archive. Finally, particle k of the archive is selected as the personal best position of particle j of the population if the distance between their sigma values is minimal. In other words, each particle must select as its best local guide ($\vec{x}_{pbest}$) the archive member whose sigma value is closest to its own (in a two-dimensional objective space, closer means having a smaller difference of sigma values).
For a two-objective space, the sigma parameter is defined as follows:

$$\sigma = \frac{f_1^2 - f_2^2}{f_1^2 + f_2^2},$$

where f₁ and f₂ are the first and second objective functions. Figure 4 shows the strategy for selecting the personal best position among the archive members for each particle, based on the Sigma method, in a bi-objective space.
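A minimal Python sketch of the Sigma-method assignment for a bi-objective problem follows (it assumes no archive member sits exactly at the origin of the objective space).

```python
import numpy as np

def sigma(f):
    """Sigma value of a bi-objective vector f = (f1, f2); f must be nonzero."""
    f1, f2 = f
    return (f1**2 - f2**2) / (f1**2 + f2**2)

def select_pbest(archive_objs, archive_pos, particle_obj):
    """Pick as local guide the archive member whose sigma value is closest
    to the particle's own sigma value."""
    s = sigma(particle_obj)
    diffs = [abs(sigma(f) - s) for f in archive_objs]
    return archive_pos[int(np.argmin(diffs))]
```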
Numerical experiments on multi-objective high exploration particle swarm optimization
This section presents the computational results obtained by the MOHEPSO algorithm on mathematical test functions, compared with other multi-objective methods.
Test functions
The test functions for multi-objective optimization should impose sufficient difficulty to challenge the algorithm in searching for the Pareto-optimal solutions. Here, the SCH, FON, ZDT1, and ZDT6 benchmark problems, all of which have two objective functions, are utilized to examine the performance of the proposed method (Mahmoodabadi, Taherkhorsandi et al., 2014). Table 2 shows the formulae of the functions along with the number of dimensions, the admissible bounds of their variables, the Pareto-optimal solutions, and the nature of the Pareto-optimal front for each problem.
Comparison metrics
In order to provide a quantitative assessment of the performance of multi-objective optimization on the test functions, two goals are often taken into consideration: a good convergence toward the Pareto-optimal set, and the maintenance of diversity among the solutions of the Pareto-optimal set. Since the two are conflicting in nature, comparing two sets of trade-off solutions requires different performance measures. Herein, we consider five performance metrics covering these two goals.
(1) The first is the diversity metric (Δ), which measures the extent of spread achieved among the non-dominated solutions (Mahmoodabadi, Taherkhorsandi et al., 2014). It is given as

$$\Delta = \frac{d_f + d_l + \sum_{i=1}^{n-1}\lvert d_i - \bar{d}\rvert}{d_f + d_l + (n-1)\,\bar{d}},$$

where d_f and d_l are the Euclidean distances between the extreme solutions and the boundary solutions of the non-dominated set; d_i is the Euclidean distance between consecutive solutions in the obtained non-dominated set; d̄ denotes the average of these distances; and n is the number of members in the set of non-dominated solutions. A good distribution would make all distances d_i equal to d̄ and would make d_f = d_l = 0. Thus, for the most extensively spread set of non-dominated solutions, Δ = 0.
(2) The generational distance (GD) metric (Azzouz, Bechikh, & Said, 2017) gives a good indication of the distance between the true Pareto-optimal front and the members of the non-dominated set:

$$GD = \frac{\sqrt{\sum_{i=1}^{n} d_i^2}}{n},$$

where n is the number of members in the non-dominated solution archive and d_i is the minimum Euclidean distance between member i of the non-dominated set and the Pareto-optimal front. If all members of the non-dominated set lie on the true Pareto-optimal front, then GD = 0.
(3) The maximum spread (MS) metric calculates the normalized Euclidean distance between the boundary solutions in the objective space. For a two-objective problem it is formulated as

$$MS = \sqrt{\frac{1}{2}\sum_{m=1}^{2}\left(\frac{\max_{i} f_m^i - \min_{i} f_m^i}{F_m^{\max} - F_m^{\min}}\right)^{2}},$$

where n is the number of members in the discovered Pareto front, f_m^i is the mth objective of member i, and F_m^max and F_m^min are respectively the maximum and minimum of the mth objective function over the Pareto-optimal front. The greater the maximum spread, the more of the Pareto-optimal front is covered by the non-dominated solutions. If the metric equals 1, a widely spread set of solutions has been obtained (Khishtandar & Zandieh, 2017).
(4) The spacing metric (SP) assesses the distribution of vectors throughout the set of non-dominated solutions. It is calculated from a relative distance measure between consecutive solutions in the obtained non-dominated set (Mirjalili, Saremi, Mirjalili, & Coelho, 2016):

$$SP = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\big(\bar{d} - d_i\big)^{2}},$$

where n is the number of members in the set of non-dominated solutions, d_i is the Euclidean distance between member i and its nearest neighbor in the non-dominated set, and d̄ is the mean of these distances. The desired value for this metric is zero, which means that the elements of the set of non-dominated solutions are equidistantly spaced.
(5) The convergence metric (Υ) (Pal & Bandyopadhyay, 2016) measures the proximity of the generated set of Pareto-optimal solutions to the true Pareto front. First, a set of uniformly spaced solutions of the true Pareto-optimal front in the objective space is defined; then, for each solution obtained by the algorithm, its minimum Euclidean distance (d_i) from the chosen solutions on the Pareto front is calculated. With n the number of members in the set of non-dominated solutions, the metric is the average of these distances,

$$\Upsilon = \frac{1}{n}\sum_{i=1}^{n} d_i,$$

and the lower the value of Υ, the closer the generated Pareto front is to the true Pareto front.
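As an illustration, a minimal NumPy implementation of two of these metrics (GD and SP) follows; `front` and `true_front` are arrays of objective vectors, and the sampling of the true front is assumed to be given.

```python
import numpy as np

def generational_distance(front, true_front):
    """GD = sqrt(sum_i d_i^2) / n, with d_i the distance to the true front."""
    front, true_front = np.asarray(front), np.asarray(true_front)
    d = np.array([np.linalg.norm(true_front - p, axis=1).min() for p in front])
    return np.sqrt((d ** 2).sum()) / len(front)

def spacing(front):
    """SP: spread of nearest-neighbour distances; 0 for equidistant points."""
    front = np.asarray(front)
    dists = np.linalg.norm(front[:, None, :] - front[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    d = dists.min(axis=1)                     # nearest-neighbour distances
    return np.sqrt(((d.mean() - d) ** 2).sum() / (len(front) - 1))
```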
Numerical results and discussion
The MOHEPSO algorithm is tested with a population of size 150. The maximum iteration, E_constant, and R_neighborhood are set at 250, 200, and 0.1, respectively. Moreover, C₁ is linearly decreased from C₁ᵢ = 2.5 to C₁f = 0.5, while C₂ is linearly increased from C₂ᵢ = 0.5 to C₂f = 2.5 over the iterations. Furthermore, the bee colony probability is set at P_B = 0.02, and the multi-crossover probability is set at P_c = 0.95 for SCH and FON and at P_c = 0.1 for ZDT1 and ZDT6.
The non-dominated solutions obtained in one arbitrary run by the proposed multi-objective algorithm for the test functions of Table 2 are shown in Figure 5. As the figure shows, good diversity and convergence have been achieved by the proposed algorithm.
In order to evaluate the performance of the proposed algorithm on the test functions via the metrics, three well-known versions of multi-objective optimization algorithms are used for comparison, as detailed in Table 3.
Each algorithm is implemented in 20 independent runs, for which the average values of the performance metrics corresponding to each test function are reported in Table 4.
In other words, Table 4 includes the average values of the metrics obtained from the four algorithms (MOHEPSO, MOPSO Ingenious, MOEA-D, and NSGA-II) for the test functions given in Table 2. As the table shows, the proposed algorithm delivers a better performance on the GD, MS, and Υ metrics for all of the functions except ZDT1. MOPSO Ingenious in turn has desirable Δ and SP values on all the test functions except ZDT6. MOEA-D, however, performs poorly on the MS and Υ metrics for the SCH and FON test functions. In addition, MOHEPSO has desirable Δ values on ZDT6, and MOPSO Ingenious has a better performance on the Υ metric for the ZDT1 test function (Table 4).
Dynamical equation of a parallelogram five-bar linkage mechanism
The mechanism in Figure 6 is called a five-bar linkage, in which all joints are revolute. Clearly, there are only four bars in the figure, but in the theory of mechanisms it is a convention to count the ground as an additional linkage, which explains the terminology. In Figure 6, it is assumed that the lengths of links 1 and 3 are the same and that the two lengths marked l₂ are also the same. In this way, the closed path in the figure is in fact a parallelogram, which greatly simplifies the computations. Notice, however, that the quantities l_c1 and l_c3 need not be equal; in other words, even though links 1 and 3 have the same length, they need not have the same mass distribution (Spong & Vidyasagar, 2008).
It is clear from the figure that, even though there are four links being moved, there are in fact only two degrees of freedom, identified as q₁ and q₂. First, the coordinates of the centers of mass of the various links are written as functions of the generalized coordinates:

$$\begin{bmatrix} x_{c1} \\ y_{c1} \end{bmatrix} = \begin{bmatrix} l_{c1}\cos q_1 \\ l_{c1}\sin q_1 \end{bmatrix}, \qquad \begin{bmatrix} x_{c2} \\ y_{c2} \end{bmatrix} = \begin{bmatrix} l_{c2}\cos q_2 \\ l_{c2}\sin q_2 \end{bmatrix},$$

$$\begin{bmatrix} x_{c3} \\ y_{c3} \end{bmatrix} = \begin{bmatrix} l_{2}\cos q_2 \\ l_{2}\sin q_2 \end{bmatrix} + \begin{bmatrix} l_{c3}\cos q_1 \\ l_{c3}\sin q_1 \end{bmatrix},$$

$$\begin{bmatrix} x_{c4} \\ y_{c4} \end{bmatrix} = \begin{bmatrix} l_{1}\cos q_1 \\ l_{1}\sin q_1 \end{bmatrix} + \begin{bmatrix} l_{c4}\cos(q_2-\pi) \\ l_{c4}\sin(q_2-\pi) \end{bmatrix} = \begin{bmatrix} l_{1}\cos q_1 \\ l_{1}\sin q_1 \end{bmatrix} - \begin{bmatrix} l_{c4}\cos q_2 \\ l_{c4}\sin q_2 \end{bmatrix}. \tag{17}$$

Next, with the aid of these expressions, the velocities of the various centers of mass are written as functions of $\dot{q}_1$ and $\dot{q}_2$. For convenience, the third row of each of the following Jacobian matrices is dropped, as it is always zero:

$$v_{c1} = \begin{bmatrix} -l_{c1}\sin q_1 & 0 \\ l_{c1}\cos q_1 & 0 \end{bmatrix}\dot{q}, \qquad v_{c2} = \begin{bmatrix} 0 & -l_{c2}\sin q_2 \\ 0 & l_{c2}\cos q_2 \end{bmatrix}\dot{q},$$

$$v_{c3} = \begin{bmatrix} -l_{c3}\sin q_1 & -l_{2}\sin q_2 \\ l_{c3}\cos q_1 & l_{2}\cos q_2 \end{bmatrix}\dot{q}, \qquad v_{c4} = \begin{bmatrix} -l_{1}\sin q_1 & l_{c4}\sin q_2 \\ l_{1}\cos q_1 & -l_{c4}\cos q_2 \end{bmatrix}\dot{q}. \tag{18}$$

The velocity Jacobians $J_{v_{ci}}$, i = 1, …, 4, are defined as the four matrices appearing in the last equations. Next, it is clear that the angular velocities of the four links are simply $\omega_1 = \omega_3 = \dot{q}_1 k$ and $\omega_2 = \omega_4 = \dot{q}_2 k$. Thus the inertia matrix is given by

$$D(q) = \sum_{i=1}^{4} m_i\, J_{v_{ci}}^{T} J_{v_{ci}} + \begin{bmatrix} I_1 + I_3 & 0 \\ 0 & I_2 + I_4 \end{bmatrix}. \tag{19}$$

If Equation (17) is substituted into Equation (19) and the standard trigonometric identities are used, the entries of the inertia matrix become

$$d_{11}(q) = m_1 l_{c1}^2 + m_3 l_{c3}^2 + m_4 l_1^2 + I_1 + I_3,$$
$$d_{12}(q) = d_{21}(q) = \big(m_3 l_2 l_{c3} - m_4 l_1 l_{c4}\big)\cos(q_2 - q_1),$$
$$d_{22}(q) = m_2 l_{c2}^2 + m_3 l_2^2 + m_4 l_{c4}^2 + I_2 + I_4. \tag{20}$$

If

$$m_3 l_2 l_{c3} = m_4 l_1 l_{c4}, \tag{21}$$

then the inertia matrix is diagonal and constant, and as a consequence the dynamical equations contain neither Coriolis nor centrifugal terms.
Turning now to the potential energy:

$$P = \sum_{i=1}^{4} m_i\, g\, y_{ci} = g\sin q_1\,(m_1 l_{c1} + m_3 l_{c3} + m_4 l_1) + g\sin q_2\,(m_2 l_{c2} + m_3 l_2 - m_4 l_{c4}). \tag{22}$$

Hence

$$\phi_1 = \frac{\partial P}{\partial q_1} = g\cos q_1\,(m_1 l_{c1} + m_3 l_{c3} + m_4 l_1), \qquad \phi_2 = \frac{\partial P}{\partial q_2} = g\cos q_2\,(m_2 l_{c2} + m_3 l_2 - m_4 l_{c4}). \tag{23}$$
Notice that φ₁ depends only on q₁ and not on q₂, and similarly φ₂ depends only on q₂ and not on q₁. Hence, if Equation (21) is satisfied, the rather complex-looking manipulator in Figure 6 is described by the decoupled set of equations

$$d_{11}\,\ddot{q}_1 + \phi_1(q_1) = u_1, \qquad d_{22}\,\ddot{q}_2 + \phi_2(q_2) = u_2, \tag{24}$$

where u₁ and u₂ are the control inputs. Equation (24) can be rewritten as

$$\ddot{q}_1 = \frac{u_1 - \phi_1(q_1)}{d_{11}}, \qquad \ddot{q}_2 = \frac{u_2 - \phi_2(q_2)}{d_{22}}. \tag{25}$$
The state-space formulation of the system can be described as Equation (26).
Fuzzy inverse dynamics controller for the mechanism
The inverse dynamic model of an arbitrary manipulator can be expressed as

$$M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + g(q) = u, \tag{27}$$

where u = [u₁, u₂, …, u_n]ᵀ is the vector of input generalized forces and n denotes the number of generalized coordinates of the manipulator. The inertia matrix M(q) is an n × n symmetric positive definite matrix, C(q, q̇) is an n × n matrix of centripetal and Coriolis terms, and g(q) is an n × 1 vector of gravitational terms; q = [q₁, …, q_n]ᵀ, q̇ = [q̇₁, …, q̇_n]ᵀ, and q̈ = [q̈₁, …, q̈_n]ᵀ are the displacement, velocity, and acceleration vectors of the joints, respectively. In Equation (27), if the control input is chosen according to the inverse dynamics law

$$u = M(q)\,a_q + C(q,\dot{q})\,\dot{q} + g(q), \qquad a_q = -K_p q - K_d \dot{q} + r, \qquad r = \ddot{q}_d + K_d \dot{q}_d + K_p q_d, \tag{28}$$

then the tracking error e(t) = q_d − q satisfies

$$\ddot{e} + K_d\,\dot{e} + K_p\,e = 0, \tag{29}$$

where K_p and K_d are diagonal matrices whose diagonal elements consist of position and velocity gains, respectively. An appropriate choice of the gain matrices K_p and K_d renders the closed-loop system exponentially stable.
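A minimal Python sketch of the computed-torque law of Equation (28) follows; M, C, and g are passed in as callables, and for the decoupled parallelogram of Section 6 (with condition (21) satisfied) M is diagonal and constant and the Coriolis/centrifugal term vanishes.

```python
import numpy as np

def inverse_dynamics_control(q, qdot, q_d, qdot_d, qddot_d, M, C, g, Kp, Kd):
    """Computed-torque law of Equation (28): feedback-linearize the
    manipulator so the tracking error obeys the linear dynamics (29)."""
    e, edot = q_d - q, qdot_d - qdot
    a_q = qddot_d + Kd @ edot + Kp @ e            # outer-loop acceleration
    return M(q) @ a_q + C(q, qdot) @ qdot + g(q)  # inner-loop torque
```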
The fuzzy inverse dynamics controller for the robot manipulator, whose membership-function input is the tracking error e(t) = q_d − q and whose output is F, is calculated for each link using Equations (30) and (31).
The fuzzy output then replaces the corresponding feedback term in Equation (29). The membership functions are shown in Figures 7 and 8, and their rules are given in Tables 5 and 6.
Optimal fuzzy inverse dynamics controller of the parallelogram mechanism
In this section, the proposed MOHEPSO algorithm introduced in Section 4, with a population size, maximum iteration, E_constant, and R_neighborhood of 250, 300, 200, and 0.2, respectively, is used to determine the appropriate parameters of the fuzzy inverse dynamics controller for the five-bar linkage mechanism. In this paper, the weighted normalized summation of angle errors and the weighted normalized summation of control efforts of both links are regarded as the objective functions and are calculated using Equation (34), where W₁ and W₂ are the weighting coefficients, q₁ and q₂ are the joint displacements, and u₁ and u₂ are the input generalized forces.
The vector [VB, PO, ZB, HS, HM, HB, M₁, ΔM₁, M₂, ΔM₂] collects the chosen parameters of the fuzzy inverse dynamics controller. VB, PO, ZB, HS, HM, and HB are the input membership-function parameters; M₁, ΔM₁, M₂, and ΔM₂ are positive coefficients of the fuzzy inverse dynamics controller. Both objective functions are functions of this vector's components; by selecting various values for these parameters, we can change the weighted normalized summation of angle errors and the weighted normalized summation of control efforts. Indeed, this is an optimization problem with two objective functions and ten decision variables (VB, PO, ZB, HS, HM, HB, M₁, ΔM₁, M₂, ΔM₂). The design variables corresponding to the optimal design points A, B, and C are listed in Table 7. The remaining parameters of the multi-objective algorithm are chosen as in Section 4.
In Figure 9, points A and C stand for the best weighted normalized summation of angle errors and the best weighted normalized summation of control efforts, respectively. The figure illustrates that all the optimum design points on the Pareto front are non-dominated and can be chosen by the designer based on the design criteria. Point B can be regarded as a trade-off optimum choice when the minimum values of both objectives are considered together. The real tracking trajectories of the optimal design points A, B, and C are shown in Figure 10, the corresponding tracking errors in Figure 11, and the control efforts of the parallelogram mechanism in Figure 12.
Conclusions
This paper has proposed a new and simple generic evolutionary multi-objective optimization algorithm based on HEPSO, called MOHEPSO. Different properties of the proposed MOHEPSO method make it robust and competitive among the existing methods in the literature. In order to supply a proper infrastructure for comparison, a fuzzy elimination technique is utilized to prune the archive instead of previous techniques in the literature. Also, the Sigma method is used to find the personal best positions of the particles. Moreover, a neighborhood relationship among all the non-dominated solutions, based on the Euclidean distance, is defined to find and update the global best positions. Several experiments have been conducted to compare the performance of the proposed method with other recent and well-known multi-objective evolutionary algorithms. As the experiments demonstrate, the optimal Pareto fronts obtained by the proposed algorithm (MOHEPSO) outperform three effective multi-objective optimization algorithms (i.e., MOEA/D, MOPSO Ingenious, and NSGA-II) on different complex multi-objective test functions in terms of both solution diversity and convergence. Finally, the proposed algorithm is used for the optimum design of a fuzzy inverse dynamics controller for a parallelogram five-bar linkage mechanism. Two conflicting objective functions, the weighted normalized summation of angle errors and the weighted normalized summation of control efforts, are used to design the optimal controller. Three points of the Pareto front obtained from MOHEPSO are selected to work out the ten parameters of the optimal fuzzy inverse dynamics controller; hence, the designer has ample opportunity to select the finest point based on the design criteria. The results show that the first selected point has the minimum normalized summation of angle errors and the maximum normalized summation of control efforts, while the third point is the direct opposite: it has the maximum normalized summation of angle errors and the minimum normalized summation of control efforts. Thus, the designer can regard the second point as the trade-off optimum choice. | 8,709.4 | 2018-01-01T00:00:00.000 | ["Computer Science"] |
Synchronization of Time Delayed Systems by Common Delay Time Modulations
We investigate the synchronization phenomenon between two identical time-delayed systems with a common time delay modulated by a chaotic or random signal. The phenomenon is verified by the conditional Lyapunov exponent. The relation of the present form of synchronization to generalized synchronization is also discussed.
Introduction
In 1990, Pecora and Carroll [1] focused on synchronization between identical subsystems under a common forcing. Since then, synchronization has been of fundamental importance in a variety of complex physical, chemical, and biological models. Due to finite signal transmission times, switching speeds, and memory effects, systems with both single and multiple delays are ubiquitous in nature and technology. It is well known that dissipative systems with a nonlinear time-delayed feedback or memory can produce chaotic dynamics, and the dimension of their chaotic attractors can be made arbitrarily large by sufficiently increasing the delay time. High-dimensional chaotic systems are important in secure communications. If we consider delayed systems in which the time delays are not constant but modulated in time [2-5], the corresponding communication model becomes more secure. Therefore, the study of chaos synchronization in modulated time-delayed systems is of high practical importance.
It is interesting to observe that all real objects showing synchronous behavior to a variable extent are subject to the influence of noise [6]. For a system of nonlinear oscillators, in the absence of coupling or in the weak-coupling regime where synchronization does not occur, noise applied identically to each oscillator can induce synchronization [7]. Common noise is also of great relevance to biological systems. In ecology, similar environmental shocks may be responsible for the synchronization of different populations over a large geographical region [8]. In neural systems, different neurons connected to another group of neurons receive a common input signal, which often approaches a Gaussian distribution as a result of the integration of many independent synaptic currents [9].
Noise-induced synchronization has been widely studied [7, 10, 11]. Theoretical results have inspired experimental work since noise-induced synchronization was observed in a biological system between two pairs of uncoupled sensory neurons. In this paper, we show synchronization between two uncoupled time-delayed systems whose common delays are modulated by a chaotic signal or noise. In all previously studied noise-induced synchronizations, the noise is present explicitly in the coupled equations. In our synchronization phenomenon, the chaotic or noisy force is not explicitly present in the equations; instead, it modulates the common delay time. The synchronization is verified by evaluating the largest conditional Lyapunov exponent. The relation to generalized synchronization is also discussed.
The rest of this paper is organized as follows. In Section 2, the general theory of time-delay-modulation-induced synchronization [12-16] is discussed. Numerical simulations are shown in Section 3, where the Lorenz system and the time-delayed Ikeda system are considered as drive and response systems, respectively. In Section 4, the relation to generalized synchronization is discussed. Finally, some conclusions are drawn in Section 5.
General Theory for Time-Delay-Induced Synchronization
We consider two time-delayed systems that are driven by a common chaotic time delay,

$$\dot{x}(t) = f_1\big(x(t),\, x(t-\tau(t))\big), \qquad \dot{y}(t) = f_2\big(y(t),\, y(t-\tau(t))\big), \tag{2.1}$$

where x and y are the dynamical variables of the two systems governed by the vector fields f₁ and f₂, respectively, and τ(t) is a common chaotic time-delay signal driven by another chaotic system. The chaos-driven dynamical systems are illustrated in Figure 1. The two dynamical systems (2.1) are supposed to be identical but have different initial conditions.
For complete synchronization, it is assumed that f₁ ≡ f₂. In general, synchronization can be achieved only when there is an interaction between the dynamical systems (2.1). Since there is no direct coupling between x(t) and y(t), the interaction must be provided by the common time-delay modulation τ(t). Synchronization is said to occur if lim_{t→∞} ‖y(t) − x(t)‖ = 0 for any initial conditions, where ‖·‖ denotes the Euclidean norm. When the common delay time is constant, the two systems (2.1) are independent and never synchronize; we observe, in contrast, that when the common delay times are modulated in time, the two systems (2.1) do synchronize. We consider two cases in which the common time delay is modulated by a chaotic signal and by a random signal, respectively. We consider the system (2.1) in the form

$$\dot{x}(t) = -a\,x(t) + m_1\, f\big(x(t-\tau_1(t))\big). \tag{2.2}$$
The class of systems (2.2) covers many famous chaotic time-delayed systems, such as the Ikeda system [17], the Mackey-Glass system, the logistic system, and the prototype system. The linear stability of the complete synchronization state x(t) = y(t) is characterized by a quantity called the largest conditional Lyapunov exponent (LCLE). A precise and useful criterion for synchronization is the negativity of the LCLE.
Let Δ(t) = y(t) − x(t) be the synchronization error. Then the linearized error equation is obtained as

$$\dot{\Delta}(t) = -a\,\Delta(t) + m_1\, f'\big(x(t-\tau_1(t))\big)\,\Delta(t-\tau_1(t)),$$

and we define the LCLE as [18]

$$\lambda = \lim_{t\to\infty}\frac{1}{t}\,\ln\frac{\lVert \Delta(t)\rVert}{\lVert \Delta(0)\rVert}.$$
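A Benettin-style numerical estimate of the LCLE can be sketched as follows (an illustration, not the scheme of Ref. [18]): two copies of the response system are evolved under the same modulated delay, and the average exponential growth rate of their renormalized separation is accumulated. The `advance` and `norm` callables, which must carry the delay-history buffers, are assumptions of this sketch.

```python
import numpy as np

def lcle(advance, s_ref, s_pert, norm, n_steps=100000, dt=1e-3, d0=1e-8):
    """Benettin-style LCLE estimate: evolve two copies of the response
    system under the SAME modulated delay, renormalising their separation
    to d0 each step and averaging the log growth rate. `advance(state)`
    steps one copy by dt (the state must include the delay-history buffer);
    `norm(a, b)` measures the separation of two states."""
    acc = 0.0
    for _ in range(n_steps):
        s_ref, s_pert = advance(s_ref), advance(s_pert)
        d = norm(s_ref, s_pert)
        acc += np.log(d / d0)
        s_pert = s_ref + (s_pert - s_ref) * (d0 / d)  # renormalise separation
    return acc / (n_steps * dt)  # negative => the two copies synchronize
```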
Numerical Simulation
For the numerical simulation we consider two uncoupled identical time-delayed systems whose common delay time is modulated by a chaotic signal from another chaos-driven system. Take the Lorenz system [19] as the drive:

$$\dot{u} = \sigma(v-u), \qquad \dot{v} = r\,u - v - u\,w, \qquad \dot{w} = u\,v - b\,w. \tag{3.1}$$

The system (3.1) is chaotic for the parameter values σ = 10, r = 28, b = 8/3 with the initial condition u(0) = 0.4, v(0) = 0.5, w(0) = 2.02. Consider the coupled Ikeda system as the response system:
$$\dot{x}(t) = -a\,x(t) + m_1\sin x\big(t-\tau_1(t)\big), \qquad \dot{y}(t) = -a\,y(t) + m_1\sin y\big(t-\tau_1(t)\big). \tag{3.2}$$
Physically, x is the phase lag of the electric field across the resonator, a is the relaxation coefficient for the dynamical variable, and m₁ is the laser intensity injected into the system. τ₁ is the round-trip time of the light in the resonator, i.e., the feedback delay time in the coupled systems [20]. The Ikeda model was introduced to describe the dynamics of an optical bistable resonator and is well known for delay-induced chaotic behavior [20].
The systems are chaotic for the parameter values a = 1, m₁ = 4, and τ₁ = 2. At this point, the response systems (3.2) are independent time-delayed systems. We take the delay time in the form τ₁(t) = |w(t)|. The modulated delay time τ₁(t) as a function of time is presented in Figure 2(a). Figures 2(b) and 2(c) show the temporal behavior of the response systems (3.2), started from different initial conditions; each of the response systems is in a chaotic state. The corresponding synchronization error is shown in Figure 2(d). We emphasize that this phenomenon originates purely from the common delay-time modulation: if the modulation is turned off, the two response systems become two independent systems with a fixed time delay, and they cannot be synchronized. The largest conditional Lyapunov exponent is calculated and is negative, approximately −3.6784 × 10⁻².
Next we consider the Ikeda system with two time delays, which are modulated by two chaotic signals; the response system is given by Equation (3.3), and the corresponding results are shown in Figure 3. We then consider synchronization where the time delay is modulated by Gaussian noise. We take the system (3.2) with τ₁(t) = |ξ(t)|, where ξ(t) is Gaussian noise with ⟨ξ(t)⟩ = 0 and ⟨ξ(t)ξ(t′)⟩ = 2Nδ(t − t′), in which N is the noise intensity and ⟨·⟩ denotes the time average. We have integrated the above equations numerically using the standard Euler method [21]; specifically, the evolution algorithm reads

$$x(t+\Delta t) = x(t) + \Delta t\,\big[-a\,x(t) + m_1\sin x\big(t-\tau_1(t)\big)\big].$$

The time step used is Δt = 0.001, and the total simulation time is typically of the order of t = 10⁵. Figure 4(a) shows the variation of τ₁(t) under the noisy forcing, and the corresponding synchronization error is depicted in Figure 4(b).
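A minimal Python version of this simulation follows; the initial histories, buffer length, and total time are illustrative choices, not those of the paper.

```python
import numpy as np

# Euler simulation of two Ikeda responses sharing a Lorenz-modulated delay:
# tau1(t) = |w(t)|,  dx/dt = -a*x + m1*sin(x(t - tau1(t))).
a, m1, dt, T = 1.0, 4.0, 1e-3, 200.0
n = int(T / dt)
hist = int(60.0 / dt)                 # history buffer; |w| stays below ~60
u, v, w = 0.4, 0.5, 2.02              # Lorenz drive (sigma=10, r=28, b=8/3)
x = np.full(n + hist, 0.9)            # two responses with different
y = np.full(n + hist, -0.3)           # constant initial histories
for k in range(hist, n + hist - 1):
    du, dv, dw = 10.0 * (v - u), 28.0 * u - v - u * w, u * v - (8.0 / 3.0) * w
    u, v, w = u + dt * du, v + dt * dv, w + dt * dw
    lag = max(1, int(abs(w) / dt))    # tau1(t) = |w(t)| expressed in steps
    x[k + 1] = x[k] + dt * (-a * x[k] + m1 * np.sin(x[k - lag]))
    y[k + 1] = y[k] + dt * (-a * y[k] + m1 * np.sin(y[k - lag]))
print("final |x - y|:", abs(x[-1] - y[-1]))  # shrinks under modulation
```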
Relationship with Generalized Synchronization
It is very important to discuss the relationship between the above synchronization and generalized synchronization. Suppose that one wishes to determine whether there is generalized synchronization between two unidirectionally coupled systems X and Y. One can imagine an auxiliary response system Z, identical to Y and subject to the same driving signal. Regarding whether there is complete synchronization between Y and Z, Abarbanel et al. [22] showed that an affirmative answer would imply generalized synchronization between X and Y. In the above synchronization phenomenon, however, the common time delay is modulated by a chaotic forcing generated by another driving system, or by a noisy forcing. The functional dependence between the driving and response systems is not explicitly present; that is to say, the effective force acting on the two response systems is quite different from that of generalized synchronization. The feedback signal is not a common feeding signal, as in generalized synchronization, but is proportional to the value of the system's own state vector: in generalized synchronization the force is proportional to the driving signal as well as the response one, while in our case the feedback force is proportional to the response signal only. In this respect, the above synchronization can be classified as an extension of generalized synchronization.
Conclusions
We have investigated the synchronization between two identical time-delayed systems whose common delay time is modulated by chaotic or random signals. The condition for synchronization through the analysis of the largest conditional Lyapunov exponent has been discussed. The difference from previous studies is that in our analysis the driving common signal is fed into the delay time implicitly, whereas in previous studies the driving signal was introduced explicitly. We also discussed the relation to generalized synchronization. This type of chaos synchronization is highly applicable in population dynamics, neural networks, secure communication, and so forth.
Figure 2: (a) Variation of the modulated time delay τ₁(t); (b) temporal behavior of x(t) and (c) y(t); (d) corresponding synchronization error.
Figure 3: Variation of (a) x(t) and (b) y(t); (c) synchronization error.
Figure 4: (a) Time delay τ₁(t) modulated by noise with noise intensity N = 1.0; (b) corresponding synchronization error. | 2,296 | 2011-05-26T00:00:00.000 | ["Physics"] |
Exploring Continual Learning for Code Generation Models
Large-scale code generation models such as Copilot and CodeT5 have achieved impressive performance. However, libraries are upgraded or deprecated very frequently and re-training large-scale language models is computationally expensive. Therefore, Continual Learning (CL) is an important aspect that remains under-explored in the code domain. In this paper, we introduce a benchmark called CodeTask-CL that covers a wide range of tasks, including code generation, translation, summarization, and refinement, with different input and output programming languages. Next, on our CodeTask-CL benchmark, we compare popular CL techniques from NLP and Vision domains. We find that effective methods like Prompt Pooling (PP) suffer from catastrophic forgetting due to the unstable training of the prompt selection mechanism caused by stark distribution shifts in coding tasks. We address this issue with our proposed method, Prompt Pooling with Teacher Forcing (PP-TF), that stabilizes training by enforcing constraints on the prompt selection mechanism and leads to a 21.54% improvement over Prompt Pooling. Along with the benchmark, we establish a training pipeline that can be used for CL on code models, which we believe can motivate further development of CL methods for code models.
Introduction
Code generation models (Nijkamp et al., 2022b; Wang et al., 2021b; Le et al., 2022; Fried et al., 2022) can increase the productivity of programmers by reducing their cognitive load. These models require significant computation to train, as they have billions of parameters trained on terabytes of data. Hence, they are trained once and then used repeatedly for several downstream applications. However, as software development constantly evolves with new packages, languages, and techniques (Ivers and Ozkaya, 2020), it is expensive to retrain these models. Therefore, it is essential to continually improve these models to avoid errors, generate optimized code, and adapt to new domains and applications. We explore the continual learning (CL) (Ring, 1998; Thrun, 1998) abilities of code-generation models and aim to improve them. Specifically, we present a CODETASK-CL benchmark for code-based CL and aim to train a model on sequentially presented tasks with different data distributions without suffering from catastrophic forgetting (CF) (McCloskey and Cohen, 1989). This occurs when the model overfits the current task, resulting in a decline in performance on previously learned tasks.
Given the lack of CL benchmarks for the code domain, we create a benchmark called CODETASK-CL using existing datasets. It consists of tasks like code completion (Iyer et al., 2018, 2019; Clement et al., 2020), code translation (Chen et al., 2018; Lachaux et al., 2020), code summarization (Wang et al., 2020a,b), and code refinement (Tufano et al., 2019). This benchmark presents a new and challenging scenario, as it necessitates the adaptation of the model to varying input and output programming languages. Along with this benchmark, we also present a training framework to easily apply CL methods to code generation models.
Next, we evaluate the effectiveness of popular CL methods from the NLP and vision domains in the context of code generation models. We consider prompting methods (Wang et al., 2022b; Li and Liang, 2021a) and experience replay (De Lange et al., 2019) due to their good performance for pre-trained models (Wu et al., 2022a). We also experiment with Prompt Pooling (PP) (Wang et al., 2022c), an effective prompting-based method for CL in the vision domain. Our results show that Prompt Pooling suffers from catastrophic forgetting on our proposed CODETASK-CL benchmark because of the complex distribution shift from varying input and output programming languages across tasks. With further investigation, we find that the unconstrained prompt selection mechanism leads to unstable training. To address this, we propose Prompt Pooling with Teacher Forcing (PP-TF), which imposes constraints on prompt selection during training by assigning certain prompts to fixed tasks (see Figure 1). This results in stable training and better performance. Interestingly, we find that when a replay buffer is available, the simple experience-replay method (De Lange et al., 2019) outperforms other CL methods and achieves performance similar to a multitask baseline (Crawshaw, 2020) in which all tasks are provided at once.
In summary, our contributions include: (1) being the first study on CL for code generation tasks, (2) establishing a benchmark and a novel pipeline that supports CL for code generation to motivate future work, (3) identifying and addressing the unstable training issue of Prompt Pooling through our proposed method PP-TF, and (4) discussion on the best CL methods to use in different use cases.
Related Work
Code Generation Models. Code generation and language modeling for source code is an emerging research field experiencing active growth. Several model architectures have been examined recently, including encoder-only models (Feng et al., 2020; Guo et al., 2020), encoder-decoder models (Ahmad et al., 2021; Wang et al., 2021b), and decoder-only models (Nijkamp et al., 2022b; Chen et al., 2021; Nijkamp et al., 2022a). However, none of these models have been studied in the context of continual learning.
Continual Learning. There are various methods for continual learning (CL), and they fall into three categories: regularization, replay, and parameter-isolation methods. Regularization methods (Kirkpatrick et al., 2017; Zenke et al., 2017; Schwarz et al., 2018) assign importance to model components and add regularization terms to the loss function. Replay methods (De Lange et al., 2019; Rebuffi et al., 2017; Lopez-Paz and Ranzato, 2017; Chaudhry et al., 2018) retain a small memory buffer of data samples and retrain on them later to avoid catastrophic forgetting (CF). Parameter-isolation methods, such as prompting-based methods (Wang et al., 2022b,a; Li and Liang, 2021a; Liu et al., 2021; Qin and Eisner, 2021), introduce or isolate network parameters for different tasks. For a more comprehensive overview of CL methods, we refer the reader to Delange et al. (2021); Biesialska et al. (2020).
To the best of our knowledge, there are currently no studies or benchmarks for CL on code generation models. Therefore, we evaluate the effectiveness of prompting (Wang et al., 2022b; Li and Liang, 2021a) and experience-replay (Chaudhry et al., 2018; Buzzega et al., 2020) based methods, which have demonstrated strong performance in CL on large pretrained models (Raffel et al., 2020). We do not consider regularization methods, as they are not effective for continually learning large-scale pretrained models (Wu et al., 2022b). Next, we discuss our proposed benchmark and methods.
CODETASK-CL Benchmark
We present the CODETASK-CL benchmark to assess the CL abilities of code generation models. We also provide a novel training pipeline that can be used to continually train and evaluate code generation models. All of the datasets used to create the CODETASK-CL benchmark are available under the MIT license, and more details on the dataset splits and input-output domains are in Table 2. The code refinement task uses the dataset provided by Tufano et al. (2019), consisting of pairs of faulty and corrected Java functions.
Evaluation
Next, we define the metrics used to evaluate a model continually on these datasets. We follow Lu et al. (2021) and evaluate each task using BLEU (Papineni et al., 2002), and we follow Chaudhry et al. (2018) to evaluate the model's performance continually. We measure the average BLEU after learning all the tasks as

$$\langle \mathrm{BLEU} \rangle = \frac{1}{N}\sum_{k=1}^{N} b_{N,k},$$

where N is the total number of tasks and b_{i,j} represents the BLEU score on task j after learning task i. Additionally, we report the average forgetting metric, denoted by <Forget>, to assess the model's ability to retain performance on previously learned tasks. This metric is calculated as the average difference between the maximum accuracy obtained for each task t and its final accuracy,

$$\langle \mathrm{Forget} \rangle = \frac{1}{N}\sum_{t=1}^{N}\Big(\max_{i \in \{1,\ldots,N\}} b_{i,t} - b_{N,t}\Big).$$
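For concreteness, these two metrics can be computed from the matrix b of per-task BLEU scores as in the following sketch (the toy numbers are invented for illustration):

```python
import numpy as np

def average_bleu(b):
    """<BLEU>: mean of the last row, b[N-1, k] = BLEU on task k at the end."""
    return b[-1].mean()

def average_forgetting(b):
    """<Forget>: mean gap between each task's best and final BLEU
    (the last task contributes zero by construction)."""
    return (b.max(axis=0) - b[-1]).mean()

b = np.array([[40.0,  0.0,  0.0],   # toy 3-task matrix: b[i, j] = BLEU on
              [30.0, 35.0,  0.0],   # task j after learning task i
              [25.0, 30.0, 45.0]])
print(average_bleu(b), average_forgetting(b))  # 33.33..., 6.66...
```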
Prompt Pooling With Teacher Forcing
Prompt Pooling (Wang et al., 2022c) is a highly effective technique with two key benefits. Firstly, the number of prompts required does not increase linearly with the number of tasks. Secondly, the prompts within the pool can be utilized across multiple tasks, thereby enabling the reuse of previously acquired knowledge. These abilities are advantageous in real-world scenarios, particularly when a model needs to be continually adjusted to accommodate a large number of users or tasks.
In Prompt Pooling (PP), a set of learnable prompts P = {P_i}_{i=1}^{M} is defined and shared by multiple tasks. We follow Wang et al. (2022c) and utilize a query-and-key matching process to select the prompts for each task. This process has four steps: (1) a learnable key k_i ∈ R^d is defined for each prompt, resulting in a prompt pool of the form {(k_i, P_i)}_{i=1}^{M}; (2) a query function q(x) is defined, which takes an input x from a given task and produces a query vector q_x ∈ R^d; (3) the top-k keys are selected based on the cosine similarity between the query q_x and all the key vectors {k_i}_{i=1}^{M}; (4) the final input vector x_p is obtained by prepending the prompts corresponding to the selected keys to the example x. Then x_p is fed into the pre-trained model f, and we minimize the following loss function to optimize only the selected prompts and the corresponding keys, while keeping the pre-trained model fixed:
$$\mathcal{L} = \mathcal{L}_{LM}(x_p, y) + \lambda \sum_{k_i \in K_s} \big(1 - \cos(q_x, k_i)\big),$$

where L_LM is the language modeling loss, y is the target sequence given the input x, and K_s is the set of selected keys from step (3) above.
The query-key mechanism described above is an Expectation-Maximization (EM) (Moon, 1996) procedure: given an example, we first select the top-k keys based on cosine similarity (E-step) and then train these selected keys to pull them closer to the query (M-step). The training is stable when all tasks are learned jointly; in the CL context, however, tasks are trained sequentially, which makes the training unstable. Hence, we propose Prompt Pooling with Teacher Forcing (PP-TF), which removes the E-step by assigning each (k_i, P_i) pair to fixed tasks and only performs the M-step of optimizing the keys. To encourage knowledge sharing, we allow a few (k_i, P_i) pairs to be shared across tasks (see Figure 1). With these assignments in place, when training on task t, we use teacher forcing to select the top-k prompts that are assigned to the task. Thus, for learning task t, our loss function becomes

$$\mathcal{L} = \mathcal{L}_{LM}(x_p, y) + \lambda \sum_{k_i \in K_t} \big(1 - \cos(q_x, k_i)\big),$$

where K_t denotes the prompts assigned to task t for teacher forcing. As training progresses, the queries and keys learn to align in a stable manner, while information is still shared among tasks through the shared prompts. During inference, we discard the assignments and use cosine similarity to select the top-k pairs across the whole pool.
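A minimal PyTorch sketch of the selection step follows: during PP-TF training, the similarity scores of keys not assigned to the current task are masked out (teacher forcing), while at inference the mask is dropped and the whole pool competes. The tensor shapes and task assignment are illustrative.

```python
import torch

def select_prompts(query, keys, k, allowed=None):
    """Top-k key selection by cosine similarity. During PP-TF training,
    `allowed` restricts the choice to the keys assigned to the current
    task (teacher forcing); at inference it is None (whole pool)."""
    sim = torch.nn.functional.cosine_similarity(query[None, :], keys, dim=-1)
    if allowed is not None:                     # mask out other tasks' keys
        mask = torch.full_like(sim, float("-inf"))
        mask[allowed] = 0.0
        sim = sim + mask
    return sim.topk(k).indices                  # indices of (key, prompt) pairs

# Toy usage: a pool of 10 keys of dim 8; task 0 owns keys {0, 1, 2, 8, 9}.
keys = torch.randn(10, 8)
query = torch.randn(8)
print(select_prompts(query, keys, k=2, allowed=torch.tensor([0, 1, 2, 8, 9])))
```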
Experiments
We focus on the scenario of known task identities for continual learning. This is commonly the case in code-related domains, and task identities can also be determined through input and output analysis in certain situations. In the fields of NLP and vision, methods utilizing experience replay and prompting have been highly effective for CL on large pretrained models (Wang et al., 2022c, 2021a; Wu et al., 2022a). Moreover, regularization methods have been shown not to work well in conjunction with pre-trained models (Wu et al., 2022a), and hence we exclude them from our study. Next, we present the evaluated methods along with some baselines.
Baselines
Sequential Finetuning (Yogatama et al., 2019) updates all model parameters for every incoming task in a sequential manner. This approach has been shown to suffer from catastrophic forgetting and serves as a lower bound for CL methods.
Individual Models (Howard and Ruder, 2018) finetune a separate model for each new task. This is considered an upper bound for CL methods.
Multitask Learning (Crawshaw, 2020) learns multiple tasks simultaneously, without experiencing distribution shift, resulting in strong performance. For multitask learning, we prepend the task descriptors to the input and follow Wang et al. (2021b) to ensure balanced sampling across tasks with varying dataset sizes. Shared Prompt Tuning (SP) defines M soft continuous prompts (Li and Liang, 2021b), which are added and fine-tuned for each example from all tasks. They are trained via gradient descent while keeping the pretrained model's parameters fixed.
Task-Specific Prompt Tuning (TSPT) defines a total of M soft continuous prompts (Li and Liang, 2021b) that are divided across N tasks, resulting in ⌊M/N⌋ task-specific prompts. Experience Replay (ER) (Riemer et al., 2019) maintains a memory buffer B of examples from previous tasks. The buffer randomly stores an equal number of samples from each past task and is used to retrain the model at later stages. Moreover, as several of the other methods outlined in this study can benefit from ER, we also include results with and without its utilization.
Task-CL Experiments
We use the CodeT5 model (Wang et al., 2021b) as our pre-trained model when learning the CODETASK-CL benchmark. In Table 1, we report results for a single run of the methods described above and their ER variants. For more implementation details and the hyperparameters used, please refer to Appendix A.1. First, we find that the popular Prompt Pooling demonstrates catastrophic forgetting, with a test BLEU score of 22.79%. Even when using ER with PP, the performance is 39.78%, which is still much worse than other methods. In contrast, PP-TF, even without ER, outperforms PP and PP + ER by 21.54% and 4.55%, respectively. Moreover, our results show that the CodeT5 + ER method, which finetunes the full CodeT5 model with ER, performs best, with an average test BLEU score of 49.21%. Please refer to Appendix A.3 for experiments on the effect of buffer size on performance.
Discussion: We find that task-specific prompts are more effective than other prompting-based CL methods. However, due to their high storage requirements, which scale linearly with the number of tasks, this approach is not feasible for large-scale applications where the model needs to be adapted for a large number of users or tasks. Moreover, in many situations a memory buffer might not be available due to privacy concerns (Yoon et al., 2021); in such cases, PP-TF is the recommended method. Given these findings, we believe that current Prompt Pooling based methods can be further improved in order to reuse knowledge across tasks.
Training Instability of Prompt Pooling
To expose the root cause of catastrophic forgetting in Prompt Pooling, we evaluate how queries and keys align in the representation space after learning each task. To do so, we first select a subset of 5k training samples from each of the four tasks, resulting in 20k examples. We utilize a fixed CodeT5 encoder as our query function, which encodes the provided examples to obtain queries. These queries remain unchanged during training, and the keys are initialized using the data.
We then use principal component analysis (PCA) (Pearson, 1901) on the queries and keys to obtain the first three principal components and plot them.
After learning each task, we repeat the PCA step on the fixed queries and the updated prompt keys.
From Figure 2, we observe that before training starts, the keys (represented by red crosses) are evenly distributed among the queries of the different tasks. However, after training on the first task (CodeGen) is completed, most of the keys have moved toward the queries associated with the CodeGen task (denoted by orange stars). This indicates that the prompts corresponding to these keys were primarily used for, and trained by, the CodeGen task. As a large portion of the prompts from the pool is utilized during the training of the CodeGen task, few key vectors remain available for allocation to the second task (CodeTrans). As a result, when learning CodeTrans, some keys used for the previous task are pulled toward CodeTrans's queries and the corresponding prompts are updated. As each subsequent task is introduced, the key vectors are dynamically adjusted to align with the current task's queries, leading to an unstable matching process in which updates to the key-prompt pairs frequently conflict with previous tasks, hence causing catastrophic forgetting of the previous tasks.
Conclusion
In conclusion, we have introduced a novel benchmark, CODETASK-CL, tailored to cover a broad spectrum of tasks in the code domain, aiming to fuel advancements in Continual Learning (CL) for large-scale code generation models. Our study underscores the shortfalls of popular CL methods like Prompt Pooling when applied to coding tasks, predominantly due to catastrophic forgetting. However, we demonstrate that our proposed method, Prompt Pooling with Teacher Forcing (PP-TF), can effectively mitigate this issue, leading to a significant improvement of 21.54% over the baseline. Furthermore, we establish a comprehensive training pipeline catering to CL on code models. We believe that our contributions, both in the form of the CODETASK-CL benchmark and the PP-TF method, will ignite further exploration and innovation in CL techniques specifically designed for the dynamic and evolving realm of code generation.
Limitations
This work primarily focuses on evaluating the efficacy of existing continual learning (CL) methods for code generation models. It is important to note that many of these methods were specifically designed for the natural language processing or computer vision domains and may not directly transfer to the code generation domain. Nevertheless, we have made efforts to identify and address any issues encountered during our analysis. It should be acknowledged, however, that the scope of our work is limited by the selection of methods and the benchmark used. While we have utilized the most popular CL methods from various categories, there may be methods not included in this study, due to their inefficacy in natural language processing or computer vision tasks, that may nonetheless be effective for code generation. As such, we encourage further research within the community to explore the potential of CL methods for code-generation models.
Figure 1: We show the process of prompt selection for Prompt Pooling with Teacher Forcing when learning multiple tasks sequentially. First, we initialize the prompt pool with (key, prompt) pairs (denoted by rectangles). Next, each (key, prompt) pair is assigned to either a single task or is shared by two tasks (denoted by colors). When learning Task 1 (green), we obtain the query (green circle) for a given example and select the top-k (here k = 2) pairs from the assigned (key, prompt) pairs, highlighted in the figure. These selected pairs are then trained on the example. A similar process is followed for subsequent tasks. During inference, we remove task assignments and select the top-k pairs across all the pairs.
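A minimal sketch of the selection rule the caption describes, assuming cosine-similarity matching between a query and the pool's keys; the tensor names and the boolean task-assignment mask are illustrative assumptions:

```python
import torch

def select_prompts(query, keys, prompts, allowed, k=2, training=True):
    """Pick the top-k (key, prompt) pairs for one example.

    query:   (d,) encoder representation of the example
    keys:    (P, d) learnable key vectors
    prompts: (P, L, d) soft prompts paired with the keys
    allowed: (P,) bool mask of pairs assigned to the current task
             (teacher forcing); ignored at inference time
    """
    sim = torch.nn.functional.cosine_similarity(query.unsqueeze(0), keys, dim=-1)
    if training:
        sim = sim.masked_fill(~allowed, float("-inf"))  # restrict to assigned pairs
    top = sim.topk(k).indices
    return prompts[top]  # (k, L, d), prepended to the input embeddings
```

At inference, `allowed` is simply not applied, matching the caption's "remove task assignments and select the top-k pairs across all the pairs".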
Figure 2: We plot the evolution of keys during the training process, along with the fixed queries, when sequentially learning Code Generation → Code Translation → Code Summarization → Code Refinement tasks.
Table 1: BLEU scores on the test set for the individual tasks, together with average BLEU (↑) and Forgetting (↓) metrics, after sequentially learning Code Generation → Code Translation → Code Summarization → Code Refinement tasks. Columns: Replay [5k], Code Gen., Code Trans., Code Summ., Code Ref., <BLEU Test>, <BLEU Val>, <Forget Val>.
"Computer Science"
] |
Gauge theory applied to chiral magnets
This paper employs a non-abelian gauge theory to derive the relation between a chiral crystal structure and the bulk magnetic Dzyaloshinskii-Moriya interaction (DMI) energy term. We apply the method to the B20 chiral compounds, in which the chirality develops along the diagonals of the cubic crystal, and we derive, in this framework, the corresponding isotropic Lifshitz invariant. https://doi.org/10.1063/9.0000322
I. INTRODUCTION
MnSi, FeGe, MnGe and Fe(Co)Si, belonging to the B20 crystal structure type, exhibit non-trivial magnetisation configurations. [4][5][6] The interest in such magnetic states lies in the possibility of manipulating skyrmions as logic bits through the spin-transfer torque effect, driven by a low electric current density. 7,8 In thin films and multilayers, the symmetry is broken at the interface between two metals, and at the interface between metals and ferromagnets, due to the spin-orbit interaction. [10][11][12][13][14] A similar role is played by the presence of an external electric field. 15 In bulk crystals, such as the non-centrosymmetric B20 compounds, the crystal possesses chirality in specific directions, and the relevant Lifshitz invariant term of diagonal type stabilises both Bloch-type skyrmions and helical states. The corresponding interaction is the so-called bulk DMI. This interaction should be related to the point group symmetry of the crystal structures; however, this connection is not always evident. 16,17 This paper employs a non-abelian gauge theory to address the problem. [19][20] In the case of a pure gauge field, the critical outcome of the theory is the substitution of the traditional spatial derivative with a gauge covariant derivative. In the micromagnetic energy functional, this substitution affects the exchange interaction term, which is usually proportional to the square of the spatial derivatives of the magnetisation vector. Therefore, if the substituted exchange term is expanded to recover the original term, one finds two additional energy terms: a DMI-like term and an anisotropy-type term. We apply the theory to non-centrosymmetric crystals and exploit Neumann's principle, 21 which states that the invariance of a crystal under the action of its own point group symmetry implies the invariance of its physical properties. In the specific case of B20 compounds with tetrahedral symmetry, we show that the corresponding energy term is the bulk DMI one. The structure of the paper is the following: in Section II we describe the chiral properties of the B20 compounds, in Section III we introduce the Lifshitz invariants, and finally, in Section IV, we present the gauge covariant derivative to be associated with the chiral tetrahedral point group symmetry.
II. CRYSTAL SYMMETRY
A. Chiral tetrahedral point group symmetry
Chiral crystals can occur in space groups (SGs) containing only proper symmetry elements (rotations, translations and roto-translations), i.e., those that do not contain any mirror operation or inversion point. These space groups, called Sohncke groups, can be further divided into chiral SGs (i.e., the 11 pairs of SGs that are in enantiomorphic relation) and non-chiral SGs. 22 Achiral space groups belonging to the Sohncke class contain a 2₁ screw axis and produce chiral crystals only because the asymmetric unit is chiral: crystals of this type can thus be right-handed or left-handed without changing SG, which is the reason why the SG is classified as achiral. On the other hand, chiral SGs contain at least one screw axis different from the 2₁ axis, and they allow the chiral crystallisation even of achiral building blocks, since the SG permits only a clockwise or an anticlockwise rotation. In this case, left-handed and right-handed crystals are classified under different SGs of one of the enantiomorphic pairs, e.g., P3₁ and P3₂. In the next section, we turn our attention to the cubic B20 structure type, in which well-known chiral structures crystallise, such as the monosilicides and monogermanides of the transition metals (TM), among which MnSi, FeGe, MnGe and Fe(Co)Si exhibit magnetic ordering. The B20 structure is described by the space group P2₁3 (#198 of the International Tables for Crystallography), which exhibits the tetrahedral symmetry of the point group 23 (i.e., the chiral tetrahedral point group), characterised by three orthogonal 2-fold rotation axes and four 3-fold axes along the cube body diagonals. All twelve symmetry operators of the crystal class can be obtained by multiplying only two generating matrices representing the described rotations. 23 In the Appendix, the two generators of an irreducible representation of the tetrahedral point group 23 (i.e., the group of symmetry operations that leave one point fixed) are reported.
B. B20 chiral crystals
The B20 structure compounds were synthesised for a few transition metals (TMs) with group 14 elements (especially Si or Ge). The unit cell contains 4 TM atoms and 4 Si(Ge) atoms placed in a tetrahedral position along the cube body diagonal. By looking at the crystal from different perspectives (Fig. 1), one can notice the peculiar features related to the crystal chirality: i) the (112) projection reveals that atoms of both species are placed on (111) planes in alternating dense and sparse layers; ii) from the (111) projection, one can see that both TM and Si (or Ge) atoms form helical structures of opposite handedness, with an axis parallel to one of the cube diagonals; and iii) each helix involves only one atom from each of the dense layers, while atoms of the sparse layers do not participate in any of the helices. By looking at the TM species only, one can classify the chirality of the crystal. The specular crystal with opposite chirality can also occur in the same space group, since P2₁3 is an achiral Sohncke SG. B20 compounds are therefore characterised by an intrinsic handedness due to the crystal structure, with each atomic species forming helices of characteristic period $\sqrt{3}\,l_a$ (where $l_a$ is the lattice parameter) in the <111> directions. From the magnetic point of view, the helix of magnetic moments that develops in B20 compounds with magnetic ordering may or may not display the same handedness as the crystal chirality, depending on the composition of the specific compound, 24 and it typically shows a period 1-2 orders of magnitude larger (10-230 nm), 25 revealing that the connection between atomic arrangement and magnetic configuration is not straightforward. However, the tetrahedral crystal symmetry has a definite role in determining the magnetic structure, since the magnetic helicity has been observed to propagate along the cube body diagonals. 26
III. LIFSHITZ INVARIANTS
The DMI-like energy term is described by the Lifshitz invariants $L_{ijk} = m_k \partial_i m_j - m_j \partial_i m_k$ and, because of the antisymmetry $L_{ijk} = -L_{ikj}$, can be written as a contraction with a coefficient matrix $T_{il}$ (see the reconstruction below), where $T_{il}$ is a $3 \times 3$ matrix that we can subdivide into antisymmetrical, symmetrical, and diagonal parts, $T_{il} = T_{A,il} + T_{S,il} + T_{D,il}$. The antisymmetrical part $T_{A,il} = -T_{A,li}$ can be written as $T_{A,il} = \epsilon_{ilk} T_{A,k}$, where the $T_{A,k}$ are components of a vector. These antisymmetric terms correspond to the so-called surface DMI terms and stabilise magnetic structures with a Néel-type chirality. This can be seen, for example, by choosing $T_k = T_z$, $\partial_i = \partial_x$ and $m_y = 0$. In this case, the minimum-energy configuration, considering the competition with the exchange energy $A(\partial_i m_j)^2$, corresponds to a spiral state with wave-number $q_x = T_z/A$, i.e., a positive rotation around the y-axis if $T_z > 0$ (Fig. 2, top). The symmetric terms correspond to an anisotropic modification of the previous terms and will not be considered here. The diagonal part of the tensor gives rise to the bulk DMI energy terms; its compact form and expansion are given below. In a material with an easy-axis anisotropy along the z direction, the terms $T_{xx}$ and $T_{yy}$ produce a Bloch-type chirality. For example, choosing $T_{xx}$, $\partial_i = \partial_x$ and $m_x = 0$, the corresponding stable state is a spiral state with $q_x = T_{xx}/A$ (Fig. 2, middle); $T_{xx} > 0$ then produces a negative rotation around the x-axis. In a material with a hard-axis anisotropy along the z direction, the term $T_{zz}$ produces a helical state: choosing $\partial_i = \partial_z$ and $m_z = 0$, the stable state is a helix with $q_z = -T_{zz}/A$, corresponding to a negative rotation around the z-axis for $T_{zz} > 0$ (Fig. 2, bottom). Considering the case with all coefficients equal to each other, i.e., $T_{xx} = T_{yy} = T_{zz} = T$, the sum of the three diagonal terms gives rise to the isotropic bulk DMI energy density. The bulk DMI terms generated by the diagonal part of the tensor of Lifshitz invariants are those expected from the chiral properties of the crystal. Nevertheless, as the description in terms of the invariants is thoroughly phenomenological, the coefficients should be derived from the specific symmetries possessed by the crystal.
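The display equations of this section did not survive extraction; a plausible reconstruction, consistent with the inline definitions above and with standard micromagnetic conventions (hedged, not the authors' verbatim equations), is:

```latex
% DMI-like energy density as a contraction of the Lifshitz invariants:
e_{\mathrm{DMI}} \;=\; \sum_{i,l} T_{il}\,\big(\mathbf{m}\times\partial_i\mathbf{m}\big)_l ,
\qquad \big(\mathbf{m}\times\partial_i\mathbf{m}\big)_l \propto \epsilon_{ljk}\,L_{ijk} .
% Diagonal (bulk) part, written out:
e_{\mathrm{D}} \;=\; T_{xx}\,(m_y\partial_x m_z - m_z\partial_x m_y)
              + T_{yy}\,(m_z\partial_y m_x - m_x\partial_y m_z)
              + T_{zz}\,(m_x\partial_z m_y - m_y\partial_z m_x) .
% Isotropic case T_{xx}=T_{yy}=T_{zz}=T (overall sign depends on convention):
e_{\mathrm{iso}} \;=\; -\,T\,\mathbf{m}\cdot(\nabla\times\mathbf{m}) .
```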
IV. GAUGE, POINT GROUP SYMMETRY AND NEUMANN'S PRINCIPLE
The purpose of this section is to introduce a non-abelian gauge field $A_i$ associated with the non-abelian SO(3) group (i.e., the group of rotations in Euclidean space). Such a non-abelian gauge field imposes a modification of the partial derivatives that occur in the exchange energy term of the classical micromagnetic energy. With this method and Neumann's principle, we directly find the relationship among the gauge covariant derivative, the point group symmetry of the material, and the Lifshitz invariants. Firstly, the gauge covariant derivative is introduced through the general concept of non-abelian gauge fields, as in Refs. 15 and 28-31. Secondly, the non-abelian gauge field transforms according to the rule $A_i \to R^{\tau} A_i R + R^{\tau} \partial_i R$, with $R$ an element of the SO(3) group, 32 represented as a rotation matrix parametrised by the rotation angle $\psi$ and the rotation axis $\mathbf{n}$. Moreover, we assume the relation $J_l = i\Sigma_l$, where the $J_l$ matrices are the generators of the Lie algebra of the SO(3) group, $[J_l, J_k] = i\epsilon_{lki} J_i$. However, we limit our attention to the vacuum state of the system, classically a pure gauge field configuration (i.e., $A_i \to R^{\tau} \partial_i R$).
Hence the pure gauge covariant derivative assumes a form parametrised by $\boldsymbol{\psi}$, a rotation-axis vector whose modulus quantifies the rotation angle around it (see the reconstruction below). Finally, the micromagnetic energy, written in a gauge-invariant fashion taking into account the generalised exchange interaction, follows from this substitution (Eq. (9)). We now focus our attention on the DMI-like term in the second row of Eq. (9), where $L_{ijk}$ is the Lifshitz invariant. The coefficient $\partial_i \psi_l$ corresponds to the $T_{il}$ of Section III. By choosing as axes of rotation the four diagonals of the B20 crystal, along which the chirality develops, one can obtain the corresponding energy of the bulk DMI type. The phenomenological constitutive properties of the B20 crystal regarding the DMI-like energy term are described by the matrix $T_{il}$. The B20 crystal is invariant under the action of an irreducible representation of the generators of the point group symmetry 23, $R = R_{lk}(\mathbf{n}, \psi)\, e_l \otimes e_k$. Neumann's principle applied to the DMI tensor requires its invariance under these generators (Eq. (11)). By solving the linear system of Eq. (11) (see Appendix), we obtain the bulk DMI-like energy term: the permitted coefficients of the matrix $T_{il}$ are only diagonal terms with equal values.
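The displays of this section were likewise lost; a hedged reconstruction, consistent with the surrounding text (pure gauge field, local rotation vector $\boldsymbol{\psi}$) and valid to lowest order in $\boldsymbol{\psi}$, is:

```latex
% Gauge covariant derivative and its pure-gauge form:
D_i \mathbf{m} = \partial_i \mathbf{m} + A_i\,\mathbf{m},
\qquad A_i = R^{\tau}\partial_i R ,
% to lowest order in the local rotation \boldsymbol{\psi}:
A_i\,\mathbf{m} \simeq (\partial_i \boldsymbol{\psi}) \times \mathbf{m} .
% The gauge-invariant exchange term then expands as
A\,(D_i \mathbf{m})^2 \;=\; A\,(\partial_i \mathbf{m})^2
  \;+\; 2A\,(\partial_i \psi_l)\,(\mathbf{m}\times\partial_i\mathbf{m})_l
  \;+\; A\,\big((\partial_i \boldsymbol{\psi})\times\mathbf{m}\big)^2 ,
% i.e. the original exchange term, a DMI-like term with T_{il} \propto \partial_i \psi_l,
% and an anisotropy-type term, as stated in the text.
```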
V. DISCUSSION AND CONCLUSIONS
Starting from general considerations about the irreducible representation of the point group symmetry of the B20 structure, we have provided a detailed construction of a non-abelian gauge theory applied to this class of crystals. The presented results give insight into the relationship between the Lifshitz invariants and the gauge covariant derivative, providing a connection between the partial derivatives of the local rotation, $\partial_i \psi_l$, and the DMI tensor $T_{il}$, by using Neumann's principle. The construction shown can easily be extended beyond the tetrahedral point group symmetry 23 to other point groups. Moreover, this model may be extended by adding the permitted magnetocrystalline anisotropy, although further analysis within this framework will be required to include, for example, the fourth-order magnetocrystalline anisotropy term.
APPENDIX
We derive the DMI-tensor explicitly for a cubic crystal with the point group symmetry 23 (i.e., the chiral tetrahedral point group). An irreducible representation of the tetrahedral point group consists of two linearly independent elements able to generate the whole symmetry class of the crystal, constituted by 12 elements. Moreover, since the tetrahedral group 23 is the rotation symmetry group of the regular tetrahedron, we use the same notation for the rotation matrices, each specified by its rotation axis $\mathbf{n}$ and angle: the first generator is the 2-fold rotation about $e_z$, the second a 3-fold rotation about a cube body diagonal. We now apply Neumann's principle 21 and explicitly compute the components of the DMI-tensor $T$: invariance under the first generator gives $T = R^{(1)}(e_z, \pi)\, T\, R^{(1)\tau}(e_z, \pi)$ (A2), and invariance under the second generator gives the analogous constraint (A3). In order to fulfil both constraints in Eqs. (A2) and (A3), the DMI-tensor must contain only equal diagonal terms, $T_{il} = T\,\delta_{il}$.
FIG. 1. A B20 crystal formed by 3×3×3 unit cells viewed along the [111] or [112] directions. The three topmost dense (111) planes containing TM atoms (green) are displayed in color. For example, the atoms involved in one of the right-handed helices of the TM species pointing in the [111] direction have been highlighted with yellow crosses. Such helices are present along each of the 4 cube body diagonals. The size of the TM atoms has been slightly increased to better display the helix chirality. Pictures of the crystal structures have been produced with the Vesta 3 software. 27
FIG. 2. Static configurations stabilised by the Lifshitz invariants of the DMI, represented along one representative axis. Top: the so-called surface DMI term, i.e., the antisymmetric Lifshitz invariants, generates the Néel-type chirality. Middle and bottom: Bloch-type chirality (middle) and helix (bottom) stabilised by the bulk DMI, i.e., the diagonal Lifshitz invariants.
"Physics"
] |
Anisotropic Dzyaloshinskii-Moriya interaction protected by D2d crystal symmetry in two-dimensional ternary compounds
Magnetic skyrmions, topologically protected chiral spin-swirling quasiparticles, have attracted great attention in fundamental physics and applications. Recently, the discovery of two-dimensional (2D) van der Waals (vdW) magnets has aroused great interest due to their appealing physical properties. Moreover, both experimental and theoretical works have revealed that isotropic Dzyaloshinskii-Moriya interaction (DMI) can be achieved in 2D magnets or ferromagnet-based heterostructures. However, 2D magnets with anisotropic DMI have not yet been reported. Here, using first-principles calculations, we unveil that anisotropic DMI protected by D2d crystal symmetry can exist in the 2D ternary compounds MCuX2. Interestingly, using micromagnetic simulations, we demonstrate that ferromagnetic (FM) antiskyrmions, FM bimerons, antiferromagnetic (AFM) antiskyrmions and AFM bimerons can be realized in the MCuX2 family. Our discovery opens up an avenue to creating antiskyrmions and bimerons with anisotropic DMI protected by D2d crystal symmetry in 2D magnets.
INTRODUCTION
Topologically non-trivial magnetic structures such as chiral domain walls, 1 merons, 2 bimerons 3,4 and skyrmions 5,6 have attracted great research interest due to their rich physical properties and widespread application prospects in spintronic devices. Among these spin textures, magnetic skyrmions have been extensively studied due to their small size, low energy consumption and low driving current density. 7 Skyrmions stabilized by DMI have been reported in ultrathin epitaxial Au/Co/W(110) with C2v crystal symmetry, 22 and antiskyrmions with anisotropic DMI have been reported in acentric tetragonal Heusler compounds with D2d crystal symmetry 23,24 and in non-centrosymmetric tetragonal structures with S4 crystal symmetry. 25 In parallel with these intensive studies of skyrmions in traditional bulk and multilayer thin films, 2D magnets with long-range magnetic ordering, e.g., Fe3GeTe2, 26 CrI3, 27 CrGeTe3, 28 MnSe2 29 and VSe2, 30 have been extensively reported in the last few years, providing an ideal platform to study fundamental properties of magnetism, such as magneto-optical and magnetoelectric effects, for ultracompact spintronics in reduced dimensions.
Moreover, recent works have proposed that Néel-type skyrmions with isotropic DMI can be realized in 2D Janus magnets, e.g., MnXY 31 and CrXY, 32 and in multiferroic structures, e.g., CrN, 33 BaTiO3/SrRuO3, 34 and In2Se3/MnBi2Se2Te2. 35 However, it is worth noting that anisotropic DMI has not yet been reported in 2D magnets. Different from previous materials, whose isotropic DMI vectors are the same along the x and y directions, MCuX2 with its special crystal symmetry has an anisotropic DMI vector. The skyrmion Hall effect causes FM skyrmions with opposite topological charges to propagate in opposite directions, instead of moving parallel to the injected current. The antiskyrmion Hall angle depends strongly on the direction of the applied current relative to the internal spin texture of the antiskyrmion. When a spin-polarized current is applied to drive an antiskyrmion, the propagation direction of the antiskyrmion can follow the current direction without the topological skyrmion Hall effect. [36][37][38][39][40][41] Therefore, it is possible to achieve a zero antiskyrmion Hall angle for a critical current direction.
In experiments, in order to discover 2D materials, much effort has been devoted to finding materials with weak interlayer bonds, which allow their exfoliation down to a single layer by mechanical or liquid-phase approaches. 42 Under the influence of the crystal field, the TM d orbitals split into lower e and higher t2 levels. The t2-mediated super-exchange interaction favors the appearance of FM coupling, while direct exchange between d orbitals prefers AFM coupling. When the d orbital is more than half-filled with electrons, the AFM coupling mainly benefits from direct exchange. Thus, in the MCuX2 family with d orbitals no less than half-filled, the competitive AFM coupling is stronger. Similar results are also reported for the zinc-blende binary transition metal compounds. 50 Collecting the magnetic moments of all 3d TM atoms in the MCuX2 monolayers [see Table 1], we find that Mn atoms have the highest magnetic moments, which decrease monotonically on both sides of Mn. The overall trend across the 3d TM row obeys Hund's rule. 51 Figure 2 shows the calculated NN DMI of the MCuX2 structures based on the chirality-dependent total energy difference approach. 52 We find that all systems have anisotropic DMI: dx and dy have opposite signs along the x and y directions, which is consistent with the DMI analysis at the beginning. Besides, the DMI strength varies from 0 to 15 meV/atom. These DMIs are very large compared to many state-of-the-art FM/HM heterostructures and 2D Janus structures, e.g., Co/Pt (∼3.0 meV) 52 (see Supplementary Figure 3). The main reason for the small values in the Ni-based systems is that the magnetic moment of the Ni atoms is very small, which leads to a small contribution to the DMI. We also checked U values from 2 to 4 eV, and very small magnetic moments are still obtained.
DMI of monolayer MCuX2
Next, according to Moriya's rules, 10 the DMI vector lies in the plane perpendicular to the propagation direction <110>. However, we ignore the NNN DMI in our theoretical calculations because, using the four-state energy-mapping analysis, 55 we find that the DMI between NN and NNN atoms differs by about two orders of magnitude, e.g., in VCuSe2 and MnCuSe2 (NNN DMI of −0.082 meV and −0.075 meV, respectively). Although the NNN DMI is neglected, we still observe the Bloch-type helicoid magnetic structure in the results of the micromagnetic simulation.
Chiral spin textures of monolayer MCuX2
Furthermore, we perform atomistic micromagnetic simulations with the VAMPIRE software package, 56 using the first-principles materials parameters shown in Table 1. To obtain the dynamics of the magnetization, the Landau-Lifshitz-Gilbert (LLG) equation with Langevin dynamics was used, where $\mathbf{S}_i$ is the unit spin vector of the ith magnetic atom and $\gamma$ is the gyromagnetic ratio (see the reconstruction below). In addition, we also calculate the magnetic parameters J1, J2, K and D of monolayer VCuSe2 as the tensile strain increases from 1% to 5%, as shown in Table 2. We find that the NN and NNN FM exchange coupling strengths decrease substantially while the DMI changes only slightly, resulting in a large D/J ratio. Therefore, antiskyrmions with smaller diameters are achieved under tensile strain. The phase diagram of the VCuSe2 monolayer under different strains and temperatures is shown in Figure 5.
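The LLG equation itself was lost in extraction; the standard form integrated by VAMPIRE with Langevin dynamics, a hedged reconstruction in which the damping constant $\lambda$ and the thermal field $\boldsymbol{\xi}_i(T)$ are assumptions rather than symbols recovered from the paper, is:

```latex
\frac{\partial \mathbf{S}_i}{\partial t}
  = -\frac{\gamma}{1+\lambda^{2}}
    \left[ \mathbf{S}_i \times \mathbf{H}^{\mathrm{eff}}_i
         + \lambda\, \mathbf{S}_i \times \left( \mathbf{S}_i \times \mathbf{H}^{\mathrm{eff}}_i \right) \right],
\qquad
\mathbf{H}^{\mathrm{eff}}_i = -\frac{1}{\mu_s}\,\frac{\partial \mathcal{H}}{\partial \mathbf{S}_i}
  + \boldsymbol{\xi}_i(T),
```

where $\mu_s$ is the atomic spin moment, $\mathcal{H}$ is the spin Hamiltonian given in the next section, and $\boldsymbol{\xi}_i(T)$ is the Gaussian stochastic field of the Langevin thermostat.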
DFT calculations
First-principles calculations are carried out based on density functional theory (DFT) as implemented in the Vienna ab-initio Simulation Package (VASP). 61 We adopt the Perdew-Burke-Ernzerhof (PBE) functional of the generalized gradient approximation (GGA) 62 as the exchange-correlation potential, and use the projector augmented-wave (PAW) method 63,64 to treat the interaction between core and valence electrons. We set a cutoff energy of 520 eV for the plane-wave basis set, and a 24×24×1 Γ-centered k-point mesh for the Brillouin zone integration. Partially occupied d orbitals of transition metal atoms are treated by GGA+U 65 with U = 3 eV for the 3d orbitals of the M and Cu elements. We set a vacuum layer with a thickness of 25 Å in the z direction to ensure that there is no interaction between the periodic images. The convergence criteria for the total energy in the ionic relaxation and for the Hellmann-Feynman forces between atoms were set to 1×10⁻⁷ eV and 0.001 eV/Å, respectively. To describe our magnetic system, we adopt the Hamiltonian model below, where $\mathbf{S}_i$ ($\mathbf{S}_j$) is the unit spin vector of the ith (jth) magnetic atom, J1 and J2 represent the exchange coupling constants between nearest-neighbor (NN) and next-nearest-neighbor (NNN) atoms, respectively, K is the magnetic anisotropy constant, and $\mathbf{D}_{ij}$ are the DMI vectors. The methods used to calculate J, K and D are described in the experimental section.
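The Hamiltonian display is missing from the extracted text; given the parameters just defined, a standard form consistent with the description (hedged; the paper's sign conventions may differ) is:

```latex
\mathcal{H} \;=\; -J_1 \sum_{\langle i,j \rangle} \mathbf{S}_i \cdot \mathbf{S}_j
            \;-\; J_2 \sum_{\langle\langle i,j \rangle\rangle} \mathbf{S}_i \cdot \mathbf{S}_j
            \;-\; K \sum_i \big( S_i^{z} \big)^2
            \;+\; \sum_{\langle i,j \rangle} \mathbf{D}_{ij} \cdot \big( \mathbf{S}_i \times \mathbf{S}_j \big) ,
```

with positive J favouring FM alignment and positive K an out-of-plane easy axis, matching the conventions stated in the text.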
Magnetic parameters
Exchange coupling constants: we construct a 2×2×1 supercell to study three different magnetic configurations, whose total-energy differences yield J1 and J2 by energy mapping; a positive/negative value corresponds to FM/AFM coupling.
Magnetic anisotropy energy K: the magnetic anisotropy energy is defined as the energy difference between the in-plane magnetized [100] axis and the out-of-plane magnetized [001] axis. NN Dzyaloshinskii-Moriya interaction (NN-DMI) D: we performed DMI calculations using the chirality-dependent total energy difference method. 52 First, a 4×1×1 supercell is constructed to obtain the ground-state charge distribution by solving the Kohn-Sham equations in the absence of spin-orbit coupling (SOC). Then, SOC is included, and we impose spin spirals to determine the self-consistent total energies for clockwise and anticlockwise rotations. Finally, the energy difference between the clockwise and anticlockwise rotations is calculated to obtain the anisotropic D, via the formula given below.
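The formula itself is missing from the extraction; in the chirality-dependent total-energy-difference method, the DMI component is commonly obtained as (hedged reconstruction; the prefactor m depends on how many DMI-active bonds the 4×1×1 supercell contains):

```latex
d \;=\; \frac{E_{\mathrm{CW}} - E_{\mathrm{ACW}}}{m} ,
```

where $E_{\mathrm{CW}}$ and $E_{\mathrm{ACW}}$ are the self-consistent total energies of the clockwise and anticlockwise spin spirals.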
Phonon spectrum
The phonon spectra are calculated using the PHONOPY code. 66
"Physics"
] |
A mixed-methods evaluation of community-based healthy kitchens as social enterprises for refugee women
Background The aim of this study is to investigate the potential impact of a community-based intervention - the Healthy Kitchens, Healthy Children (HKHC) intervention - on participating women's households' economics and food security status, decision making, mental health and social support. Methods We established two healthy kitchens in existing community-based organizations in Palestinian camps in Lebanon. These were set up as small business enterprises, using participatory approaches to develop recipes and train women in food preparation, food safety and entrepreneurship. We used a mixed-methods approach to assess the impact of participating in the program on women's economic, food security, decision making, social and mental health outcomes. A questionnaire was administered to women at baseline and at an 8-month endpoint. The endline survey was complemented by a set of embedded open-ended questions. Results Thirty-two Palestinian refugee women were employed within the kitchens on a rotating basis. Participating women had a 13% increase in household expenditure. This was translated into a significant increase in food (p < 0.05) and clothing expenditures (p < 0.01), as well as a reduction in food insecurity score (p < 0.01). These findings were supported by qualitative data, which found that the kitchens provided women with financial support in addition to a space to form social bonds, discuss personal issues and share experiences. Conclusions This model created a social enterprise using the concept of community kitchens linked to schools and allowed women to significantly contribute to household expenditure and improve their food security.
Background
The Palestinian refugee presence in Lebanon dates back to 1948, with the majority of refugees living in urban camps with deteriorating infrastructure. In Lebanon, Palestinians face social and political exclusion, including restrictions on employment [1], and have fragile livelihoods and high rates of poverty. Almost two-thirds of the population lives below the poverty line, and the severity of household food insecurity is highly sensitive to changes in household income [1]. Female-headed households are particularly at risk of food insecurity, as women in this population experience high rates of unemployment [1]. The female labor force participation rate is also low, at 17%, likely related both to gender norms and low education levels [2,3]. In addition, food insecurity has been shown to negatively affect dietary diversity as well as physical, mental and social health in Palestinian refugees living in Lebanon [4].
One intervention with promise to address these issues is participation in community kitchens. Community kitchens are defined as community-based cooking programs which aim to enhance food preparation skills [5]. Most commonly, participants in community kitchens are trained in budgeting, menu planning, food hygiene, cooking skills, and may also receive nutrition education.
Community kitchens involve regular meetings of participants to prepare meals that are then shared. The main differences between community kitchens and other food assistance programs are their collaborative, participatory aspects and their potential to foster social skills and social support [5]. Evidence for the impact of community kitchens comes almost exclusively from interventions implemented for low-income communities in high-income countries (Canada, Australia, Scotland), and highlights their role in increasing self-efficacy, social engagement, access to employment, and mental health [6][7][8]. A systematic review showed that by building a safe environment and decreasing social isolation, community kitchens enhance social interaction [7].
Findings regarding the effects of community kitchens on food security and nutritional status are however, less conclusive. Several studies report improvements in knowledge about nutrition, healthy food purchases and practices as well as increased dietary diversity [6,[9][10][11]. One study showed that participating in community kitchens was associated with improved short-term food security, decreased food security-related psychological stress, and increased awareness of food-related issues [12]. However, this study and others cite the need for further investigation of the effects of community kitchens on long-term food security, particularly as community kitchens do not significantly change the economic status of households, and thus have limited capacity to improve food security status [10,12,13]. Two reviews of the impact of community kitchens have concluded that there is insufficient evidence regarding whether community kitchens can address long-term resource-related food insecurity [7,14]; this is likely due to the lack of an income-generating component in the models that have previously been implemented.
Integrating a livelihood-generating component into community kitchen interventions thus has the potential to tackle this shortfall, and impact food security status. In this study, we evaluated the potential of such a model in a refugee population living in a middle-income context, and investigated changes in household economics and food security status, decision making, mental health and social support in participating women.
Study context and intervention description
The UN Relief and Works Agency for Palestine refugees (UNRWA) has been providing assistance and protection to registered Palestinian refugees in Lebanon since 1950. Inside the urban camps where 63% of Palestinian refugees in Lebanon reside, UNRWA provides infrastructure (water, electricity, housing) in addition to education, health care and other welfare services to eligible refugees [1]. In this context, we designed the Healthy Kitchens, Healthy Children (HKHC) intervention to address long-term food insecurity in two UNRWA camps in Beirut, Lebanon. Two community kitchens were established as small business enterprises for Palestinian women and were linked to two UNRWA elementary schools to prepare and cater healthy snacks to school children for the duration of one academic year. The intervention took place in two camps (Bourj el Barajneh and Shatila). We worked with UNRWA's social services program to identify already existing community-based women's organizations (CBOs) that would be willing to participate in the intervention and whose community centers were in close proximity to UNRWA elementary schools.
The CBO in Bourj el Barajneh already had a small functional kitchen that was used for intermittent social events or activities. As for the CBO in Shatila, a meeting room was converted into a kitchen. We renovated the kitchens and equipped them with the necessary items to increase their capacity to produce food on a larger scale and to ensure food safety and hygiene.
The community kitchen intervention involved two components. The first component included a week-long training on-site in the kitchens covering topics related to entrepreneurship (organizational, managerial, purchasing and budgeting skills), food preparation, food safety and hygiene, nutrition, and the development of standardized recipes of healthy school snacks. The training program was developed specifically for the context and implemented in Arabic. The training was tailored for women with low literacy and used visual and practical methods, including strategies for bulk purchasing within the context of Palestinian refugee camps. Training was conducted on-site in the kitchens and involved on-the job skills acquisition. A monthly snack menu was developed by the women using a participatory process including focus group discussions during which women suggested Palestinian dishes that could be made for the school snacks and may appeal to primary-school aged children. The study nutritionist worked with the women to adapt the recipes based on recommended nutrient content for mid-morning school snacks.
The second component provided an employment opportunity by involving women in catering daily healthy school snacks, produced in the kitchens, to children aged 5 to 12 years who were attending two UNRWA schools. The school enrollment rate of 7- to 12-year-old children generally reaches 97% [1]. Linking the kitchens with schools ensures a market for food produced in the community kitchens. In addition to increased income and wealth creation, another motive to participate in such programs in the refugee context is preserving identity and supporting participants' displaced ethnic communities [15]. In the HKHC intervention, incorporating traditional Palestinian meals in the monthly snack menu was one way of promoting Palestinian culture amongst the younger school generation, which eats more non-traditional and fast-food meals.
The intervention involved working in the kitchen for 2 to 3 days a week in 6-hour shifts per working day, throughout a period of 8 months (October to June; the duration of the school year). Women who participated arrived at the kitchen early in the day (around 7 am), prepared food and were able to complete all tasks by 1 pm, to be home when their children arrived from school. One of the CBOs had a nursery that provided childcare for preschool children; the other CBO allowed women to bring young children to the centre to be cared for by CBO staff. During the academic year, the two Healthy Kitchens supplied a healthy mid-morning snack of around 313 kcal, 5 days a week, to 714 children attending two UNRWA elementary schools. The snacks were subsidized, and schoolchildren were asked to pay 0.25 USD per snack, totaling 5 USD per month for 20 snacks. Women's additional income (from snack sales and the subsidy from the program) was equivalent to 110 USD per month.
Recruitment
With the help of UNRWA's Relief and Social Services office, Palestinian women living in the camps were identified and contacted by social workers and CBO staff to participate in this intervention. Social workers and CBOs reached out to women who had either applied for the UNRWA social safety net program or had attended previous CBO activities (language literacy, computer literacy and hairdressing classes), and had previously expressed a need/willingness to work. Fifty-one women initially expressed an interest in participating in the intervention and attended an information session about the study. During this session, a detailed explanation of the intervention, including time commitment, rotation schedule and monetary compensation, was communicated to the women. In addition, written informed consent was sought from all women participants at the beginning of the study. All protocols were approved by the Institutional Review Boards of the American University of Beirut (AUB) and the University of Maryland.
Evaluation design and data collection
A mixed-methods approach was used to evaluate the intervention. Data were collected by trained staff including UNRWA social workers. Training of data collectors was conducted by the research team at AUB.
Quantitative data
Quantitative data were collected using a sociodemographic and economic questionnaire, which included questions on household assets, household income and food expenditure; and for each household member data were collected on employment and educational status. The questions on economic status were administered at baseline and endline to all participating women; one woman did not complete the questionnaire at endline, and therefore data are available on 32 women in total.
Household food insecurity was assessed using the 7-item Arab Family Food Security Scale (AFFSS), previously validated for this population [16]. Positive responses to the questions were summed, and households were classified as food secure (score 0-2), moderately food insecure (score 3-5) and severely food insecure (score 6-7). Questions on coping strategies were adapted from the Coping Strategies Index [17].
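A minimal sketch of this scoring rule, assuming each item response is coded 1 (positive) or 0 (negative); the function name is illustrative:

```python
def affss_category(responses: list) -> str:
    """Classify a household from the 7-item Arab Family Food Security Scale."""
    if len(responses) != 7 or any(r not in (0, 1) for r in responses):
        raise ValueError("expected 7 binary (0/1) item responses")
    score = sum(responses)  # number of positive responses, 0-7
    if score <= 2:
        return "food secure"
    if score <= 5:
        return "moderately food insecure"
    return "severely food insecure"

# Example: four positive responses -> moderately food insecure
print(affss_category([1, 1, 0, 1, 0, 1, 0]))
```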
Decision-making power was assessed using several domains adapted from the Women's Empowerment in Agriculture Index (WEAI), which includes questions on access to and decision-making power over income and expenditures, as well as decisions related to meal planning, healthcare, family planning, and visits to family or relatives [18]. The two components of this questionnaire module asked "When decisions are made regarding the following aspects of household life, who is it that normally takes the decision?", which enables the collection of responses that indicate sole or joint decision making, as well as "To what extent do you feel you can make your own personal decisions regarding these aspects of household life if you want(ed) to?". Questions related to previous employment history and whether the participant was actively seeking a job were used to assess the motivation of the participant; engagement in more prior activities reflected higher motivation levels.
Respondents were also asked about their health status using the self-rated health question (SRH), with responses ranging from "very good" to "not good at all" [19]. Mental health was assessed using the validated Mental Health Inventory (MHI-5) in Arabic, with higher scores indicating better mental health [20]. The total score was normalized to a 0-100 scale for this analysis. Both SRH and MHI-5 have been previously validated and used in this population [1,[20][21][22]. The 10-item Duke Social Support Index (DSSI) was translated and used as a continuous measurement to determine the participant's level of social support [23]. Although this tool has not been validated in this setting specifically, it has been validated and used among several vulnerable groups in various contexts [24][25][26].
Qualitative data
Qualitative data collection entailed semi-structured interviews with the 32 women at the end of the study. Face-to-face interviews were conducted by two trained research assistants not involved in training and implementation of the intervention and lasted between 30 and 60 min each. The interviews were guided by a topic guide that included questions around women's experience in the community kitchen, interactions with others, financial wellbeing, perceived impact of the intervention, and advantages and disadvantages of the project. The open-ended questions also aimed at capturing in-depth descriptions of the role of the project in the financial and social wellbeing of the women. The interviews were tape-recorded, transcribed, and translated from Arabic to English.
Data analysis
Baseline differences between the women who dropped out and those who remained in the study were tested using non-parametric methods (Wilcoxon rank-sum) and Fisher exact tests. To examine the association between paired baseline and endline outcomes, the McNemar chi-square test (nominal variables) and the Wilcoxon signed-rank test (continuous variables) were used. A p-value of 0.05 was used to indicate statistical significance. All analyses were performed using Stata 13 (StataCorp).
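A minimal illustrative equivalent of this analysis plan in Python; the study itself used Stata 13, and the arrays below are made-up placeholders, not study data:

```python
import numpy as np
from scipy.stats import wilcoxon, ranksums
from statsmodels.stats.contingency_tables import mcnemar

# Paired continuous outcome (e.g., AFFSS score): Wilcoxon signed-rank test
baseline = np.array([4, 2, 5, 3, 6])
endline = np.array([2, 0, 3, 3, 4])
print(wilcoxon(baseline, endline))

# Baseline comparison of dropouts vs completers: Wilcoxon rank-sum test
print(ranksums([6, 7, 5, 4], [4, 3, 5, 4]))

# Paired nominal outcome (e.g., food insecure yes/no): McNemar test
table = [[10, 2],   # rows: baseline status; columns: endline status
         [8, 12]]
print(mcnemar(table, exact=True))
```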
Transcripts of interviews were analyzed using thematic analysis in NVivo10 (QSR International). An initial reading of transcripts led to a preliminary list of recurrent themes. Guided by the original research questions and the themes that emerged, we organized the data into categories and added illustrative quotes. All codes were revised by two researchers to check the consistency in the categorization of the text.
Characteristics of the study population
A convenience sample of 51 women was initially recruited to the intervention. However, in the first week, when they became aware of the necessary time commitment, 18 women dropped out. One further woman did not complete the endline questionnaire. There were no significant differences in household expenditure, food security and mental health characteristics at baseline between women who remained in the study and those who left the study. However, the women who remained in the study were from larger households (median number of household members 6 [4-7] vs 4 [3-6]; p < 0.05), had a higher social support score (median score 24 [22-25] vs 22 [20-24]; p < 0.05) and had higher motivation levels, indicated by previous engagement in income generating activities (16/32 (50.0%) vs 2/18 (11.1%); p < 0.05).
Throughout the 8 months, 32 women participated in the intervention and completed both baseline and endline assessment. Baseline characteristics of these 32 women are presented in Table 1. Median age of participants was 41 years (range 18-64 years), and the majority were married (n = 28/32). Few women completed middle school (n = 7/32), and their median household expenditure was 815 USD per month (162 USD per capita) at baseline (Table 1).
Household economics
Additional income generated from the intervention was equivalent to a median of 110 USD per month. Total household expenditure increased by 13%, which was due to a significant increase in expenditure on food (p < 0.04), clothing (p < 0.001) and entertainment (p < 0.001) ( Table 2). There was also an increase, although not significant, in water, transportation, tuition, and healthcare expenditures per capita (data not shown).
Some respondents (20 women) in qualitative interviews stated that the additional income improved their overall financial status while others felt that the income was minimal (6 women). Moreover, some women (7 women) stated that earning money incentivized them to work. Others (7 women) indicated that the income allowed them, to a certain extent, to be self-sufficient. One woman mentioned, "I am taking this money. I earned it, I did it, I worked hard to get it so that was something very nice to me that I became productive". Another woman stated, "But I started feeling that I am productive in something, even if I [just use the money to] recharge [credit] to my phone. If I get my daughter shoes, or glasses, or if I get myself a watch, I started feeling that I am doing something, something from my own effort, something that has a value" [W2].
Working helped women to no longer view themselves as an economic burden on their husbands; the intervention allowed them to diverge from societal norms which impose that women are completely dependent on their husbands for money. When talking about her colleague in the kitchen, one woman mentioned, "her husband used to give her money for the house expenses, [now] she is [contributing to paying off] the loan without the constant worry" [W4].
Moreover, some women stated that the income earned contributed to paying off debt (2 women), while others used it as extra money to buy clothes for children (4 women) and for the healthcare of sick family members (4 women). Income earned was also used towards their children's education (3 women). One woman described the project as a means to provide financial stability: "There is a proverb that says, 'a pebble stabilizes a big jar'. This [work] is the pebble that stabilized the big jar. In summary, it is helpful. Thank God, yes, it is true that the amount [salary] is not big, but it closed a big gap" [W13].
Food security, nutrition knowledge and behavior change
At baseline, 12/32 (37.5%) of the women reported that their households experienced moderate food insecurity and 6/32 (18.7%) reported severe food insecurity. This was somewhat reduced at endline, to 8/32 (25.0%) moderately food insecure and 5/32 (15.6%) severely food insecure (Table 2). The median AFFSS score went from 4 [2; 5] at baseline to 2 [0; 3] at endline (p < 0.01), which was reflected in a reduction in food-related coping strategies, including accepting gifts (p < 0.01), and a borderline significant decrease in borrowing money to obtain food (p = 0.070) and borrowing food (p = 0.063). There were no significant differences in household diet diversity between baseline and endline.
However, several women reported changes in food preparation behaviors (23 women). They mentioned the fact that they reduced salt, butter, oil, and fried food consumption at home. One woman said, "I used to put a lot of salt [in my cooking] so I started lessening it, my hand was loose with the salt, now I put less" [W2].
The women also highlighted that they implemented the food safety practices they learned in the kitchens to their homes. Their favorite topics were mostly related to hygiene: "For example the vegetables [cutting] board, you can't use it for meat. I am now doing this at home. I now have more boards for the vegetables, for the onions, for the meat" [W57]. Overall, the participants were pleased that the information learned had an impact on their daily lives, as food is a central part of their responsibilities at home. Participants mentioned that they improved their cooking skills and learned new recipes that they now prepared at home.
Decision making, skills acquisition and personal growth
The women reported a shift in their own decision-making power regarding major household expenditures, from 3/32 (9.4%) at baseline to 13/32 (40.6%) at endline (p < 0.05). And although not significant, we observed an increase in the extent to which women felt they could make their own decisions regarding employment, daily meal preparation, major and minor household expenditures, and getting advice on healthcare (Table 2).
Women also expressed that the skills gained throughout the training taught them to calculate expenses and scale up kitchen operations (9 women). One woman reported, "I didn't know how to calculate the amounts. I used to spend a whole day deciding. And some days we used to get more ingredients.
[now] I know how much I need to order. I can tell you the price right away. And I know right away the stuff I need to purchase" [W4]. They expressed satisfaction with learning new tactics on how to shop for groceries efficiently and be more economical in their work.
On a personal level, women reported overcoming shyness, being more responsible, and feeling efficient. All 32 women also perceived growth within the group, where they learned to work as a team, how to delegate tasks appropriately, and to collectively encourage each other. One woman noted, "the project got me out of the …". The women were also seen as key players in the school feeding program, which gave them a sense of accomplishment. Several women embraced the fact that they were able to provide for themselves while providing for others and serving the community (8 women). They received positive feedback on the quality of the food, and community members were interested in their work: "All my neighbors are fans, when I show up, they ask me 'what did you cook today?', so I tell them; there are things they don't know, so I explain" [W70].
Mental health and social support
Data from the quantitative component of the study showed a non-significant increase in the MHI-5 score, and no significant difference in the social support scale (Table 2). 21/32 women had some increase in MHI-5 score between baseline and endline, whereas 11/32 women had a decrease.
However, many of the women highlighted in the interview that the kitchen provided an escape from their everyday lives, whether it was to temporarily distance themselves from family, from the environment at home, or as a break from their daily routine (15 women). Several women noted that they saw value in being occupied with their work and felt more energetic throughout their day in contrast to their otherwise sedentary lifestyle (15 women). Many women stressed that their work in the kitchen was a source of happiness, boosted their morale, confidence, self-worth, and reduced anxiety (20 women). In addition, women expressed that the kitchen provided additional social support (28 women). Throughout their work in the kitchen, the women stepped out of their comfort zone, shared problems with each other, and valued teamwork: "We are very collaborative; this is why we feel comfortable. For example, if there's a pile of dishes to wash, you don't have to tell someone to do it, she would come on her own and wash them" [W68]. The women felt they gained a sense of friendship from working in the Healthy Kitchens. This happened through overcoming social isolation and emotionally supporting each other in a hospitable environment: "You feel all women are united, there's no gossiping, you feel that we're all sisters sitting in one big house and cooking for our families. The experience I lived was great" [W62].
Women reported learning how to listen to each other and communicate effectively. One woman indicated, "so at home, each woman would cook her way and give orders to her family. Then, we progressively learned to talk to each other without being sensitive [ …]" . [W44].
Investigating the discrepancy
In order to explore the discrepancy observed between the null quantitative results on mental health and social support and the generally positive qualitative findings, we disaggregated both quantitative and qualitative data by MHI-5 score change (comparing women whose MHI-5 score decreased between baseline and endline with women whose scores increased).
Women with decreased mental health scores were more likely to have been severely food insecure at baseline (p < 0.01) and had higher debt (not statistically significant) than the women who had an improvement in mental health scores (Fig. 1).
Those who exhibited the sharpest decrease in mental health referred to a set of underlying circumstances that affected their mental health, such as choosing not to share problems with anyone, expressing being satisfied at work but not at home because of family problems and family breakups, and the need to support sick family members (Fig. 1).
Discussion
This study provides evidence of the potential of the HKHC model to improve economic status, food security and entrepreneurial skills of women living in marginalized low-income communities as well as providing a place for social interaction.
The integration of an income-generating component into community kitchens transformed these into social enterprises, allowing women living in this traditionally patriarchal community to "enact agency within the context of constraint" [27][28][29]. That is, in Palestinian refugee communities, as in many Arab countries, societal norms limit the sectors in which women can work and may require them to obtain permission from their husbands or fathers to seek employment [30,31]. Where formal employment is limited [1,31], particularly among married women, home-based self-employment has been argued to provide a safe environment from which women can challenge their traditional roles [27,29], and allow them to become productive household members. The HKHC model broadens the concept of home-based entrepreneurship, by applying it at the community level, where, along with creating income generating potential, it built a sense of social support, while not challenging traditional social norms [32,33].
In fact, women who participated in the intervention increased their spending, self-sufficiency, and decisionmaking around this spending, ranging from minor expenditures such as clothing and entertainment to major expenditures such as debt repayments, in spite of sociocultural barriers. Working and earning money provided participants with a sense of self-satisfaction [27].
Although the income generated was modest, participants' household food security significantly improved from baseline. In low-income populations, women are especially vulnerable to food insecurity [34]. Studies examining the impact of food-based interventions on empowerment of women in the household and on their food security status showed the importance of including women as active players in solutions to address household food insecurity [35,36]. In our study, there was a significant increase in women's own decisionmaking power regarding major household expenditure and, although non-significant, there was an increase in women making their own decisions regarding daily meal preparation. Evidence from Africa, Asia and Latin America shows that women's access to income, or an increase in household decision-making regarding expenditure, is associated with improvements in household food security [37]. This is partly due to women spending a significantly higher proportion of their income compared to men on food and health-related issues [37] and to their role in food production, food preparation and childcare [34,38].
The HKHC intervention provided the women with knowledge on how to improve food security, in addition to some financial means to access food through generated income. Socioeconomic status has been shown to be positively and significantly associated with food security in the Palestinian refugee population and in other vulnerable populations [1,39]. The significant increase in food expenditure coupled with the nutrition training component of the intervention may indicate improved access to food and knowledge and therefore, improved food security. We did not measure household food consumption and could not examine the impact of improved food security on food choices. This may be of interest in future studies.
Studies have shown that women's mental health is strongly associated with food security [34,40,41]. In addition, social support can be obtained through the workplace and is associated with better physical and mental health [42] and improved job satisfaction [43]. The HKHC intervention thus attempted to improve mental health by addressing food security and increasing the social network of women; however, although the majority of women who participated in the study had an increase in mental health score, these increases were not statistically significant. Participants indicated the importance of work in giving them a break from their daily routine and providing them with an opportunity to leave the house and interact with other women. Participants commented on their feelings of satisfaction with the work they were doing and with the team approach to their work. They reported that the kitchen provided a space and opportunity to share their problems with one another and bring women out of their isolation; these results align with those of another study [11].

Fig. 1. Changes in mental health inventory score and themes associated with the change.

In most developing countries, the informal work sector is the primary source of employment for women, with the vast majority of these women being home-based workers [44]. Although regional data are scarce, one study suggests that the majority of working women in Jordan are self-employed and operate from their homes [28,45]. Home-based workers have been shown to be less likely to develop social ties outside the family compared to those working outside of the home. Also, home-based workers may be at higher risk of poverty compared to individuals in formal wage employment [46]. The HKHC intervention allowed women to participate in a safe environment within the formal work sector, using a community-based approach to expand on the concept of a "safe" work environment.
Despite many positive comments from women, one third of participants did not show improvement in their mental health and in fact, some showed a decline. We attempted to further investigate the reason behind these discrepancies by comparing characteristics of women who had improved mental health scores with those who did not. Our data showed that the presence of an underlying stressful condition, not addressed in this study, linked to conflict and health complexities, was among the key barriers to changes in mental health conditions of these participants. These women, however, continued to work and some expressed positive feelings towards the project despite hardships they were facing in their home lives.
At the onset of the project, more women signed up to participate; however, 18 dropped out within the first week. Because of the time it demands, employment affects women's ability to provide childcare and perform their other household chores such as cooking. In fact, the main reasons for not working reported by women ages 25-54 include housework and other family reasons [31]. Our study took this into consideration in the design of the HKHC model, which was informed by focus groups conducted with women at the onset of the intervention, when schedules and time commitments were discussed. The intervention was designed to avoid major changes in their existing routines, and the CBOs provided options for preschool childcare. Despite these efforts, the women who remained in the study had better social support compared to those who dropped out. Studies have shown that family support can provide emotional and instrumental support to working women [47]. Also, support towards childcare has a positive impact on preventing work-family conflict [48,49], and may reduce perceptions of formal employment interfering negatively with domestic responsibilities [50]. In societies with more traditional gender roles, the redistribution of power within the household and active participation from men in childcare responsibilities are required to facilitate women's ability to work [51].
The main limitation of this study lies in the fact that there was no counterfactual (a control group that did not receive the intervention), and we therefore cannot directly attribute the outcomes measured in the study to the intervention. The quantitative results are also limited by the small sample size, which may not have allowed us to detect significant changes in mental health and social support, among other variables. The high drop-out rate at the beginning of the intervention reflects the challenges women face in participating in such programs. The women who remained in the study had higher social support and more work experience, which indicates that in this society, social support is essential for women to be able to work outside the home. However, the qualitative data provided a depth of knowledge that supplemented our quantitative data. Although it is possible that responses in the interviews were influenced by social desirability bias, we employed interviewers who had not been part of the intervention team and assured anonymity in the research process to minimize this to the extent possible. Another limitation is that changes in decision-making roles and other domains of empowerment can entail a long-term process. The duration of the study may therefore not have been long enough to bring about changes of a longer-term nature.
The two CBOs have now established themselves as catering businesses and are sustaining their operation by providing food for schools, a pre-school and an orphanage, as well as catering for local events (Ramadan dinners for older adults, festivals). Future studies would benefit from longer-term follow up to assess sustained differences in participants' lives, in addition to considering quasi-experimental or randomized designs with larger sample sizes to enable the assessment of effectiveness of the intervention.
Conclusion
The findings from this study lend insight into the design and potential of community kitchens among low-income, food-insecure populations; refugee or migrant populations; and women with limited formal work experience. The HKHC model created a social enterprise using the concept of community kitchens linked to schools and allowed women to significantly contribute to household expenditure and improve their food security. The results highlight the importance of using a multi-sectoral approach to address the social determinants of food insecurity in vulnerable women living under chronic political and economic constraints.
Community kitchens are increasingly being used in high-income countries (the United States, for example) with the aim of providing professional development, supplementary income, and a flexible, safe work environment for low-income or refugee/migrant women who wish to enter the workforce. As refugee resettlement and migration continue to rise, the HKHC intervention outlines a roadmap for how such a model could provide women a route into the formal workforce using their skills as talented home cooks, build on that skillset in a safe environment, contribute to poverty reduction, which is at the heart of food insecurity, and provide a supportive network.
"Economics"
] |
P2X1 and P2X7 Receptor Overexpression Is a Negative Predictor of Survival in Muscle-Invasive Bladder Cancer
Simple Summary
Bladder cancer is one of the most common malignancies. The prognosis is particularly poor for advanced cancer. There is a need for new prognostic markers to guide treatment decisions and promote the development of novel therapeutic strategies to increase the success rate of current treatment regimens. The tumor microenvironment is characterized by an increase in extracellular ATP levels. ATP-recognizing P2X receptors have been implicated in the growth and metastasis of various malignant cancers. Here, we analyzed the potential of different P2X receptor subtypes as prognostic markers in muscle-invasive bladder cancer. In vitro experiments confirmed a growth-promoting effect of extracellular ATP and P2X receptors on bladder cancer cells. In agreement, our analyses of cancer tissue samples from 173 patients showed that expression of P2X1 and P2X7 receptors is an independent negative predictor of survival and a potential therapeutic target in muscle-invasive bladder cancer.
Abstract
Bladder cancer is amongst the most common causes of cancer death worldwide. Muscle-invasive bladder cancer (MIBC) bears a particularly poor prognosis. Overexpression of purinergic P2X receptors (P2XRs) has been associated with worse outcome in several malignant tumors. Here, we investigated the role of P2XRs in bladder cancer cell proliferation in vitro and the prognostic value of P2XR expression in MIBC patients. Cell culture experiments with T24, RT4, and non-transformed TRT-HU-1 cells revealed a link between high ATP concentrations in the cell culture supernatants of bladder cell lines and a higher grade of malignancy. Furthermore, proliferation of highly malignant T24 bladder cancer cells depended on autocrine signaling through P2X receptors. P2X1R, P2X4R, and P2X7R expression was immunohistochemically analyzed in tumor specimens from 173 patients with MIBC. High P2X1R expression was associated with pathological parameters of disease progression and reduced survival time. High combined expression of P2X1R and P2X7R increased the risk of distant metastasis and was an independent negative predictor of overall and tumor-specific survival in multivariate analyses. Our results suggest that P2X1R/P2X7R expression scores are powerful negative prognostic markers in MIBC patients and that P2XR-mediated pathways are potential targets for novel therapeutic strategies in bladder cancer.
Introduction
Urinary bladder cancer accounts for about 573,000 newly diagnosed cancer cases and 212,000 deaths worldwide each year [1]. Approximately 25% of new bladder cancer cases
Patients
This retrospective study was approved by the Medical Ethics Committee of Ludwig Maximilian University (LMU) Munich (reference number ). The study group consisted of 173 patients who were diagnosed with MIBC and underwent radical cystectomy at the Department of Urology (LMU Munich) between 2004 and 2014. Surgery was performed with curative or palliative intent using standardized procedures for urinary diversion by ileal conduit or ileal neobladder formation. All histological specimens were systematically reviewed to confirm tumor type, grade, and stage by experienced pathologists. Staging was performed according to the AJCC/UICC TNM staging guidelines (8th edition) and the latest WHO classification of genitourinary tumors (5th edition) [28,29]. Patients who had received neoadjuvant therapy or intravesical therapy with Bacillus Calmette-Guérin or mitomycin prior to surgery were excluded. Other exclusion criteria were incompletely archived material or incomplete medical history, low-grade tumors, and cancer types that did not fulfill the histopathological criteria for urothelial carcinoma and its histological subtypes as defined by the WHO 2022 classification [29]. Follow-up was performed according to the European Association of Urology guidelines [5]. Patient characteristics, clinicopathological parameters, DFS, TSS, and OS were documented and analyzed for associations with P2X1R, P2X4R, and P2X7R expression.
Tissue Microarray Construction
Expression of P2X1R, P2X4R, and P2X7R was examined in tissue microarrays (TMAs) using formalin-fixed, paraffin-embedded (FFPE) tissue blocks from 173 MIBCs. Punched-out 1 mm tumor cores from three different tumor areas per patient were arrayed into new TMA blocks. Triplicates of each sample were used to minimize tissue loss and to overcome tumor heterogeneity. The embedded tissue was cut into 4 µm thick slices and used for immunohistochemical analyses.
Immunohistochemistry
Antibody staining was established with appropriate isotype and system controls. Tonsil tissue was used as a positive control and was included in each staining run. All antibodies had been validated by the manufacturers against Western blot and RNA-seq data on a wide range of normal and cancer tissue.
For P2X1R staining, antigen retrieval was carried out by heat treatment using the antigen retrieval AR-10 kit (DCS, Hamburg, Germany; HK057-5KE). Slides were incubated with the polyclonal rabbit anti-human P2X1R primary antibody (1:300; Abcam, Berlin, Germany; ab74058) for 60 min at room temperature. Bound antibodies were detected by the use of the ImmPRESS Anti-Rabbit IgG Polymer Kit (Vector Laboratories, Newark, CA, USA; MP-7401).
Semiquantitative Analysis of P2X Receptor Expression
Immunoreactivity of P2XRs was scored using the histochemical scoring system (H-score) as previously described [30]. The H-score incorporates both the staining intensity and the percentage of stained cells at each intensity level. Intensity was scored as 0 (no evidence of staining), 1 (weak staining), 2 (moderate staining), and 3 (strong staining). The final H-score is the sum, over the intensity levels, of each intensity value multiplied by the percentage of cells stained at that intensity, yielding a value between 0 and 300. Three tumor cores per patient were scored separately and averaged to obtain the final H-score of each tumor sample. If a TMA core did not contain any tumor tissue, it was excluded from the calculation of the final score. In total, stained tumor core sections from 171 patients were available for evaluation of P2X1R and P2X4R expression, and tumor cores from 172 patients were stained and evaluated for P2X7R expression. The median H-score was used as a cutoff value to define groups with low and high P2XR expression (P2X1R: H-score ≥ 25; P2X4R: H-score ≥ 10; P2X7R: H-score ≥ 60). The H-score for the combined expression of P2X1R and P2X7R was calculated as the sum of the single scores of both receptors. For this analysis, tumor cores from 170 patients were available. Again, tumors with low or high combined expression of P2X1R and P2X7R were distinguished from each other based on the median H-score (H-score ≥ 85).
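To make the scoring rule concrete, the following minimal Python sketch computes an H-score from the percentage of cells stained at each intensity level; the function name and the example percentages are illustrative and are not taken from the study.

    def h_score(fractions):
        # H-score: sum over intensity levels of (intensity x percent stained),
        # giving a value between 0 (all unstained) and 300 (all strongly stained).
        return sum(intensity * pct for intensity, pct in fractions.items())

    # Example core: 20% unstained, 30% weak, 40% moderate, 10% strong staining
    core = {0: 20, 1: 30, 2: 40, 3: 10}
    print(h_score(core))  # 0*20 + 1*30 + 2*40 + 3*10 = 140

    # Per-tumor score: average of the (up to three) evaluable cores
    cores = [140, 120, 100]
    print(sum(cores) / len(cores))  # 120.0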
Carboxyfluorescein Succinimidyl Ester (CFSE)-Based Proliferation Assay
Proliferation of T24, RT4, and TRT-HU-1 cells was assessed using the CellTrace™ CFSE Cell Proliferation Kit (Invitrogen) as recommended by the manufacturer with a few modifications. Briefly, cells were detached by trypsin, washed with phosphate-buffered saline (PBS), and resuspended at 5 × 10^6 cells/mL in CFSE staining solution (5 µM in PBS). After 15 min incubation at 37 °C, ten volumes of cell culture medium were added, and the cell mixture was incubated for another 5 min. Then, cells were pelleted by centrifugation, washed once with culture medium, and seeded at 15,000 cells per well in a 24-well cell culture plate. If indicated, antagonists of P2X1R (10 µM NF023; Tocris, Bristol, UK), P2X4R (10 µM 5-BDBD, Tocris), or P2X7R (10 µM A438079, Tocris), the P2 receptor antagonist suramin (100 µM; Sigma-Aldrich), the ATP-degrading enzyme apyrase (1 U/mL; Sigma-Aldrich), the ATP release blocker carbenoxolone (CBX; 20 µM; Sigma-Aldrich), or the natural P2XR ligand ATP (10 µM; Sigma-Aldrich) were added, and the cells were maintained in a final volume of 500 µL in 5% CO2/95% air at 37 °C. Final drug concentrations were chosen based on previous studies by us and others [18,19]. At the indicated times, cells were detached by trypsin and analyzed with a NovoCyte 3000 flow cytometer (Agilent). Single cells were identified by plotting FSC-A vs. FSC-H, and the CFSE median fluorescence intensity (MFI) of at least 8000 single cells was determined. Exponential growth curves were generated by plotting 1/MFI as a function of time [31]. The function N(t) = N(0) × e^(kt) was fitted to the data by non-linear regression (SigmaPlot 12.5, Systat Software Inc., San Jose, CA, USA) to determine the growth rate k and the doubling time, DT (DT = ln 2/k).
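As an illustration of the growth-curve analysis described above (the study itself used SigmaPlot), the following Python sketch fits N(t) = N(0) × e^(kt) to 1/MFI data and derives the doubling time; the time points and MFI values below are made up for demonstration.

    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical CFSE data: time (h) and median fluorescence intensity (MFI).
    # CFSE is diluted roughly two-fold per division, so 1/MFI is proportional
    # to cell number and grows exponentially for exponentially growing cells.
    t = np.array([0.0, 24.0, 48.0, 72.0])
    mfi = np.array([10000.0, 3300.0, 1100.0, 370.0])
    n = 1.0 / mfi

    def growth(t, n0, k):
        # N(t) = N(0) * exp(k * t)
        return n0 * np.exp(k * t)

    (n0, k), _ = curve_fit(growth, t, n, p0=(n[0], 0.05))
    dt_hours = np.log(2) / k  # doubling time DT = ln 2 / k
    print(f"k = {k:.4f} 1/h, DT = {dt_hours:.1f} h")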
ATP Concentrations in Cell Culture Supernatants
T24, RT4, and TRT-HU-1 cells were seeded in 24-well plates at a density of 3.75 × 10^5 cells per well in a final volume of 500 µL and allowed to attach for 3 h in an incubator adjusted to 5% CO2/95% air at 37 °C. Cell culture supernatants were collected, cooled in an ice bath for 10 min, and centrifuged twice to ensure cell-free samples. Perchloric acid (400 mM) was added to the supernatants to stop any enzymatic activity. Then, samples were processed, and ATP concentrations were determined by high-performance liquid chromatography (HPLC) as previously described [32].
Statistics
Statistical analyses were performed with SigmaPlot 12.5 software. OS was defined as the time between primary surgery and death from any cause. For TSS, death caused by bladder cancer was defined as the clinical endpoint. Patients who were alive at the end of follow-up were censored. DFS refers to the time between primary surgery and relapse. Patients were censored for DFS if recurrent cancer and metastasis were absent at the end of follow-up or at the time of death. Survival curves were calculated using the Kaplan-Meier method and compared by the log-rank test. Associations between P2XR expression, demographic characteristics (age, sex), and clinicopathological parameters (pT staging category, pN stage, distant metastasis (M), lymphovascular invasion (L), blood vessel invasion (V), perineural invasion (Pn), resection margin (R), UICC stage, receipt of adjuvant therapy) were examined using the Chi-square test or unpaired t-test for categorical and numerical variables, respectively. To assess the prognostic value of low or high P2XR expression, multivariable hazard ratios (HRs) were determined by Cox proportional hazards regression and adjusted for the parameters that correlated with P2XR expression (pN stage, M status, UICC stage, L status). One-way ANOVA and the Holm-Sidak test were used to analyze results from cell culture experiments. Values of p < 0.05 were considered statistically significant.
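The analyses above were performed in SigmaPlot; purely for illustration, the following Python sketch shows how analogous Kaplan-Meier, log-rank, and Cox analyses could be run with the lifelines package. The data frame, column names, and values are hypothetical stand-ins for the study variables.

    import pandas as pd
    from lifelines import KaplanMeierFitter, CoxPHFitter
    from lifelines.statistics import logrank_test

    # Hypothetical survival data: follow-up (months), death indicator,
    # and a binary high/low expression group (stand-in for the H-score cutoff).
    df = pd.DataFrame({
        "months":   [12, 24, 6, 36, 18, 9, 30, 15],
        "death":    [1, 0, 1, 0, 1, 1, 0, 1],
        "p2x_high": [1, 0, 1, 0, 1, 1, 0, 0],
    })

    # Kaplan-Meier estimate per expression group
    for label, grp in df.groupby("p2x_high"):
        KaplanMeierFitter().fit(grp["months"], grp["death"],
                                label=f"P2X high = {label}")

    # Log-rank comparison of the two groups
    grp_hi, grp_lo = df[df.p2x_high == 1], df[df.p2x_high == 0]
    res = logrank_test(grp_hi["months"], grp_lo["months"],
                       event_observed_A=grp_hi["death"],
                       event_observed_B=grp_lo["death"])
    print(res.p_value)

    # Cox proportional hazards model (covariates are the remaining columns)
    cph = CoxPHFitter().fit(df, duration_col="months", event_col="death")
    print(cph.hazard_ratios_)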
Extracellular ATP Levels Are Increased in Bladder Cancer Cell Line Cultures of High Malignancy
While numerous studies in recent years have established pro-tumorigenic roles of extracellular ATP and P2X receptors in various cancers [8,17,[20][21][22][23][24][25], comparable data on bladder cancers are scarce. Therefore, we first tested, in a cell culture model with bladder cells of different grades of malignancy, how ATP signaling affects bladder cancer cell proliferation. We used TRT-HU-1 cells, an hTERT-immortalized and non-transformed human urothelial cell line [33], RT4 cells, a low-grade, well-differentiated, non-invasive papillary transitional cell carcinoma cell line, and T24 cells, a transitional cell carcinoma cell line that was established from a bladder cancer patient with a high-grade, invasive urothelial carcinoma and that expresses various purinergic receptor subtypes [34,35]. ATP accumulation in the TME is driven by hypoxia, cell death, or ATP release from stimulated inflammatory cells [8]. In addition, tumor cells themselves may actively release ATP and maintain high ATP concentrations in their surroundings to fuel their own growth [19]. To test whether intrinsically maintained extracellular ATP levels correlated with growth in bladder cancer cells, we compared ATP concentrations in the supernatants of T24 cell cultures with those of RT4 and TRT-HU-1 cells and determined the doubling times of each cell line. ATP concentrations were more than twice as high in supernatants of highly malignant T24 cells as in RT4 or TRT-HU-1 cell culture supernatants (Figure 1A). Consistent with a lower grade of malignancy, both of these cell lines grew significantly slower than T24 cells, with doubling times of 21.37 ± 0.24 h (TRT-HU-1) and 23.25 ± 0.78 h (RT4), respectively, compared to 15.1 ± 0.02 h in T24 cells (p < 0.001; Figure 1B). These data suggest that ATP may indeed exert pro-tumorigenic effects on bladder cancer cells.
Extracellular ATP Promotes Proliferation of High-Grade T24 Cells through P2X Receptors
We and others have previously reported that P2X1R, P2X4R, and P2X7R have growth-promoting effects in different cancer cells in vitro and in vivo [19,[36][37][38]. To test whether this also applies to bladder cancer cells, we treated T24 cells with specific receptor antagonists and assessed proliferation. Inhibition of P2X1R (with NF023), P2X4R (with 5-BDBD), or P2X7R (with A438079) significantly increased the doubling time from 15.31 ± 0.21 h in untreated cells to 17.04 ± 0.04, 17.14 ± 0.25, and 17.71 ± 0.09 h, respectively (p < 0.001; Figure 1C,D). Combined treatment with the P2X1R and P2X7R antagonists had a significantly stronger inhibitory effect than treatment with either antagonist alone (doubling time: 18.24 ± 0.25 h). In agreement with these findings, the general P2 receptor inhibitor suramin suppressed proliferation even further, by more than 40% (doubling time: 21.70 ± 0.76 h). ATP (10 µM), the natural P2XR agonist, slightly increased proliferation (doubling time: 14.58 ± 0.53 h), while treatment with the ATP-hydrolyzing enzyme apyrase had the opposite effect. Similarly, CBX, a hemichannel blocker that inhibits cellular ATP release through pannexin-1 and thus interferes with autocrine purinergic feedback signaling [39], inhibited proliferation (Figure 1D). Taken together, these results support a growth-promoting role of autocrine P2XR signaling in T24 bladder cancer cells.
High P2X1 and High Combined P2X1/P2X7 Receptor Expression Scores Are Associated with Clinicopathological Indicators of Cancer Progression in MIBC Patients
The results of the in vitro experiments suggested that P2X1R, P2X4R, and P2X7R could be markers of bladder cancer growth and progression. To test this possibility, we next assessed the expression profiles of these receptors in MIBC and analyzed possible associations between expression intensity and clinicopathological characteristics. A total of 173 MIBC patients were included in the study. The mean age at the time of surgery was 67.1 (±9.0) years. Consistent with previous studies, the majority of patients were male (73.4%) [3]. During the course of the study, 58 (33.5%) patients received adjuvant chemotherapy and 43 (24.7%) patients were treated with adjuvant radiotherapy. Mean follow-up was 3.0 years, and 144 patients (83.2%) died during the follow-up period. P2X1R, P2X4R, and P2X7R expression was scored and classified as "high" or "low" as described above (Section 2.4). Membranous and cytoplasmic expression of all three receptor subtypes was detectable in tumor cells. In addition, tumor-infiltrating immune cells displayed a strong immunoreactivity for all receptors. Representative microphotographs are shown in Figure 2. We examined whether P2XR expression correlated with demographic or clinical (age, gender, adjuvant therapy) or histopathological (pT staging category, pN stage, UICC stage, M status, R status, L status, V status, and Pn status) characteristics. While there were no significant associations between the described variables and P2X4R or P2X7R expression, high P2X1R expression was significantly associated with lymphovascular invasion (L; p = 0.02), lymph node metastasis (pN; p = 0.03) and UICC stage (p = 0.01). In addition, high combined expression of P2X1R and P2X7R was significantly associated with lymphovascular invasion (L; p = 0.007), lymph node metastasis (pN; p = 0.007), distant metastasis (M; p = 0.046), and UICC stage (p = 0.001). Patient demographics, tumor characteristics, and their associations with P2XR expression are summarized in Table 1.
High P2X1 and High Combined P2X1/P2X7 Receptor Expression Scores Are Associated with Reduced Overall Survival
Median OS of all patients after initial surgery was 17.0 months. Median OS in patients with low P2X1R expression was 23.5 ± 45.6 months and significantly longer than in patients with high P2X1R expression (13.3 ± 44.1 months; Figure 3A). There was no significant difference in median OS between patients with low (16.6 ± 46.0 months) and high (17.8 ± 44.9 months) P2X4R expression ( Figure 3B). Similarly, OS did not differ between patients with high or low P2X7R expression (17.6 ± 55.8 vs. 16.4 ± 37.5 months; Figure 3C). However, patients with a high combined P2X1R/P2X7R score showed a significant decrease in OS (12.7 ± 25.4 months) as compared to patients with a low P2X1R/P2X7R score (36.4 ± 56.0 months; Figure 3D).
High Expression of P2X1 and P2X7 Receptors Is Associated with Reduced Tumor-Specific Survival
Bladder-cancer-associated death occurred in 94 patients (54.34%), and median TSS was 15.07 months. TSS in patients with high P2X1R expression (11.8 ± 10.0 months) was significantly lower than in patients with low P2X1R expression (16.4 ± 20.8 months; Figure 4A). P2X4R expression did not significantly affect TSS ( Figure 4B). Patients with low P2X7R expression tended to have improved TSS compared to patients with high P2X7R expression, and this seemed to especially apply to patients who survived the first two years after diagnosis; however, the difference in median TSS between the two groups was not statistically significant (p = 0.051; Figure 4C). In contrast, a significant reduction in TSS was associated with tumors displaying a high P2X1R/P2X7R score when compared to tumors with a low P2X1R/P2X7R score (12.5 ± 9.8 months vs. 18.7 ± 22.5 months; Figure 4D).
High Combined Expression of P2X1 and P2X7 Receptors Is Associated with Reduced Disease-Free Survival
Median DFS of all patients was 11.4 months. It was not affected by the expression intensity of any single P2XR subtype ( Figure 5A-C). However, there was a significant reduction in median DFS in patients with high combined P2X1R/P2X7R scores (8.7 ± 28.3 months) as compared to patients with low P2X1R/P2X7R scores (26.0 ± 57.8 months; Figure 5D).
Combined Expression of P2X1 and P2X7 Receptors Is an Independent Predictor of Overall and Tumor-Specific Survival
Multivariable survival analyses were calculated for OS, TSS, and DFS and adjusted for clinicopathological covariates that correlated with P2XR expression (pN stage, M status, L status, UICC stage; Table 1). A high combined P2X1R/P2X7R expression score independently predicted reduced OS (HR = 2.42; 95% confidence interval [CI]: 1.28-4.55; p = 0.006) and TSS (HR = 2.79; 95%-CI = 1.28-6.13; p = 0.01) in MIBC patients. For P2X1R, P2X4R, or P2X7R expression, no statistically independent influence on survival was found. The results of the multivariate analysis are summarized in Table 2.
Discussion
P2XRs are expressed by multiple malignant tumors and are increasingly recognized as prognostic indicators and potential therapeutic targets [8,12,40]. Here, we found that high expression of P2X1R alone or in combination with P2X7R predicts poor outcome in MIBC patients. P2X1R expression was associated with clinicopathological parameters of tumor progression and shorter median OS and TSS. In agreement, a recent study reported a link between high levels of P2X1R mRNA transcripts and shorter median OS in a cohort of bladder cancer patients from The Cancer Genome Atlas (TCGA) databank [34]. P2X1R expression was also previously implicated in the development of prostate cancer and acute pediatric leukemia [41,42].
We found that the combined P2X1R/P2X7R expression score increased the prognostic power as compared to P2X1R expression alone. High P2X1R/P2X7R scores were associated with reduced survival and an increased risk of distant metastasis. P2X7R is the most extensively studied P2XR subtype in the context of cancer (reviewed in [8,17,26,[43][44][45]). Its role in cancer progression is complex. A distinctive feature of P2X7R is its ability to form a cell-permeabilizing macropore when stimulated by high concentrations of ATP (0.3-0.5 mM), which can induce apoptosis and cell death [46]. In agreement, decreased expression of P2X7R is associated with tumor progression in some cancers [45,47]. However, the fact that P2X7R is widely expressed in many tumors indicates that tumor cells have developed strategies, like the expression of P2X7R isoforms, to circumvent pore formation while maintaining the function of P2X7R as a Ca 2+ membrane channel and stimulator of pro-proliferative metabolic pathways [8,36,37]. Accordingly, there are several studies demonstrating that P2X7R promotes tumor cell growth. For instance, P2X7R stimulation increases the proliferation of ovarian and pancreatic carcinoma, osteosarcoma, neuroblastoma, and leukemia cell lines [19,[48][49][50][51][52][53]. Furthermore, P2X7R overexpression was linked to poor outcome in gastric, liver, lung, colorectal, and renal cell carcinoma [20][21][22][23]54]. Consequently, selective P2X7R antagonists have been shown to inhibit tumor growth and cancer cell migration and invasion in vitro and in vivo [37,38,48,55].
In our MIBC patient cohort, median OS and TSS were not significantly affected by P2X7R expression alone but long-term survival appeared to be negatively impacted by high expression levels of P2X7R. A pro-tumorigenic role of P2X7R in MIBC is further supported by our finding that high combined expression of P2X1R and P2X7R was significantly associated with lymph node and distant metastases and reduced OS, TSS, and DFS. Furthermore, as opposed to P2X1R expression alone, high P2X1R/P2X7R expression was an independent negative predictor of OS and TSS after adjusting for key clinicopathological parameters. In support of these results, we found that simultaneous inhibition of P2X1R and P2X7R impaired proliferation of the highly malignant bladder cancer cell line T24 more strongly than selective blockade of the individual receptors. Like most ionic receptors, P2XRs can form heteromers with other P2XR subunits, which typically differ in their pharmacological and functional properties from the respective homomers [56]. However, hetero-oligomerization of P2X7R has not been described so far [57]. It seems therefore likely that both P2X1R and P2X7R exert their tumor-growth-promoting effects in bladder cancer independently from each other in an additive way. We have previously found that in Jurkat cells, a lymphoblastic leukemia cell line, tonic ATP release, and autocrine stimulation of P2X1R and P2X7R increase Ca 2+ influx and mitochondrial metabolism and promote proliferation [19]. These findings are consistent with substantial proof available for the trophic/growth-promoting effect of P2X7R that has been ascribed to its interaction with multiple pathways in the cellular energy metabolism [26,36,37,58]. The mechanisms behind the pro-tumorigenic effect of P2X1R stimulation are less well studied but might involve similar calcium-dependent pathways.
The results of our in vitro studies support the concept of a tonic growth-promoting stimulation of P2XRs in bladder cancer cells: Proliferation of T24 cells was not only promoted by stimulating cells with the natural P2XR agonist ATP. Proliferation was also impaired by inhibition of ATP release channels, degradation of cell-derived extracellular ATP by apyrase treatment, and blocking of purinergic receptors with antagonists, i.e., by interfering with autocrine purinergic signaling. Furthermore, we found that the highly malignant bladder cancer cell line T24 maintained higher extracellular ATP levels in the cell culture and grew significantly faster than bladder cancer cells of lower malignancy (RT4) or non-transformed immortalized TRT-HU-1 cells. This is consistent with the reduced expression of ATP-converting ectonucleotidases and the reduction in ATP-hydrolyzing capacity described in bladder cancer cells of high malignancy [59,60]. On the other hand, it was previously reported that high concentrations of ATP (1 mM) exert anti-proliferative effects on the highly malignant bladder cancer cell line HT-1376 [61]. The discrepancy with our findings can most likely be explained by cell-type-specific differences and/or the use of higher ATP concentrations that are sufficient to induce macropore opening.
Only a few studies have investigated the role of P2X4R in cancer so far, with sometimes contradictory results. Anti-proliferative effects of P2X4R were described in gastric and breast cancer cells in vitro [62,63]. On the other hand, P2X4R-mediated signaling processes enhanced tumor growth, invasion, and metastasis in breast cancer in vivo [64]. He et al. reported that P2X4R was the predominant P2 receptor in prostate carcinoma cells and demonstrated that inhibition of P2X4R impaired the growth and mobility of cancer cells [65]. In addition, a recently published study demonstrated that P2X4R-dependent signal transduction contributes to the survival of colon carcinoma cells under chemotherapy [66]. Even though we found that P2X4R blockade inhibited T24 cell proliferation in vitro, P2X4R expression was not significantly associated with survival in MIBC patients. One possible explanation is the influence of the tumor microenvironment and tumor-infiltrating lymphocytes on tumor progression in vivo. Purinergic signaling regulates immune cells in multiple ways [67]. The purinergic environment in the TME can promote both antitumor immunity and cancer immune evasion, depending on the expression profiles of purinergic receptors and other components of the purinergic signaling system, such as ectoenzymes, which determine ATP and adenosine concentrations in the TME. We focused in our study on P2X1R, P2X4R, and P2X7R, which are the predominant P2XRs in immune cells [27]. We found that these P2XRs are also expressed in tumor-infiltrating immune cells, with P2X4R in particular showing strong immunoreactivity. It is therefore likely that purinergic signaling affects not only cancer cell growth but also shapes the antitumor immune response against MIBC.
While we restricted our study to MIBC patients, who bear the poorest prognosis, most newly diagnosed bladder cancers are non-muscle-invasive bladder cancers (NMIBCs) [3,68]. NMIBC has a high recurrence rate, imposes a great psychological burden on patients, and is associated with high treatment costs [68]. An accurate risk classification is critical to avoid over-or undertreatment. Analogous to our findings in MIBC, P2X1R/P2X7R expression could be potentially useful in NMIBC as well to predict progression and guide individualized treatment decisions.
While we demonstrated that P2X1R/P2X7R expression is linked to lymph node and distant metastasis, the detailed underlying molecular mechanisms remain unclear. In addition, other purinergic receptor subtypes might affect malignancy factors such as tumor growth, tumor cell migration, invasion, and metastasis in different types of bladder cancer. Of note, high P2X6R expression was recently associated with prolonged survival in a mixed population of low-and high-grade bladder cancer patients [34]. Thus, further studies are needed to explore the potential prognostic and therapeutic value of the different P1 and P2 receptor subtypes in MIBC and NMIBC in more detail and define the role of purinergic signaling in bladder cancer development and progression as well as in the antitumor immune response.
Conclusions
The prognosis of advanced bladder cancer is poor, and current therapies are often associated with severe side effects and significant costs. To avoid overtreatment, there is a high demand for new markers that can accurately assess the prognosis of patients with MIBC [67]. Our results suggest that P2X1R/P2X7R expression scores can be used as reliable and powerful prognostic markers. In addition, P2XR-mediated pathways may present potential targets for new innovative therapeutic strategies in MIBC. In particular, intravesical application of purinergic inhibitors could be explored as a new component of combinatorial therapies. However, further studies are first needed to characterize the role of purinergic signaling in bladder cancer growth and metastasis, as well as in the regulation of tumor-infiltrating immune cells, in more detail.
Informed Consent Statement: Written informed consent was obtained from all subjects involved in the study.
Data Availability Statement: All data generated or analyzed during this study are included in the manuscript. Further inquiries should be directed to the corresponding author.
"Biology",
"Medicine"
] |
Multi-Target Joint Detection and Estimation Error Bound for the Sensor with Clutter and Missed Detection
The error bound is a typical measure of the limiting performance of all filters for the given sensor measurement setting. This is of practical importance in guiding the design and management of sensors to improve target tracking performance. Within the random finite set (RFS) framework, an error bound for joint detection and estimation (JDE) of multiple targets using a single sensor with clutter and missed detection is developed by using multi-Bernoulli or Poisson approximation to multi-target Bayes recursion. Here, JDE refers to jointly estimating the number and states of targets from a sequence of sensor measurements. In order to obtain the results of this paper, all detectors and estimators are restricted to maximum a posteriori (MAP) detectors and unbiased estimators, and the second-order optimal sub-pattern assignment (OSPA) distance is used to measure the error metric between the true and estimated state sets. The simulation results show that clutter density and detection probability have significant impact on the error bound, and the effectiveness of the proposed bound is verified by indicating the performance limitations of the single-sensor probability hypothesis density (PHD) and cardinalized PHD (CPHD) filters for various clutter densities and detection probabilities.
Introduction
The problem of joint detection and estimation (JDE) of multiple targets arises from many applications in surveillance and defense [1], where the number of targets is unknown and the sensor may receive measurements generated randomly from either targets or clutter. There is no information about which measurements are of interest and which are clutter. The aim of multi-target JDE is to determine the number of targets and to estimate their states, if any exist, using prior information as well as a sequence of the sensor measurements. In recent years, multi-target JDE has attracted extensive attention, and many approaches for it have been proposed [2][3][4][5][6][7][8][9][10].
Obviously, it is very necessary to find an error (lower) bound to assess the achievable performance of multi-target JDE algorithms for the given sensor measurements. It is well known that Tichavsky et al. [11] proposed a recursive posterior Cramér-Rao lower bound (CRLB) for evaluating the performance of nonlinear filters when a target was asserted and observed by a sensor. Then, the CRLB was extended to cases in which clutter or missed detection was present in the sensor [12][13][14][15]. Nevertheless, these CRLBs [12][13][14][15] could barely be applied to such a JDE problem, since the CRLB only considers the estimation error of a target state, but not the detection error of the target number (or cardinality).
Background
• Set integral: For any real-valued function ϕ(X) of a finite-set variable X, its set integral is [4]:

∫ ϕ(X) δX = ϕ(∅) + Σ_{n=1}^∞ (1/n!) ∫ ϕ({x^(1), ..., x^(n)}) dx^(1) ··· dx^(n), (1)

where X_n = {x^(i)}_{i=1}^n ⊆ 𝒳_n denotes an n-point set (that is, the cardinality of the set X_n is n) and 𝒳_n denotes the space of X_n. In this paper, we note X_0 = ∅.
• Multi-Bernoulli RFS: A multi-Bernoulli RFS X is a union of M independent Bernoulli RFSs X^(i), X = ∪_{i=1}^M X^(i). Its density is completely described by the parameter Υ = {(r^(i), p^(i))}_{i=1}^M as [6]:

f({x^(1), ..., x^(|X|)}) = Π_{j=1}^M (1 − r^(j)) Σ_{1 ≤ i_1 ≠ ··· ≠ i_|X| ≤ M} Π_{t=1}^{|X|} [r^(i_t) p^(i_t)(x^(t)) / (1 − r^(i_t))], (2)

where |·| denotes the cardinality of a set, r^(i) ∈ (0, 1) denotes the probability of X^(i) ≠ ∅ and p^(i)(x^(i)) denotes the density of x^(i).
• Poisson RFS: An RFS X is Poisson if its density f(X) is:

f(X) = e^(−η) Π_{x∈X} υ(x) = e^(−η) η^|X| Π_{x∈X} f(x), (3)

where υ(x) = η f(x) denotes the intensity function of the Poisson RFS X, η is the average number of elements in X and f(x) is the density of a single element x ∈ X.
• Second-order OSPA distance: The OSPA distance of order p = 2 between a set X with |X| = m and its estimate X̂ with |X̂| = n (for m ≤ n; otherwise swap the arguments) is [19]:

e(X, X̂) = [ (1/n) ( min_{π∈Π_n} Σ_{i=1}^m min(c, ||x^(i) − x̂^(π(i))||_2)^2 + c^2 (n − m) ) ]^(1/2), (4)

where Π_n denotes the set of permutations on {1, 2, ..., n}, c > 0 denotes the cut-off parameter, max(·) or min(·) denotes the maximization or minimization operation and ||·||_2 denotes the two-norm. The OSPA metric is comprised of two components, each separately accounting for "localization" and "cardinality" errors between two sets. The localization error arises from the estimates paired with the nearest truths, while the cardinality error arises from the unpaired estimates. Schuhmacher et al. [19] have proven that the OSPA distance with p ∈ [1, ∞) and c > 0 is indeed a metric, so it can be used as a principled performance measure (a computational sketch of this metric is given after this list).
• Information inequality and CRLB: Given a joint probability density f(x, z) on 𝒳 × 𝒵, under regularity conditions and the existence of ∂² log f(x, z)/∂x_i∂x_j, the information inequality states that the mean square error E_f[(x̂_l(z) − x_l)^2] of any estimator is bounded from below by an expression involving the inverse of the Fisher information matrix J and the bias of the estimator (Equation (5)) [20,21], where x̂(z) denotes an estimate of the L-dimensional vector x based on z, x_l and x̂_l(z) are, respectively, the l-th components of x and x̂(z), l = 1, ..., L, the notation E_f means the expectation with respect to the density f and J is known as the L × L Fisher information matrix:

[J]_{i,j} = −E_f[∂² log f(x, z)/∂x_i∂x_j], (6)

where [J]_{i,j} denotes the element on the i-th row and j-th column of the matrix J.
For the particular case in which the estimator x̂(z) is unbiased (that is, E_f[x̂(z)] = x), the information inequality of Equation (5) reduces to:

E_f[(x̂_l(z) − x_l)^2] ≥ [J^(−1)]_{l,l}, (7)

which is a result known as the CRLB. The Fisher information matrix J in Equation (7) is also computed by Equation (6).
Note that the ordinary information inequality of Equation (5) holds without the unbiasedness requirement on the estimator x̂(z). However, unbiasedness is critical in the CRLB of Equation (7).
Explanation: In the current set up of this paper, our attention is restricted to the unbiased estimator of multi-target states. Our future work will study the extension of the proposed bound to the biased estimator by using the ordinary information inequality of Equation (5).
Moreover, Equation (5) or Equation (7) holds with equality only under a very restrictive condition. In [21], Poor concludes that, within regularity, the information lower bound is achieved (that is, the "=" in Equation (5) or Equation (7) holds) by x̂(z) if and only if x̂(z) is in a one-parameter exponential family (e.g., the linear Gaussian models for target dynamics and sensor observation described in [11] for achieving the CRLB). More details about this can be found in [21].
• RFS-based multi-target dynamics and sensor observation models: Let x_k ∈ 𝒳_k denote the state vector of a target and X_k the set of multi-target states at time k, where 𝒳_k is the state space of a target. The multi-target dynamics is modeled by:

X_k = [∪_{x_{k−1} ∈ X_{k−1}} Ψ_{k|k−1}(x_{k−1})] ∪ Γ_k, (8)

where Ψ_{k|k−1}(x_{k−1}) is the set evolved from the previous state x_{k−1} and Γ_k is the set of spontaneous births.
Let z_k ∈ 𝒵_k denote a measurement vector and Z_k the set of measurements received by a sensor at time k, where 𝒵_k is the sensor measurement space. The single-sensor multi-target observation is modeled by:

Z_k = [∪_{x_k ∈ X_k} Θ_k(x_k)] ∪ K_k, (9)

where Θ_k(x_k) is the set of the measurement generated by the state x_k, which is empty when the target is missed (this occurs with probability 1 − p_D,k(x_k)), and K_k is the clutter set, which is modeled as a Poisson RFS with density:

f(K_k) = e^(−λ_k) Π_{z_k ∈ K_k} κ_k(z_k), κ_k(z_k) = λ_k f_c,k(z_k), (10)

where κ_k(z_k) is the clutter intensity, λ_k is the average clutter number and f_c,k(z_k) is the density of a clutter.
The transition model in Equation (8) jointly incorporates motion, birth and death for multiple targets, while the sensor observation model in Equation (9) jointly accounts for detection uncertainty and clutter. Assume that the RFSs constituting the unions in Equations (8) and (9) are mutually independent. The multi-target JDE at time k is to derive the estimated state set X̂_k(Z_1:k) using the collection Z_1:k = {Z_1, ..., Z_k} of all sensor observations up to time k. The paper aims to derive a performance limit for multi-target joint detectors-estimators for the observation of a single sensor with clutter and missed detection. The performance limit is measured by the bound of the average error between X_k and X̂_k(Z_1:k).
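As referenced in the OSPA bullet above, the following minimal Python sketch computes the second-order OSPA distance of Equation (4); it replaces the explicit minimization over permutations with the Hungarian algorithm, and the example sets are made up for demonstration.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def ospa2(X, Y, c):
        # Second-order (p = 2) OSPA distance between point sets X (m x d)
        # and Y (n x d) with cut-off c; the optimal pairing is found with
        # the Hungarian algorithm instead of enumerating permutations.
        m, n = len(X), len(Y)
        if m == 0 and n == 0:
            return 0.0
        if m == 0 or n == 0:
            return float(c)
        if m > n:                       # ensure m <= n (OSPA is symmetric)
            X, Y, m, n = Y, X, n, m
        d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
        d = np.minimum(d, c)            # distances cut off at c
        row, col = linear_sum_assignment(d ** 2)
        loc = (d[row, col] ** 2).sum()  # localization component
        card = c ** 2 * (n - m)         # cardinality component
        return float(np.sqrt((loc + card) / n))

    # Made-up example: two true targets vs. three estimates
    X = np.array([[0.0, 0.0], [10.0, 10.0]])
    Y = np.array([[0.5, 0.0], [10.0, 9.0], [40.0, 40.0]])
    print(ospa2(X, Y, c=10.0))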
Single-Sensor Multi-Target JDE Error Bounds Using Multi-Bernoulli or Poisson Approximation
At time k, the RFS-based mean square error (MSE) between X_k and X̂_k(Z_1:k) is defined as:

σ_k^2 = ∫∫ e_k^2(X_k, X̂_k(Z_1:k)) f_k(X_k, Z_k | Z_1:k−1) δX_k δZ_k, (11)

where e_k(X_k, X̂_k(Z_1:k)) denotes the error metric between X_k and X̂_k(Z_1:k), which is defined by the second-order OSPA distance in Equation (4), f_k(X_k, Z_k | Z_1:k−1) denotes the density of (X_k, Z_k) given Z_1:k−1 and γ_k(Z_k | X_k) = f_k(Z_k | X_k) denotes the likelihood of the total sensor measurement process. At time k, given the multi-target state set X_k^n and the sensor measurement set Z_k^m, all association hypotheses can be represented as a function from the target index set {1, ..., n} to the sensor measurement index set {0, 1, ..., m} [2]. Define:

θ_{n,m}: {1, ..., n} → {0, 1, ..., m}

as the association hypothesis function with clutter and missed detection. That is, the t-th target x_k^(t) with θ_{n,m}(t) > 0 generates the sensor measurement z_k^(θ_{n,m}(t)), t = 1, 2, ..., n, while θ_{n,m}(t) = 0 means that the t-th target is missed. θ_{n,m} satisfies the property that θ_{n,m}(t) = θ_{n,m}(t′) > 0 implies t = t′.
Then, according to the sensor observation model in Equation (9), the likelihood γ_k(Z_k^m | X_k^n) with Poisson clutter and missed detection can be denoted as [2]:

γ_k(Z_k^m | X_k^n) = e^(−λ_k) κ^(Z_k^m) Σ_{θ_{n,m}} Π_{t=1}^n G_k^z(x_k^(t), θ_{n,m}(t)),

where the summation is taken over all association hypotheses θ_{n,m}, and G_k^z is defined as:

G_k^z(x_k^(t), θ_{n,m}(t)) = p_D,k(x_k^(t)) g_k(z_k^(θ_{n,m}(t)) | x_k^(t)) / κ_k(z_k^(θ_{n,m}(t))) if θ_{n,m}(t) > 0, and 1 − p_D,k(x_k^(t)) if θ_{n,m}(t) = 0,

while the notation κ^Z denotes:

κ^Z = Π_{z ∈ Z} κ_k(z).

For deriving the error bound for multi-target JDE, the following two conditions must be satisfied as in [16]:
1. MAP detection criterion: This is applied to determine the number of targets: given a measurement set Z_k at time k, the cardinality of the estimated state set X̂_k(Z_k) is obtained as the maximum of the posterior probabilities P_k(|X_k| = n | Z_1:k) (see the short sketch below):

|X̂_k(Z_k)| = arg max_n P_k(|X_k| = n | Z_1:k).

The reason for the use of the MAP detection rule will be clearly explained later in Remark 1 after Theorems 1 and 2.
2. Unbiased estimation criterion: This is a necessary condition for applying the CRLB of Equation (7) in the proof of Theorems 1 and 2.
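A minimal Python illustration of the MAP detection criterion: the estimated cardinality is simply the mode of the posterior cardinality distribution, whose values below are made up for demonstration.

    import numpy as np

    # Made-up posterior cardinality distribution P(|X_k| = n | Z_1:k), n = 0..4;
    # the MAP detection rule picks its mode as the estimated target number.
    card_posterior = np.array([0.05, 0.15, 0.45, 0.25, 0.10])
    n_hat = int(np.argmax(card_posterior))
    print(n_hat)  # 2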
Next, we derive the proposed bound by using multi-Bernoulli or Poisson approximation for multi-target Bayes recursion, which are stated in Assumptions A.1 and A.2, respectively.
• Assumption A.1: At time k, the set Γ_k of spontaneous births is a multi-Bernoulli RFS with the parameter Υ_Γ,k (in general, Υ_Γ,k is known a priori). Then, the predicted and posterior multi-target densities f_{k|k−1}(X_k | Z_1:k−1) and f_k(X_k | Z_1:k) are approximated as the multi-Bernoulli densities with parameters Υ_{k|k−1} and Υ_k, respectively. Specifically, the parameter of a multi-Bernoulli RFS that approximates the multi-target RFS is propagated under this assumption. The recursions for Υ_{k|k−1} and Υ_k have been presented in [6].
• Assumption A.2: At time k, the set Γ_k of spontaneous births is a Poisson RFS with the intensity υ_Γ,k(x_k). Then, the predicted and posterior multi-target densities f_{k|k−1}(X_k | Z_1:k−1) and f_k(X_k | Z_1:k) are approximated as the Poisson densities with intensities υ_{k|k−1}(x_k) and υ_k(x_k), respectively. Specifically, the intensity of a Poisson RFS that approximates the multi-target RFS is propagated under this assumption. The recursions for υ_{k|k−1}(x_k) and υ_k(x_k) have been presented in [4].
Theorem 1. Suppose that Assumption A.1 holds; at time k, given the predicted multi-target multi-Bernoulli parameter Υ_{k|k−1}, the error for joint MAP detection and unbiased estimation of multiple targets with the state model in Equation (8) and the sensor observation model in Equation (9) is bounded from below; the explicit expression of the bound, derived in Appendix A, involves ξ_k^n(Z_k^m | Z_1:k−1), which denotes a function of Z_k^m and n given Z_1:k−1.
Theorem 2. Suppose that Assumption A.2 holds; at time k, given the predicted intensity υ_{k|k−1}(x_k), the corresponding bound for joint MAP detection and unbiased estimation holds in an analogous form. The proofs of Theorems 1 and 2 can be found in Appendices A and B. In the following, we refer to the bound in Theorem 1 or 2 as the multi-Bernoulli approximated bound (MBA-B) or the Poisson approximated bound (PA-B), respectively.
• Remark 1: It is well-known that the lower bound is independent of the specific estimation methods.
However, the use of the MAP detection rule is necessary in deriving the bounds in Theorems 1 and 2. The reasons are as follows.
First, we have known that the error metric e_k(X_k, X̂_k(Z_1:k)) in Equation (11) is the second-order OSPA distance in Equation (4). Obviously, the estimated target number has to be considered in the OSPA distance. At time k, the estimated target number depends on the measurement set Z_k received by the sensor. We assume that if Z_k ∈ 𝒵_k^n̂, which is a subspace of the measurement space 𝒵_k, then the target number estimated by the detector is n̂ (n̂ = 0, 1, ..., N). Therefore, to compute the MSE σ_k^2 in Equation (11), we have to partition the measurement space 𝒵_k into the regions 𝒵_k^0, 𝒵_k^1, ..., 𝒵_k^N, which correspond to all possible estimated target numbers n̂ = 0, n̂ = 1, ..., n̂ = N, respectively. In addition, 𝒵_k^0, 𝒵_k^1, ..., 𝒵_k^N are mutually disjoint and cover 𝒵_k. In the proof of Theorems 1 and 2, to obtain the bound on σ_k^2 in Equation (A13) (the extended form of the MSE σ_k^2 in Equation (11)), we use the MAP detector instead of characterizing the detector that minimizes σ_k^2 in Equation (11) and its intricate interconnection with the estimator that may jointly achieve a lower σ_k^2. A detailed analysis is presented in [16] to illustrate the complicated dependency of the detector and estimator for minimizing the MSE σ_k^2. As a result, without the MAP detector restriction, it is nearly impossible to characterize the joint detector-estimator that minimizes the MSE σ_k^2 in Equation (11) due to their extremely complex interrelationship in determining the number of targets and estimating the states of existing targets.
In summary, with the MAP detection constraint, the estimated target number at time k can be determined just by the detector (that is, independently of the estimator). However, this may make the minimum MSE defined by Equation (11) unachievable. Therefore, imposing the MAP constraint can be regarded as an approximate method to obtain the proposed JDE bounds. In our future work, we will study the JDE error bound without the MAP detection constraint.
• Remark 2: The information matrices J_k^((t),n,n̂,m) involved in the bounds can be computed recursively as in [11]. The recursion of J_k^((t),n,n̂,m) depends on the propagation of the parameter Υ_{k|k−1} or the intensity υ_{k|k−1}(x_k) of the multi-Bernoulli or Poisson RFS that approximates the predicted multi-target RFS.
• Remark 3: In the special case of no clutter or missed detection, we have K_k = ∅ and p_D,k(·) = 1 for the sensor observation model in Equation (9). The numbers of estimated targets, true targets and measurements are obviously equal in this case, |X̂_k| = |X_k| = |Z_k|. As a result, multi-target JDE reduces to multi-target state estimation only (that is, target detection no longer exists here, and so, the restriction of MAP detection can be omitted) using the sensor measurement. Moreover, given the multi-target state set X_k^n, the total likelihood reduces to:

γ_k(Z_k^n | X_k^n) = Σ_{π∈Π_n} Π_{t=1}^n g_k(z_k^(π(t)) | x_k^(t)),

and the second-order OSPA distance reduces to:

e_k(X_k, X̂_k) = [ (1/n) min_{π∈Π_n} Σ_{t=1}^n ||x_k^(t) − x̂_k^(π(t))||_2^2 ]^(1/2),

because there is no need to consider the cut-off c for cardinality mismatches here. Only for this special case, a theoretically rigorous (that is, without multi-Bernoulli or Poisson approximation to multi-target Bayes recursion) single-sensor multi-target error bound was derived in [18] using a PCRLB-like recursion.
Numerical Examples
A maximum of 10 targets appears on a two-dimensional region S = [−50, 50] × [−50, 50] (in m) with various births and deaths. The targets are observed by a single sensor with clutter and missed detection throughout a surveillance period of T = 25 time steps. The sensor sampling interval is ∆t = 1 s. At time k, the state of a target is x_k = [x_k, ẋ_k, ẍ_k, y_k, ẏ_k, ÿ_k]^T, where [x_k, y_k]^T, [ẋ_k, ẏ_k]^T and [ẍ_k, ÿ_k]^T denote the position, velocity and acceleration components along the x axis and y axis, respectively. The state transition density f_{k|k−1}(x_k | x_{k−1}) is assumed to be:

f_{k|k−1}(x_k | x_{k−1}) = N(x_k; F_k x_{k−1}, Q_k),

where N(·; m, Q) denotes the density of a Gaussian distribution with mean m and covariance matrix Q and F_k and Q_k are the state evolution matrix and process noise covariance matrix at time k, respectively. Assuming that the kinematics of each target is governed by the constant acceleration (CA) model [22], F_k and Q_k take the standard block forms built with the Kronecker product ⊗, where I_n is the identity matrix of dimension n and q_CA = 0.01 m/s² is the standard deviation of process noise, i.e., acceleration. Target births and deaths occur at random instances and states. The probability of target survival is p_S,k(·) = 0.9. The state of a target birth satisfies one of the distributions p^(i). The sensor collects range and bearing measurements z_k = [ρ_k, o_k]^T, where ρ_k, o_k are, respectively, the range and bearing measurements of the target and R_k = diag(ς²_ρ, ς²_o) is the sensor measurement noise covariance matrix. In this example, we assume that ς_ρ = 2.5 m, ς_o = 0.1 rad. The detection probability of the sensor is p_D,k(·) = p_D. The average clutter number and the density of the clutter are λ_k = λ and f_c,k(z_k) = U(z_k; S), where U(·; S) = 1/10^4 denotes the density of a uniform distribution over the region S.
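To make the simulation setup concrete, the following Python sketch generates one target trajectory under a discrete constant-acceleration model and noisy range-bearing measurements. The specific forms of F_k and Q_k below are an assumption (a common CA discretization consistent with the Kronecker structure described in the text), since the paper's exact matrices are not reproduced here, and the initial state is made up.

    import numpy as np

    dt = 1.0     # sensor sampling interval (s)
    q_ca = 0.01  # standard deviation of the acceleration process noise

    # Per-axis constant-acceleration blocks (assumed standard discrete form);
    # the full 6 x 6 matrices use the Kronecker product as in the text.
    F1 = np.array([[1.0, dt, 0.5 * dt**2],
                   [0.0, 1.0, dt],
                   [0.0, 0.0, 1.0]])
    Q1 = q_ca**2 * np.array([[dt**4 / 4, dt**3 / 2, dt**2 / 2],
                             [dt**3 / 2, dt**2,     dt],
                             [dt**2 / 2, dt,        1.0]])
    F = np.kron(np.eye(2), F1)
    Q = np.kron(np.eye(2), Q1)

    rng = np.random.default_rng(0)
    x = np.array([10.0, 1.0, 0.0, -20.0, 0.5, 0.0])  # [x, vx, ax, y, vy, ay]
    for k in range(25):
        x = F @ x + rng.multivariate_normal(np.zeros(6), Q)
        # Range-bearing measurement with noise (sigma_rho = 2.5 m, sigma_o = 0.1 rad)
        rho = np.hypot(x[0], x[3]) + rng.normal(0.0, 2.5)
        o = np.arctan2(x[3], x[0]) + rng.normal(0.0, 0.1)
        print(f"k={k + 1:2d}  rho={rho:7.2f} m  bearing={o:6.3f} rad")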
For Assumption A.1, the parameter for the multi-Bernoulli set Γ_k of spontaneous births is Υ_Γ,k. For Assumption A.2, the intensity for the Poisson set Γ_k of spontaneous births is υ_Γ,k(x_k). Then, the proposed bound (MBA-B or PA-B) in this example can be easily obtained by substituting these parameters into Theorem 1 or 2.
The second partial derivative involved in Equation (20) can be evaluated from the Gaussian models above. From Figure 1, it can be seen that both bounds are asymptotically convergent for various p_D and λ. As the number of sensor measurement scans increases, they get closer. The bounds for the case p_D = 1, λ = 50 are the smallest of the three cases. However, it is somewhat surprising that the bounds for the case p_D = 0.2, λ = 250 are lower than the bounds for the case p_D = 0.6, λ = 150. Moreover, the bigger λ becomes for given p_D, or the lower p_D becomes for given λ, the longer the convergence time of the bounds seems to be. Figure 1 indicates that the clutter density and detection probability of the sensor do have a significant impact on the proposed bound.
To verify the effectiveness of the proposed bounds, we compare the steady-state bounds with the JDE errors of the single-sensor PHD and CPHD filters, which are the average of 200 MC runs of their time-averaged OSPA distances between the true and estimated state sets. The comparison results are presented in Figure 2.
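For reference, the time-averaged errors above are built from the OSPA distance between the true and estimated state sets at each scan. The sketch below is a minimal implementation of the standard OSPA definition (order p, cut-off c); the toy sets and parameter values are invented for illustration, and scipy's Hungarian solver stands in for the optimal sub-pattern assignment.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ospa(X, Y, c=10.0, p=2):
    """OSPA distance between finite sets X, Y (lists of state vectors)."""
    m, n = len(X), len(Y)
    if m == 0 and n == 0:
        return 0.0
    if m > n:                         # ensure |X| <= |Y|
        X, Y, m, n = Y, X, n, m
    if m == 0:
        return c                      # all of Y unmatched: distance saturates at c
    # pairwise distances, cut off at c
    D = np.array([[min(np.linalg.norm(x - y), c) for y in Y] for x in X])
    row, col = linear_sum_assignment(D ** p)      # optimal sub-pattern assignment
    loc = (D[row, col] ** p).sum()                # localization component
    card = (n - m) * c ** p                       # cardinality-mismatch penalty
    return ((loc + card) / n) ** (1.0 / p)

# toy example: the missed target is penalized at the cut-off c
truth = [np.array([0.0, 0.0]), np.array([5.0, 5.0])]
est   = [np.array([0.3, -0.2])]
print(ospa(truth, est, c=10.0, p=2))
```

In the no-clutter, no-missed-detection special case of Remark 3, the cardinality term vanishes and only the localization component remains.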
From Figure 2, we can obtain the following observations.
1. The proposed bound does not always increase with λ for given p_D, or decrease with p_D for given λ. This is because the increase of λ or p_D (when p_D < 1 or λ > 0) has two contrary effects: it reduces the possibility of missed targets and increases the possibility of false targets. If the bound is dominated by the former, it decreases with λ or p_D; otherwise, it increases with λ or p_D. Moreover, PA-B is a little higher than MBA-B when λ is relatively large or p_D is relatively small, although the two are very close in general. A possible reason is that the multi-Bernoulli assumption (Assumption A.1) slightly outperforms the Poisson assumption (Assumption A.2) in approximating the multi-target Bayes recursion under low signal-to-noise-ratio (SNR) conditions.

2. Although the JDE errors of the single-sensor PHD and CPHD filters are a little higher than the proposed bound, all of them remain close across λ and p_D. The extra errors of the two filters are generated by the first-order moment approximations of the posterior multi-target density and by the clustering processes involved in their particle implementations for state extraction. Figure 2 also shows that the CPHD filter outperforms the PHD filter; the reason is that the former propagates the cardinality distribution and thus has more stable target-number estimation than the latter.

3. The bigger λ becomes for given p_D, or the lower p_D becomes for given λ, the bigger the gaps between the errors of the two filters and the proposed bound become. This is because the aforementioned approximation errors of the two filters increase as λ grows or p_D falls. However, the maximum relative errors of the PHD and CPHD filters, which appear to occur in the case of p_D = 0.2 and λ = 300, do not exceed 15% and 8% of MBA-B, or 12% and 5% of PA-B, in any case, respectively. In fact, the total average relative errors of the two filters are about 7% and 4% of MBA-B, and about 6% and 3% of PA-B, across the various λ and p_D, respectively.
Finally, the comparison results in Figure 2 show that for various clutter densities and detection probabilities of the sensor, the proposed bounds are able to provide an effective indication of performance limitations for the two single-sensor multi-target JDE algorithms.
Conclusions
Within the RFS framework, we develop two multi-target JDE error bounds using the measurements of a single sensor with clutter and missed detection. The multi-Bernoulli and Poisson approximations to the multi-target Bayes recursion are used, respectively, in deriving the results of the paper. The proposed bounds are based on the OSPA distance rather than the Euclidean distance. The simulation results show that the clutter density and detection probability of the sensor significantly affect the bounds, and they verify the effectiveness of the bounds by indicating the performance limitations of the single-sensor PHD and CPHD filters in various sensor measurement environments.
Our future work will focus on the following four aspects:
1. Extending the results to the case of multiple sensors;
2. Extending the results to the case of the biased estimator by using the ordinary information inequality of Equation (5)
| 5,327 | 2016-01-28T00:00:00.000 | [
"Mathematics"
] |
The Roles of the Cortical Motor Areas in Sequential Movements
The ability to learn and perform a sequence of movements is a key component of voluntary motor behavior. During the learning of sequential movements, individuals go through distinct stages of performance improvement. For instance, sequential movements are initially learned relatively quickly and later more slowly. Over multiple sessions of repetitive practice, performance of the sequential movements can be further improved to the expert level and maintained as a motor skill. How the brain binds elementary movements together into a meaningful action has been a topic of much interest. Studies in humans and non-human primates have shown that a brain-wide distributed network is active during the learning and performance of skilled sequential movements. The current challenge is to identify the unique contribution of each area to the complex process of learning and maintenance of skilled sequential movements. Here, I bring together recent progress in the field to discuss the distinct roles of the cortical motor areas in this process.
INTRODUCTION
The production of sequential movements is a fundamental aspect of voluntary behavior. Many of our daily actions, such as playing a musical instrument, handwriting, typing, etc., depend on attaining a high level of skill in the performance of sequential movements. The performance of sequential movements can be acquired and improved to the expert level through extensive practice (Rosenbaum, 2010). Such performance can be maintained as a motor skill. How the brain binds elementary movements together into skilled sequential movements has been a fundamental problem of systems neuroscience.
In this review, I will focus on the roles of the supplementary motor area (SMA), the dorsal premotor cortex (PMd), and the primary motor cortex (M1) in skilled sequential movements, i.e., those acquired through repetitive practice and internally generated from long-term memory. I will especially focus on spatial sequence tasks, as this type of task has been used in non-human primate studies after extensive practice (Table 1). The current challenge is to identify the unique contribution of each area to the complex process of acquisition and retention of sequential movements. Interventional studies in non-human primates represent a valuable complement to neuroimaging studies, because these methods can directly address the causal relationship between activity in a brain area and behavior. I will aim to integrate recent discoveries regarding the cortical control of skilled sequential movements at multiple levels of complexity by highlighting interventional (e.g., inactivation) studies in non-human primates.
LEARNING OF SKILLED SEQUENTIAL MOVEMENTS
An important characteristic of learning skilled sequential movements is that individuals seem to go through several learning stages (Fitts and Posner, 1967; Hikosaka et al., 2002; Doyon et al., 2003; Doyon and Benali, 2005; Rosenbaum, 2010; Dayan and Cohen, 2011; Schmidt and Lee, 2011). An improvement of performance can be detected as changes in speed and accuracy during learning (Figure 1). During the initial learning stage of skilled sequential movements, a subject improves performance relatively fast. The subject tends to make a large number of errors and highly variable movements, with a lack of consistency from trial to trial, but achieves large performance improvements (Fitts and Posner, 1967; Rosenbaum, 2010; Dayan and Cohen, 2011; Schmidt and Lee, 2011). Later, the subject improves performance more slowly over multiple sessions of practice, making fewer and smaller errors. The durations of the learning stages are highly specific to the tasks, subjects, and definitions. For example, the fast stage of learning to perform a sequential finger opposition task was defined as an initial within-session improvement phase in a human study (Karni et al., 1995, 1998), whereas the fast stage of learning to play an advanced piano piece may last months. Monkeys performing a sequential reaching task rapidly improved their response speed over about 50 days (Matsuzaka et al., 2007). Despite timing differences, the learning curves on different skill tasks follow the same pattern of initially fast, then slowing performance improvements with further practice. Through extensive and repetitive training, subjects can further improve their performance to the expert level. Then, the skill becomes almost automatic, with very small variability and small improvement (Fitts and Posner, 1967; Rosenbaum, 2010; Schmidt and Lee, 2011).
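The "fast early, slow late" profile described above is often summarized by a power-law law of practice. The sketch below fits such a curve to made-up response-time data, purely to illustrate the shape of the learning curves discussed here; the functional form, parameters, and data are assumptions, not values from any of the cited studies.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(trial, a, b, c):
    """Law of practice: response time falls as RT = a * trial^(-b) + c."""
    return a * trial ** (-b) + c

trials = np.arange(1, 51)                         # 50 practice sessions
rt_true = 2.0 * trials ** (-0.4) + 0.5            # hypothetical underlying curve (s)
rt_noisy = rt_true + np.random.default_rng(1).normal(0, 0.05, rt_true.size)

(a, b, c), _ = curve_fit(power_law, trials, rt_noisy, p0=(1.0, 0.5, 0.5))
early = power_law(1, a, b, c) - power_law(10, a, b, c)
late = power_law(10, a, b, c) - power_law(50, a, b, c)
print(f"RT drops {early:.2f} s over trials 1-10, but only {late:.2f} s over trials 10-50")
```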
The progress in the learning of sequential movements is associated with a shift in functional MRI (fMRI) activation from the anterior regions to the posterior regions of the brain (Grafton et al., 1994;Sakai et al., 1998;Coynel et al., 2010). The change in fMRI activation is shown to be associated with improvement in the task performance during learning (Bassett et al., 2015;Reddy et al., 2018). This suggests that the extent of contribution of each area may change during learning. Hikosaka et al. (2002) proposed that a subject learns the spatial features of sequences during the fast learning stage and then learns the motor features of the sequences during the slow learning stage. In the following sections, I will discuss the contributions of the SMA, PMd, and M1 to the learning and performance of spatial sequence tasks and how the skilled sequential movements are maintained after extensive practice.
SUPPLEMENTARY MOTOR AREA
Classically, the preparation for and generation of sequential movements have been thought to depend on the supplementary motor area (SMA) and the pre-SMA (Roland et al., 1980; Brinkman, 1984; Goldberg, 1985; Dick et al., 1986; Halsband, 1987; Halsband et al., 1993; Tanji and Shima, 1994, 1996a,b; Grafton et al., 1995; Shima et al., 1996; Tanji et al., 1996; Gerloff et al., 1997; Picard and Strick, 1997, 2001; Nakamura et al., 1998; Shima and Tanji, 1998, 2000; Tanji, 2001; Hikosaka et al., 2002). Human patients with lesions that include these areas had deficits in performing self-initiated movements, sequential movements, and/or speech (Goldberg, 1985; Dick et al., 1986; Halsband et al., 1993). In agreement with the reports on human patients, studies in non-human primates clearly demonstrated the contributions of the SMA and pre-SMA to the learning or performance of sequence tasks composed of non-spatial movements (Brinkman, 1984; Halsband, 1987; Tanji and Shima, 1994, 1996a,b; Shima et al., 1996; Tanji et al., 1996; Shima and Tanji, 1998, 2000, 2006; Tanji, 2001; Table 2). Neural recordings in monkeys demonstrated that neurons in the SMA and the pre-SMA respond preferentially to a specific order of movements rather than to a single movement (Tanji and Shima, 1994, 1996a; Shima and Tanji, 2000). The inactivation of these areas in monkeys demonstrated the contributions of the SMA and pre-SMA to the performance of a sequence composed of non-spatial movements (Shima and Tanji, 1998). When the SMA or pre-SMA was bilaterally inactivated by injecting muscimol (a GABA_A agonist), the inactivation disrupted the monkey's performance of memory-guided sequences of arm movements, leaving the execution of simpler, single movements unaffected (Shima and Tanji, 1998).
On the other hand, non-human primate studies using spatial sequence tasks suggested that spatial and non-spatial sequences may be learned and controlled by different cortical circuits (Tanji, 2001; Ohbayashi et al., 2016). Even though neural activity reflected a specific order of movements in a sequence in both types of tasks, the results of an inactivation study using a spatial sequence task differed from the results using the non-spatial sequence task. Neurons in the SMA and the pre-SMA respond preferentially to a specific order of movements rather than to a single movement during the performance of internally generated spatial sequence tasks (Clower and Alexander, 1998; Nakamura et al., 1998; Lee and Quessy, 2003). The activity of SMA neurons reflected a particular serial position in a sequence (Lee and Quessy, 2003). The pre-SMA is particularly active during the learning of new sequences of movements, but not during the production of movement components (e.g., reaching) (Hikosaka et al., 1996). Furthermore, the 2-deoxyglucose (2DG) signals of these areas after extensive practice (>12 months) reflected the effect of long-term training on spatial sequences (Picard and Strick, 1997, 2003). The 2DG signal is suggested to be associated with presynaptic activity at both excitatory and inhibitory synapses and reflects the metabolic activity of synapses (discussed in Picard and Strick, 2003). In these studies, the monkeys were trained on remembered sequential movements or visually guided reaching for years (>12 months). After extensive practice, both the SMA and the pre-SMA displayed substantial uptake of 2DG in association with visually guided reaching movements (Picard and Strick, 2003). On the other hand, 2DG incorporation in the SMA and pre-SMA was relatively low in the case of remembered sequential reaching movements (Picard and Strick, 1997). The differential metabolic activities of the pre-SMA and SMA in the two tasks suggested that these areas may be reorganized by overtraining on the remembered sequences after extensive practice. Therefore, the results from neural recordings and 2DG showed that neurons in both the SMA and pre-SMA may play roles in the learning and performance of spatial sequence tasks.
| Study | Area | Training duration | Method | Findings |
|---|---|---|---|---|
| - | - | - | - | Neurons exhibited anticipatory activity related to specific sequences. After muscimol injection, the number of errors in the sequential movements increased. |
| Ohbayashi, 2020 | M1 | >100 days | Anisomycin injection, muscimol injection | Anisomycin injection disrupted the performance of the memory-guided sequential reaching, but not the visually guided reaching. Muscimol injection disrupted the performance of both the memory-guided sequential reaching and visually guided reaching. |
| Matsuzaka et al., 2007 | M1 | >2 years | Neural recording | ∼40% of the task-related neurons were differentially active during the memory-guided sequential reaching and visually guided reaching. |
| Picard et al., 2013 | M1 | ∼1-6 years | 2DG, neural recording | 2DG uptake was lower in monkeys that performed sequential reaching guided by memory compared with the 2DG uptake in monkeys that performed visually guided reaching. 2DG uptake was lower in monkeys that were trained for a longer duration. |

FIGURE 1 | Schematic diagram of the learning of skilled sequential movements. During learning, individuals seem to go through several learning stages. The subject improves the performance of sequential movements relatively fast initially and, later, more slowly over multiple sessions of practice. Through repetitive practice, subjects improve their speed and accuracy of sequential movements. The performance of sequential movements can be improved to the expert level through extensive practice and can be maintained as a motor skill.

Nevertheless, an inactivation study using spatial sequences provided results different from the inactivation study using
non-spatial sequences. Hikosaka's group trained monkeys to learn a spatial sequence of reaching movements to targets (Nakamura et al., 1999). In their sequence task, reaching movements in space were required, so that the selection of spatial end points in sequential reaching was a critical factor to control. When the pre-SMA was bilaterally inactivated by injecting muscimol, the inactivation disrupted the monkeys' learning of a new sequence of movements, but not the performance of the memorized sequence of movements. Interestingly, local inactivation of the SMA did not significantly disrupt the learning or the performance of the sequential reaching task (Nakamura et al., 1999). This result suggests that the SMA may not be critically involved in the performance of this type of spatial sequence at the tested stage of learning, even though its neurons exhibited sequence-related activity. Clearly, the inactivation results highlight the most unique contribution of the targeted area to the sequence task. Taking these findings together, the pre-SMA seems to be more critically involved than the SMA in the cognitive aspects of acquiring a novel sequence of movements (Hikosaka et al., 1996; Shima et al., 1996; Nakamura et al., 1998, 1999; Shima and Tanji, 2000, 2006). The SMA seems to be involved in the temporal organization of multiple non-spatial movements into a sequence (Boecker et al., 1998; Shima and Tanji, 1998; Tanji, 2001; Nachev et al., 2008; Orban et al., 2010; Wiener et al., 2010; Cona and Semenza, 2017). On the other hand, for spatial sequence tasks, even though the neural activity of SMA neurons reflected aspects of sequences, its role is still debatable and needs to be further investigated. The results of the spatial sequence task suggested that the effect of muscimol injection in the SMA could be compensated for by another motor area, possibly the PMd, which is anatomically connected with both the SMA and M1. In the next section, I will discuss the role of the PMd in the performance of internally generated sequential movement tasks.
DORSAL PREMOTOR CORTEX
The dorsal premotor cortex (PMd) has been regarded in many studies as an area for the visual guidance of motor behavior (Kalaska and Crammond, 1995; Johnson et al., 1996; Wise et al., 1997; Hoshi and Tanji, 2007; Averbeck et al., 2009). Moreover, the PMd is suggested to be involved in the cognitive aspects of visually guided motor tasks, such as mental rehearsal and decision making (Cisek and Kalaska, 2002, 2004; Pesaran et al., 2008). Considerable evidence suggests that the PMd is specifically involved in the guidance of movements based on memorized arbitrary sensorimotor associations (Passingham, 1988; Mitz et al., 1991; Kurata and Hoffman, 1994). Firstly, lesions or inactivation of the PMd produce deficits on tasks that rely on the associations between an arbitrary visual cue (e.g., the color or shape of a visual stimulus) and a movement (Halsband and Passingham, 1982; Passingham, 1988; Kurata and Hoffman, 1994). For example, Kurata and Hoffman trained monkeys to learn a visuo-motor association task in which the monkeys were required to move their wrist to the right or the left based on the color of a conditional cue (Kurata and Hoffman, 1994). They then locally inactivated the PMd by injecting a small amount of muscimol at sites where preparatory neural activity had been recorded during the performance of the conditional visuo-motor association task. The local inactivation of the PMd disrupted the monkeys' performance of the visuo-motor association task. Secondly, neurons in the PMd show sustained activity that is specifically related to the performance of these visuo-motor association tasks (Kurata and Wise, 1988; Mitz et al., 1991; Kurata and Hoffman, 1994). PMd neurons showed sustained activity after the presentation of the arbitrary visual cue during the movement preparation period (Kurata and Wise, 1988; Kurata and Hoffman, 1994). Although these findings are in line with the proposal that the PMd plays a crucial role in the visual guidance of movements in general, they specifically point to the important contribution of the PMd to memory-guided movements in which the selection, preparation, and execution of movements are guided by memorized visuo-motor associations (Halsband and Passingham, 1982; Wise, 1985; di Pellegrino and Wise, 1993; Kurata and Hoffman, 1994).
Moreover, human imaging studies consistently reported activity of the PMd during the performance of sequential movements (Dayan and Cohen, 2011; Hardwick et al., 2013). These studies indicated that the PMd may be a structure of key importance for sequence learning, and may contribute to sequence learning by selecting appropriate responses. This idea was verified by a study using non-human primates. The role of the PMd in internally guided sequential reaching was studied using neural recordings and local inactivation (Ohbayashi et al., 2016). Monkeys were trained to perform two types of reaching tasks (Figures 2A-C). In one task, the movements were instructed by spatial visual cues (random task, visually guided reaching; Figure 2B), whereas in the other task, sequential movements were internally generated from memory after extended practice (repeating task, internally generated sequential movements guided by memory; Figure 2C; Ohbayashi et al., 2016; Ohbayashi and Picard, 2020). After more than 50 days of training on the tasks, the group examined neural activity in the arm area of the PMd, which was identified by intracortical microstimulation. About 40% of the neurons displayed responses that were enhanced in one task compared with the other (i.e., differential neurons). Approximately half of the differential neurons displayed enhanced activity during the repeating task, i.e., internally generated sequential movements. In the same study, the PMd was locally and transiently inactivated by injecting a small amount of muscimol into the arm representation area of the PMd after more than 50 days of training (Figures 3A-E). The inactivation of the PMd had a marked effect on the performance of sequential movements guided by memory, but not on the performance of visually guided reaching (Figures 3D,E; Ohbayashi et al., 2016). Even though comparable numbers of neurons displayed enhanced activity during the internally guided sequential reaching and the visually guided reaching, movement performance during the visually guided reaching was unaffected by the PMd inactivation. Furthermore, the monkeys made two types of errors after the inactivation of the PMd: errors of accuracy and errors of direction. Accuracy errors reveal an execution deficit: the monkeys reached in the correct direction for the next target in the sequence, but the movement end points were outside of the correct target. Direction errors indicate a deficit in the selection of the next target in a sequence: the monkeys reached in the direction opposite to the correct target. The inactivation results provide a clear demonstration of the importance of the PMd in the performance of internally generated sequential movements. Similarly, disruption of the left PMd of human subjects using transcranial magnetic stimulation (TMS) impaired the performance of internally generated sequential movements (Wymbs and Grafton, 2013). In that study, human subjects practiced a sequence production task using either a button box or a laptop keyboard with their right hand. After 30 days of practice, when the left PMd was stimulated, the error rate during the retrieval of practiced sequences increased.
Taken together, the results suggest that, although PMd neurons are active during both visually guided and internally generated sequential movements, the PMd plays an important role in the internal generation of sequential movements. The inactivation results demonstrated that the PMd is involved in guiding sequential movements based on internal instructions after practice. With practice on sequential movements, the animal could learn arbitrary motor-motor associations between the elements of the sequence and perform the practiced sequence in a seamless and predictive manner. Therefore, one possible interpretation is that the PMd inactivation disrupted these arbitrary motor-motor associations in the same way that lesions of the premotor cortex disrupt an animal's performance of arbitrary sensorimotor associations (Halsband and Passingham, 1982; Wise, 1985; Passingham, 1988; di Pellegrino and Wise, 1993; Kurata and Hoffman, 1994). This is consistent with human imaging studies in which performance of serial reaction time task (SRTT) variants elicited bilateral PMd activity (e.g., Hardwick et al., 2013). Hardwick et al. (2013) suggested that the left PMd of humans is "a critical node in the motor learning network" for sequential movements. Further studies are necessary to explore the role of the PMd during early learning and after extensive practice, as well as in different types of sequential movements such as non-spatial sequence tasks.
PRIMARY MOTOR CORTEX
The primary motor cortex (M1) controls muscle activity through its projections to the spinal cord, and its contribution to patterning muscle activity has been extensively studied (Evarts, 1981). Growing evidence suggests that M1 is involved in both the learning and maintenance of motor skills (e.g., Shibasaki et al., 1993; Karni et al., 1995, 1998; Ungerleider et al., 2002; Floyer-Lea and Matthews, 2004).

FIGURE 2 | (A) To make a correct response, the monkey is required to contact a yellow target cue displayed on the touch monitor. The yellow target is presented at one of five squares displayed on the touch monitor. Squares are arranged in a horizontal row and identified as numbers 1 to 5 from left to right. (B) Random task. A new target cue is presented in pseudo-random order in one of the five squares. A new target is presented 100 ms after the monkey makes a correct response or immediately after an error. Therefore, the monkey performs visually guided reaching from one target to the next. (C) Repeating task. Targets are presented according to a predetermined sequence (left). As the monkey learns the sequence, the monkey starts to touch the targets in the sequence before the presentation of the visual cue (right). After extended practice, the monkeys perform the task without the help of visual cues (modified from Ohbayashi and Picard, 2020).

For example, human imaging
studies have shown that the fMRI blood oxygen level-dependent (BOLD) signal in M1 is modulated by the learning of sequential movement tasks. Karni et al. (1995) reported that after 3 weeks of practice on finger opposition sequences, the extent of M1 activation evoked during the performance of a trained sequence was significantly larger compared with the extent of activation evoked by the control task. The change in the BOLD signal in M1 persisted for several months. Moreover, the effects of prolonged and repetitive practice on the functional organization and cortical structure of M1 have been studied in musicians (i.e., experts in sequential movements). The functional activation in M1 during the performance of sequential tasks is reduced or becomes more focused in professional musicians compared to non-musicians or amateurs (Hund-Georgiadis and von Cramon, 1999; Jancke et al., 2000; Krings et al., 2000; Haslinger et al., 2004; Meister et al., 2005). The reduced activation after years of extensive training is considered evidence for the increased efficacy of the motor system and the need for a smaller number of active neurons to perform a highly trained set of sequential movements (Jancke et al., 2000; Krings et al., 2000; Haslinger et al., 2004; Meister et al., 2005). These findings suggested that the M1 of musicians is reorganized after years of extensive practice on sequential movements. The view that M1 is reorganized after extensive practice on sequential movements has also been supported by studies focused on the anatomical and functional changes of musicians' M1. The volume of M1 is reported to be larger in professional musicians compared to that in amateurs or non-musicians (Amunts et al., 1997; Gaser and Schlaug, 2003a,b; Draganski and May, 2008; Herholz and Zatorre, 2012; Zatorre et al., 2012; Sampaio-Baptista and Johansen-Berg, 2017; Wenger et al., 2017). The motor representations of the body parts used for skilled performance are enlarged in professional musicians compared with non-musicians (Elbert et al., 1995; Schwenkreis et al., 2007). The structural changes were proposed to be supported by processes occurring at the synapse level, including intracortical remodeling of dendritic spines and axonal terminals, glial hypertrophy, and synaptogenesis (Anderson et al., 1994; Draganski and May, 2008; Herholz and Zatorre, 2012; Zatorre et al., 2012). These studies suggested that increased synaptic efficacy as a result of extensive practice may contribute to changes in structural volume.

FIGURE 3 | Injections were done at sites in which intracortical stimulation evoked shoulder or elbow movements (i.e., arm representation) in the PMd. (D) Reaching end points of movements from target 5 to target 3 before and after muscimol injection in the PMd. Left: pre-injection; right: post-injection. Top: random task; bottom: repeating task. The monkey was performing sequence 5-3-1 during the repeating task. E_A: accuracy errors, a reach performed in the correct direction (e.g., to the left), but to an end point outside of the correct target. Gray dots: correct response; black dots: error response. The percentages of trials ending in each target are given below the targets. *p < 0.05. (E) Error rates of the random task (left) and the repeating task (right) in the injection session shown in (D). After muscimol injection, the number of errors increased dramatically in the repeating task, but not in the random task (modified from Ohbayashi et al., 2016).
Similarly, the plasticity of the white matter structure was correlated with skill practice, such as the number of practice hours (Bengtsson et al., 2005;Han et al., 2009). Bengtsson et al. (2005) discussed that increased myelination, caused by neural activity in fiber tracts during training, could be a mechanism underlying the observed increased volume of white matter. Taken together, extensive practice on sequential movements is suggested to lead to the increased synaptic efficacy in M1 through the remodeling of dendritic spines and axonal terminals, synaptogenesis, increased myelination, and glial hypertrophy. The change of fMRI activation in the M1 of humans, decreased 2DG signal in the M1 of non-human primates, and the enlarged volume of the M1 in musicians may all reflect the reorganization in M1 with extensive practice.
Recent fMRI studies suggested that M1's contribution to structured and higher-order aspects of sequential movements may be limited when the training duration was short (Yokoi and Diedrichsen, 2019). In the study, human subjects practiced higher-order sequences that are composed of chunks of short sequences of keyboard pressing. Then, the authors examined whether this hierarchical structure was reflected in the brain activity patterns of the participants using fMRI data (Yokoi and Diedrichsen, 2019). The authors concluded that single-finger movements were represented in M1 and higher-order sequences were represented throughout the frontoparietal regions of the cortex after 1 week of training. The neural basis for the acquisition and retention of long and high-ordered sequences after extensive practice should be further investigated in future studies.
Neurophysiological studies in non-human primates showed that neural activity in M1 is modulated by sequence components. When a monkey performs sequential movements, the activity of M1 neurons reflects aspects of the sequential movements (Hatsopoulos et al., 2003; Lu and Ashe, 2005). The effect of extensive practice on the neural and metabolic activities in the M1 of monkeys was examined after 1-6 years of training on a sequential reaching task (Matsuzaka et al., 2007; Picard et al., 2013). In these studies, the monkeys were trained to perform the internally generated sequential reaching task and the visually guided reaching task for 1-6 years (Figure 2). The neural and metabolic activities were then compared between these two conditions to elucidate the characteristics specific to the extensively trained sequential movements. After extensive training on the two tasks (∼2 years), Matsuzaka et al. (2007) recorded the activity of single neurons in the proximal arm representation of M1. In this experimental design, the movements were performed either in the context of an internally generated trained sequence or of visually guided reaching on the same experimental day (e.g., movement from target 5 to target 3 in Figures 2B,C). Therefore, the comparison of activity for movements performed in the two different contexts (i.e., internally generated sequence or visually guided reaching) revealed changes of activity associated with training, even though the activity patterns of the neurons before training were unknown (i.e., not recorded). Forty percent of the task-related neurons in M1 were differentially active during the performance of the visually guided and internally generated sequential reaching. The majority of the differentially active neurons had enhanced activity for the trained, internally generated sequential reaching (Matsuzaka et al., 2007). Similarly, the uptake of 2DG was examined in the arm area of M1 after extensive training on the sequential reaching task (Picard et al., 2013). Uptake of 2DG is suggested to be associated with presynaptic activity at both excitatory and inhibitory synapses (discussed in Picard and Strick, 2003; Picard et al., 2013). They found that the uptake of 2DG was low in monkeys that performed highly practiced, internally generated sequences of movements compared with the 2DG uptake in monkeys that performed visually guided reaching (Picard et al., 2013). Surprisingly, the low uptake of 2DG was not matched by low neural activity in the same area: neural activity in arm M1 during the internally generated movements was comparable to that observed during the visually guided movements. Therefore, there was a marked dissociation between the metabolic and neural activities in M1. These observations imply an increase in synaptic efficacy in M1 after extensive practice, which may underlie M1's contribution to the planning and generation of sequential movements.
M1 is critical for implementing motor output, so it has been challenging to test its involvement in the acquisition or maintenance of motor sequences. Lesions or inactivation of M1 will abolish the motor commands to the spinal cord that generate muscle activity. A few studies reported that, when M1 was inactivated, subjects made more errors in the performance of trained sequential movements compared with before the inactivation (Lu and Ashe, 2005; Cohen et al., 2009; Censor et al., 2014). However, because M1 is critically involved in motor execution, an advanced approach is required to further understand how M1 contributes to internally generated sequential movements without the confound of basic motor deficits. This was achieved in a recent study by selectively manipulating protein synthesis in the M1 of non-human primates in order to disrupt information storage in this cortical area (Figure 4; Ohbayashi, 2020). In the study, the monkeys were trained on two tasks: internally generated sequential movements (repeating task, guided by memory; Figures 2C, 4A) and reaching movements guided by visual cues (random task, visually guided reaching as a control task; Figure 2B; Ohbayashi, 2020). After the monkeys had practiced each sequence for more than 100 training days and started to perform the memorized sequential movements predictively, the protein synthesis inhibitor anisomycin was injected into the arm representation of M1 to test M1's involvement in the maintenance of sequential movements after extensive practice (Figure 4B). Anisomycin injections had a significant effect on the performance of the sequential movements guided by memory during the repeating task. The injections resulted in a significant increase in the number of errors (Figures 4C,D) and a significant decrease in the number of predictive responses, an indication of sequence learning, during the repeating task. Moreover, the monkeys made errors reaching in the direction opposite to the correct target (Figure 4C, bottom). This type of error suggests a deficit in selecting the movement component in the sequence. In contrast, performance of the visually guided movements during the random task was not significantly disrupted. Interestingly, inactivation of M1 using muscimol injection disrupted the performance of both the random and repeating tasks, suggesting that the inactivation of M1 caused a deficit of motor production (Ohbayashi, 2020). The difference in the effects of anisomycin injection and muscimol injection suggests that the anisomycin injection disrupted the performance of internally generated sequential movements by interfering with information storage in this area. This observation emphasizes the importance of M1 for the generation of sequential movements guided by memory. The results suggest that, although M1 is critical for movement production, it is also involved in the maintenance of skilled sequential movements (Ohbayashi, 2020).
Protein synthesis inhibitors have been widely used in rodents to study the neural basis of learning and memory. Such studies have been conducted extensively, especially in the context of fear conditioning (Davis and Squire, 1984; Nader et al., 2000a,b; Kandel, 2001; Dudai, 2004, 2012; Dudai and Eisenberg, 2004; Kelleher et al., 2004; Rudy, 2008a,b). De novo protein synthesis, during or shortly after the initial training, has been shown to be essential for the consolidation of long-term memory (Davis and Squire, 1984). Moreover, when a protein synthesis inhibitor (e.g., anisomycin) was given during retrieval, the performance of the retrieved task was disrupted (Nader et al., 2000a; Nader, 2003; Lee et al., 2008).

FIGURE 4 | Injections were done at sites in which intracortical stimulation evoked shoulder or elbow movements (i.e., arm representation). (C) Reaching end points of trials from target 2 to target 4 before and after anisomycin injection. Left: pre-injection; right: post-injection. Top: random task; bottom: repeating task. The monkey was performing sequence 1-2-4 during the repeating task. E_A: accuracy errors, a reach performed in the correct direction (e.g., to the right), but to an end point outside of the correct target; E_D: direction errors, a reach performed in the direction opposite to the correct target. Gray dots: correct response; black dots: error response. The percentages of trials ending in each target are given below the targets. *p < 0.05. (D) Averaged error rates of six injection sessions in the random task (left) and the repeating task (right). After anisomycin injection, the number of errors increased dramatically in the repeating task, but not in the random task (modified from Ohbayashi, 2020).

These studies suggested that the neural trace may be destabilized upon retrieval through protein degradation and then restabilized through protein synthesis
during the "reconsolidation" process (Nader et al., 2000a,b;Sara, 2000;Nader, 2003;Lee et al., 2008;Rudy, 2008a,b;Dudai, 2012). Thus, the injection of the protein synthesis inhibitor disrupted the task performance as the inhibitor prevented the synthesis of the proteins needed to reconsolidate the memory trace (Nader et al., 2000a;Lee et al., 2008;Dudai, 2012). The series of studies also proposed that the destabilized trace may be bidirectionally modified to be weakened or strengthened, so that the neural trace can be "updated" (Sara, 2000;Dudai and Eisenberg, 2004;Rudy, 2008b;Dudai, 2012). Although it is unclear whether these proposals can be generalized to other forms of memory, they may inform us of the way by which anisomycin injected in M1 interfered with the performance of the well-practiced sequential movements (discussed in Ohbayashi, 2020). The neural basis for motor skill improvement needs to be further investigated in future studies.
The rodent studies provide valuable insights into the reorganization of the motor cortex during motor skill learning, even though the rodent motor system and the range of motor skills differ from those of human and non-human primates (Dum and Strick, 1991; He et al., 1993, 1995; Rathelot et al., 2016). Early in the learning of the reach-grasp task, the expression of transcription factors (e.g., the immediate early gene c-fos) increases within the rodent's motor cortex and remains elevated in the plateau phase of the learning curve relative to control animals (Kleim et al., 1996). The increase in gene expression precedes both the changes in synapse number and motor map reorganization (Kleim et al., 1996). The injection of protein synthesis inhibitors into the motor cortex of rodents disrupted the maintenance (Kleim et al., 2003) or the learning (Luft et al., 2004) of skilled forelimb reaching. In these studies, the rats were trained to reach and grasp for a food pellet placed outside the cage (Kleim et al., 2003; Luft et al., 2004). The injection of anisomycin into the motor cortex after training disrupted the performance of the skilled forelimb task and caused reductions in synapse number and size in the motor cortex in vivo (Kleim et al., 2003). The injection of anisomycin into the rodents' motor cortex during learning disrupted the learning of the motor skill task (Luft et al., 2004). Two-photon imaging and electron microscopy studies have shown that skill training leads to the rapid formation of enduring postsynaptic dendritic spines and an increase in synaptic density in neurons of the motor cortex (Kleim et al., 2003; Xu et al., 2009; Yang et al., 2009; Yu and Zuo, 2011). When the dendritic spines newly modified during training on rotarod tasks were optically manipulated to shrink in the motor cortex, the rodents' performance of the trained task was disrupted (Hayashi-Takagi et al., 2015). The study suggested that the structural plasticity of spines in the motor cortex plays a critical role in the learning of motor skills in rodents.
Taken together, these observations support the view that M1 is involved in skilled sequential movements, especially after extensive practice. The neural activity, metabolic activity, and the structural organization in M1 were influenced by extensive practice on the motor skill tasks. Further studies will expand our understanding of how M1 contributes to the continuous improvement of skilled sequential movements during repetitive practice as well as its contribution to fast learning.
COLLABORATION OF CORTICAL MOTOR AREAS
Studies on anatomical connectivity have provided valuable insights into the interactions between multiple areas. The functional distinction between the pre-SMA and SMA is supported by differences in the anatomical connections of these areas (Luppino et al., 1990, 1993; Bates and Goldman-Rakic, 1993; reviewed in Picard and Strick, 2001). Firstly, only the SMA has direct projections to M1 and the spinal cord (Muakkassa and Strick, 1979; Dum and Strick, 1991, 1996; He et al., 1995; Wang et al., 2001). Secondly, the pre-SMA does not have substantial connections with M1 (Tokuno and Tanji, 1993; Galea and Darian-Smith, 1994; Lu et al., 1994; Hatanaka et al., 2001; Dum and Strick, 2005). Instead, the pre-SMA is densely interconnected with regions of the prefrontal cortex, as well as with the rostral cingulate motor area and the pre-PMd (F7) (Luppino et al., 1990, 1993, 2003; Bates and Goldman-Rakic, 1993; Lu et al., 1994; Takada et al., 2004; Wang et al., 2005). Moreover, the pre-SMA does not appear to be densely interconnected with the SMA (Luppino et al., 1990, 1993, 2003; Wang et al., 2001). These observations suggest that the SMA is part of the cortical motor areas and that the pre-SMA can functionally be considered a region of the prefrontal areas (Bates and Goldman-Rakic, 1993; Luppino et al., 1993; Lu et al., 1994; Picard and Strick, 2001). This view is consistent with the observations of the inactivation studies described above, showing that the pre-SMA is involved in cognitive aspects such as the early learning of movement sequences, whereas the SMA is primarily involved in the performance of memorized movement sequences.
The anatomical connections of the M1, PMd, PMv (ventral premotor cortex), and SMA of monkeys were studied in detail by Dum and Strick's group (Dum and Strick, 2005). The anatomical results showed that the digit representations of the PMd, PMv, and M1 are densely interconnected with each other; thus, these three cortical areas form a network for the control of hand movements (Dum and Strick, 2005). The projections from the digit representation in the SMA to the PMd and the PMv are stronger than the SMA projections to M1 (Dum and Strick, 2005). This suggests that the SMA may exert its influence through connections with the premotor areas rather than through M1. Overall, the laminar origins of the neurons that interconnect the PMd, PMv, and M1 are typical of "lateral" interactions. Dum and Strick commented that "from an anatomical perspective, this cortical network lacks a clear hierarchical organization" (Dum and Strick, 2005). The strong, reciprocal interconnections suggest that these areas may act in concert to produce commands for movements.
In fact, a subset of neurons in each premotor area exhibits activity for relatively simple movements, as M1 neurons do (Kurata and Tanji, 1986; Shima et al., 1991; Cadoret and Smith, 1997). Furthermore, in non-human primate studies, aspects of practiced sequences were reflected in the neural or metabolic activity of the SMA, PMd, and M1 alike (Picard and Strick, 1997, 2003; Matsuzaka et al., 2007; Picard et al., 2013; Ohbayashi et al., 2016; Ohbayashi, 2020). On the other hand, the injection of chemical agents into these areas showed that each premotor area is differentially involved in sequential movements. Inactivation of the SMA did not have an effect on the learning and performance of internally generated spatial sequences (Nakamura et al., 1999). Nevertheless, both the muscimol injection in the PMd and the anisomycin injection in M1 selectively disrupted the performance of internally generated sequences, but not visually guided reaching (Ohbayashi et al., 2016; Ohbayashi, 2020). Moreover, both injections caused deficits in target selection, in which a monkey reached in the direction opposite to the correct target (Ohbayashi et al., 2016; Ohbayashi, 2020). Together with the dense anatomical connection between the PMd and M1 described above, these findings suggest the possibility that the anisomycin injection disrupted the interaction from the PMd to M1, which resulted in the deficit in the performance of internally generated spatial sequences. This suggests that the PMd functions as a major source of input to M1 to guide the performance of internally generated spatial sequences after practice. More experiments are required to tease out the exact nature of the interactions between M1 and the premotor areas in the learning and performance of sequential movements.
SUMMARY
The performance of sequential movements can be improved to the expert level and maintained as a motor skill through extensive practice. Functional imaging studies in humans show that a brain-wide network subserves the performance of skilled sequential movements. Interventional studies in non-human primates advanced our understanding of its neural basis. The results of interventional studies suggested that each motor area in the network makes a distinct contribution to skilled sequential movements. The SMA is involved in the temporal organization of multiple non-spatial movements into a sequence and the execution of the sequential actions. Its role in spatial sequences is still debatable and needs to be further investigated. The PMd may act as a key structure for the learning of sequential movements by contributing to the selection of appropriate responses. Specifically, the PMd may be critical for the acquisition and maintenance of arbitrary motor-motor associations. In M1, the neural activity, metabolic activity, and structural organization were shown to be modified by extensive practice on sequential movements. M1's involvement in sequential movements after extensive practice was verified by an interventional study using an inhibitor for protein synthesis. These studies suggest that the PMd functions as a major source of input to M1 to guide the performance of internally generated sequences. Together, the PMd and M1 may be parts of the key structures for the learning and maintenance of internally generated sequential movements.
The involvements of these areas along the dimensions of time (i.e., learning stages) and sequence category (e.g., spatial and non-spatial) need to be further explored in future experiments.
AUTHOR CONTRIBUTIONS
The author confirms being the sole contributor of this work and has approved it for publication.
FUNDING
This work was supported by the National Institutes of Health grant R21NS101499 to MO and the Brain Sciences Project of the CNSI and NINS BS291006 to MO.
| 9,422 | 2021-06-09T00:00:00.000 | [
"Biology",
"Psychology"
] |
STUDY OF SOLVENT-SOLVENT INTERACTION IN AN AQUEOUS MEDIUM AT DIFFERENT TEMPERATURES BY ULTRASONIC TECHNIQUE
The basic parameters, namely velocity (U), density (ρ) and viscosity (η), can be measured with an ultrasonic interferometer. From these parameters, various thermodynamic and acoustical parameters such as adiabatic compressibility (β), specific acoustic impedance (Z), solvation number (S_n) and intermolecular free length (L_f) have been estimated using standard relations from the measured values of ultrasonic velocities, densities and viscosities over a wide range of concentrations at 35 °C, 40 °C and 45 °C for the Acetone + Propanol-2 + Toluene ternary system. The solvent-solvent interactions are studied on the basis of the increase or decrease in ultrasonic velocity, density, viscosity and other derived acoustical parameters, in terms of the structure-making and structure-breaking tendencies of the various solvent molecules.
I. INTRODUCTION
Developments in this field have found great use for ultrasonic energy in medicine, engineering, agriculture, technology and industry [5,6]. In the chemical industry, ultrasonic energy has proved useful for studying chemical processes as well as different types of reactions in the synthesis of chemical substances. Wong and Zhu [7] studied the speed of sound in seawater as a function of salinity, temperature and pressure. Skumiel and Labowski [8] gave a theoretical analysis of the effect of an external constant magnetic field on the propagation of ultrasonic waves in electrically conducting liquids, as well as the results of measurements carried out in mercury. Hanel [9] analytically deduced an equation for the longitudinal sound velocity of thin film samples, and the velocities of ternary liquid mixtures have been calculated. The compositional dependence of thermodynamic properties has proved to be a very useful tool for understanding the nature and extent of molecular aggregation resulting from intermolecular interactions between components. Ultrasonic waves of low amplitude have been used by many researchers to investigate the nature of molecular interactions and the physico-chemical behavior of pure, binary, ternary and quaternary liquid mixtures.
A survey of the literature indicates that excess values of acoustical parameters are useful in understanding the nature and strength of molecular interactions in pure, binary, ternary and quaternary liquid mixtures. Acoustic and thermodynamic parameters have been used to understand different kinds of association, molecular packing, molecular motion and the various types of intermolecular interactions and their strengths, as influenced by molecular size, in pure components and in mixtures.
II. EXPERIMENTAL STUDIES
Ultrasonic velocity was measured using a single-crystal ultrasonic interferometer of 2 MHz frequency (Model M81) supplied by Mittal Enterprises, New Delhi, which has a reproducibility of 0.4 m s⁻¹ at 25 °C. The temperature was maintained constant by circulating water from a thermostatically controlled water bath (accuracy ±0.1 °C). The temperature of the cell was measured using a thermocouple (at the crystal) and found to be accurate. The chemicals used were of AR grade, procured from BDH. All the chemicals were purified before use by the standard procedures discussed by Armarigo and Perrin. The ternary system was studied at different temperatures.
III. THEORY
Various physical parameters were evaluated from the measured values of ultrasonic velocity (U) and density (ρ) using the following standard formulae: the adiabatic compressibility, β = 1/(U²ρ), and the intermolecular free length, L_f = K√β, where the K values for the different temperatures were taken from the work of Jacobson; at 35, 40 and 45 °C, the K values are 637, 642 and 647, respectively. Further relations involve the molar volume V = M/ρ, where V and M are the molar volume and molecular weight of the mixture, respectively.
The specific acoustic impedance is Z = ρU (5), and the excess adiabatic compressibility (β^E) and excess intermolecular free length (L_f^E) can be evaluated as β^E = β − β_ideal and L_f^E = L_f − L_f,ideal. For β_ideal and L_f,ideal, the densities and ultrasonic velocities of the various components in the pure state at the three given temperatures have been measured. Further, the velocities of both systems at different concentrations and temperatures have been evaluated theoretically using the volume-additive rule, U_ideal = φ₁U₁ + φ₂U₂ + φ₃U₃, where U₁, U₂ and U₃ are the velocities of the three components of the ternary liquid mixture in the pure state and φ₁, φ₂ and φ₃ are their volume fractions.
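As a sketch of how these relations chain together from the measured quantities, the snippet below computes β, L_f and Z and an excess compressibility via the volume-additive rule. The Jacobson constants are the ones quoted above (units as in the paper); the input velocities, density and volume fractions are invented for illustration, and the ideal-mixture density is approximated by the mixture density purely to show the subtraction.

```python
import numpy as np

K_JACOBSON = {35: 637.0, 40: 642.0, 45: 647.0}   # temperature (deg C) -> Jacobson constant

def acoustic_params(U, rho, T):
    """beta, L_f and Z from velocity U (m/s) and density rho (kg/m^3) at temperature T."""
    beta = 1.0 / (U**2 * rho)              # adiabatic compressibility
    L_f = K_JACOBSON[T] * np.sqrt(beta)    # intermolecular free length (Jacobson relation)
    Z = rho * U                            # specific acoustic impedance, Eq. (5)
    return beta, L_f, Z

def ideal_velocity(U_comp, phi):
    """Volume-additive rule: U_ideal = sum_i phi_i * U_i."""
    return float(np.dot(phi, U_comp))

# illustrative numbers only (not measurements from the paper)
U_mix, rho_mix = 1190.0, 820.0
beta, L_f, Z = acoustic_params(U_mix, rho_mix, T=35)
U_id = ideal_velocity([1160.0, 1140.0, 1280.0], [0.3, 0.3, 0.4])  # acetone, propanol-2, toluene
beta_id = 1.0 / (U_id**2 * rho_mix)        # ideal compressibility (mixture density assumed)
print(f"beta_E = {beta - beta_id:.3e}  (excess adiabatic compressibility)")
```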
IV. RESULTS
Ultrasonic velocity, density, viscosity, adiabatic compressibility and specific acoustic impedance for the Acetone + Propanol-2 + Toluene system are listed in Table 1.
V. DISCUSSION
It is seen from the data that, in the Acetone, Propanol-2 and Toluene system, the ultrasonic velocity increases initially as the concentration increases. When the temperature is increased, the velocity maximum shifts towards lower concentration. This is because the thermal energy facilitates the breaking of bonds between the associated molecules of Acetone, Propanol-2 and Toluene. The increase in thermal energy weakens the molecular forces, and hence a decrease in velocity is expected. The observed acoustical parameters, and their variation with concentration and temperature, clearly indicate the formation of complexes between unlike molecules through hydrogen bonding.
From Table 1, it can be seen that at low temperature these molecules may stay in associated form. The associated molecules are fairly large in size as compared to Propanol-2, and Acetone and Toluene may cause some structural changes resulting in the weakening of the intermolecular forces. The adiabatic compressibility (β) and intermolecular free length (L_f) both have an inverse relationship with ultrasonic velocity. The decrease in β with increasing concentration indicates that the intermolecular forces are increasing, bringing the molecules into closer packing and resulting in a decrease in L_f. The specific acoustic impedance is governed by the inertial and elastic properties of the medium; it is therefore important to examine the specific acoustic impedance in relation to concentration and temperature.
The ultrasonic studies thus provide a comprehensive investigation of the interactions between Acetone, Propanol-2 and Toluene molecules arising from dipole-dipole interactions.
"Chemistry"
] |
Accuracy and inter-observer variability of 3D versus 4D cone-beam CT based image-guidance in SBRT for lung tumors
Background To analyze the accuracy and inter-observer variability of image-guidance (IG) using 3D or 4D cone-beam CT (CBCT) technology in stereotactic body radiotherapy (SBRT) for lung tumors. Materials and methods Twenty-one consecutive patients treated with image-guided SBRT for primary and secondary lung tumors formed the basis of this study. A respiration correlated 4D-CT and the planning contours served as reference for all IG techniques. Three IG techniques were performed independently by three radiation oncologists (ROs) and three radiotherapy technicians (RTTs). Image-guidance using respiration correlated 4D-CBCT (IG-4D) with automatic registration of the planning 4D-CT and the verification 4D-CBCT was considered the gold standard. Results were compared with two IG techniques using 3D-CBCT: 1) manual registration of the planning internal target volume (ITV) contour and the motion-blurred tumor in the 3D-CBCT (IG-ITV); 2) automatic registration of the planning reference CT image and the verification 3D-CBCT (IG-3D). Image quality of the 3D-CBCT and 4D-CBCT images was scored on a scale of 1-3, with 1 being the best and 3 being the worst quality for visual verification of the IGRT results. Results Image quality was scored significantly worse for 3D-CBCT compared to 4D-CBCT: the worst score of 3 was given in 19 % and 7.1 % of observations, respectively. Significant differences in target localization were observed between 4D-CBCT and 3D-CBCT based IG: compared to the reference of IG-4D, tumor positions differed by 1.9 mm ± 0.9 mm (3D vector) on average using IG-ITV and by 3.6 mm ± 3.2 mm using IG-3D; results of IG-ITV were significantly closer to the reference IG-4D compared to IG-3D. Differences between the 4D-CBCT and 3D-CBCT techniques increased significantly with larger motion amplitude of the tumor; analogously, differences increased with worse 3D-CBCT image quality scores. Inter-observer variability was largest in the SI direction and was significantly larger in IG using 3D-CBCT compared to 4D-CBCT: 1.5 mm versus 0.6 mm (one standard deviation). Inter-observer variability did not differ between the three ROs and the three RTTs. Conclusions Respiration correlated 4D-CBCT improves the accuracy of image-guidance through more precise target localization in the presence of breathing-induced target motion and through reduced inter-observer variability.
Background
Breathing-induced motion of tumors and organs-at-risk is a significant source of uncertainty in radiotherapy of pulmonary and abdominal targets [1], affecting accuracy at all stages of the treatment process: target definition, safety margin selection, dose calculation, patient set-up and treatment delivery. Respiration correlated CT (4D-CT) imaging is considered the method of choice for treatment planning in the thoracic and abdominal region: 4D-CT reduces motion artifacts for precise target volume delineation and simultaneously allows patient-individual motion assessment for adjustment of safety margins [2,3]. It has been shown that image-guidance (IG) is most important for improving the overall accuracy of lung cancer treatment [4]; consequently, respiration correlated 4D-CT needs to be integrated into a consistent 4D IG work-flow [5].
Respiration correlated cone-beam CT (4D-CBCT) has been commercialized recently [6] and allows the realization of a volumetric 4D image-guidance workflow. However, it remains unclear whether 4D-CBCT actually improves the accuracy of IG compared to conventional 3D-CBCT. Phantom studies have indicated similar accuracy of IG using 3D-CBCT and 4D-CBCT [7]. Additionally, IG using 4D-CBCT may be associated with potential disadvantages: image acquisition of a respiration correlated 4D-CBCT takes longer than 3D-CBCT, which affects patient throughput and may increase the risk of patient motion between imaging and treatment. Respiration correlated imaging at treatment delivery also increases the complexity of IG, which may introduce additional uncertainties in clinical practice. Finally, modern and cost-intensive technologies are being discussed controversially, especially in situations where clinical evidence is scarce.
Several centers have reported their experiences with IG using 3D-CBCT or 4D-CBCT, and various techniques and work-flows have been used [8][9][10][11][12][13][14][15]. Therefore, it was the aim of this study to compare the accuracy of 3D-CBCT and 4D-CBCT based IG techniques. In order to obtain clinically representative results, only commercially available software and hardware were used and no research equipment was allowed. Additionally, all IG techniques were performed independently by three experienced radiation oncologists (ROs) and by three experienced radiotherapy technicians (RTTs) to evaluate inter-observer variability.
Methods
This retrospective simulation study is based on 21 consecutive patients, who were treated with 4D-CBCT based image-guided SBRT for early-stage primary non-small cell lung cancer (NSCLC) or pulmonary metastases; target characteristics are described in Table 1.
Clinical treatment planning and delivery
A respiration-correlated 4D-CT was acquired with a 24-slice helical CT scanner (Somatom Sensation Open; Siemens Medical Solutions, Erlangen, Germany). A pressure sensor placed in an elastic belt around the abdomen generated the external breathing signal (Anzai AZ-733 V; Anzai Medical Solutions, Japan). Two 4D-CT series reconstructed at end-inhalation and end-exhalation phases were used for treatment planning in the Pinnacle treatment planning system (Philips Radiation Oncology Systems, Milpitas, CA, USA). The macroscopic tumor was delineated in the end-exhalation phase, where breathing motion and motion artefacts are expected to be smallest [1]. No margin was added for generation of the clinical target volume (CTV). The structure of the CTV was converted into a 3D mesh and propagated into the end-inhalation phase, where the position of the CTV was adjusted. The internal target volume (ITV) was generated based on the CTV contours in end-exhalation and end-inhalation, and the planning target volume (PTV) was generated with a safety margin of 5 mm [4,16].
Treatment plans were generated for an Elekta Synergy S™ linear accelerator equipped with cone-beam CT technology (Elekta, Crawley, UK). The 4D-CT series in end-exhalation and all planning contours were transferred as planning reference into the XVI™ image-guidance software, version 4.5 (Elekta, Crawley, UK). At treatment delivery, a respiration correlated 4D-CBCT was acquired using the standard parameters provided by the manufacturer for image acquisition and reconstruction (200° rotation for acquisition of 1320 frames within 4 minutes, 20 mA and 16 ms per frame, 120 kV, S20 filter).
Image-quality of 3D-CBCT and 4D-CBCT
Image quality of the 3D-CBCT and the 4D-CBCT was scored by six observers: three ROs and three RTTs. All observers had >2 years of clinical experience in image-guided SBRT for lung tumors and had been trained by the manufacturer and by MG in the use of the XVI™ 4.5 system. The criterion for image-quality scoring was visibility of the pulmonary target for precise manual verification of the IG results. Score 1 was defined as a clearly visible tumor without any difficulties in manual verification of IG results; score 2 was defined as a visible tumor with difficulties in manual verification of IG results; score 3 was defined as image quality where precise visual localization of the target is hardly or not possible for manual verification of IG results.
Image-guidance protocols
Three different IG techniques were evaluated independently by all six observers; the results of the other observers and the clinical results were made unavailable to all observers. The three IG techniques were performed in the following sequence.
Image-guidance using manual registration of the planning ITV contour and the verification 3D-CBCT (IG-ITV)
This was the standard IG technique at our department prior to the introduction of the 4D-CBCT [8] and has been described as routine practice by other institutions [9][10][11]. The rectangular clipbox for automatic registration of the planning reference CT and the verification CBCT in the XVI™ software was confined to the vertebral spine at the level of the pulmonary tumor for evaluation of patient set-up. The 4D-CBCT was visualized as a conventional "slow" 3D-CBCT, which was the average intensity projection (AIP) of all 4D-CBCT phases. The contours of the ITV and PTV were projected onto the AIP 3D-CBCT and their position was adjusted manually in all three planes to the motion-blurred tumor.
Image-guidance using automatic registration of the planning 4D-CT and the verification 4D-CBCT (IG-4D)
Using respiration correlated 4D-CBCT for IG is our current standard of practice and simultaneously the technique proposed by the manufacturer and other institutions [12,13]. After bony registration as described above, a so-called mask was generated for automatic image registration of the target: only the volume of the reference planning CT within this mask is used by the XVI™ software for automatic soft-tissue registration. Generation of this mask was done independently by all observers via expansion of the CTV with a 2-3 mm margin and manual exclusion of all bony structures (ribs, vertebrae, sternum) using a drawing tool. Automatic soft-tissue registration between the volume of the reference planning 4D-CT (end-exhalation phase) inside this mask and all ten phases of the respiration correlated 4D-CBCT was performed: the position of the target was identified in each breathing phase. The target position in the end-exhalation planning 4D-CT phase relative to the target position in the end-exhalation 4D-CBCT phase was then calculated as the tumor position error. Manual adjustment of the registration was allowed at the discretion of the observer.
Image-guidance using automatic registration of the planning 4D-CT and the verification 3D-CBCT (IG-3D)
This work-flow has been described by several institutions in literature [14,15]. Bony registration was performed initially and a mask for soft-tissue registration was defined as described above. Automatic registration was then performed between the planning end-exhalation 4D-CT series as reference and the verification AIP 3D-CBCT. No manual adjustment was allowed.
Statistical analysis
Statistica X was utilized for statistical analysis (Statsoft, Tulsa, OK, USA). The Mann-Whitney U test was performed for comparison of two subsets and the Wilcoxon test was used for matched-pair analyses. The chi-squared test was used for categorical variables. Differences were considered significant for p < 0.05.
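As an illustrative sketch, the reported tests map directly onto SciPy calls; the arrays below are placeholders, not the study's data:

```python
# Illustrative sketch of the reported statistical tests using SciPy.
# The arrays and the contingency table are placeholders, not study data.
import numpy as np
from scipy import stats

scores_4d = np.array([1, 1, 2, 1, 3, 1, 2, 1])   # e.g., image-quality scores, 4D-CBCT
scores_3d = np.array([2, 3, 2, 1, 3, 2, 3, 2])   # e.g., image-quality scores, 3D-CBCT

# Unpaired comparison of two subsets (Mann-Whitney U test)
u_stat, p_unpaired = stats.mannwhitneyu(scores_4d, scores_3d)

# Matched-pair comparison (Wilcoxon signed-rank test)
w_stat, p_paired = stats.wilcoxon(scores_4d, scores_3d)

# Categorical comparison (chi-squared test on a contingency table,
# e.g., counts of worst score 3 given / not given per modality)
table = np.array([[7, 93], [19, 81]])
chi2, p_cat, dof, expected = stats.chi2_contingency(table)

# Differences considered significant for p < 0.05, as in the paper
print(p_unpaired < 0.05, p_paired < 0.05, p_cat < 0.05)
```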
Results
Image quality scores were not significantly different between ROs and RTTs: averaged values were identical at 1.4 ± 0.7, and the worst score of 3 was given in 14 % and 12 % of observations in the RO and RTT groups (p = 0.57), respectively. Both RTTs and ROs scored the worst image quality of 3 significantly less frequently and the best score of 1 significantly more frequently in 4D-CBCT images compared to 3D-CBCT images.
Differences in target positions between IG technologies
Patient set-up errors, absolute tumor position errors and tumor base-line shifts relative to the bony anatomy are summarized in Table 2.
IG-4D using respiration correlated 4D-CBCT served as gold-standard for comparison with the two IG techniques using 3D-CBCT. Averaged results were calculated for all six observers and differences between 4D-CBCT and 3D-CBCT based IG are summarized in Table 3. Systematic differences of the tumor position between 4D-CBCT and 3D-CBCT based IG were <1 mm in all directions except a systematic difference of 1.2 mm in SI direction between IG-4D and IG-3D. Random variability between 4D-CBCT and 3D-CBCT based IG expressed as one standard deviation was <1 mm in LR direction and <2 mm in AP direction (p = 0.004). Variability was largest in SI with 1.5 mm and 4.3 mm for IG-4D vs. IG-ITV and IG-4D vs. IG-3D (p = 0.02), respectively. Differences of the tumor position as a 3D error vector were 1.9 mm ± 0.9 mm and 3.6 mm ± 3.2 mm for IG-4D vs. IG-ITV and IG-4D vs. IG-3D, respectively, and the difference between the two 3D-CBCT techniques in comparison to 4D-CBCT was statistically significant (p < 0.01).
There was one outlier where IG-3D resulted in a very large difference in the tumor position compared to IG-4D of 15 mm. The average image quality scores of the 4D-CBCT and 3D-CBCT were 1.8 and 2.8, respectively, indicating that automatic image registration in IG-3D failed because of a poorly visualized tumor. The difference between IG-4D and IG-ITV was only 1.3 mm for that case, indicating that manual registration coped more effectively with the suboptimal image quality of the 3D-CBCT.
Absolute differences of the tumor position between IG-4D and IG-ITV were significantly correlated with the image quality score of the 3D-CBCT (p < 0.01): the absolute difference in SI was 2.1 mm ± 1.7 mm, 2.8 mm ± 2.9 mm and 5.8 mm ± 5.7 mm for image quality scores of 1, 2 and 3 (p < 0.01), respectively. This correlation was of borderline significance for IG-3D (p = 0.05), and no such correlation was observed for image quality scores of the 4D-CBCT.
Averaged results of the IG techniques were calculated separately for ROs and RTTs. The 3D difference of the target position between ROs and RTTs was 0.6 mm ± 0.8 mm using the IG-4D technique and 1.6 mm ± 0.9 mm using the IG-ITV technique (p < 0.001). Identical results were obtained by the ROs and RTTs using IG-3D, where no manual adjustment of the automatic image registration was allowed.
Inter-observer variability of the IG technologies
Inter-observer variability was calculated as one standard deviation between the six observers and as maximum range between the six observers (Table 4). For IG-3D, where no manual adjustment of the automatic image registration results was allowed, variability between the six observers was <1 mm in all cases (detailed results not shown). Inter-observer variability was significantly larger for IG-ITV compared to IG-4D: variability as one standard deviation was 1.5 mm and 0.6 mm in SI direction (p = 0.002) and the maximum range between the six observers was 3.8 mm and 1.8 mm on average, respectively.
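The two variability metrics just described reduce to simple array operations. A sketch with illustrative numbers (not the study's measurements):

```python
# Sketch of the inter-observer variability metrics used in the paper:
# one standard deviation and the maximum range across the six observers.
# The matrix below is illustrative, not the study's measurements.
import numpy as np

# rows: patients, columns: six observers; values: SI couch shift in mm
si_shifts = np.array([
    [ 1.2,  0.8,  1.5,  1.0,  1.4,  0.9],
    [-0.5, -0.2, -0.9, -0.4, -0.6, -0.3],
    [ 3.1,  2.2,  4.0,  2.8,  3.5,  2.5],
])

sd_per_patient = si_shifts.std(axis=1, ddof=1)      # one SD between observers
range_per_patient = np.ptp(si_shifts, axis=1)       # max minus min (range)

print(sd_per_patient.mean(), range_per_patient.mean())
```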
For IG-ITV there was a significant correlation in linear regression analysis between inter-observer variability in the SI direction and the motion amplitude of the target (r² = 0.36; p < 0.001) (Figure 3): inter-observer variability was larger in mobile tumors. Such a correlation was not observed for IG-4D. Inter-observer variability was not significantly correlated with the image quality scores.
Inter-observer variability was analyzed separately among ROs and RTTs and no differences were observed.
Discussion
Image-guidance is considered a prerequisite for most accurate delivery of SBRT [17,18] and cone-beam CT is one of the most frequently used in-room imaging technologies. However, the details of CBCT based image-guidance are poorly defined. Phantom studies have suggested that 3D-CBCT might result in equivalent accuracy of IG compared to 4D-CBCT because the "slow-CT" character of the 3D-CBCT contains all necessary motion information for consistent integration of breathing motion into IG.
Wang et al. evaluated the accuracy of matching the planning ITV contour to the motion blurred target and an accuracy of 1 mm was described in that phantom study. Accuracy in clinical patient treatment was not evaluated. This IG-ITV technique has been practiced by several institutions for lung [8][9][10][11] and liver tumors [19], where feasibility in routine practice was described.
Hugo et al. performed a study where two image-guidance techniques were compared: 1) registration of a planning slow-CT scan and a verification 3D-CBCT (IG-3D) and 2) registration of a planning 4D-CT scan with a verification 4D-CBCT (IG-4D) [7]. Similar accuracy was described in the phantom part of the study. In the clinical part, based on eight patients, the differences between the two techniques were about 1 mm, without an influence of the motion magnitude on the accuracy of either technique. Automatic registration of the planning CT and the verification 3D-CBCT has been reported in other clinical studies [14,15]; however, the details of the IG work-flow were not provided.
In contrast to the phantom studies, which described no clinically relevant potential of 4D-CBCT to improve the accuracy of IG, our study based on 21 consecutive lung cancer patients does not support this conclusion. Six observers described improved visualization of the pulmonary targets in 4D-CBCT compared to 3D-CBCT, which is essential for precise verification of the image-guidance procedure by the ROs or RTTs. Two situations were identified where 4D-CBCT was especially superior to 3D-CBCT (Figure 4): 1) small tumors with large motion amplitude and 2) tumors located immediately superior to the diaphragm, where motion blurring made separation of the tumor from the diaphragm difficult.
Differences between IG using 4D-CBCT as gold standard and the two IG techniques using 3D-CBCT were 3.6 mm (IG-3D) and 1.9 mm (IG-ITV) on average. These uncertainties of 3D-CBCT IG appear especially large when compared to the average base-line shift of 4.9 mm in our study, which was the reason for performing soft-tissue IG. Korreman et al. estimated the residual uncertainty of the IG procedure at 20 % of the initial motion [5], which appears optimistic based on our results. Differences in the tumor position between 4D-CBCT and 3D-CBCT based IG increased with increasing motion magnitude of the pulmonary targets and with worse image quality scores of the 3D-CBCT. These results clearly indicate that 3D-CBCT is not sufficient for full motion integration into IG.
This finding of improved accuracy using 4D-CBCT compared to 3D-CBCT contrasts with the study by Hugo et al. [7], which may be explained by two reasons. First, our study is based on a larger number of patients, and poor image quality of the 3D-CBCT with larger uncertainties of IG was observed especially in small and mobile tumors and in tumors located immediately superior to the diaphragm. Detailed information about tumor size and location was not provided by Hugo et al., so it is unknown whether these patients "at risk" for decreased accuracy of IG using 3D-CBCT were represented in that study.
Second, we used the end-exhalation CT phase as planning reference for the IG-3D technique, because this phase should resemble the 3D-CBCT most closely: the tumor remains in the exhalation phase of the breathing cycle for the longest time [20], resulting in the highest pixel intensities at the exhalation position of the 3D-CBCT. Based on the study by Hugo et al., a slow-CT or AIP as planning reference might improve the accuracy of 3D-CBCT based image guidance. However, acquisition of a planning slow-CT or reconstruction of an AIP was possible with neither the Siemens 4D-CT scanner nor the Pinnacle treatment planning system, and research software was not allowed in our study protocol. This was done to make the results more representative of daily clinical practice outside of specialized academic departments.
In addition to the lower accuracy of 3D-CBCT based IG, clinically relevant inter-observer variability of IG-ITV was observed. Variability in SI direction expressed as one standard deviation between the six observers was 1.5 mm and the range between the six observers was 3.8 mm on average. This inter-observer variability of IG-ITV was correlated with the motion magnitude of the tumor, which highlights the difficulties of precise target localization using the motion blurred 3D-CBCT images. In contrast, inter-observer variability was substantially smaller for 4D-CBCT based IG and was not correlated with the motion amplitude of the target.
It was interesting to see very small differences between ROs and RTTs. Close agreement was observed in scoring the image quality of 3D-CBCT and 4D-CBCT: the image-quality improvement of 4D-CBCT compared to 3D-CBCT was of similar magnitude between both groups. Differences in image-guided target localization between RTTs and ROs were <1 mm for 4D-CBCT based IG-4D and <2 mm for 3D-CBCT based IG-ITV. Inter-observer variability was also not significantly different between ROs and RTTs.
To the best of our knowledge, there is no data in the literature about patterns of practice of image-guidance with a detailed description of the responsibilities of the different professional groups. An expert group of the European Society of Therapeutic Radiology and Oncology-European Institute of Radiotherapy (ESTRO-EIR) provided a detailed guideline about volumetric IGRT, which emphasized the importance of visual verification of the IGRT results; however, no responsibilities were described, most likely because of the legal diversity in Europe [21]. Guidelines by the American Society for Therapeutic Radiology and Oncology (ASTRO) and the American College of Radiology (ACR) state that IGRT images need to be "reviewed by the physician initially and then periodically" during the treatment course; whether this review process takes place online prior to the treatment or offline after treatment delivery remains open [22]. Based on the results of our study combined with the German regulations and international guidelines, we changed our standard operating procedures for cone-beam CT based image-guidance in pulmonary SBRT. 4D-CBCT is the imaging modality of choice and 3D-CBCT is only used for verification after the IGRT couch shift and after treatment delivery. The IGRT process is reviewed online by the radiation oncologist prior to delivery of the first radiotherapy treatment fraction. At consecutive fractions, the IGRT process is performed by the RTTs and reviewed offline by the ROs; online review of all IG results by the ROs prior to each treatment fraction had been our standard of practice before. In cases of base-line shifts >1 cm, the responsible RO is informed for immediate review because of potential overdosage of critical organs at risk [23].
Table 4: Inter-observer variability for IG-4D and IG-ITV separately in left-right (LR), superior-inferior (SI) and anterior-posterior (AP) direction. Inter-observer variability is shown for all six observers and separately for the three ROs and three RTTs.
Figure 3: Correlation between the motion magnitude of the pulmonary target and inter-observer variability using the IG-ITV technique. | 4,827.2 | 2012-06-08T00:00:00.000 | [
"Medicine",
"Physics"
] |
On scalar radiation
We discuss radiation in theories with scalar fields. Our key observation is that even in flat spacetime, the radiative fields depend qualitatively on the coupling of the scalar field to the Ricci scalar: for non-minimally coupled scalars, the radiative energy density is not positive definite, the radiated power is not Lorentz invariant and it depends on the derivative of the acceleration. We explore implications of this observation for radiation in conformal field theories. First, we find a relation between two coefficients that characterize radiation, which holds in all the conformal field theories we consider. Furthermore, we find evidence that for a 1/2-BPS probe coupled to $\mathcal{N} = 4$ super Yang-Mills, and following an arbitrary trajectory, the spacetime dependence of the one-point function of the energy-momentum tensor is independent of the Yang-Mills coupling.
Introduction
The study of the creation and propagation of field disturbances by sources is one of the basic questions in any field theory. In classical electrodynamics, emission of electromagnetic waves by charged particles is of paramount importance, both at the conceptual and practical level [1]. Similarly, the recent detection of gravitational waves [2] provides a striking confirmation of General Relativity, and opens a new way to explore the Universe.
Understandably, radiation of massless scalar fields due to accelerated probes coupled to them has received much less attention [3]. An exception is the study of radiation in scalar-tensor theories of gravity, since the radiation pattern can differ from General Relativity [4].
The comments above refer to classical field theories. Recent formal developments, like holography and supersymmetric localization, have made it possible to explore radiation in the strong coupling regime of conformal field theories (CFTs), which, if they admit a Lagrangian formulation, very often include scalar fields. Some of the results of these explorations are, however, unexpected and even conflicting, as we now review.
In field theory, radiation is determined by the one-point function of the energy-momentum tensor of the field theory in the presence of an accelerated probe, which is described by a Wilson line W. Instead of computing $\langle T^{\mu\nu}\rangle_W$ for arbitrary trajectories, one can consider particularly simple kinematical configurations. A first possibility is motion with constant proper acceleration. The reason for this choice is that in any CFT, a special conformal transformation maps a worldline with constant proper acceleration to a static one, for which $\langle T^{\mu\nu}\rangle_W$ is fixed up to a coefficient [5],
$$\langle T^{00}\rangle_{W,\,v=0} = \frac{h}{|\vec{x}|^4} \qquad (1.2)$$
where $|\vec{x}|$ is the distance between the static Wilson line, placed at the origin, and the point where the measurement takes place. The coefficient h should thus capture the radiated power, at least for a probe with constant proper acceleration [6]. A second interesting kinematical situation is that of the probe receiving a sudden kick. The Wilson line associated to the probe exhibits a cusp, and its vacuum expectation value develops a divergence, characterized by the cusp anomalous dimension [7] $\Gamma(\varphi)$, which depends on the rapidity of the probe after the kick. The expansion of $\Gamma(\varphi)$ for small $\varphi$,
$$\Gamma(\varphi) = B\varphi^2 + \ldots \qquad (1.3)$$
defines the Bremsstrahlung function B [8]. It was argued in [8] that this function determines the energy radiated by a probe coupled to a CFT, since it appears in the total radiated energy, eq. (1.4). If one grants this relation and further assumes that for arbitrary CFTs the radiated power is Lorentz invariant, one arrives at a Larmor-type formula, eq. (1.5), where $a^\lambda$ is the 4-acceleration. It was further argued in [8] that in any CFT, the Bremsstrahlung function is universally related to the coefficient $C_D$ of the 2-point function of the displacement operator of any line defect [9], by $12B = C_D$. For Lagrangian CFTs with $\mathcal{N} = 2$ supersymmetry this function can be computed using supersymmetric localization [6,10,11]. For $\mathcal{N} = 2$ SCFTs it was argued [10,12] and then proved [13] that $B = 3h$. This relation is not satisfied in Maxwell's theory [12], proving that no universal relation between B and h exists that is valid for all CFTs. Turning to holography, radiation by accelerated charges in a CFT is studied by first introducing a holographic probe, a string or a D-brane. Computations can be done at the worldsheet/worldvolume level, or taking into account the linear response of the gravity solution due to the presence of the holographic probe. Intriguingly, these two methods do not fully agree. At the holographic probe level, the computation of [14], followed by [15,16], indicated that for a 1/2-BPS probe coupled to $\mathcal{N} = 4$ super Yang-Mills, in the large N, large λ limit, the total radiated power is indeed of the form given by (1.5). The beautiful works [17,18] dealt with the backreacted holographic computations, see also [19][20][21]. The work [17] considered only a probe in circular motion, and found agreement with (1.5). However, the work [18] dealt with arbitrary trajectories and found an additional term, eq. (1.6), where γ is the usual Lorentz factor. The additional term in (1.6) would imply that the radiated power in $\mathcal{N} = 4$ SYM is not Lorentz invariant. The work [17] was restricted to circular motion in a particular frame where $\dot{a}^0 = 0$, so by construction it was not sensitive to the presence of the additional term in (1.6).
The angular distribution of radiated power is a more refined quantity than the total radiated power. At strong coupling it has been studied in [17,18], where the angular distribution of radiation emitted by a 1/2-BPS probe coupled to N = 4 super Yang-Mills was determined holographically. Some of the features of the angular distribution of radiation found in [17,18] were unexpected, like regions with negative energy density, or its dependence on the derivative of the acceleration, eq. (1.6). This prompted [18] to consider them artifacts of the supergravity approximation.
In this work we revisit the issue of radiation in scalar field theory, bringing new insights to many of the issues reviewed above. Our key observation is rather elementary: scalar fields couple to the scalar curvature of spacetime via the term $\xi R \phi^2$ [22], so even in flat spacetime, the energy-momentum tensor [23], and therefore the pattern of radiation, depends on ξ. In particular, radiation in conformal field theories requires considering conformally coupled scalars (ξ = 1/6) instead of minimally coupled ones, ξ = 0, as done in the field theory computations of [17,18].
Once we take this observation into account, we find that already at the level of the free theory, radiation for a free conformal scalar displays the features that were found holographically for $\mathcal{N} = 4$ super Yang-Mills: the radiated power is not Lorentz invariant, it depends on $\dot{a}$, and the radiated energy density is not everywhere positive. We conclude that these are generic features valid for all conformal field theories that include conformal scalars. In particular, eqs. (1.4) and (1.5) are not valid for arbitrary trajectories in CFTs with scalar fields.
Our observation also brings a new perspective to the lack of a universal relation between the coefficients B and h discussed above. In [1,24] a manifestly Lorentz invariant quantity, the invariant radiation rate R, was defined in the context of Maxwell theory. We extend the definition, and show that while in Maxwell theory R = P, this is not true in general CFTs. For the probes and CFTs considered in this work, R can be written as
$$\mathcal{R} = -2\pi B_R\, a_\lambda a^\lambda$$
where $B_R$ is a new coefficient that in general differs from the Bremsstrahlung function B. Furthermore, we find that the relation $B_R = \frac{8}{3}h$ holds in all the cases considered. This relation thus has the potential to be universal for all probes and all CFTs. We then turn our attention to Lagrangian $\mathcal{N} = 2$ SCFTs, and for $\mathcal{N} = 4$ super Yang-Mills we do find a surprise. The full one-point function of the energy density in the presence of a probe following an arbitrary trajectory has exactly the same spacetime dependence at weak and at strong 't Hooft coupling. This leads us to conjecture that this quantity is protected by non-renormalization. This would be rather surprising, as for generic timelike trajectories, $\langle T^{\mu\nu}\rangle_W$ is not a BPS quantity. The structure of the paper is the following. In section 2, we revisit radiation by probes coupled to free field theories. We show that once we take into account the improvement term of the energy-momentum tensor for non-minimally coupled scalars, the radiative energy density is not positive definite, which is just a manifestation of the more general fact that non-minimally coupled scalars can violate energy conditions even classically [25]. Furthermore, for non-minimally coupled scalars, the radiated power P is not Lorentz invariant. The new term that we find in the rate of 4-momentum loss is formally similar to the Schott term that appears in the Lorentz-Dirac equation in electrodynamics [1]. We will argue, however, that in theories with non-minimally coupled scalars its origin and meaning are different from those of the Schott term in classical electrodynamics.
In section 3, we discuss constraints imposed by conformal symmetry on the one-point function of the energy-momentum tensor of a conformal field theory, in the presence of an arbitrary timelike line defect.
In section 4 we discuss radiation by 1/2-BPS probes coupled to $\mathcal{N} = 2$ SCFTs. Quite remarkably, for a 1/2-BPS probe coupled to $\mathcal{N} = 4$ super Yang-Mills following an arbitrary trajectory, the classical computation with conformally coupled scalars matches exactly the angular distribution found holographically [17,18].
In section 5 we mention some open questions. Our conventions are as follows: we work with a mostly-minus metric, so the 4-velocity u and the 4-acceleration a satisfy $u^2 = 1$, $a^2 < 0$. Dots have different meanings for vectors and 4-vectors: $\dot{a} = da/d\tau$, but $\dot{\vec{a}} = d\vec{a}/dt$. Our overall normalization of the energy-momentum tensor for scalars is not the usual one; it has been chosen for convenience when we add scalar and vector contributions in supersymmetric theories.
Radiation in free field theories
Consider a probe coupled to a field theory, following an arbitrary, prescribed, timelike trajectory $z^\mu(\tau)$. One first solves the equations of motion for the field theory in the presence of this source, choosing the retarded solution. Let $x^\mu$ be the point where the field is being measured; define $\tau_{\rm ret}$ by the intersection of the past light-cone of $x^\mu$ and the worldline of the probe, and the null vector $\ell = x - z(\tau_{\rm ret})$.
One then evaluates the energy-momentum tensor on the retarded solution. Usually one defines the radiative part of the energy-momentum tensor $T^{\mu\nu}_r$ as the piece that decays as $1/r^2$, so that it yields a nonzero flux arbitrarily far away from the source. A more restrictive definition of $T^{\mu\nu}_r$ was introduced in [26,27], who required that
• $\partial_\mu T^{\mu\nu}_r = 0$ away from the source.
• $\ell_\mu T^{\mu\nu}_r = 0$, so that the flux through the light-cone emanating from the source is zero.
In this work we will consider theories that don't satisfy the weak energy condition classically; for these theories, the requirement that the radiative energy density is non-negative is less well motivated. In this work we use the first definition of $T^{\mu\nu}_r$, but we will discuss the implications of considering the second one. From $T^{\mu\nu}_r$ we define the angular distribution of emitted 4-momentum [1],
and integrating over the solid angle we obtain $dP^\mu/d\tau$. It is a 4-vector [28] that gives the rate of energy and momentum emitted by the probe. From it one can define two quantities. The first one is the radiated power P, which is not manifestly Lorentz invariant. Following Rohrlich [1], we define a second quantity, the invariant radiation rate R, which is manifestly Lorentz invariant. For free CFTs, this invariant radiation rate can be written in the form (2.4). We don't have a proof that this is the most generic form that R can take in interacting CFTs, but let us mention some restrictions. In principle there could also be a term in (2.4) proportional to $u\cdot\dot{a}$, but since $a^2 = -u\cdot\dot{a}$, it would be redundant. Furthermore, by dimensional analysis, terms with higher derivatives of a can't appear in (2.4). In conclusion, (2.4) is the most general form that R can take, if it depends only on Lorentz invariants evaluated at a single retarded time.
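The explicit definitions were elided in extraction; as a hedged sketch, the covariant definition of R below follows Rohrlich's standard one, and the free-CFT form restates the coefficient $B_\xi$ introduced later in this section (the grouping, not the paper's own equation numbers, is ours):

```latex
% A sketch of the two radiation measures discussed in the text (mostly-minus
% metric, u^2 = 1). The definition of R follows Rohrlich's standard one; the
% free-CFT form restates the coefficient B_xi defined later in section 2.
\begin{align}
  \mathcal{P} &= \frac{dP^{0}}{dt} ,
  &
  \mathcal{R} &= u_{\mu}\,\frac{dP^{\mu}}{d\tau} ,
  &
  \mathcal{R}\big|_{\text{free CFT}} &= -2\pi B_{\xi}\, a_{\lambda}a^{\lambda} .
\end{align}
```

In the instantaneous rest frame, $u_\mu = (1, \vec{0})$ and R reduces to the energy emission rate, which is why R and P coincide whenever $dP^\mu/d\tau$ is proportional to $u^\mu$.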
Maxwell field
The energy-momentum tensor is the standard Maxwell one, eq. (2.5). It is traceless, without using the equations of motion. Consider a probe coupled to the Maxwell field, with charge q, following an arbitrary trajectory. The full energy-momentum tensor evaluated on the retarded solution, eq. (2.6) [28], carries an overall factor $q^2/4\pi$, with all quantities evaluated at retarded time. Evaluating (2.6) for a static probe we derive the h coefficient [5], eq. (2.7). The part of (2.6) decaying as $1/r^2$, eq. (2.8), satisfies all the criteria of [26,27], so it is the radiative part according to both definitions. Integration over angular variables yields the rate of 4-momentum emission.
It is a future-oriented timelike 4-vector, guaranteeing that all inertial observers agree that the particle is radiating away energy. The relativistic Larmor formula follows.
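The explicit formulas were lost in extraction; as a hedged reconstruction, the standard relativistic Larmor results for a point charge q (Gaussian units, c = 1, mostly-minus metric, so $a^2 < 0$), which reproduce the properties quoted in the text, read:

```latex
% Standard relativistic Larmor results for a point charge q (Gaussian units,
% c = 1, mostly-minus metric). They reproduce the properties the text cites:
% dP/dtau is timelike and future-oriented, and P = R. The correspondence with
% the paper's own equation numbers is an assumption.
\begin{align}
  \frac{dP^{\mu}}{d\tau} &= -\frac{2}{3}\,q^{2}\,(a_{\lambda}a^{\lambda})\,u^{\mu} ,
  &
  \mathcal{P} = \mathcal{R} &= -\frac{2}{3}\,q^{2}\,a_{\lambda}a^{\lambda} .
\end{align}
```

Since $a^2 < 0$, the prefactor of $u^\mu$ is positive, which is precisely why $dP^\mu/d\tau$ is future-oriented and timelike, and why P is positive in every inertial frame.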
Scalar fields
Consider a free massless scalar field, with arbitrary coupling ξ to the Ricci scalar. The energy-momentum tensor, eq. (2.13) [23], carries our overall $4\pi$ normalization and includes the improvement term proportional to ξ. In general, the trace of (2.13) does not vanish, even when applying the equations of motion. For the conformal value ξ = 1/6 it vanishes away from the sources, if we apply the equations of motion. For ξ ≠ 0, this energy-momentum tensor can violate the weak energy condition at the classical level [25], even in Minkowski space. Now consider a probe coupled to the scalar field, following an arbitrary trajectory. The energy-momentum tensor (2.13) evaluated on the retarded solution of the equation of motion gives eq. (2.14), whose visible terms include
$$\left(1 - \frac{\ell\cdot a}{\ell\cdot u}\right)(\ell^\mu u^\nu + \ell^\nu u^\mu) + 2\xi\,(\ell^\mu a^\nu + \ell^\nu a^\mu),$$
all evaluated at retarded time. It depends on $\dot{a} = da/d\tau$, because the improved energy-momentum tensor (2.13) involves second derivatives of the field, and the solution depends on the velocity of the probe.
In the conformal case ξ = 1/6, the terms independent of or linear in the acceleration are the same as in (2.6), up to an overall factor. In the next section, we will argue that these terms are actually universal for all CFTs.
The piece of (2.14) decaying as $1/r^2$, eq. (2.16), satisfies the first three criteria of [27] to be the radiative part. It also satisfies $|T^{00}| = |T^{0i}|$. As a check, for ξ = 0 it reduces to the energy density found in [17], which is manifestly positive definite. However, for ξ ≠ 0, $T^{00}$ is not guaranteed to be positive. After integration over the angular variables, we find eq. (2.17). The improvement term in the energy-momentum tensor of the scalar field (2.13) induces a qualitatively new term in $dP^\mu/d\tau$, compared with the electrodynamics case. The additional term in (2.17) is a total derivative, and it is formally identical to the Schott term in classical electrodynamics [1]. However, the origin is different. In classical electrodynamics, the Schott term appears in the Lorentz-Dirac equation of motion of the probe, and it can be deduced from the fields created by the probe in the zone near its worldline. It does not appear from evaluating the radiative part of the energy-momentum tensor (2.8). On the other hand, in (2.17) the new term appears directly from evaluating the energy-momentum tensor of the fields that decay like $1/r^2$, away from the probe. This additional term that we have encountered in (2.17) in a free theory computation has the same form as the additional term found holographically by [18], eq. (1.6). In that context, the works [20,21] have advocated using the more restrictive definition of $T^{\mu\nu}_r$, thus setting ξ = 0 in (2.16), (2.17). An argument in favor of doing so is that the new term in (2.17) is a total derivative so, for instance, its contribution vanishes for any periodic motion when integrated over a full period. This clashes with the intuition of radiated energy as something irretrievably lost by the particle. However, we think this intuition is built on the idea that the energy density is positive definite, which is not the case for non-minimally coupled fields.
For a minimally coupled scalar field, ξ = 0, $dP^\mu/d\tau$ is again a future-oriented, timelike 4-vector, and P = R, as in Maxwell's theory [3,17]. On the other hand, for ξ ≠ 0, this 4-vector is no longer guaranteed to be timelike. This is related to $T^{00}$ no longer being positive definite. In the instantaneous rest frame, eq. (2.18) shows that for ξ < 1/2 there is energy loss. However, if $dP^\mu/d\tau$ is spacelike, the sign of its zeroth component is no longer the same in all inertial frames. For a non-minimally coupled scalar, P and R no longer coincide, and P is not Lorentz invariant; see eq. (2.19). For non-minimally coupled scalars, we will still define 2πB as the coefficient in front of the $-a_\lambda a^\lambda$ term in (2.19). We furthermore introduce a new coefficient $B_\xi$, as the coefficient in $\mathcal{R} = -2\pi B_\xi\, a_\lambda a^\lambda$; the result is eq. (2.20). Notice that $B_{\xi=0} = B$; we also define $B_R = B_{\xi=1/6}$. In particular, for the conformally coupled scalar it follows that $B_R = \frac{8}{3}h$. This ratio is the same as in Maxwell's theory, eq. (2.12).
One-point function of the energy-momentum tensor in CFTs
In this section we discuss the constraints that conformal invariance imposes on the one-point function of the energy-momentum tensor of a conformal field theory, in the presence of a timelike line defect. While in the rest of the paper we consider Lagrangian field theories and the line defects are Wilson lines, the arguments of this section apply to arbitrary line defects in general CFTs.
For classical conformal field theories, we have seen in the previous section that the full one-point function of the energy-momentum tensor at a point in spacetime depends on the value of the 4-velocity and the 4-acceleration evaluated at a single retarded time. It is far from obvious that this feature should hold for generic line defects in arbitrary CFTs. In fact, once one considers strongly coupled conformal non-Abelian gauge theories, there are compelling arguments [18] that virtual timelike quanta will decay into further quanta thus forming a cascade, so the radiation measured at a point in spacetime does not have its origin at just a single retarded time in the probe worldline. This picture suggests that at least in some theories, the full one-point function should include integrals over the worldline of the probe, up to the retarded time, to take into account radiation originated by the cascade of timelike virtual quanta. Intriguingly enough, the holographic computations of [17,18] do not find such terms for N = 4 SYM in the planar limit. We will make a small comment about the presence or not of these terms for generic CFTs at the end of this section.
In the present discussion we will focus on the terms where the kinematic 4-vectors, like the 4-velocity and the 4-acceleration, appear in the answer evaluated at a single time, without any integrals. Dimensional analysis, conformal symmetry and conservation of the energy-momentum tensor constrain the form of the answer.
The full energy-momentum tensor of a CFT in the presence of a static probe is fixed by conformal invariance [5], up to an overall coefficient (cf. eq. (1.2)). By applying a boost, it is then also fixed for a probe with constant velocity. This determines all the acceleration-independent terms; since they are universal, they can be read off from (2.6) or (2.14). These terms decay as $1/r^4$, as dictated by dimensional analysis.
Furthermore, by applying a special conformal transformation to a static worldline, one obtains a worldline with constant proper acceleration. Therefore, for any CFT, the full energy-momentum tensor for a hyperbolic line defect is completely determined up to an overall constant. It is immediate to check that $\langle T^{\mu\nu}\rangle_W$ for Maxwell theory, eq. (2.6), and for a conformal scalar, eq. (2.14) with ξ = 1/6, have the same spacetime dependence for hyperbolic motion, since in this case $\dot{a} = -a^2 u$.
We will now argue that the previous property implies that the terms linear in the 4-acceleration a must also be universal. The argument goes as follows. Since a worldline with constant proper acceleration satisfies $\dot{a} = -a^2 u$, terms that are not universal in $T^{\mu\nu}$ and change from one CFT to another must be such that they collapse to the same universal expression when $\dot{a} = -a^2 u$. But terms linear in a don't depend on $\dot{a}$ or $a^2$, so they must be universal for all CFTs. These terms decay as $1/r^3$, as dictated by dimensional analysis. All in all, the terms independent of or linear in a are collected in eq. (3.4), whose visible tensor structure involves $\ell^\mu \ell^\nu/(\ell\cdot u)^6$. We then conclude that the terms in $\langle T^{\mu\nu}\rangle_W$ independent of or linear in the 4-acceleration $a^\lambda$ (which respectively decay as $1/r^4$ and $1/r^3$) are universal for all CFTs. On the other hand, terms that involve $a^2$ or $\dot{a}$ and decay like $1/r^2$ are not uniquely fixed by conformal invariance. Indeed, the $1/r^2$ terms for Maxwell's theory (2.8) and a conformal scalar (2.16) are different.
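As a worked check of the identity invoked in this argument, hyperbolic motion with proper acceleration g indeed satisfies $\dot{a} = -a^2 u$ (a standard fact of relativistic kinematics):

```latex
% Worked check that hyperbolic motion satisfies the identity used above.
% Constant proper acceleration g along x, mostly-minus metric:
\begin{align}
  z^{\mu}(\tau) &= \tfrac{1}{g}\left(\sinh g\tau,\ \cosh g\tau,\ 0,\ 0\right), \\
  u^{\mu} &= \left(\cosh g\tau,\ \sinh g\tau,\ 0,\ 0\right), & u^{2} &= 1, \\
  a^{\mu} &= g\left(\sinh g\tau,\ \cosh g\tau,\ 0,\ 0\right), & a^{2} &= -g^{2}, \\
  \dot{a}^{\mu} &= g^{2}\left(\cosh g\tau,\ \sinh g\tau,\ 0,\ 0\right)
                 = g^{2}u^{\mu} = -a^{2}u^{\mu}.
\end{align}
```

This makes the universality argument concrete: on a hyperbolic worldline, any term built from $\dot{a}$ can be traded for $-a^2 u$, so only terms linear in a survive unambiguously.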
The formula (3.4) refers only to terms that depend only on the probe worldline at the retarded time, and does not exclude potential additional terms of the schematic form (3.1). To conclude this section, let us comment on the restrictions that conservation of the energy-momentum tensor imposes on the presence of possible terms of the type (3.1), which depend on the worldline of the probe and not just on the retarded time. First of all, the full energy-momentum tensor is conserved. We can further require that the piece of the energy-momentum tensor that decays like $1/r^2$ is conserved by itself, since it corresponds to energy that is detached from the probe. It then follows that the piece of $T^{\mu\nu}$ that doesn't decay like $1/r^2$ must also be conserved by itself. It is straightforward to check that the terms that appear explicitly in (3.4) are conserved. This implies that if there are additional terms of the type (3.1) that decay faster than $1/r^2$, beyond the ones that appear in (3.4), they must be conserved on their own.
Radiation in N = 2 superconformal theories
The discussion in the previous section was completely classical. In this section we consider N = 2 Lagrangian SCFTs, for which powerful techniques to study the strong coupling regime are available.
Consider the energy-momentum tensor created by a 1/2-BPS probe coupled to a Lagrangian $\mathcal{N} = 2$ SCFT in the classical limit. The probe is coupled to a vector and a scalar in the adjoint representation of the gauge group. As argued in [17,18], at very weak coupling this amounts to adding the contributions of the Maxwell (2.8) and free scalar (2.16) terms,
with an effective charge. However, [17,18] considered a free minimally coupled scalar. In CFTs, the correct computation amounts to adding (2.6) and (2.14) with the conformal value, ξ = 1/6. We obtain eq. (4.1). In three-dimensional language, with $\vec{n} = (\vec{r}-\vec{z})/|\vec{r}-\vec{z}|$, the radiative energy density is given by eq. (4.2), whose visible structure carries the characteristic denominator $(1-\vec{\beta}\cdot\vec{n})^6$. Our free classical computation only guarantees (4.1), (4.2) at leading order in λ, for small λ. Strikingly, the 00 component of (4.1) is exactly the same result found by a rather elaborate holographic computation for a 1/2-BPS probe in the fundamental representation of $\mathcal{N} = 4$ SU(N) super Yang-Mills in [17,18], in the planar limit and at strong 't Hooft coupling, where [14] $3h = B = \sqrt{\lambda}/4\pi^2$! To elaborate, we have computed the $1/r^4$, $1/r^3$ terms at strong coupling, using the results of the holographic computations of [17,18], and have found exactly the first line of (4.1). The match of the spacetime dependence of these terms at weak and strong coupling is not surprising, as we have argued in section 3 that they are universal. Nevertheless, this match does provide a strong check of the holographic computations in [17,18]. On the other hand, the $1/r^2$ term (4.2) was already computed at strong coupling in [17,18], and again it displays the same spacetime dependence as the classical result. We stress that we find exact agreement at the level of the energy density, before performing any time average. This agreement prompts us to conjecture that (4.1) is true for all values of λ, in the planar limit. It is tempting to conjecture that (4.1) is true even at finite N and finite λ, but we currently don't have evidence for this stronger claim. Conformal symmetry alone is not enough to explain this agreement: comparing (2.8), (2.16) and (4.1) it is clear that the radiative energy density of a probe in arbitrary motion is not the same for different conformal field theories. Furthermore, while the probe is 1/2-BPS, it is following an arbitrary trajectory, so the Wilson line does not preserve any supersymmetry globally.
Many of the unexpected features of (4.2) have simple classical explanations that arise from properties of conformally coupled scalars. The fact that (4.2) is not positive definite everywhere was interpreted in [17] as an inherently quantum effect; in fact, it is a feature already present at the classical level, reflecting that conformally coupled scalar fields can violate energy conditions even classically. As first noticed in [18], (4.2) depends on the derivative of the acceleration; now we understand that this follows from the fact that the improved tensor (2.13) involves second derivatives of the field. Another puzzle raised in [18] is that in $\mathcal{N} = 4$ SYM, radiation appeared isotropic at weak coupling; as our classical derivation of (4.2) shows, this isotropy is just an artifact of considering minimally coupled scalars instead of conformally coupled ones.
In [17] it was noticed that for circular motion, while the angular distribution of radiated power computed holographically did not match the classical computation of Maxwell plus a minimally coupled scalar, the respective time averages over a period did match. The reason is now easy to understand: the details of the angular distribution depend on ξ, but after averaging over a period, the averaged angular distribution is independent of ξ.
Let us now discuss the total radiated power in $\mathcal{N} = 2$ SCFTs. Integration of (4.2) over angular variables yields eq. (4.3). Our computation ensures that this formula is valid at the classical level. At strong coupling, the only evidence is the $\mathcal{N} = 4$ SYM holographic computation of [18].
To conclude, let us comment on the relation $B_{\mathcal{N}=2} = 3h_{\mathcal{N}=2}$ conjectured in [10,12] and proved in [13] for generic, not necessarily Lagrangian, $\mathcal{N} = 2$ SCFTs. This is a relation between the Bremsstrahlung coefficient as defined in (1.3) and the $h_{\mathcal{N}=2}$ coefficient as defined in (1.2). The proof presented in [13] relies on $12B_{\mathcal{N}=2} = C_D$, but not on the argument of [8]. The values obtained in section 2 allow us to test that this relation is satisfied by a free U(1) $\mathcal{N} = 2$ SCFT, and in fact by any Lagrangian $\mathcal{N} = 2$ SCFT at weak coupling, eq. (4.4). On the other hand, it also follows that the coefficients $B_R^{\mathcal{N}=2}$ and $h_{\mathcal{N}=2}$ of any Lagrangian $\mathcal{N} = 2$ SCFT satisfy, at weak coupling, the same relation as in Maxwell theory or for a conformal scalar, eq. (4.5). At strong coupling, contracting (4.3) with $u_\mu$ and using $B_{\mathcal{N}=2} = 3h_{\mathcal{N}=2}$, we again obtain this relation, eq. (4.6), which is the one found for Maxwell's theory and for a free conformal scalar. So if (4.3) holds, (4.6) would be true for all the probes coupled to CFTs considered in this paper. Currently, the only evidence for (4.3) at strong coupling is the holographic computation of [18] for $\mathcal{N} = 4$ SYM.
Discussion and outlook
In this work we have discussed radiation for theories with scalar fields. We have found that for non-minimally coupled scalars, the energy density is no longer positive definite, it depends on the derivative of the acceleration of the probe, and the radiated power is not Lorentz invariant. These three features were also encountered in the strongly coupled regime of $\mathcal{N} = 4$ super Yang-Mills, by holographic computations [17,18]. In the introduction we mentioned that these computations do not quite agree with the holographic computations at the probe string/brane level. The backreacted computations of [17,18] are on a firmer theoretical ground, but the results they yielded were unexpected, casting doubts on their validity. Our work implies that these features are to be expected for any conformal field theory with conformal scalars, and confirms the validity of the holographic computations of [17,18].
In this work we have not discussed radiation reaction on the probe coupled to the scalar field. It would be interesting to discuss it for the case of non-minimally coupled scalars.
We have shown that the relation (4.6) holds for probes of free CFTs, and we have presented evidence that it also holds for 1/2-BPS probes in $\mathcal{N} = 4$ SCFTs. At this point it is not clear whether it holds for arbitrary probes of generic CFTs. A possible case in which to test it further would be less supersymmetric probes of $\mathcal{N} = 4$ super Yang-Mills.
The fact that (4.2) holds both at weak and strong λ in the planar limit of $\mathcal{N} = 4$ super Yang-Mills is rather mysterious, as it is not a BPS quantity. It will be important to prove whether (4.2) holds for any λ, in the planar limit, or even at finite N. An even stronger conjecture is that it holds for generic $\mathcal{N} = 2$ superconformal theories, but currently we lack techniques to study $\langle T^{\mu\nu}\rangle_W$ at strong coupling for generic $\mathcal{N} = 2$ SCFTs and arbitrary timelike worldlines.
Finally, this note has only considered radiation of scalar fields in Minkowski spacetime. It will be interesting to generalize our results to other spacetimes. | 7,093.8 | 2020-03-01T00:00:00.000 | [
"Physics"
] |
Modeling of the Rating of Perceived Exertion Based on Heart Rate Using Machine Learning Methods
Abstract Rating of perceived exertion (RPE) can serve as a more convenient and economical alternative to heart rate (HR) for exercise intensity control. This study aims to explore the influence of factors such as indicators of demographics, anthropometrics, body composition, cardiovascular function and basic exercise ability on the relationship between HR and RPE, and to develop a model predicting RPE from HR. 48 healthy participants were recruited to perform an incremental 6-stage pedaling test. HR and RPE were collected during each stage. The influencing factors were identified with the forward selection method to train Gaussian Process regression (GPR), support vector machine (SVM) and linear regression models. Metrics of R2, adjusted R2 and RMSE were calculated to evaluate the performance of the models. The GPR model outperformed the SVM and linear regression models, and achieved an R2 of 0.95, adjusted R2 of 0.89 and RMSE of 0.52. Indicators of age, resting heart rate (RHR), central arterial pressure (CAP), body fat rate (BFR) and body mass index (BMI) were identified as the factors that best predicted the relationship between RPE and HR. It is possible to use the GPR model to estimate RPE from HR accurately, after adjusting for age, RHR, CAP, BFR and BMI.
INTRODUCTION
Appropriate exercise can improve health and reduce the incidence of chronic disease and early mortality (Friedenreich et al. (2010), Healy et al. (2008)). Exercise is prescribed based on four elements: frequency, intensity, time, and type. These four elements together determine the benefit of exercise (Garber et al. (2011)). Exercise intensity, an important determinant of the physiological responses to exercise training, is essential for achieving exercise benefits (Garber et al. (2011), Riebe et al. (2018)). Commonly, exercise intensity is measured by objective indicators such as HR (Jamnick et al. (2020)). But these are difficult to apply to the daily exercise of a large-scale population due to the requirement of specialized equipment. Fortunately, RPE, well known to be highly correlated with HR during physical activity (1 RPE point is approximately 10 bpm) (Borg (1962)), can measure exercise intensity based only on subjective feelings of fatigue. Due to its convenience and economy, RPE has been suggested as an adjunct to HR for measuring exercise intensity, and can even replace HR once the relationship between RPE and HR is established (Chow & Wilmore (1984), Scherr et al. (2013)).
Many researchers have focused on the relationship between RPE and HR. Borg first found that HR is equal to the RPE value multiplied by 10 based on the RPE-15 scale, namely: HR[bpm] = RPE*10 (Borg (1962)). A study of young men in Taiwan explored the relationship between Borg's RPE scale and HR values during dynamic exercise; the result was HR[bpm] = 8.88*RPE + 38.2 (Chen et al. (2013)). In a study of Hong Kong adults, the relationship between Borg's RPE scale and HR was: HR[bpm] = 43 + 7.9*RPE (Leung et al. (2004)). In a large population study in Germany, the relationship HR[bpm] = 69.34 + 6.23*RPE was found between RPE values on Borg's RPE scale and HR (Scherr et al. (2013)). These studies hypothesized that the relationship between HR and RPE is not affected by other factors. However, some studies have shown that RPE during exercise is not only related to HR but may also be affected by other factors. Studies (Koltyn et al. (1991), Scherr et al. (2013)) showed that women's HR was significantly higher than that of men at the same RPE. Borg et al. reported that HR was higher in the younger age group than in the older age group at the same RPE (Borg & Linderholm (2010)). In a study exploring the influence of exercise experience on the relationship between HR and RPE (Winborn et al. (1988)), HR showed a significantly higher association with RPE in subjects with high exercise experience than in subjects with low exercise experience. There may be other factors that affect the relationship between HR and RPE. For example, information such as demographic data, anthropometric data, body composition indicators, cardiovascular function indicators, and physical fitness indexes can reflect an individual's cardiovascular health, heart and lung capacity, etc., and may cause the relationship between RPE and HR to change. However, as far as we know, few studies have included influencing factors when constructing the relationship model between RPE and HR, which results in a large HR fluctuation range at the same RPE.
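Since the cited linear models differ substantially, a small sketch makes the spread concrete (the formulas are taken from the studies quoted above; the RPE grid and the comparison itself are ours):

```python
# Compare the published linear HR-RPE models cited above on a common RPE grid.
# Formulas are taken from the cited studies; the RPE grid itself is arbitrary.
models = {
    "Borg (1962)":          lambda rpe: 10.0 * rpe,
    "Chen et al. (2013)":   lambda rpe: 8.88 * rpe + 38.2,
    "Leung et al. (2004)":  lambda rpe: 7.9 * rpe + 43.0,
    "Scherr et al. (2013)": lambda rpe: 6.23 * rpe + 69.34,
}

for rpe in (9, 13, 17):  # light / somewhat hard / very hard on the RPE-15 scale
    hrs = {name: f(rpe) for name, f in models.items()}
    spread = max(hrs.values()) - min(hrs.values())
    print(f"RPE {rpe}: "
          + ", ".join(f"{name}={hr:.0f} bpm" for name, hr in hrs.items())
          + f"  (spread {spread:.0f} bpm)")
```

Running this shows that the predicted HR at a fixed RPE can differ by tens of bpm between models, which is exactly the "large HR fluctuation range at the same RPE" that motivates including individual factors.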
Machine learning (ML) is a computer-based data analysis method, which has become an alternative to conventional statistical methods for developing prediction models (Kavakiotis et al. (2017)). By learning from sample data, ML can uncover the underlying patterns in the data and create a model for prediction on new data. Compared with traditional machine learning models, e.g., the linear regression model (Du et al. (2020)), which are built using prior knowledge based on some implicit assumptions, modern machine learning models make only weak assumptions about the mapping function, which helps to learn any underlying patterns in the training data, and can deal with nonlinear relationships and higher-order interactions between variables (Russell & Norvig (2020)), both of which are common challenges in the field of health care.
This study hypothesized that some indicators of demographics, anthropometrics, body composition, cardiovascular function, and physical fitness may affect the relationship between RPE and HR. The objectives of this study were to explore the factors that influence the relationship between HR and RPE among these indicators and to develop a model for predicting RPE from HR. We evaluate and compare the performance of three machine learning algorithms in developing the model, and then choose the best machine learning algorithm to develop the prediction model. The algorithms we used are two modern machine learning algorithms, Gaussian Process regression (GPR) (Schulz et al. (2018)) and support vector machine (SVM) (Noori et al. (2011)), and a traditional linear regression.
Subject
The study was carried out at the Institute of Intelligent Machines, Chinese Academy of Sciences, Hefei, China. Participants were recruited through social media, advertisements in public places, and word of mouth. In our study, 60 potential subjects, comprising college students, scientific researchers, young and middle-aged white-collar workers, and retired people, were recruited from Hefei, Anhui province. Of the 60 participants, 3 subjects were excluded since they were younger than 20 years or older than 65 years. Further, 2 individuals who were athletes were excluded; athletes were defined as performing at least 10 h of exercise per week or being members of a national athletics team (Bjornstad et al. (2006)). Besides, 7 volunteers who did not pass the PAR-Q questionnaire (Neto et al. (2013)) were also excluded from the study. Finally, 48 participants (24 men and 24 women, age: 34.98±11.82 years) were included in the study (Figure 1). All the participants were fully informed of the experimental process and matters needing attention, and signed the informed consent. This experiment was approved by the ethics committee of the Hefei Institute of Physical Sciences, Chinese Academy of Sciences (No. Y-2018-29). Before the experiment, physical fitness checkup data were collected for each participant. These data form a feature matrix. The physical fitness checkup measures indicators of anthropometrics, body composition, cardiovascular function, and basic exercise ability. For all participants, stature and body mass were measured twice with light indoor clothes and without shoes, and the mean values were used. BMI was expressed as the ratio of total body mass divided by stature squared (kg/m²). Body fat rate (BFR) and body muscle rate (BMR) were measured by a BX-BCA-100 body composition analyzer (Institute of Intelligent Machines, Hefei, China); systolic blood pressure (SBP), diastolic blood pressure (DBP), central arterial pressure (CAP), resting heart rate (RHR), ejection duration (ED), and subendocardial viability ratio (SEVR) were measured by an IIM-CFTI-100 cardiovascular function test instrument (Institute of Intelligent Machines, Hefei, China). Sit-and-reach was measured by a TSN 100/200-TQ (Beijing physical fitness), grip strength by a TSN 100/200-WL (Beijing physical fitness), vital capacity (VC) by a TSN 100/200-FH (Beijing physical fitness), balance ability by a TSN 100/200-ZL (Beijing physical fitness), and reaction time by a TSN 100/200-FY (Beijing physical fitness). The baseline characteristics of the participants are described in Table I.
RPE scale
The Borg RPE scale (Borg (1970)) and the CR-10 scale (Borg (1990)) are commonly used to measure RPE. However, both rely on many rating levels with completely subjective descriptions, which limits their application in daily exercise. To address this issue, an improved CR-10 scale is proposed in this study.
First, in line with people's cognitive habits (Williams et al. (1994)), perceived exertion was rated from 1 to 10, where "1" stands for extremely easy and "10" stands for exhaustion, with exertion increasing with the rating. Second, because the physiological response to exertion is mainly reflected in breathing, especially breathing while talking, and a description of the physiological response is more objective and easier to identify than descriptions such as "hard" or "a little hard", a corresponding physiological-response description was added for each rating to improve the accuracy of RPE identification. The improved CR-10 scale is shown in Table II.
The scale was reviewed and approved by a total of 6 experts in sports medicine and rehabilitation from two tertiary hospitals.

All participants performed intermittent incremental pedaling tests on a cycle ergometer (IEC 60601-1, REF no. 960912, Lode BV Medical Technology, The Netherlands). During the test, participants wore a Mortara ECG (UltimaTM PFX MEDGRAPHICS cardiopulmonary tester, MGC Diagnostics, USA) to monitor HR and ECG. The maximal acceptable pedaling workload (MAPW) was calculated using the formula proposed by Wasserman et al. (Wasserman et al. (2018)). Values equivalent to 20%, 40%, 50%, 70%, 85%, and 100% of MAPW were used as the 6-stage workload of the pedaling test. Each stage lasted 3 minutes, as did the interval between stages, and the subjects kept a pedal cadence of about 60 rpm. Before the experiment, the entire experimental process and the improved CR-10 scale were explained to each participant by trained practitioners. Participants sat quietly for 15 minutes until their HR was at a resting level (sustained for 3 min). Heart rate was recorded by 10-lead ECG in the sitting position. The average HR over the last 15 seconds of each workload was taken as the HR of that workload. In the 15 s before the end of each workload, the participants were asked to report RPE according to the improved CR-10 scale. During the 3-min break between two pedaling tests, the RPE value of the prior test was confirmed to ensure the reliability of the RPE values. Termination conditions were: (1) the subject completes the entire 6-stage pedaling test; (2) the subject experiences discomfort and requests termination. All tests were performed in the morning; the laboratory temperature was controlled at 20 °C and the humidity at 50%.
Leave-one-out cross validation
Leave-one-out cross validation is the special case of k-fold cross validation in which k equals the number of samples, and it is used here to evaluate the performance of a regression. The process is shown in Figure 2. The dataset of n samples is shuffled and partitioned into n folds; each fold (a single sample) is used in turn as the test set to evaluate model performance, while the remaining samples form the training set used to construct the model. A total of n models are therefore trained, and their average performance is taken as the performance of the model. An advantage of leave-one-out cross validation is that, with a small dataset, it produces the same result every time the procedure is executed, since each sample serves as the test set exactly once regardless of the shuffling.
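To make the procedure concrete, here is a minimal sketch (not the authors' code; scikit-learn and the placeholder linear model and toy data are our own assumptions) of the leave-one-out RMSE used throughout this study:

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.linear_model import LinearRegression

def loocv_rmse(model, X, y):
    """Each sample serves once as the test set; the RMSE over all
    held-out predictions measures the model's performance."""
    preds = np.empty(len(y))
    for train_idx, test_idx in LeaveOneOut().split(X):
        model.fit(X[train_idx], y[train_idx])
        preds[test_idx] = model.predict(X[test_idx])
    return float(np.sqrt(np.mean((preds - y) ** 2)))

# toy data standing in for the real (HR, RPE) samples
rng = np.random.default_rng(0)
X = rng.normal(70.0, 10.0, size=(40, 1))            # e.g. heart rate
y = X[:, 0] / 10.0 + rng.normal(0.0, 0.3, size=40)  # e.g. RPE
print(loocv_rmse(LinearRegression(), X, y))
```

Because no random splitting enters after the data are fixed, repeated runs return identical RMSE values, which is the determinism noted above.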
Feature selection and model construction
To evaluate the performance of the three machine learning models, we randomly divided the participants into a training set and a testing set. For each of the three machine learning techniques, the training set was used to determine the model features and construct the model, and the test set was used to evaluate the performance of the model.
We collected demographic data, anthropometric data, body composition indexes, cardiovascular function indicators, and basic exercise ability indexes to form a feature matrix of 16 features: age, gender, BMI, BFR, BMR, SBP, DBP, CAP, RHR, SEVR, ED, sit-and-reach, grip strength, VC, balance ability, and reaction time. The forward selection method was used to determine which features in the feature matrix are effective in the models, i.e., which features influence the relationship between RPE and HR. Forward selection adds one feature to the best feature set per iteration, and the final best feature set is determined after multiple iterations (Mao (2004)). The specific algorithm is given in Algorithm 1.
Algorithm 1 Feature selection
Input: feature matrix P = {x₁, x₂, ⋯, xₙ}; feature set of the regression model Q = {HR}. Output: Q.

As illustrated in Algorithm 1, our approach is implemented as follows. Step 1: Perform a machine learning method to construct the RPE prediction model based on the initial feature set Q, and use the leave-one-out cross validation method to calculate and save the root mean square error (RMSE) of the model;
Step 2: From the feature matrix, select the feature that best improves the performance of the model (measured by the leave-one-out cross validation RMSE) and add it to the feature set; then delete the selected feature from the feature matrix. Step 3: Repeat Step 2 until adding any feature from the feature matrix no longer improves the performance of the model, at which point the iteration terminates. A code sketch of this procedure is given below.
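A minimal Python rendering of Algorithm 1 (the paper publishes no code, so names and the caller-supplied estimator are illustrative) can reuse the loocv_rmse helper sketched earlier:

```python
def forward_select(model, X_all, y, feature_names, initial=("HR",)):
    """Greedy forward selection driven by leave-one-out RMSE: add, per
    iteration, the feature that most lowers the RMSE; stop when none helps."""
    cols = {f: i for i, f in enumerate(feature_names)}
    selected = list(initial)
    remaining = [f for f in feature_names if f not in selected]
    best = loocv_rmse(model, X_all[:, [cols[f] for f in selected]], y)
    while remaining:
        trial = {f: loocv_rmse(model, X_all[:, [cols[c] for c in selected + [f]]], y)
                 for f in remaining}
        f_best = min(trial, key=trial.get)
        if trial[f_best] >= best:        # no improvement: terminate (Step 3)
            break
        selected.append(f_best)          # Step 2: add the best feature
        remaining.remove(f_best)
        best = trial[f_best]
    return selected, best
```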
Training set and testing set
At the end of testing, all participants completed the first three stages of the pedaling test, 46 participants (96%) completed the first four stages, 35 participants (73%) completed the first five stages, and 10 participants (21%) completed all six stages. The data collected at the sixth stage were excluded from the following analysis because of their limited amount (n = 10). Finally, a total of 224 samples were used for model construction and evaluation. To verify the performance of the three regression algorithms on our data, the 48 participants were randomly divided into a training set (40 participants, 188 samples) and a test set (8 participants, 36 samples). There was no significant difference between the two sets in the main features, such as age, BMI, and RHR.
Feature selection of the models
Separately for GPR, SVM, and linear regression, we first ran the feature selection algorithm on the training set to determine the feature set. The feature selection processes of the three models are shown in Figure 3 and Table III.

For GPR, the initial RMSE was 0.811 when the feature set included only HR. When CAP was added in the first iteration, the RMSE reached 0.690. The second iteration incorporated BFR, with an RMSE of 0.645. Age was added in the third iteration, giving an RMSE of 0.595. The fourth and fifth iterations incorporated BMI and RHR in turn, with RMSEs of 0.557 and 0.556, respectively. VC was added in the sixth iteration, which increased the RMSE to 0.557 and thus satisfied the termination condition. Therefore, HR, age, RHR, CAP, BFR, and BMI constituted the feature set of the GPR model.

For the SVM model, with only HR in the initial feature set, the RMSE was 0.809. The first iteration incorporated CAP, giving an RMSE of 0.710. The second iteration incorporated age and improved the RMSE to 0.696. The incorporation of BMI in the third iteration worsened the performance of the model (RMSE: 0.702), which satisfied the termination condition. The feature set of the SVM model therefore comprised HR, age, and CAP.

The initial RMSE of the linear regression model was 0.795 with only HR in the feature set. Age was added in the first iteration, improving the RMSE to 0.748. The second iteration incorporated RHR, with an RMSE of 0.724. CAP was incorporated in the third iteration, and the RMSE reached 0.718. The model performed best when BMI was added in the fourth iteration, with an RMSE of 0.717. The fifth iteration, which incorporated BFR, was the last, because the RMSE remained 0.717 and thus did not improve on the previous iteration. For the linear regression model, the feature set therefore comprised HR, age, RHR, CAP, and BMI.
Model construction and performance analysis
We constructed the models on the selected feature sets separately for GPR, SVM, and linear regression using the training set; the test set was used to evaluate their performance. As shown in Table IV, the GPR model performed best on the test set, achieving an R² of 0.95, an adjusted R² of 0.89, and an RMSE of 0.52. The SVM model followed, with R², adjusted R², and RMSE of 0.91, 0.86, and 0.62, respectively. The linear regression model obtained the same R² (0.91) as the SVM model, but its adjusted R² and RMSE were 0.79 and 0.74, the worst among the three models. The GPR model therefore outperformed the other models. To further illustrate the performance of the best model (GPR), the scatter plot of measured RPE values against predicted (model-output) RPE values on the test set is shown in Figure 4. For practical application, the predicted RPE values can be rounded to whole ratings; doing so, we achieved an accuracy of 75%, and the remaining errors were all within one rating level.
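The evaluation itself can be sketched as follows (a hypothetical illustration using scikit-learn estimators; the kernel choices and default hyperparameters are our assumptions, not taken from the paper):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error

def evaluate(model, X_tr, y_tr, X_te, y_te):
    """Fit on the training set and report test-set R^2, adjusted R^2,
    RMSE, and the accuracy after rounding to whole RPE ratings."""
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    n, p = X_te.shape
    r2 = r2_score(y_te, pred)
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)
    rmse = float(np.sqrt(mean_squared_error(y_te, pred)))
    acc = float(np.mean(np.round(pred) == y_te))  # RPE ratings are integers
    return r2, adj_r2, rmse, acc

models = {"GPR": GaussianProcessRegressor(),
          "SVM": SVR(),
          "Linear": LinearRegression()}
# evaluate(models["GPR"], X_tr, y_tr, X_te, y_te) once feature columns are chosen
```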
DISCUSSION
To date, this is the first study that uses machine learning methods to explore the influencing factors of the relationship between RPE and HR and to construct a model of that relationship. First, we recruited 48 healthy people in the Hefei area, China, to perform an exercise experiment, during which HR and RPE were collected at each stage. Second, we constructed the optimal feature set with a forward selection method to train GPR, SVM, and linear regression models. With R², adjusted R², and RMSE of 0.95, 0.89, and 0.52, respectively, the GPR model outperformed the SVM and linear regression models and identified age, RHR, CAP, BFR, and BMI as the features that, together with HR, best predict RPE.
There are some differences among the three regression algorithms in the process of model construction. The final feature sets of the three models are not the same, nor is the order in which features are included, owing to the different learning processes of the algorithms. The performance of the two modern machine learning models is superior to that of the linear regression model (see Table IV), which suggests that the variables are not independent and that there are collinearity and higher-order interactions between them. Linear regression requires the relationship between the independent and dependent variables to be linear and uniform, so variables with a non-linear impact on the outcome may be omitted, which degrades the predictive performance of the model. As for the two modern machine learning methods, GPR is a machine learning algorithm based on distributions over functions; previous studies have shown that GPR performs well on low-dimensional, small-sample data (Liu et al. (2013)), and it has been widely used in time series analysis, automatic control, and other fields (Deng et al. (2020), Lima et al. (2020)). SVM is likewise a machine learning method suitable for small-sample data (Samui & Kim 2013). In this study we used 188 samples to build the model, with no more than 6 input variables, so our data meet the small-sample, low-dimensional conditions that suit the GPR and SVM algorithms. As expected, both models showed favorable prediction performance on our data. In addition, the GPR model outperformed the SVM model (adjusted R²: 0.89 vs. 0.86; RMSE: 0.52 vs. 0.62), so we consider GPR more reliable than SVM on our data. These results reveal the potential value of GPR in sports research, where data are often small-sample and low-dimensional.
The proposed algorithm can identify the influencing factors of the relationship between HR and RPE from the feature matrix. For the GPR model, after adjusting for age, RHR, CAP, BFR, and BMI, the performance of the model improved greatly, indicating that these are influencing factors of the relationship between HR and RPE. Regarding age, Borg and Linderholm reported a declining trend in HR at the same RPE with increasing age (Borg & Linderholm (2010)), and Shephard suggested that constructing a model within a narrower age range can improve the accuracy of RPE as a substitute for HR (Shephard (2013)). These studies indicate that age affects the relationship between RPE and HR, consistent with our findings. RHR and CAP are indicators of cardiovascular function, which relates to cardiorespiratory fitness (Wang (2016), McDaniel et al. (2020)); our study showed that RHR and CAP affect the relationship between HR and RPE. Winborn et al. (1988) found that different exercise experiences may produce differences in cardiorespiratory fitness, which in turn yield different RPE at the same HR (Winborn et al. (1988)); this view supports our findings. In addition, our research shows that BFR and BMI, both indicators of obesity, are influencing factors: at the same exercise intensity (HR), obese individuals consume more energy than individuals of normal weight (Keytel et al. (2005), Hiilloskorpi et al. (1999)), and greater energy consumption causes a stronger subjective feeling of fatigue. This may be the mechanism by which BFR and BMI affect the relationship between HR and RPE. Regarding gender, there is still debate about its influence on the relationship between RPE and HR (Robertson et al. (2000), Garcin et al. (2005), Scherr et al. (2013), Koltyn et al. (1991)). Our results showed no significant difference between men and women in this relationship. This is not surprising, since RPE represents relative exercise intensity and is positively correlated with %HRmax, and the predicted HRmax values did not differ significantly between men and women. Winborn et al. (1988) indicated that differences in RPE accuracy scores may be influenced by gender but that exposure to athletic experiences appears to override any potential gender differences. Presumably, gender differences in athletic experience rather than gender itself contribute to the differences in HR at the same RPE reported in some studies (Koltyn et al. (1991), Scherr et al. (2013)).
Compared with previous RPE prediction models, the prediction accuracy of the GPR model is significantly improved. Borg first found, based on the Borg RPE scale, that HR roughly equals the RPE value multiplied by 10, i.e., RPE = HR/10 (R² = 0.75) (Borg 1962). A study of young men in Taiwan described the relationship between Borg-scale RPE and HR during dynamic exercise by RPE = (HR − 38.2)/8.88 (R² = 0.70) (Chen et al. 2013). The relationship RPE = (HR − 43)/7.9 (R² = 0.56) was observed between Borg-scale RPE and HR in a study of Hong Kong adults (Leung et al. 2004). In a large population study in Germany, the relationship was expressed as RPE = (HR − 69.34)/6.23 (R² = 0.55) (Scherr et al. 2013). Moreover, these models used the same samples for training and validation, which can lead to overfitting, i.e., performance on other data significantly lower than on the training data. To address this issue, we used an independent test set to evaluate our models. The GPR model with HR, age, RHR, CAP, BFR, and BMI as input variables achieved an R² of 0.95 and an adjusted R² of 0.89, significantly outperforming the previous models and indicating that it converts HR to RPE more accurately. Meanwhile, with the continuous development of community healthcare services, the input features of the model are non-invasive and simple and convenient to measure. Through the model, the target HR can be converted to RPE easily and accurately, which could help individuals control personalized exercise intensity in daily exercise.
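As a small worked example, the earlier published linear conversions quoted above are easy to tabulate (the HR value of 140 bpm below is arbitrary, chosen only for illustration):

```python
# Published linear HR-to-RPE conversions cited in the text
def rpe_borg(hr):    return hr / 10.0             # Borg 1962, R^2 = 0.75
def rpe_chen(hr):    return (hr - 38.2) / 8.88    # Chen et al. 2013, R^2 = 0.70
def rpe_leung(hr):   return (hr - 43.0) / 7.9     # Leung et al. 2004, R^2 = 0.56
def rpe_scherr(hr):  return (hr - 69.34) / 6.23   # Scherr et al. 2013, R^2 = 0.55

for f in (rpe_borg, rpe_chen, rpe_leung, rpe_scherr):
    print(f"{f.__name__}: RPE ~ {f(140.0):.2f} at HR = 140 bpm")
```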
The current study has certain limitations. First, the 48 healthy participants were recruited from the Hefei area, so regional differences should be considered before wider application. Second, the experiment was conducted on a cycle ergometer in the laboratory, and the validity of the findings during free exercise still needs to be verified. Last, the features included in the study are limited; in particular, handgrip strength rather than quadriceps strength was used as the indicator of full-body strength, which may introduce bias. Although quadriceps muscle strength is a better indicator of full-body strength, the required measuring equipment (an isokinetic muscle strength instrument) is expensive and not widespread. With the continuing development of smart wearable equipment and intelligent systems, we plan to analyze further features (for example, lower-limb strength, quadriceps muscle strength, behaviors, education level, and character) for adjusting the relationship between RPE and HR.
CONCLUSIONS
Our study has shown that the proposed algorithm can identify the factors that affect the relationship between HR and RPE and construct a model for predicting RPE from HR. Among the three machine learning models, the GPR model performed best, achieving an R² of 0.95, an adjusted R² of 0.89, and an RMSE of 0.52. Age, RHR, CAP, BFR, and BMI were identified as the factors that best predict the relationship between RPE and HR. Compared with models in prior research, the GPR model converts exercise HR to RPE more accurately after adjusting for age, RHR, CAP, BFR, and BMI. This study provides a theoretical basis for people in Hefei, China, to use RPE (the improved CR-10 scale) instead of HR (target HR) to control exercise intensity.
Figure 1. Flow diagram of the recruitment and screening of participants.
Figure 3. The iteration processes of feature selection.
Figure 4. Scatter plot of measured RPE values and predicted RPE values on the test set.
Table I. Baseline characteristics (mean ± SD) of subjects.
BMI: body mass index; BFR: body fat rate; BMR: body muscle rate; SBP: systolic blood pressure; DBP: diastolic blood pressure; CAP: central arterial pressure; RHR: resting heart rate; ED: ejection duration; SEVR: subendocardial viability ratio; Sit-and-reach: used to measure flexibility; VC: vital capacity; Balance ability: time standing on one foot with eyes closed; Reaction time: time from seeing a signal light to pressing the button by hand, expressing agility.
Table II. Improved CR-10 scale.
Table III. Feature selected in each iteration by each model.
Table IV. Performance comparison of different models.
| 6,388.2 | 2023-04-03T00:00:00.000 | ["Computer Science", "Medicine"] |
Extensive numerical study of a D-brane, anti-D-brane system in AdS5/CFT4
In this paper the hybrid-NLIE approach of [38] is extended to the ground state of a D-brane anti-D-brane system in AdS/CFT. The hybrid-NLIE equations presented here are finite-component alternatives to the previously proposed TBA equations, and they provide an appropriate framework for the numerical investigation of the ground state of the problem. Straightforward numerical iterative methods fail to converge, so new numerical methods are worked out to solve the equations. Our numerical data confirm the previous TBA data. In view of the numerical results, the mysterious L = 1 case is also commented on.
Introduction
In this paper, in the context of AdS/CFT [1,2,3], we study numerically the ground state of a pair of open strings stretching between two coincident D3-branes with opposite orientations in S⁵ of AdS₅ × S⁵. The main motivation for the study is that, according to string theory, the ground state of such a configuration is expected to be tachyonic for large values of the 't Hooft coupling [4]. In our work we rely on the perturbatively discovered and later "all loop conjectured" integrability [5] of both the AdS₅ × S⁵ superstring and the dual large N gauge theory. For string configurations with D-branes, integrability makes it possible to describe strings ending on different types of D-branes as 1-dimensional integrable scattering theories with boundaries [6,7,8,9,10]. This formulation allows one to go beyond the approaches of perturbative gauge and string theory, valid at small and large 't Hooft coupling respectively, and to determine the exact spectrum of the model at any value of the coupling constant. However, even with the powerful techniques offered by integrability, an exact analytical solution of the problem is not possible. Remarkable analytical results are available in the small [11,12,13,14,15,32] and large [16,17,19,18] coupling regimes, but the determination of the spectrum at any value of the coupling constant can only be carried out by high-precision numerical solution [20,21,22] of the corresponding nonlinear integral equations.
In our paper we consider the case when the two D3-branes are giant gravitons [23], namely they carry N units of angular momentum in S⁵. If the S⁵ of AdS₅ × S⁵ is parametrized by three complex coordinates X, Y, Z satisfying the constraint |X|² + |Y|² + |Z|² = 1, then our D3-brane and anti-D3-brane are given by the conditions Y = 0 and Ȳ = 0, respectively. They wrap the same S³, but with opposite orientation, and as a consequence of the Gauss law such a system can support open strings stretching between the two branes. On the large N gauge theory side a Y = 0 brane is represented by a determinant operator [24] composed of N copies of the field Y,

D_Y = ε^{b₁⋯b_N}_{a₁⋯a_N} Y^{a₁}_{b₁} ⋯ Y^{a_N}_{b_N},   (1.1)

where a_i and b_i are color indices and ε is a product of two ordinary epsilon tensors, ε^{b₁⋯b_N}_{a₁⋯a_N} = ε_{a₁⋯a_N} ε^{b₁⋯b_N}. The local operator corresponding to an open string ending on a Y = 0 giant graviton is obtained from (1.1) by replacing one Y field with an adjoint-valued operator W [25], giving the operator (1.2). The gauge theory description of a pair of open strings stretching between two D-branes is given by a double determinant operator (1.3), in which the string insertions W and V connect the two determinants of the Y fields; the ground state of such string states is BPS. Unfortunately, the precise gauge theory dual of the DD̄-system of our interest is not known. In [4] it was approximated by a double determinant operator (1.4), similar to (1.3) but with the Y fields of one determinant replaced by Ȳ fields; according to the argument of [4], the correct state might involve other structures containing the fields Y and Ȳ, but it should be similar to the double determinant form (1.4), and the mixing with other fields seems to be suppressed at large N. This observation allows us to apply the boundary Thermodynamic Bethe Ansatz technique (BTBA) [26] to each open string separately. The necessary ingredients of this technique are the boundary reflection factors [8,27,28,29] and the asymptotic Bethe equations of the problem [4]. Unfortunately, apart from some very special cases [30,31,32], it is still unknown how to derive BTBA equations for a general non-diagonal scattering theory within the thermodynamic considerations of [26]. This is why in [4] the Y-system [33,34,35] and the related discontinuity equations [36], supplemented by analyticity assumptions compatible with the asymptotic solution [37], were used to derive BTBA equations for the nonperturbative study of the ground state of the DD̄-system.
The BTBA description of the system is an infinite set of nonlinear integral equations. Their numerical solution [4] showed that the ground state energy is a monotonously decreasing function of the coupling constant g (throughout the paper the relation between g and the 't Hooft coupling λ is λ = 4π²g²). The analytical investigation of the large-rapidity and large-index behavior of the Y-functions of the BTBA revealed that the usual BTBA description breaks down when the energy of an open string state with angular momentum L gets close to the critical value E_c(L) = 1 − L. This point was interpreted in [4] as a transition point where the ground state becomes tachyonic. Approaching the critical point, the contributions of infinitely many Y-functions must be taken into account to get an accurate numerical result for the energy, which means that the usual truncation procedure for solving the infinite set of TBA equations is not applicable to such a system. This fact suggests reformulating the finite size problem in terms of a finite number of unknown functions. Possible candidates are the FiNLIE [46], the quantum spectral curve (QSC) [47,48] and the hybrid-NLIE (HNLIE) [38] formulations of the problem. Since at present it is not known (not even for the Konishi problem) how to use the analytically very efficient [19,14] QSC method for numerical purposes, we chose the HNLIE method to reformulate the finite size problem of the DD̄-system. In this paper we transform the infinite set of boundary TBA equations [4] into a finite set of hybrid-NLIE type nonlinear integral equations, and we perform an extensive numerical study of these equations in order to get as close to the special E_BTBA = 1 − L critical point as possible.
Our numerical results reproduce the numerical evaluation of the boundary Lüscher formula [27,39] in the linear approximation, as well as the numerical BTBA results of [4]. These comparisons give further numerical checks on the hybrid-NLIE technique of [38]. Unfortunately, as g increases, new local singularities enter the HNLIE formulation of the problem, so we could not approach very close to the critical point. Nevertheless, in the range of g where physically acceptable numerical results were obtained, the HNLIE results could give higher numerical precision than the BTBA, and some interesting facts could be read off from our numerical data.
During the numerical solution of the HNLIE equations, straightforward iterative methods failed to converge, so new numerical methods were worked out to solve the equations.
The ground state at L = 1 is a very special case, since there the critical point is right at g = 0, and so far neither perturbative field theory computations nor the boundary Lüscher formula could provide a finite quantitative answer for the anomalous dimension of this state. On the integrability side, the HNLIE approach allows us to get some numerical insight into this problem. The outline of the paper is as follows. Section 2 contains the HNLIE equations. In section 3 the numerical method is described. In section 4 the numerical results and their interpretation are presented. Section 5 contains some comments on the mysterious L = 1 case, and our conclusions are given in section 6. Various notations, the kernels of the integral equations, and the necessary asymptotic solutions are placed in the appendices of the paper.
The HNLIE equations
In this section we transform the previously proposed BTBA equations of [4] for the ground state of our D-brane anti-D-brane system into finite-component hybrid-NLIE equations. For presentational purposes we group the equations into three types.
There are TBA-type equations, horizontal SU(2) hybrid-NLIE type equations, and vertical SU(4) hybrid-NLIE type equations. Together they form a closed set of nonlinear integral equations, which are solved numerically in this paper. As usual, the equations consist structurally of source terms plus convolutions containing coupling-dependent kernels and nonlinear combinations of the unknown functions. The objects appearing in the arguments of the source functions are subject to quantization conditions, but, similarly to the boundary TBA description [4], the u → −u symmetry of the problem ties them to the origin of the complex plane, so no extra quantization conditions need to be imposed: they are automatically satisfied by symmetry. Since these source term objects have fixed positions, their positions coincide exactly with those of their asymptotic counterparts. This saves us from the tedious computation of the source terms: if we take the difference of the exact equations and their asymptotic counterparts, the source terms cancel from the equations. To be pragmatic and to save time and space, the equations will be presented in this difference form. Thus for any combination f of the unknown functions we introduce the notation δf = f − f°, where f° is the asymptotic counterpart of f. Having introduced this notation, we start the presentation of the equations with the TBA-type part. For the labeling of the Y-functions we use the string-hypothesis [40] based notations of [41]; a few more notations needed for the presentation of the equations are defined in (2.1).
For later numerical purposes we re-parametrize log Y_Q by formula (2.2), such that c_Q is the constant value of log Y_Q at infinity and ε is minus twice the energy, ε = −2 E_BTBA. (The log multiplier of ε in (2.2) is chosen so as not to modify the constant term in the large u behavior and to reproduce the LHS of an important Y-system equation divided by its asymptotic counterpart.) From the TBA equations of the problem [4] it follows that δc_Q = c_Q − c°_Q ≡ δc is Q-independent, and for small g, log ȳ_Q is a smooth deformation of its asymptotic counterpart, such that δ log ȳ_Q tends to zero at infinity. (Note that log Y_Q itself cannot be considered a smooth deformation of log Y°_Q, because log Y_Q − log Y°_Q ∼ ε log |u| diverges for large u at any g; on the other hand log ȳ_Q − log ȳ°_Q is small for any u at small g and tends to zero at infinity.) Using this decomposition some further notations are introduced, and the TBA-type equations take the form (2.4)-(2.13).
For Y₁ the modified hybrid form [42] of the BTBA equations is used, where p₀ is the index limit starting from which the upper part of the TBA equations is replaced by the SU(4) NLIE of [38] (see figure 1). For any kernel vector K_Q appearing in the TBA equations, Ω(K_Q) denotes the residual sum Ω(K_Q) = Σ_{Q=p₀−1}^{∞} L_Q ⋆ K_Q, where ⋆ simply means integration from −∞ to ∞; following the method of [42], for p₀ ≥ 4 this sum can be expressed through next-to-nearest-neighbor Y-functions, with r_m = log(1 + Y_{m|vw}). The kernels s, s_{1/2}, σ_{1/2} are hyperbolic functions [42], while the other TBA kernels can be found in appendix A. As a consequence of the re-parametrization (2.2), the two constants δc and ε also become part of the set of unknowns; in the corresponding equations, for any kernel K, CK(u) denotes the constant term in the large v expansion of K(u, v). (We note that only the dressing kernel has a logarithmically divergent term in its large v expansion; all other kernels either have a constant term or simply vanish at infinity.) As mentioned, −ε/2 is the TBA energy, so (2.12) gives the energy formula in our formulation of the finite size problem. The asymptotic forms of the Y-functions necessary for the formulation of (2.4)-(2.13) are listed in appendix D. To close the discussion of the TBA-type equations we note that equations (2.7) and (2.8) determine Y_± only up to an overall sign factor; the sign can be fixed from the asymptotic solution, and its value is −1, so the fermionic Y-functions are expressed in terms of the LHS of (2.7) and (2.8) by formula (2.14). The horizontal SU(2) wing of the TBA is resummed by an SU(2)-type NLIE [43,38], which in our case involves a contour-shift parameter 0 < γ < 1/2; the kernel G is given by (B.6), and the asymptotic solution for b and b̄ is given in appendix D (in practice b and b̄ are complex conjugates of each other). The upper SU(4) NLIE
of [38] is attached to the TBA equations at the p₀-th node. The upper NLIE involves 12 complex unknown functions, b_A and d_A, A = 1, ..., 6, which are combinations of the T-functions of the upper-wing SU(4) Bäcklund hierarchy [38]. Their relations to the unknowns introduced in [38] are given by (B.3), (B.4) in appendix B, and their asymptotic forms are given in appendix C. Using the notation B_A = 1 + b_A and D_A = 1 + d_A, the equations they satisfy are written with the kernels given in (B.5)-(B.12). The shifts in the kernels, which are equivalent to fixing the lines on which the NLIE variables live, are chosen in a symmetrical way for the shift parameters γ₁, γ₂, γ₃ and η₁, η₂, η₃; in practice this reduces the number of SU(4) NLIE variables by half. The vectors E_A and Ē_A are conjugate to each other and give the TBA input into the upper NLIE; to give their form we introduce the notations of (2.25)-(2.28).
The last set of equations gives how the upper NLIE variables couple to the TBA part of the equations,
where b̃₂ and d̃₂ come from the re-parametrizations (2.36) and (2.37) of b₂ and d₂, with η = ±1 a global sign factor. Similarly to the definition of ȳ_Q, the benefit of using b̃₂ and d̃₂ is that, for small g, log b̃₂ and log d̃₂ are smooth deformations of their asymptotic counterparts, and in addition δ log b̃₂ and δ log d̃₂ vanish at infinity, which is necessary for the convergence of certain integrals. The decompositions (2.36) and (2.37) are chosen to be compatible with the functional relations of [38]. Equations (2.4)-(2.35) constitute our complete set of nonlinear integral equations, which governs the finite size dependence of the vacuum of our D-brane anti-D-brane system.
The numerical method
Here we describe our numerical method for solving the hybrid-NLIE equations presented in the previous section. During the iterative numerical solution of the equations we faced very serious convergence problems, which forced us to work out a method that overcomes all the difficulties that emerged. Our numerical method can also be applied to other types of nonlinear integral equations. Its power is shown by the fact that numerical convergence was reached even in cases where the solution was physically unacceptable. The numerical method consists of two main steps, namely:
• discretization of the equations;
• iterative solution.
The first step involves the discretization of the unknown functions and kernels, as well as the discrete approximate representation of the convolutions. Having carried out the appropriate discretization, the equations are treated as a large set of nonlinear algebraic equations; thus, instead of integral equations, we eventually solve discrete algebraic equations. In this paper we present two methods to solve them numerically.
Discretization of the problem
The discretization serves two goals. First, it reduces the numerical problem from solving integral equations to solving algebraic equations. Second, choosing the discretization points appropriately reduces the number of degrees of freedom as far as possible for the desired numerical accuracy. In our actual numerical computation, instead of the u of section 2 we used a rescaled rapidity, because with such a scaling almost all of the rapidity-difference-dependent kernels become g-independent; thus, for example, Y_±(u) is defined on [−2g, 2g]. To decrease the number of discretization points, the u → −u symmetry of the problem is exploited: the Y-functions need to be discretized only on [0, ∞) or [0, 2g], and for the NLIE variables it is enough to discretize the b- and d-type variables on [0, ∞). Since we do not want to introduce any cutoff in the rapidity, the semi-infinite interval is mapped to a finite one through a transformation formula u ↔ t, chosen such that the branch point 2g corresponds to t = 1 for any choice of the parameter a, where a is a global scaling factor that changes from unknown to unknown. We chose the values as follows: for Y_{1|w}, a = 1; for b and b̄, a = 2; for Y_Q and Y_{Q−1|vw}, a = Q; for η₁ and η̄₁, a = p₀; and finally for the b- and d-type NLIE functions, a = p₀. These values are chosen to preserve the smoothness of the transformed functions on the finite interval. After this transformation all of our unknown functions live on finite intervals. To discretize them we used piecewise Chebyshev approximation: we divide the finite interval into subintervals, and on each subinterval the functions are approximated by a Chebyshev series of given order. The choice of subintervals is not equidistant; they are placed more densely around the branch points, since the function x(u/g), which governs the decay of the massive Y_Q-functions, changes most rapidly there. The advantage of the Chebyshev approximation is that, if the function is smooth enough on the subinterval, the coefficients of the Chebyshev series decay rapidly, and the order of magnitude of the last coefficient allows us to estimate the numerical errors of the procedure.

Now we describe the discretization method in more detail. Our functions are defined on either [0, B(Q)] or [0, 2g]. Accordingly, two types of subinterval vectors are defined, A_Q and A_±: the endpoints of the subintervals of [0, B(Q)] are collected in the vector A_Q, and the endpoints of the subintervals of [0, 2g] define A_±. Let l_k be the order of the Chebyshev approximation. Following the general rules of Chebyshev approximation, a given function f is expanded on each subinterval in slightly modified Chebyshev polynomials T̃_{j−1}, where the vector A stands for either A_Q or A_±. The Chebyshev coefficients of f can be computed by the standard collocation formula from its values at the sampling points of the approximation, which are determined by the zeros c^{(i)}(l_k) of the Chebyshev polynomial of order l_k. The next step of our method is to formulate the convolutions and the equations themselves in terms of the discrete values of our functions. Here we sketch the basic idea in some typical scenarios appearing in our equations; its application to the concrete unknowns and kernels of the problem is then straightforward.
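As a rough illustration of the piecewise Chebyshev discretization (a sketch with NumPy; the subinterval choices and the test function are placeholders, not the paper's actual transformation), one can fit and evaluate per-subinterval series and read off the truncation error from the last coefficient:

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

def cheb_fit(f, a, b, lk):
    """Interpolate f on [a, b] with an order-lk Chebyshev series sampled
    at the zeros of T_lk mapped into the subinterval."""
    nodes = np.cos((np.arange(lk) + 0.5) * np.pi / lk)  # zeros of T_lk on [-1, 1]
    u = 0.5 * (b - a) * (nodes + 1.0) + a
    return cheb.chebfit(nodes, f(u), lk - 1)

def cheb_eval(coeffs, u, a, b):
    return cheb.chebval(2.0 * (u - a) / (b - a) - 1.0, coeffs)

f = lambda u: 1.0 / np.cosh(u)                # stand-in for a smooth unknown
for (a, b) in [(0.0, 0.5), (0.5, 1.0), (1.0, 3.0)]:
    c = cheb_fit(f, a, b, 10)
    # the magnitude of the last coefficient estimates the truncation error
    print(f"[{a}, {b}]: |c_last| ~ {abs(c[-1]):.1e}")
```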
If one takes the equations at the required discretized points t_j^{(k)}, a typical pattern arises in which the kernel is symmetrized so as to exploit the left-right symmetry of the problem, reducing the number of variables by half.
If the left-hand side of an equation is modeled by the variables taken at the discretized points of the transformed variable t, and L(u) stands for some nonlinear combination of the unknown functions of the equations, then the numerical approximation of the right-hand side goes as follows:
• first the integration variable is changed from v′ to t,
• then on each subinterval L(u(t)) is approximated by its Chebyshev series,
• finally the integration is carried out and the convolution is expressed in terms of the discretized values of L(u(t)).
The final approximation formula expresses the convolution through the discretized values L(u(t_j^{(k')})), where L(A) denotes the dimension of A and K^{k,j}_{k',j'} is the discretized convolution matrix. In this manner a convolution is reduced to a discrete matrix-vector multiplication. The other typical type of convolution is when the integration is taken from zero to 2g. In certain cases the function L(u) has square-root behavior close to the branch points, and for such functions a truncated Chebyshev series does not give an accurate approximation. In these cases it is not the function L(u) itself that is approximated, but the part that remains after the elimination of the square-root behavior: we write L(u) = √(4g² − u²) L̃(u), approximate L̃(u) by a truncated Chebyshev series, and obtain a discretized form of the convolution very similar to the previous one, with K̃^{k,j}_{k',j'} the square-root-modified version of the convolution matrix. Depending on the left-hand side of the equation, u can denote points obtained from the t-transformation for some a, or the sampling points on [0, 2g].
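The reduction of a convolution to a matrix-vector product can be sketched as follows (the toy kernel and the Gauss-Legendre quadrature are our own stand-ins for the Chebyshev-based construction described in the text):

```python
import numpy as np

def convolution_matrix(kernel, u_pts, v_pts, w):
    """Discretize (K * L)(u) = ∫ K(u, v) L(v) dv into K_mat @ L_vals
    using quadrature nodes v_pts with weights w."""
    return kernel(u_pts[:, None], v_pts[None, :]) * w[None, :]

g = 1.0
kernel = lambda u, v: 1.0 / (np.pi * (1.0 + (u - v) ** 2))  # toy smooth kernel
x, w = np.polynomial.legendre.leggauss(64)                  # nodes on [-1, 1]
v_pts, w = g * (x + 1.0), g * w                             # mapped to [0, 2g]
u_pts = np.linspace(0.0, 2.0 * g, 20)

K_mat = convolution_matrix(kernel, u_pts, v_pts, w)
L_vals = np.sqrt(4.0 * g**2 - v_pts**2)   # square-root edge behaviour
conv = K_mat @ L_vals                     # the convolution as a matrix-vector product
print(conv[:3])
```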
Applying this discretization technique to all unknowns and convolutions, we reduce the integral equations to a discrete set of nonlinear algebraic equations. The transformation from integral to algebraic equations is obviously not exact: the typical error comes from truncating the Chebyshev series on each subinterval, so the magnitude of the typical errors of our numerical method is governed by the neglected terms of the series, which can be estimated by the magnitude of the last retained Chebyshev coefficient. In our case this is typically between 10⁻⁵ and 10⁻⁶.
The last step of our numerical method is the iterative solution starting from the asymptotic solution.
The iterative solution
Here we describe two methods to solve our integral equations iteratively. Since the actual equations have a very complicated form, we describe the methods on a model example with a structure similar to that of our equations. Let the model equations take the form (3.12), where the G_ab are kernel matrices, the f_a are source terms, and the y_a are the unknown functions of the problem. The solution of (3.12) is expanded around the asymptotic solution, and the equations are formulated in terms of the corrections.
To fix the conventions, the correction functions δy_a are defined by δy_a = y_a − y°_a, and the source term is also expanded around its asymptotic counterpart: f_a = f°_a + δf_a. Equations (3.12) can then be reformulated in terms of the δy_a functions, giving (3.14). To define the iterative method, (3.14) is rewritten in a form, (3.16), in which only O(δy²_a) terms remain on the right-hand side. At each step of this iterative method a set of linear integral equations must be solved; using the discretization method of the previous subsection, this reduces to solving a set of linear algebraic equations, a straightforward task in numerical mathematics. The very first (0th) iteration starts from the asymptotic solution δy_a = 0 and corresponds to the solution of the linearized equations, which in our case gives the Lüscher formula for the energy.
In a certain range of the coupling constant this (first) method defined a numerically convergent iteration for the ground state of our D-brane anti-D-brane problem, but beyond a certain value of g it failed to converge. This is why we worked out a second method, which proved to be much more efficient than the first one. The efficiency is manifested in two facts: first, it converges much faster than the first iterative method; second, it gives convergent solutions to our equations even when the solution cannot be accepted as a physical one.
This second method can be described simply in words. Instead of defining an iteration as above, we take the discretized version of (3.14) and consider it as a set of nonlinear algebraic equations. As a first step we solve the linearized discrete equations (i.e., (3.16) with vanishing RHS), and starting from that solution we solve the discrete nonlinear system by the Newton method (in MATHEMATICA this can be implemented by FindRoot[...,Method→"Newton"]). We note that beyond a certain value of the coupling constant the equations in the form presented in section 2 are not the right ones anymore (they should be corrected by some new source terms and quantization conditions), but even for the "wrong" equations the second method shows numerical convergence, giving an unacceptable result.
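To illustrate the second method on a toy model (a single TBA-like equation; the kernel, the source term and the grid are invented for the demonstration and are not the paper's equations), one can discretize and apply Newton iteration with an analytic Jacobian:

```python
import numpy as np

# toy model:  eps(u) = f(u) - ∫ G(u - v) log(1 + e^{-eps(v)}) dv
N = 200
u = np.linspace(-20.0, 20.0, N)
du = u[1] - u[0]
f = 1.0 + u**2 / 40.0                                    # toy source term
G = du / (np.pi * (1.0 + (u[:, None] - u[None, :])**2))  # toy kernel matrix

def residual(eps):
    return eps - f + G @ np.log1p(np.exp(-eps))

def jacobian(eps):
    # d/d eps_j of log(1 + e^{-eps_j}) = -1 / (1 + e^{eps_j})
    return np.eye(N) - G / (1.0 + np.exp(eps))[None, :]

eps = f.copy()                        # "asymptotic" starting point
for it in range(30):                  # Newton iteration on the discrete system
    step = np.linalg.solve(jacobian(eps), -residual(eps))
    eps += step
    if np.max(np.abs(step)) < 1e-12:
        break
print(it, np.max(np.abs(residual(eps))))
```

The quadratic convergence of Newton's method is what makes this approach much faster than the fixed-point-style first method, at the price of assembling and inverting the Jacobian at each step.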
Numerical results
In this section we summarize our numerical results. We solved the equations numerically for several integer values of the length parameter L; here we concentrate on the states with L ≥ 2, and the special L = 1 case is discussed in the next section. For the explanation of the numerical data we mostly use the L = 2 case as an example, because the critical point of this state is the closest to zero, so it is enough to work with relatively small values of the coupling constant. This is important from the numerical point of view, since increasing g makes the numerical method more and more time consuming. First we discuss the parameters of the numerical method. There are three parameters in the nonlinear integral equations (2.4)-(2.35). The most important one is the coupling constant g; the two others allow us to formulate the equations according to our purposes. These are p₀ and C, where p₀ is a kind of "truncation index" giving the node number starting from which the upper TBA equations are replaced by SU(4) NLIE variables (see figure 1), and C is a free parameter in the asymptotic solution for the upper SU(4) NLIE variables (C.6-C.20), entering the equations through the asymptotic solution around which they are formulated. From this discussion it is obvious that g is a physical parameter, meaning that the energy depends on it, while p₀ and C correspond to different formulations of the same mathematical problem, so the energy does not depend on them. The choice of these parameters is thus in our hands, and we tried to choose values that allow numerical convergence in the widest possible range of g. For example, the choice C = 0 is the best for numerical purposes, since due to (2.22) a u → −u symmetry arises in the SU(4) HNLIE variables, minimizing the number of unknowns in the problem. Tuning p₀ has two potential advantages. First, numerical experience shows that for large p₀ the Chebyshev coefficients of the unknowns entering the formula (2.10) for Ω decay faster, which allows higher numerical precision. Second, also from numerics, we learned that with p₀ fixed, unphysical results are obtained at certain values of g; this is a consequence of new local singularities entering the problem that we have not yet taken into account in the equations. We solved the equations numerically for different values of L and with various values of p₀ and C, and whenever the numerical result was physically acceptable for all p₀ and C we tried, it was also independent of these parameters within the numerical errors of the method.
So far we have discussed the parameters of the continuous integral equations and their role in the numerical solution. Now we turn to the numerical parameters of the equations, which are artifacts of the numerical method and arise mostly from the discretization described in section 3. We note that there is no cutoff parameter in our numerical method, neither in the integration range nor in the index of the Y-functions: everything is treated exactly, and the only source of numerical error is the discretization of the unknowns and the convolutions. Below we give the subinterval vectors most used in our numerical computations.
On each subinterval we used a Chebyshev approximation of order l_k = 10. The subinterval vector A_± of [0, 2g] is given by an empirical componentwise formula. For the subinterval vectors A_Q of [0, B(Q)], the guiding requirements, based on numerical experience with the choice l_k ≥ 10, are the following:
• the first element of A_Q is 1/2;
• the length of the subintervals in the range 3 < t < B(Q) is approximately 1: Δt ≈ 1.
In practice the lengths of the subintervals are slightly "squeezed" with respect to the conditions above, so as to fill the full interval [0, B(Q)] properly; a heuristic sketch is given below. Finally, we note that to check the numerical precision we also performed computations with l_k = 12, 14, 16 keeping the subintervals fixed, and with l_k = 10 but doubling the number of subinterval points. Before presenting the numerical results we would like to say a few words about possible tests of their correctness, namely how one can recognize a wrong result. This is also a very important point of the numerical method, since there are many equations with very complicated kernels, and it is easy to make mistakes when writing the code of the numerical solution. There are three basic checks that can be performed on the numerical results.
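Since the empirical componentwise formula for the endpoint vectors did not survive typesetting here, the following is only a heuristic sketch encoding the stated rules (first endpoint 1/2, points clustering toward the branch point t = 1, spacing of about 1 up to B(Q)); the specific clustering rule is our own assumption:

```python
import numpy as np

def subinterval_endpoints(BQ, n_cluster=5):
    """Heuristic endpoint vector A_Q for [0, B(Q)]: first element 1/2,
    points accumulating toward the branch point t = 1 from both sides,
    then spacing of roughly 1 up to B(Q)."""
    offs = 0.5 ** np.arange(1, n_cluster + 1)   # 1/2, 1/4, 1/8, ...
    around_one = np.concatenate((1.0 - offs, [1.0], 1.0 + offs))
    tail = np.arange(2.0, BQ, 1.0)              # Δt ≈ 1 region
    pts = np.concatenate(([0.5], around_one, tail, [BQ]))
    return np.unique(pts)                       # sorted, deduplicated

print(subinterval_endpoints(10.0))
```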
The first check is dictated by the energy equation (2.12). It is known that the energy starts at the first wrapping order (i.e., at order e^{−L}), and this first-order correction is exactly given by the Lüscher formula (4.5) of [4], with Y°_Q(u) given explicitly in (D.2). This quantity can be computed numerically to any number of digits, so its value is known exactly at any values of g and L. The Lüscher formula (4.5) corresponds to the linearized version of our equations (2.4)-(2.35), which is why solving the linearized set of equations (the first step of the iterative solution) should reproduce the numerical evaluation of (4.5). This is a nontrivial check on the kernels, on the discretization method, and on the equations themselves; moreover, being quantitative, it also gives information on the numerical precision of the method.
This test can signal problems in solving the linearized problem. The remaining two tests can signal discrepancies during the solution of the nonlinear problem.
The second testing condition is that, in the numerical solution, Y_{p₀−2|vw} must be real. This sounds trivial, but it is not trivial at all, as can be seen from the corresponding equation. In the tables, ΔE_BTBA denotes the deviation of E_BTBA from the exact Lüscher result; this quantity gives some information on the numerical accuracy of the method. Finally, the column "number of nodes" gives the cutoff index in the Lüscher formula necessary to obtain the Lüscher energy with the precision given by ΔE_BTBA; this number is not equal to the p₀ of our equations. For the L = 2 state we used p₀ = 4 for 0 < g < 1.9, p₀ = 8 for 1.9 < g < 2.1, and p₀ = 12 in the range 2.1 < g < 2.14; finally, at g = 2.16 we needed p₀ = 26 to get acceptable numerical results. Beyond this point we could not save our equations from the entrance of new singularities by increasing the value of p₀ by a reasonable O(10) amount, and for this reason we could not get really close to the supposed critical point: there E_BTBA ∼ −1, but we could reach only E_BTBA ∼ −0.7 at g = 2.16. Apart from this embarrassing fact, some important features can be read off from the numerical data. First, in the range g < 2.16 the energy is a very slowly varying function of g, so there is no sign of any divergent behavior. More interesting is the behavior of the global constant δc: it is negative and decreases faster and faster as g increases. From the definition of δc (2.2) it follows that all Y_Q-functions are proportional to its exponent, Y_Q ∼ ξ = e^{δc}. The fast decrease of δc indicates that, although Y_Q has worse and worse large-u asymptotics as g increases, its global magnitude is actually decreasing. This remark can be understood from the TBA formulation of the energy.
Close to the critical point, E_BTBA is supposed to stay finite [4], E_BTBA ∼ 1 − L, but naively the sum on the RHS of (4.7) would diverge due to the large-Q terms. Since Y_Q is small for large Q, at leading order the replacement log(1 + Y_Q) → Y_Q can be made; applying also the substitution Y_Q = ξ Ỹ_Q, the energy splits as in (4.8) into a finite part containing the terms with Q ≤ Q₀ and a contribution ξ Σ_{Q>Q₀}(⋯) that diverges close to the critical point, where Q₀ is an arbitrary index cutoff scale.
Since ξ is Q-independent, all the dangerous Q dependence remains in Ỹ_Q. In (4.8), as the critical point is approached, the second sum starts to diverge, and the global multiplicative factor ξ must tend to zero in order to ensure the finiteness of both sides of the equation. Our numerical data support this picture: δc → −∞ as one gets closer and closer to the critical point.
In [4], from Y-system arguments, the large-Q behavior of Y_Q was also estimated by formula (4.9), where δc is defined after (2.2) in section 2 in terms of the Y-functions of the infinite Y-system. This makes it possible to test numerically the correctness of the large-Q estimate (4.9): if (4.9) holds, then δ ln ȳ_Q = ln ȳ_Q − ln ȳ°_Q tends to zero as 1/Q for large Q. The numerical demonstration of this statement, through the plotted functions δF_Q built from δ ln ȳ_Q, can be seen in figure 3. The plots of figure 3 are based on the numerical computation with p₀ = 26 at g = 2.16, and they nicely demonstrate the expected 1/Q behavior of the functions δF_Q. At the same time, beyond the couplings reached here the equations we solved numerically are not the right ones anymore: something is missing from the equations, either a special object [44,45] or some other local singularities of the T- and Q-functions of the problem, which enter those strips of the complex plane that are relevant in the derivation of the HNLIE equations. The numerical data for the L = 3 and L = 4 states are given in tables 2 and 3; also for these states the appearance of new singularities prevented us from getting close to the critical point within the HNLIE framework.
Comments on the L = 1 case
The L = 1 ground state is mysterious: so far the anomalous dimension of this state could not be determined, even for small g, either from field theory or from integrability considerations [4]. Here we concentrate on the integrability side, where the boundary Lüscher formula [27,39] diverges for this state [4]. At small coupling the Lüscher correction takes the form (5.1), and this expression diverges for L = 1, since this point sits exactly on the pole of the ζ-function. As for the origin of this divergence: in (5.1) the individual integrals are convergent, but their sum over Q causes the divergence. In [4] it was argued that also for any larger L the TBA energy formula would diverge beyond a certain critical value of the coupling, g_c(L). Assuming that the energy is a monotonously decreasing function of g, which is supported by the numerical results, this critical point can be expressed in terms of the energy by the criterion E_BTBA(g_c(L)) = 1 − L. In [4] this point was interpreted as a turning point where the energy becomes imaginary and, as a physical consequence, the ground state becomes tachyonic. For the L = 1 state the critical point is right at g = 0, assuming that for small g the energy is also small. Now let us turn our attention to the HNLIE description of the problem detailed in section 2. There are no infinite sums there, and even for L = 1 all the convolutions of the integral equations seem to converge, provided the unknown functions have the large-u behavior that was assumed when the BTBA equations were derived from the discontinuity relations and the Y-system. At first sight there is no sign of any problem in the HNLIE description, and it seems that only the TBA description is inappropriate for the L = 1 case. Unfortunately, this is not so.
We can write down the discretized integral equations for the L = 1 case as well, and, using Newton's method, we can solve them for small values of the coupling (footnote 24). We always get some numerical solution of the discretized problem, but it turns out that the Chebyshev coefficients of the unknowns corresponding to the large-u subinterval do not form a decaying series. This phenomenon is a typical sign of some weak (probably logarithmic) large-u divergence of the unknowns. If one increases the number of subintervals and sampling points, the situation remains the same.

Footnote 23: Provided we assume the large-u behavior of the unknown functions that was used to derive the BTBA equations from the discontinuity relations and the Y-system.
Footnote 24: Typically g ∼ 10^−1.

The conclusion is that we can solve the discretized problem, but the solution cannot be interpreted as the discretely approximated version of a continuous solution of our integral equations. In other words, the continuous HNLIE equations have no solution for L = 1.
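The Chebyshev-coefficient diagnostic invoked above can be illustrated with a short, generic script (not the authors' code): a smooth function has fast coefficient decay, while a near-endpoint logarithmic singularity, the signature reported here for the L = 1 unknowns, produces a slowly decaying tail.

```python
import numpy as np

def cheb_coeffs(f, n):
    """Chebyshev expansion coefficients of f on [-1, 1], computed by
    discrete cosine sums at the Chebyshev-Gauss nodes."""
    k = np.arange(n + 1)
    x = np.cos(np.pi * (k + 0.5) / (n + 1))        # Chebyshev nodes
    fx = f(x)
    c = np.array([2.0 / (n + 1) *
                  np.sum(fx * np.cos(j * np.pi * (k + 0.5) / (n + 1)))
                  for j in range(n + 1)])
    c[0] /= 2.0
    return c

smooth = cheb_coeffs(np.exp, 40)                           # analytic: fast decay
singular = cheb_coeffs(lambda x: np.log(1.0001 - x), 40)   # near-endpoint log: slow decay
print("smooth   |c_20..c_24|:", np.abs(smooth[20:25]))
print("singular |c_20..c_24|:", np.abs(singular[20:25]))
```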
In order to get some analytical insight into why the solutions diverge at large u, let us consider the TBA formulation of the problem (the p_0 → ∞ limit of the HNLIE). It is known [4] that the TBA energy comes from the coefficient of the most divergent log |u| term in the large-u expansion of log Y_Q. The E_BTBA term originates from the dressing convolution term on the RHS of the TBA equations for log Y_Q, by exploiting the large-u expansion of the kernel; this kernel has better large-Q′ behavior than dp̃_Q′/dv, since it behaves like 1/Q′. As a consequence, and in contrast to the energy formula, the sum of dressing convolutions is indeed convergent. Thus one might think that for L = 1 the problem emerges because, in the derivation of the energy formula, the sum of dressing convolutions was expanded term by term for large u. This is why, instead of this usual procedure, we consider the sum of dressing convolutions itself, compute it, and only at the end of the computation take the large-u expansion. This procedure is carried out in the small-coupling limit. We need the leading-order small-coupling expression of the dressing kernel in the mirror-mirror channel (footnote 25):
(5.4)
Then the formula whose large-u expansion accounts for the small-coupling expanded energy is given by (5.5), where O(u, Q) is the leading small-coupling expression of (D.2) at L = 1. The second derivative of O(u, Q) can be computed explicitly by a simple Fourier-space technique: we take the Fourier form of each function under the integration, the convolution becomes the product of the individual Fourier transforms, the sum over Q′ can easily be done in Fourier space, and at the end of the process everything is transformed back to u-space. In this manner one obtains a bulky but explicit expression for d²/du² O(u, Q), of which we present here only the large-u expansion (5.6). Integrating twice, the large-u expansion at small coupling becomes (5.7). From (5.7) it is obvious why the naive Lüscher energy formula diverged: the leading-order large-u term is not the expected ∼ log |u| but ∼ (log |u|)². This is the key point of the problem, since in this case, after this first iteration, Y_Q acquires an unwanted type of large-u term, which makes Y_Q divergent for large u (5.8). This large-u divergence contradicts what was assumed about the large-u behavior of Y_Q in the derivation of the integral equations, since Y_Q was supposed to decay. In this example we have shown, in the small-coupling limit, that during the iterative solution of the BTBA equations log Y_Q acquires an extra ∼ (log |u|)² behavior at infinity, which makes Y_Q an exploding function at infinity. This means that the iterative solution of the TBA equations leaves the class of physically acceptable solutions.
One might ask whether it is possible to somehow preserve the qualitative large-u behaviors that we assumed in the derivation of the equations. Here we sketch a possible idea for the L = 1 case at small coupling. Let us assume that we managed to modify the TBA equations such that all Y-functions have the large-u behavior we want. Since most of the TBA equations reflect the structure of the Y-system functional equations, we expect to modify only those equations which are also affected by the discontinuity relations. It follows that for large Q the estimate (4.9) for Y_Q remains the same. Now we assume that for small g the energy is also small, and we take the simultaneous small-g and small-energy expansion of the RHS of the TBA energy formula (4.7). In leading order the large-Q terms dominate, giving (5.9), where Ŷ_Q denotes the large-Q estimate (4.9) of Y_Q, ξ̂ = ξ g^4 E_BTBA as a consequence of the u → u/g change of variables, and the pole term in E_BTBA comes from the pole of the ζ-function. In our HNLIE approach the energy E_BTBA and the constant δc are parts of the equations, which means that they are not simply expressed by explicit formulas based on the solution of the equations, but must be obtained by solving the set of non-trivially entangled equations. In this sense (5.9) defines an equation for E_BTBA at small g. Its leading-order solution is (5.10). If ξ̂ > 0, then E_BTBA becomes imaginary, as expected from string theory [4]. To decide the sign of ξ̂, equation (2.13) has to be analyzed in the context of the small-g and small-E_BTBA expansion. It turns out that ξ̂ is positive and O(1) for small g, so according to (5.10) E_BTBA is imaginary. Another remarkable fact is that, according to (5.10), E_BTBA starts at O(g^2) instead of the O(g^4) predicted by the boundary Lüscher formula (5.1). This might be another explanation of why the coefficient of g^4 diverges in the Lüscher formula for the L = 1 case. Finally, we note that in the small-g and small-E_BTBA expansion of the L = 1 state the energy is purely imaginary only at leading order in g; at higher orders it acquires a real part as well.
At first sight it might seem that, without modifying the equations, one immediately gets imaginary energy when going through the critical point. But the situation is a bit more subtle: there is a hidden, tacit modification of the equations. This is realized in (5.9) by a replacement which, for the L = 1 case, is an identity for Re(E_BTBA) > 0; for Re(E_BTBA) < 0 it is not an identity anymore, but a nontrivial analytical continuation in E_BTBA.
Such an analytical continuation would require the exact determination of complicated sums of convolutions of the TBA equations as functions of the energy. Since this does not seem to be feasible in practice, we give an alternative modification of the TBA equations which preserves the infinite-sum structure of the equations, but whose sums converge everywhere for Re(E_BTBA) > −L except at the critical value E_cr = 1 − L.
The basic idea of the modification comes from the sum representations of the ζ-function. The usual one,

$$\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}},$$

converges only for Re(s) > 1, but there is another representation converging for Re(s) > 0, the standard example being the alternating (Dirichlet η) form

$$\zeta(s) = \frac{1}{1-2^{1-s}} \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n^{s}}.$$

The original TBA equations are then modified through their infinite sums by the analogous replacements, where s_E = 4(L + E_BTBA) − 3. Taking into account the large-Q behavior of all the Y_Q functions and of all the kernels in the infinite sums of the TBA equations, the new representation converges for Re(E_BTBA) > −L. This slight modification of the TBA equations might make it possible to go beyond the critical point and to obtain a solution of the TBA equations whose large-u asymptotics agree with those used in the derivation of the equations.
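The convergence claim for the second representation is easy to verify numerically. The sketch below evaluates the alternating form at s = 1/2, where the ordinary Dirichlet series diverges, and compares it with a reference value; the truncation error of the alternating sum is O(N^{-s}), so the two values agree to a few digits here.

```python
import math
from mpmath import zeta

s = 0.5                      # inside 0 < Re(s) < 1, where sum(1/n^s) diverges
N = 200000
# Alternating (Dirichlet eta) representation, convergent for Re(s) > 0:
#   zeta(s) = (1 - 2**(1 - s))**(-1) * sum_{n>=1} (-1)**(n + 1) / n**s
eta = sum((-1) ** (n + 1) / n ** s for n in range(1, N + 1))
print("alternating series:", eta / (1 - 2 ** (1 - s)))   # ~ -1.460
print("reference zeta(s) :", zeta(s))                    # -1.4603545...
```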
The conclusion of this heuristic argument is that, in order to keep the expected qualitative large-u behavior (footnote 26), a nontrivial modification of the TBA equations must be carried out, which might lead to complex energies.
Summary and conclusions
In this paper we studied the ground-state energy of a pair of open strings stretching between a coincident D3-brane and anti-D3-brane pair in the S^5 of AdS_5 × S^5. The main motivation for the study is that string theory predicts that the ground state of such a configuration becomes tachyonic at large values of the 't Hooft coupling [4].
In [4] it was shown that the usual integrability-based BTBA approach always gives real energies for the ground state, and that it breaks down at the latest when the energy gets close to the critical value E_cr = 1 − L. During the numerical solution of the HNLIE equations the usual iterative methods failed to converge, which is why we worked out two numerical methods to reach convergence. The most effective one transforms the integral equations into discrete nonlinear algebraic equations and solves them by Newton's method. The power of this method is demonstrated by the fact that it gives convergent results even when the numerical solution is not physically acceptable.
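As a toy illustration of this discretize-and-solve strategy, the sketch below sets up a nonlinear integral equation on a grid and solves the resulting algebraic system with SciPy's Newton-type solver (the MINPACK hybrid method stands in for the plain Newton iteration used in the paper); the equation, grid, and kernel are invented for illustration and are not the HNLIE system.

```python
import numpy as np
from scipy.optimize import fsolve

# Discretize f(u) = g(u) + lam * \int K(u, v) f(v)^2 dv on a grid and
# solve the resulting nonlinear algebraic system.
n = 80
u = np.linspace(-10.0, 10.0, n)
w = np.full(n, u[1] - u[0])                 # trapezoid quadrature weights
w[0] = w[-1] = 0.5 * (u[1] - u[0])
K = 1.0 / (np.pi * (1.0 + np.subtract.outer(u, u) ** 2))   # Cauchy-type kernel
g = np.exp(-u ** 2)
lam = 0.3

def residual(f):
    return f - g - lam * K @ (w * f ** 2)

f = fsolve(residual, g)                     # start from the driving term
print("max residual:", np.abs(residual(f)).max())
```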
Unfortunately, in our numerical studies we could not get very close to the critical point, because new singularities entered the HNLIE equations, accounting for which would have required an enormous amount of additional work. Nevertheless, in the range where we could obtain physically acceptable results, the precision of the HNLIE data was higher than that of the BTBA, and the HNLIE approach gave a deeper understanding of the problem. For the L = 1 ground state the critical point is right at g = 0, and neither perturbative field-theory computations nor the boundary Lüscher formula could provide a finite quantitative answer for the anomalous dimension. Even in this special case the numerical solution of the HNLIE equations was possible. The results showed that, without an appropriate modification, the equations cannot give physically acceptable results: the solution of the discretized problem cannot be considered a discretized version of a solution of the continuous nonlinear integral equations. Moreover, the large-rapidity behavior of the numerical solution is incompatible with the one assumed in the derivation of the equations. This phenomenon is analyzed analytically in the framework of the BTBA, and an idea is sketched to preserve the expected large-rapidity behavior of the unknowns. The method is based on an appropriate modification of the TBA equations, which would lead to complex energies beyond the critical point.
Hopefully the L = 1 case at g = 0 could be treated analytically in the framework of the quantum spectral curve method [47,48,14], solving the mystery of this state in the context of integrability.
A Notations, kinematical variables, kernels
Throughout the paper we use the basic notations and TBA kernels of ref. [41], which we summarize below. For any function f we denote f^±(u) = f(u ± i/g) and, in general, f^[±a](u) = f(u ± i a/g), where the relation between g and the 't Hooft coupling λ is given by λ = 4π² g². Most of the kernels, and also the asymptotic solutions of the HNLIE system, are expressed in terms of the function x(u), $x(u) = \tfrac{1}{2}\left(u - i\sqrt{4 - u^2}\right)$, which maps the u-plane with the cuts (−∞, −2] ∪ [2, ∞) onto the physical region of the mirror theory, and in terms of the function x_s(u), which maps the u-plane with the cut [−2, 2] onto the physical region of the string theory. Both functions satisfy the identity x(u) + 1/x(u) = u, and they are related by x(u) = x_s(u) and x(u) = 1/x_s(u) on the lower and upper half-planes of the complex plane, respectively.
The momentum p̃_Q and the energy Ẽ_Q of a mirror Q-particle are expressed in terms of x(u) in the standard way [41]. Two different types of convolutions appear in the HNLIE equations. The kernels and kernel vectors entering the HNLIE equations can be grouped into two sets: the kernels of the first group are functions of only the difference of the rapidities, and thus actually depend on a single variable; the other group is composed of those kernels which are not of difference type.
We start by listing the kernels depending on a single variable. The fundamental building block of the kernels which are not of difference type is the kernel K(u, v). Using the kernels K(u, v) and K_Q(u − v) it is possible to define a series of kernels connected to the fermionic Y_±-functions. Further important kernels entering the Y_±-related TBA-type equations are defined in (A.10), and the kernels entering the right-hand side of equation (2.9) for Y_1 are given in (A.11), together with the dressing-phase related kernel K^{QM}_{sl(2)}(u, v), built from the sl(2) S-matrix of the model [49] and from the improved dressing factor Σ^{QM} [50]. The corresponding sl(2) and dressing kernels are defined in the usual way. Explicit expressions for the improved dressing factors Σ^{QM}(u, v) can be found in section 6 of ref. [50]; for our numerical computations we used the single-integral representation given in [21].
Finally, we mention that, along the lines of [42], the derivation of the formula (2.10) for Ω(K_Q) exploited that all the necessary kernels K_Q, K_Qy, K^{Q1}_xv, s ⋆ K^{Q−1,1}_vwx, K_y1, K^{Q1}_sl(2) satisfy the identity (A.14).
B Kernel matrices of the vertical HNLIE part
In this appendix the kernel matrices appearing in the upper HNLIE part of our equations (2.18, 2.19) are presented. The kernel matrices here differ from those published in [38]; the difference comes simply from a reformulation of the equations in the language of new unknown functions. In [38] the unknowns are 6 b-type functions.
C Asymptotic solutions of the vertical HNLIE
In this section, along the lines of [38], the asymptotic solutions of the upper SU(4) NLIE variables are presented (footnote 28). In the asymptotic limit the T-hook of AdS/CFT splits into two SU(2|2) fat-hooks. The basic building blocks of the asymptotic solution are the nine Q-functions corresponding to the left and right SU(2|2) fat-hooks. Due to the left-right symmetry of the Y-system it is enough to give the right Q-functions. They can be derived from the asymptotic solution of the Y-functions given in [4], and take the form

w^o(u) = −(iΛ/2) e^{−π g u/4} ((g u)² + w_c),   y^o(u) = (iΛ/2) e^{−π g u/4} ((g u)² + w_c − i C),   (C.5)

Footnote 28: For ease of comparison we use the same letters for the names of the different unknowns as in [38].
where w_c and C are arbitrary constants. Using the building blocks listed above, the full asymptotic system can be obtained from the Bäcklund functions by appropriately shifting their arguments, where * denotes complex conjugation. In our numerical studies we mostly use the C = 0 asymptotic solution to set up the equations to be solved. In this case the exact equations guarantee the fulfillment of (2.22), which reduces the number of independent complex functions of the upper NLIE part to 6.
| 12,602.8 | 2015-01-29T00:00:00.000 | ["Physics"] |
A Multilayer Approach to Subgraph Matching in HP-graphs
Visual modeling is widely used nowadays, but the existing modeling platforms cannot meet all user requirements. Visual languages are usually based on graph models, but the graph types used have significant restrictions. A new graph model, called the HP-graph, whose main element is a set of poles, subsets of which are combined into vertices and edges, has previously been presented to solve the problem of insufficient expressiveness of the existing graph models. Transformations and many other operations on visual models face the problem of subgraph matching, which slows down their execution. A multilayer approach to subgraph matching can be a solution to this problem if a modeling system is based on the HP-graph. In this case, the search starts on the higher level of the graph model, where vertices and hyperedges are compared without revealing their structures, and only when a candidate is found does it move to the level of poles, where the comparison of the decomposed structures is performed. The idea of the multilayer approach is described, and a backtracking algorithm based on this approach is presented. The Ullmann algorithm and VF2 are adapted to this approach and analyzed for complexity. The proposed approach incrementally decreases the search field of the backtracking algorithm and helps to decrease its overall complexity. The paper proves that the existing subgraph matching algorithms, except those that modify the graph pattern, can be successfully adapted to the proposed approach.
Introduction
The study of any objects and processes, as well as their design, can barely be done without modeling; that is why software tools that allow specialists to build various models and formalize descriptions of objects and processes, or use modeling as a method of analysis, are becoming more popular. Models are described and built with the help of a visual modeling language, which is a fixed set of graphical symbols and rules for constructing visual models by using these symbols [1]. Visual languages can be represented as various types of graphs, including oriented graphs [2], hypergraphs [3], hi-graphs [4], meta-graphs [5] and P-graphs [6]. Previously, a new graph model, called the HP-graph, was proposed as a formalism for representing visual languages [7]. This model unites the expressive possibilities of all the mentioned graph types and thus can be used for building more complicated models than those that can be built with the help of the other graph models. The paper [7] proved that this graph model allows the creation of a flexible visual model editor based on it. This model is proposed as a basis for domain-specific modeling, one of the key aspects of which is model transformation. Such transformations allow users to move from one level of abstraction to another (a vertical transformation) or from one modeling language to another (a horizontal transformation) [5]. Different approaches can be used to transform visual models, but the current standard is the algebraic approach, which is based on graph grammars [9]. In this approach, a transformation r = (L, R) includes a left and a right part, where L is a subgraph to be found in a source graph, and R is a subgraph replacing L in the source graph. As for the HP-graph, only the main operations, including adding and removing graph elements and decomposition, have been described for this model, and no algorithm has been proposed to perform the isomorphic subgraph search operation. The structural complexity of the model requires modifying the existing algorithms to adapt them to it. The HP-graph has a multilayer structure consisting of the layer of vertices and hyperedges and the layer of poles and links, sets of which are combined into the elements of the former layer. This multilayer structure makes it possible to reduce the time complexity of search algorithms: the number of operations can be decreased because the first search and matching is performed on the layer of vertices and hyperedges, and only after finding a subgraph with the desired characteristics does the algorithm move to a more detailed level, where the already selected sets of corresponding poles and ordinary edges are compared. In practice, the task of finding an isomorphic subgraph has a wide range of applications, including chemical compound search [10], social network analysis [11], pattern recognition [12], and protein interaction analysis [13]. However, subgraph matching is a bottleneck in the overall performance of most of these applications because the task is NP-hard [14]. For instance, node counts in protein structure analysis can reach tens of thousands [15]; that is why active efforts are currently being made to find an optimal algorithm for subgraph matching. In visual modeling the problem is the same.
The thesis [5] proposes representing all the models in the form of a single graph, which allows users to maintain links between the models and automatically propagate changes from the source model to the target ones associated with it. For instance, a change in the metamodel of the subject area should be propagated to all the models built on this metamodel. However, storing all the models as a single graph increases the computational complexity of the algorithms on this graph, which requires developing an efficient subgraph search algorithm for the graph model used. The contributions of this paper are: 1) a new multilayer approach that decreases the complexity of subgraph matching algorithms, 2) a backtracking algorithm based on this approach, and 3) applications of this approach to several existing subgraph matching algorithms. The paper is organized as follows. Section 2 discusses related work and the main algorithms for finding subgraph isomorphism. Section 3 presents the proposed graph model, definitions of the HP-subgraph and of isomorphism of HP-graphs, and the multilayer approach to subgraph matching. Section 4 introduces a backtracking algorithm based on this approach. Section 5 presents several applications of the approach in the existing subgraph matching algorithms. Section 6 describes the obtained results. Section 7 concludes the paper.
Related work
The problem of subgraph matching has been investigated for many years. The works of many scientists, such as [16]-[18], are dedicated to exploring the applicability, time complexity and limitations of the existing subgraph matching algorithms. These algorithms are generally divided into two classes: (1) algorithms that examine many graphs {G1, …, Gn} and retrieve those which contain a query graph Q, and (2) algorithms that examine a single graph G and retrieve all its subgraphs which are isomorphic to a query graph Q. In both approaches, algorithms can either return a correct and complete answer (having exponential time complexity) or return an approximate answer (having polynomial time complexity). While the complete answers describe all subgraphs exactly isomorphic to a pattern, the approximate answers are generally obtained using specific similarity measures and thus may also contain false-positive subgraphs. This work belongs to the second class of algorithms. Most of these algorithms use backtracking to move through the built search tree and find an appropriate combination of corresponding vertices of the source graph and the graph pattern. Algorithms in this class include the Ullmann algorithm [19], VF2 [20] (and also VF2 Plus [21] and VF3 [22]), TurboISO [23], CFL-Match [24], QuickSI [25], SPath [26] and others. These algorithms implement various techniques to decrease the time needed for the matching process. Exploiting Pruning Rules. The Ullmann algorithm uses a refining procedure at each step by comparing the degrees of corresponding neighbors of the added pair of vertices. VF2 [20] provides feasibility rules that are checked before a vertex is added to a graph-candidate; these rules check the consistency of graph-candidates with this vertex and check for a sufficient number of neighbor vertices of these graph-candidates. SPath [26] uses a neighborhood signature for each vertex to store information about the surrounding vertices; these signatures are compared with the corresponding signatures of the query graph and are used for search-space pruning before subgraph matching. TurboISO [23] compares the quantities of neighborhood labels of corresponding vertices and prunes out unpromising ones. CFL-Match [24] proposes a compact path index (CPI) structure presented as a tree, which is built from the source-graph vertices with the same labels as the query-graph vertices and is then refined by exploiting matching operations. Graph Pattern Modification. The Ullmann algorithm and VF2 [20] do not modify the graph pattern and search for its embeddings in the source graph. SPath [26] changes the way of graph query processing from vertex-at-a-time to path-at-a-time, which tends to be more cost-effective than traditional graph matching methods. TurboISO [23] presents a NEC-tree structure which merges similar vertices together and presents a query graph as a tree. CFL-Match [24] transforms a query into a set of dense subgraphs, forests, and leaves; the source graph in this algorithm is only probed for non-tree edge validation, whereas the other query parts are checked in the CPI structure. Optimizing Matching Order. The Ullmann algorithm [19] does not specify the matching order of the vertices, whereas VF2 [20] starts from a random query vertex and then recursively adds those vertices that are connected with the already matched ones. QuickSI [25] exploits an order based on vertex label frequency, starting the matching process from the least frequent labels.
TurboISO [23] implements a concept of candidate-region exploration and produces a matching order for every region where a NEC-tree was found. CFL-Match [24] presents all candidates as a CPI structure, in which all the pattern embeddings are filtered and validated by traversing this tree structure. Most of the theoretical research on this problem has been conducted specifically for ordinary graphs [18]; that is why the approaches of these algorithms have to be adapted to the HP-graph model. In particular, this paper presents adaptations of a standard backtracking algorithm for subgraph matching, the Ullmann algorithm [19] and the VF2 algorithm [20], which are optimized for the multilayer structure of this graph model.
Graph-Matching Approach for HP-graphs
Let Pol be the set of all poles of the graph, including external poles and internal poles of vertices and hyperedges. Then an HP-graph is an ordered triple G = (P, V, W), where P = Pol is the set of poles, V = {v1, …, vm} is a non-empty set of vertices, and W = {w1, …, wl} is a set of hyperedges [7]. An example of the graph model is shown in Fig. 1. Every hyperedge w of the HP-graph G can be represented by ordinary links, defined as a set Ew = {e1, …, en}, where every link e ∈ Ew is a pair of connected poles (p, r), with p the source pole and r the target pole of the link. An example of this decomposition is presented in Fig. 2: the hyperedge w2 defines the set Ew2 = {(p4, p8), (p4, p6), (p6, p8)}. Every vertex and hyperedge can also be decomposed into a new HP-graph, as described in detail in [7].
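A minimal encoding of the HP-graph triple and of the hyperedge decomposition described above might look as follows; the attribute layout is an illustrative assumption, and only the element names follow the text.

```python
from dataclasses import dataclass, field

# A minimal sketch of the HP-graph triple G = (P, V, W).
@dataclass
class HPGraph:
    poles: set = field(default_factory=set)          # P
    vertices: dict = field(default_factory=dict)     # V: vertex id -> set of its poles
    hyperedges: dict = field(default_factory=dict)   # W: hyperedge id -> set of its poles
    links: dict = field(default_factory=dict)        # E_w: hyperedge id -> set of (src, dst) pole pairs

    def decompose(self, w):
        """Ordinary links E_w = {(p, r), ...} representing hyperedge w."""
        return self.links.get(w, set())

# The example from the text: w2 decomposed into pole-to-pole links.
g = HPGraph()
g.poles |= {"p4", "p6", "p8"}
g.hyperedges["w2"] = {"p4", "p6", "p8"}
g.links["w2"] = {("p4", "p8"), ("p4", "p6"), ("p6", "p8")}
print(g.decompose("w2"))
```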
Definitions of a Subgraph and Isomorphism
To define subgraph matching operations, we first need the definition of a subgraph of an HP-graph. An HP-graph G' = (P', V', W') is a subgraph of an HP-graph G = (P, V, W) if P' ⊆ P, V' ⊆ V, and W' ⊆ W, and it meets condition (1), which makes transformation operations possible [7]. A subgraph can contain vertices, called incomplete, whose sets of poles are only part of the sets of poles of the corresponding vertices of the original graph; in condition (1), V'partial ⊆ V' denotes the set of incomplete vertices of the subgraph. To define the isomorphism mapping, it is necessary to establish one-to-one correspondences between the same-type elements of the graphs that preserve the incidence relations. Thus, two HP-graphs G = (P, V, W) and G' = (P', V', W') are isomorphic iff there exists a bijection f between their elements that preserves these relations.
A Multilayer Approach to Graph Matching
As the graph model is proposed to store all the models together, search algorithms for this formalism have to be optimized for this task. A possible solution is to divide the HP-graph into two main levels: the level of vertices and hyperedges, and the level of poles and ordinary links between them. In this case the search starts on the higher level, and when a candidate is found, it moves to the lower level, where a more detailed comparison of graph elements is performed. Fig. 3(a) illustrates an example of a query graph Q, which serves as a pattern for subgraph matching against the data graph G from Fig. 1. As can be seen, it contains 4 vertices, 2 hyperedges and 4 poles. Its higher (or first) level is presented in Fig. 3(b): it contains only the 4 vertices and 2 hyperedges, whereas all the poles are eliminated. This layer is compared with the first layer of the graph G (Fig. 4), and when a potential subgraph is found, the matrix of vertex correspondences is built.
Fig. 3. Query graph Q and its first level.
The found correspondences between the vertices of Q and G can be presented as the set {(v1', v2), (v3', v3), (v2', v4), (v4', v5)}. If a subgraph is found, the algorithm moves to the next level, where the corresponding hyperedges and their poles are compared. All the candidate hyperedges are grouped by their incidence with each other, depending on the poles they consist of. For instance, the hyperedges w1' and w2' form a single group because of the pole p3', which both of them own; thus, the corresponding pair (w3, w4) also forms a single group. All these groups are compared for exact isomorphism on the layer of poles and ordinary links. Fig. 5 demonstrates this layer for the pair of candidate groups (w1', w2') and (w3, w4). All these hyperedges are decomposed, and only their poles and links are considered at this stage. As these graphs are identical, the found correspondences between the poles of incident hyperedges of Q and G can be presented as the set {(p3', p9), (p4', p11), (p2', p7), (p1', p4)}. If validation of this hyperedge group succeeds, the algorithm moves to the next group of hyperedges and validates it, until all the hyperedges are traversed. If validation fails, the algorithm moves back to the upper level and tries to find new pairs of vertices and hyperedges to validate. Lastly, the algorithm verifies that for every pole of the pattern graph only one pole of the source graph has been found; otherwise, the found subgraph is considered not isomorphic and the search continues.
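The grouping of candidate hyperedges by shared poles can be sketched with a small union-find routine; this is an illustrative implementation, not the paper's pseudocode.

```python
# Group hyperedges that share at least one pole, as in the example above
# where {w1', w2'} form one group through p3'. `hyperedges` maps
# hyperedge id -> set of poles.
def group_by_incidence(hyperedges):
    parent = {w: w for w in hyperedges}

    def find(w):
        while parent[w] != w:
            parent[w] = parent[parent[w]]   # path halving
            w = parent[w]
        return w

    owners = {}                             # pole -> representative hyperedge
    for w, poles in hyperedges.items():
        for p in poles:
            if p in owners:
                parent[find(w)] = find(owners[p])   # merge groups sharing a pole
            else:
                owners[p] = w
    groups = {}
    for w in hyperedges:
        groups.setdefault(find(w), set()).add(w)
    return list(groups.values())

print(group_by_incidence({"w1'": {"p1'", "p3'"}, "w2'": {"p2'", "p3'", "p4'"}}))
# -> [{"w1'", "w2'"}]  (one group, joined through p3')
```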
Backtracking Graph Matching Algorithm based on the Multilayer Approach
The algorithm presented in this section uses as its basis the backtracking algorithm presented in [19]. It traverses a search tree using DFS until an isomorphic subgraph is found; if a pair of corresponding elements cannot be found at a certain step, the algorithm backtracks to an earlier step.
Listing Pseudocode of the algorithm that matches the corresponding sets of graph elements
At the beginning, this algorithm initializes a matrix M0 that defines possible candidates between corresponding elements of the graphs: if m0_ij = 1, then the i-th element of the first graph is a candidate for isomorphism with the j-th element of the second graph; otherwise, they cannot form a pair of corresponding elements. At each step, a modification of this matrix is used to determine appropriate pairs of elements. Thus, it is necessary to define rules for building this matrix for each set of HP-graph elements. For vertex matching, external poles and vertices can be combined into one set and called vertices (for simplification). Thus, the matrix M0 of size |QV ∪ QP| × |GV ∪ GP| is filled according to rule (2); if this condition is not met, m0_ij = 0.
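Since rule (2) itself is not reproduced in the text, the sketch below fills M0 with the classical Ullmann-style stand-in condition (compatible labels and query degree not exceeding source degree); the paper's exact rule may differ.

```python
import numpy as np

# Candidate matrix M0 for the vertex layer (illustrative filling rule).
def initial_candidates(q_deg, q_lab, g_deg, g_lab):
    m0 = np.zeros((len(q_deg), len(g_deg)), dtype=np.uint8)
    for i in range(len(q_deg)):
        for j in range(len(g_deg)):
            # m0[i, j] = 1 iff element i of the query can still be mapped
            # onto element j of the source graph.
            if q_lab[i] == g_lab[j] and q_deg[i] <= g_deg[j]:
                m0[i, j] = 1
    return m0

print(initial_candidates([2, 1], ["a", "b"], [2, 3, 1], ["a", "a", "b"]))
```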
HP-Listing Pseudocode of the algorithm that finds an isomorphic subgraph in HP-graph
The main idea of this algorithm is to incrementally shrink the search field. While the search for vertices traverses all the vertices of the original graph, the search for hyperedges only moves through those edges that are connected with the already chosen vertices, utilizing the information about their correspondence with the vertices of the query graph. Pole matching is performed for each group of incident hyperedges, where a substantial number of combinations is pruned out by exploiting information about the corresponding vertices and hyperedges. The algorithm also checks and matches the unlinked poles if they exist, which can be done in linear or close-to-linear time, since all the corresponding vertices have already been found. For simplicity, the algorithm is given for finding the first isomorphic subgraph, but it can be extended to find all embeddings of a pattern.
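A compact, runnable version of the vertex-layer backtracking step might look as follows; in the full multilayer algorithm each completed vertex assignment would additionally trigger the pole-layer validation of the touched hyperedge groups. Plain adjacency sets are a simplification of the HP-graph structure.

```python
# Backtracking subgraph matcher on the vertex layer (illustrative sketch).
def subgraph_match(q_adj, g_adj, vmap=None):
    vmap = vmap or {}
    if len(vmap) == len(q_adj):
        return dict(vmap)                    # all query vertices paired
    qv = next(v for v in q_adj if v not in vmap)
    for gv in g_adj:
        if gv in vmap.values():
            continue
        # adjacency consistency with already matched vertices
        if all((n not in vmap) or (vmap[n] in g_adj[gv]) for n in q_adj[qv]):
            vmap[qv] = gv
            found = subgraph_match(q_adj, g_adj, vmap)
            if found:
                return found
            del vmap[qv]                     # backtrack
    return None

Q = {"a": {"b"}, "b": {"a"}}
G = {1: {2}, 2: {1, 3}, 3: {2}}
print(subgraph_match(Q, G))                  # e.g. {'a': 1, 'b': 2}
```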
Exploiting Pruning Techniques of the Existing Algorithms
To optimize the algorithms, certain existing techniques can be used. Adapting the main techniques of the existing algorithms to the proposed graph model demonstrates that these algorithms can be adapted as a whole and improves the efficiency of the algorithm presented above.
Ullmann Algorithm
The Ullmann algorithm [19] is one of the first algorithms for subgraph matching. It uses the backtracking algorithm presented above and at each step performs a refinement procedure to prune out unpromising pairs.
This refinement is performed at each node of the search tree. It traverses the matrix M and converts a certain part of the values from ones to zeros. The condition for preserving a 1 is that, if a vertex j of the original graph is a candidate for a vertex i of the pattern graph, then each neighbor of the vertex i must have at least one candidate among the neighbors of the vertex j; otherwise, j cannot be a candidate for the vertex i. This procedure can be implemented for both vertex matching and pole matching to eliminate unpromising element pairs. The refining algorithm for vertices is presented in HP-Listing 3.
HP-Listing 3. Pseudocode of the algorithm that runs refining for vertices of the HP-graph.
The algorithm goes through all the neighbors of the current query vertex that have at least one common hyperedge with this vertex, and checks whether the source graph contains a corresponding neighbor vertex. The algorithm for poles looks similar, but poles and ordinary links are used instead of vertices and hyperedges.
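The refinement condition just described translates directly into code; the sketch below uses plain adjacency matrices as a stand-in for the vertex/hyperedge incidence of the HP-graph.

```python
import numpy as np

# Ullmann-style refinement: keep M[i, j] = 1 only if every neighbor of
# query vertex i still has at least one candidate among the neighbors
# of source vertex j.
def refine(M, A_q, A_g):
    changed = True
    while changed:
        changed = False
        for i in range(M.shape[0]):
            for j in range(M.shape[1]):
                if M[i, j] and not all((M[x] * A_g[j]).any()
                                       for x in np.flatnonzero(A_q[i])):
                    M[i, j] = 0
                    changed = True
    return M

A_q = np.array([[0, 1], [1, 0]])                       # query: one edge
A_g = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]])      # source: edge + isolated vertex
M = np.ones((2, 3), dtype=int)
print(refine(M, A_q, A_g))   # candidates to the isolated vertex are pruned
```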
VF2 Algorithm
VF2 [20] was proposed for performing subgraph matching on large graphs. The effective representation of its data structures and the usage of feasibility rules significantly reduce both the average time complexity of the search and the amount of memory used. The idea of the algorithm is to use special rules, called feasibility rules, at each node of the search tree to evaluate the feasibility of further progress on this branch of the tree before adding a pair of vertices to the graph-candidates. These rules check the consistency of the graph-candidates and the sufficiency of the number of their vertices. If all the checks are passed, the algorithm can move to the next level of the tree. The approach of checking feasibility rules can be applied on both the vertex and the pole layer. As the pole layer is an ordinary graph, the feasibility rules from [20] can be used without any significant modifications; however, feasibility rules for the vertex layer have to be defined. The first rule checks the consistency of the existing candidate graphs by checking the correctness of the connections with the already added vertices. Let coreG be the list of already paired vertices for the graph G and coreQ the corresponding list for the graph Q. Accordingly, let connG be the list of vertices which already have a pair or have a connection to the current graph-candidate G', and connQ the similar list for the graph-candidate Q'. The first rule then requires that every already-matched neighbor of the query vertex maps to a neighbor of the source vertex, where Conn(G, v) denotes the set of vertices of the candidate graph G which are connected to the vertex v.
Let PC(G, u) define the set of vertices that can be connected to the vertex u but are not yet included in the graph-candidate G. Thus a new rule appears, which compares the numbers of newly added connections to the graphs: |PC(G', n)| ≥ |PC(Q', m)|. The last rule performs a two-step look-ahead in the search process: let N be the set of vertices which are connected to the target vertex but are not connected to the graph-candidate; the count of such vertices in the source graph must likewise dominate the count in the query graph.
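One plausible reading of these feasibility rules in code is sketched below; core_q and core_g hold the partial mapping in both directions, and the function is illustrative rather than the paper's exact formulation.

```python
# VF2-style feasibility sketch on the vertex layer: (i) consistency with
# already-matched vertices, (ii) counting rules on frontier and unseen
# neighbors (source counts must dominate query counts).
def feasible(qu, gv, core_q, core_g, q_adj, g_adj):
    # (i) every matched neighbor of qu must map to a neighbor of gv
    for n in q_adj[qu]:
        if n in core_q and core_q[n] not in g_adj[gv]:
            return False
    # frontier = unmatched vertices adjacent to the current partial mapping
    conn_q = {v for m in core_q for v in q_adj[m]} - set(core_q)
    conn_g = {v for m in core_g for v in g_adj[m]} - set(core_g)
    # (ii) look-ahead counting rules
    if len(q_adj[qu] & conn_q) > len(g_adj[gv] & conn_g):
        return False
    if len(q_adj[qu] - conn_q - set(core_q)) > len(g_adj[gv] - conn_g - set(core_g)):
        return False
    return True

q_adj = {"a": {"b"}, "b": {"a"}}
g_adj = {1: {2}, 2: {1}}
print(feasible("b", 2, {"a": 1}, {1: "a"}, q_adj, g_adj))   # True
```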
Graph Pattern Modification Algorithms
The usage of algorithms such as TurboISO [23], CFL-Match [24], and others that modify the graph pattern is complicated in the presented multilayer approach, because these algorithms were designed specifically for ordinary graphs. Their usage on the layer of vertices and hyperedges is a subject for future research, as it requires reformulating their main aspects and ideas. Nevertheless, all these algorithms can be successfully used on the layer of poles and links, and can find an isomorphic subgraph in the single-layer approach.
Complexity of the Algorithms
The presented algorithms can decrease the complexity of subgraph search by implementing matching on different graph layers. The search field shrinks at each stage, while the usage of pruning rules additionally eliminates unpromising combinations of elements. Table 1 shows the computational complexity of the backtracking algorithm at its main stages. The evaluation of the backtracking algorithm based on the Ullmann refinement is presented in Table 2, and the evaluation of the algorithms based on the VF2 approach is demonstrated in Table 3. The modification of the GetAllCandidatePairs procedure according to rules (2-4) slightly increases the worst-case complexity from N·N! to N²·N! and the best-case complexity from N² to N³, but significantly shortens the search field.
Conclusion
This paper proposed a solution to the problem of identifying isomorphic subgraphs in HP-graphs.
The proposed approach is based on implementing matching on the different layers of the graph model and incrementally shrinking the search field at each layer. The designed subgraph matching algorithms based on the multilayer approach and the evaluations of their complexity are presented above. The proposed approach incrementally decreases the search field of the algorithm and helps to decrease its overall complexity. The usage of the pruning rules of the existing algorithms can eliminate unpromising candidates at each stage of the proposed algorithm and thus significantly shrink the search tree. It is planned to evaluate the actual time complexity of these algorithms on various data sets and to develop a visual modeling system using the proposed approach to subgraph matching.
| 5,203.8 | 2021-01-01T00:00:00.000 | ["Computer Science"] |
Annexin A2 in Virus Infection
Viral life cycles consist of three main phases: (1) attachment and entry, (2) genome replication and expression, and (3) assembly, maturation, and egress. Each of these steps is intrinsically reliant on host cell factors and processes including cellular receptors, genetic replication machinery, endocytosis and exocytosis, and protein expression. Annexin A2 (AnxA2) is a membrane-associated protein with a wide range of intracellular functions and a recurrent host factor in a variety of viral infections. Spatially, AnxA2 is found in the nucleus and cytoplasm, vesicle-bound, and on the inner and outer leaflet of the plasma membrane. Structurally, AnxA2 exists as a monomer or in complex with S100A10 to form the AnxA2/S100A10 heterotetramer (A2t). Both AnxA2 and A2t have been implicated in a vast array of cellular functions such as endocytosis, exocytosis, membrane domain organization, and translational regulation through RNA binding. Accordingly, many discoveries have been made involving AnxA2 in viral pathogenesis, however, the reported work addressing AnxA2 in virology is highly compartmentalized. Therefore, the purpose of this mini review is to provide information regarding the role of AnxA2 in the lifecycle of multiple epithelial cell-targeting viruses to highlight recurrent themes, identify discrepancies, and reveal potential avenues for future research.
INTRODUCTION
To successfully replicate, viruses must hijack and reprogram host cells to produce viral progeny. The life cycle of a virus consists of three main phases intimately reliant on host cell proteins and mechanisms. The first is cellular attachment and penetration. Attachment and penetration can occur through receptor-mediated endocytosis or through direct membrane fusion. Second, the viral genome is released for replication and protein expression. In this phase, the virus can rely on host enzymes to facilitate capsid uncoating or host machinery to replicate the viral genome. Finally, assembly and maturation yield newly constructed viral particles poised for release. In this stage, viral proteins can require post-translational modification by host factors, or intracellular transport systems for proper localization. During egress, virions are released by taking advantage of apoptosis, exocytosis, cell lysis, and by appropriating host membranes to bud directly from the cell.
This mini review aims to shed light on a host factor that has been repeatedly exploited for the benefit of viral infection: annexin A2 (AnxA2). AnxA2 is a membrane-associated protein implicated in a number of human, animal, and zoonotic infections. By addressing the involvement of AnxA2 in the context of multiple viruses and viral life cycle stages, we offer a broad perspective on an emerging host-pathogen interaction and highlight the complexities in AnxA2 biology.
Annexin A2
Annexin A2 is a multifunctional calcium- and lipid-binding protein that is expressed in nearly all human tissues and cell types. AnxA2 exists as a monomer localized to the cytoplasm, vesicle-bound, or as a heterotetrameric complex termed A2t, consisting of two AnxA2 monomers bridged by an S100A10 dimer, found on the inner and outer leaflet of the plasma membrane. Both AnxA2 and A2t have been implicated in a wide range of intracellular processes including membrane domain organization, membrane fusion, vesicle aggregation, cytoskeletal-membrane dynamics, epithelial cell polarity, exocytosis, endocytosis, phagocytosis, and transcriptional regulation through binding of AnxA2 to RNA (reviewed in Gerke and Moss, 2002; Rescher, 2004; Bharadwaj et al., 2013; Hitchcock et al., 2014; Hajjar, 2015; Schloer et al., 2018).
More broadly, AnxA2 has been implicated in immune function, multiple human diseases, and viral infection (Hajjar, 2015; Tanida et al., 2015; Bećarević, 2016; Schloer et al., 2018). AnxA2 expression in some cancers can promote metastasis and function as a prognostic marker of recurrence and survival (Lokman et al., 2011; Zhang et al., 2013; Xu et al., 2015). This involvement of AnxA2 in human health and disease has prompted the development of pharmacological inhibitors of AnxA2 and A2t (Reddy et al., 2011, 2012, 2014; Liu et al., 2015), and these compounds are being explored in a growing number of therapeutic contexts. One class of inhibitors, for example, has been shown to block human papillomavirus (HPV) type 16 (HPV16) infection in cervical epithelial cells (Woodham et al., 2015). It is presumed that these A2t inhibitors disrupt the function of A2t during viral infection, though this specific mechanism still needs to be verified. Importantly, HPV is just one virus in a list of at least 13 viruses with known AnxA2 associations during binding, endocytosis, and egress (Table 1).
Annexin A2-Virus Associations
The annexin superfamily is highly conserved across eukaryotic phyla, from unicellular organisms to complex plants and animals (Moss and Morgan, 2004; Jami et al., 2012; Einarsson et al., 2016), and annexins have been associated with both human and non-human viral pathogens (summarized in Figure 1). This review focuses on seven viruses with direct links to AnxA2 during their lifecycle. To more confidently cross-compare cellular functions, we specifically discuss the viruses that target human epithelial cells. AnxA2 is utilized by HPV, enterovirus 71 (EV71), respiratory syncytial virus (RSV), and cytomegalovirus (CMV) during cell attachment and penetration (Wright et al., 1994, 1995; Raynor et al., 1999; Malhotra et al., 2003; Derry et al., 2007; Yang et al., 2011; Woodham et al., 2012, 2015; Dziduszko and Ozbun, 2013; Taylor et al., 2018), by hepatitis C virus (HCV) and influenza A virus (IAV) during replication (LeBouder et al., 2008; Backes et al., 2010; Saxena et al., 2012; Ma et al., 2017; Solbak et al., 2017), and by measles virus (MV) during assembly and maturation (Koga et al., 2018). In some cases, AnxA2 has been implicated in multiple life cycle steps of the aforementioned viruses, underscoring the importance of a more complete approach to understanding the role of AnxA2 in viral infection.
ANNEXIN A2 IN CELL ATTACHMENT AND ENTRY BY VIRUSES
Virus attachment and entry into target cells occurs through host receptor-mediated endocytic mechanisms or, less frequently, through direct fusion between the virus envelope and the plasma membrane. The first steps of infection are attractive antiviral targets and are therefore studied extensively in a vast array of viral infections (selected reviews include: Mercer et al., 2010; Barrow et al., 2013; Yamauchi and Helenius, 2013; Helenius, 2018).
Human Papillomavirus (HPV)
Persistent infection with HPV can lead to the development of a variety of anogenital and oropharyngeal cancers, causing significant morbidity worldwide (Watson et al., 2008; Forman et al., 2012; Spence et al., 2016). HPV is a non-enveloped double-stranded DNA (dsDNA) virus that enters basal keratinocytes through a non-canonical endocytic pathway while interacting with a number of host molecules (Spoden et al., 2008; Schelhaas et al., 2012; Raff et al., 2013; Day and Schelhaas, 2014; DiGiuseppe et al., 2017). In the search to identify an HPV uptake receptor, AnxA2 and A2t were discovered as central mediators of HPV entry and intracellular trafficking. Interestingly, it has been suggested that AnxA2 and A2t have independent functions in HPV attachment and intracellular trafficking (Dziduszko and Ozbun, 2013). For example, it was shown that AnxA2 and A2t colocalize with HPV at the cell surface and that antibodies against AnxA2 alter entry kinetics (Dziduszko and Ozbun, 2013), whereas antibodies against the S100A10 subunit (Dziduszko and Ozbun, 2013) and targeted knock-out via CRISPR/Cas9 (Taylor et al., 2018) do not affect cellular entry in vitro. Furthermore, when the full A2t complex is knocked out, HPV infection is significantly reduced as measured by reporter gene transduction; however, when S100A10 alone is knocked out, only a moderate reduction in infection is observed (Taylor et al., 2018), emphasizing the importance of delineating the roles of monomeric AnxA2 versus heterotetrameric A2t.
Enterovirus 71 (EV71)
EV71 is a causative agent of hand, foot, and mouth disease (HFMD), a common infection in infants and children that can sometimes lead to severe illness and long-term neurological conditions (Chan and AbuBakar, 2004). EV71 is a non-enveloped single-stranded RNA (ssRNA) virus that enters cells through an unknown dynamin-independent pathway (Yuan et al., 2018). In an effort to understand initial host-virus interactions, AnxA2 was identified as a cell-surface attachment factor through anti-EV71 immunoprecipitation and mass spectrometric analysis of infected cells in vitro (Yang et al., 2011). Using immunofluorescence microscopy, these authors also demonstrated that AnxA2 and EV71 colocalize at the cell surface, and that pretreatment with recombinant AnxA2 (rAnxA2) or antibodies against AnxA2 yields reduced infectivity. Results from this work showed that AnxA2 and EV71 colocalize at the cell surface, but they did not address whether the reduction in infectivity was due to reduced binding, entry, or replication. Furthermore, their yeast two-hybrid experiments showed that the EV71 capsid protein VP1 interacts with the C-terminus of AnxA2, which may also implicate A2t as serving a functional role. Future studies investigating AnxA2 or A2t in EV71 endocytosis could yield interesting results, given the varied implications of AnxA2 in this process.
Respiratory Syncytial Virus (RSV)
Infants and the elderly can develop severe lower respiratory disease from infection by RSV (Nair et al., 2010), an enveloped ssRNA virus. The mechanism of RSV cellular entry is disputed, with independent reports suggesting plasma membrane fusion, clathrin-dependent endocytosis, and macropinocytic mechanisms (Kolokoltsov et al., 2007; Collins and Graham, 2008; Gutiérrez-Ortega et al., 2008; Krzyzaniak et al., 2013). Fucoidan is a polysaccharide that inhibits RSV infection in vitro and in vivo. Because it is assumed that fucoidan works by binding to RSV receptors, Malhotra et al. (2003) employed solid-phase-immobilized fucoidan as an affinity matrix to isolate potential RSV-binding partners on epithelial cells and identified AnxA2 using mass spectrometry. The authors showed that treatment with rAnxA2 reduced RSV infection as measured by fluorescent focus assay 24 h post-infection, but, similar to Yang et al., they did not investigate the mechanism of infection reduction beyond cell-surface interactions. An independent study did, however, demonstrate that AnxA2 is not involved in virus assembly (Shaikh et al., 2012). As was the case with EV71, a more detailed analysis of RSV endocytosis has the potential to advance our understanding of AnxA2-mediated endocytosis.
Cytomegalovirus (CMV)
Cytomegalovirus infection in immunocompromised individuals or through congenital transmission can lead to serious diseases including pneumonia and hearing loss (Fowler and Boppana, 2018). CMV is an enveloped dsDNA virus that is able to establish life-long persistence, and multiple CMV entry mechanisms have been described. Interestingly, it has been hypothesized that the viral entry route may actually influence the outcome of infection (Murray et al., 2018). Early work first discovered AnxA2 on the surface of CMV particles isolated from human fibroblasts and found that rabbit antiserum against AnxA2 inhibited CMV infection in vitro (Wright et al., 1994, 1995). Using synthetic membrane systems and rAnxA2, Raynor et al. (1999) demonstrated enhanced binding of CMV when rAnxA2 was present and attributed fusion events to A2t. Follow-up studies elucidated that AnxA2 is not essential for CMV entry (Pietropaolo and Compton, 1999; Esclatine et al., 2001); however, viral gene expression and completion of the viral life cycle are dependent on AnxA2 and A2t (Derry et al., 2007), and progeny virions have been shown to contain both forms of AnxA2 on their envelopes (Wright et al., 1994). These findings together suggest multiple roles of both AnxA2 and A2t in CMV trafficking and progeny egress.
[Caption: Viruses that have been shown to involve AnxA2 during assembly, maturation, and egress. Assembly can occur in the nucleus or in the cytoplasm, and release is achieved via cell lysis, apoptosis, exocytosis, or direct budding from the plasma membrane. *Evidence for AnxA2 involvement in more than one phase of the viral life cycle.]
ANNEXIN A2 IN VIRUS REPLICATION, ASSEMBLY, AND RELEASE
The ultimate goal of a virus is to produce and release progeny virions. In order to make new infectious particles, viruses must transcribe and replicate their genomes in either the cytoplasm or the nucleus of a host cell. To accomplish this, the virus orchestrates cellular factors to form replication complexes: organelle-like structures that form in the nucleus, the cytoplasm, endoplasmic reticulum (ER), or at the plasma membrane and shield cytoplasmic genome replication from host defenses (Den Boon et al., 2010;Schmid et al., 2014). Post-replication, virus particles must reassemble and traffic to the plasma membrane for release.
Hepatitis C Virus (HCV)
Chronic infection with HCV can lead to the development of liver cirrhosis and hepatocellular carcinoma. HCV is an enveloped ssRNA virus that enters the host cell through endocytosis, replicates in ER-derived replication complexes, and exits via exocytosis (Farquhar et al., 2012; Lindenbach and Rice, 2014; Benedicto et al., 2015). Many viruses express non-structural (NS) proteins to aid in efficient and successful infection; accordingly, NS proteins often function within viral replication complexes. Lai et al. (2008) investigated host factors that might interact with a specific NS protein complex of HCV (NS3/NS4A) known to interact with actin filaments in kidney epithelial cells. NS3/NS4A expression and co-immunoprecipitation followed by mass spectrometry identified AnxA2 as an interacting host factor (Lai et al., 2008). Given that lipid rafts had been demonstrated to be involved in the formation of HCV replication complexes, and because AnxA2 is associated with lipid rafts and interacts with NS4A (Gokhale et al., 2005), the authors published a follow-up study asking whether AnxA2 aids in the formation of HCV replication complexes (Saxena et al., 2012). Their report details the localization of AnxA2 at HCV replication complexes via immunofluorescence and immuno-electron microscopy, a reduction in the number of these structures following AnxA2 siRNA silencing, and a reduction in HCV RNA synthesis. The reduction in RNA synthesis, however, was measured via HCV replicase activity, although there was no observed change in relative mRNA levels. An independent report conclusively demonstrated that although monomeric AnxA2 colocalizes with HCV NS proteins, AnxA2 silencing has no direct effect on HCV RNA replication but causes a significant reduction in intra- and extracellular virus titers (Backes et al., 2010). Based on these findings, the authors concluded that AnxA2 is involved in viral assembly as opposed to replication. Interestingly, overexpression of AnxA2 led to an enrichment of HCV NS proteins at replication complex sites (Saxena et al., 2012), a mechanism that may in fact promote virus assembly and support the claim of these authors.
Influenza A Virus (IAV)
Of the four types of influenza viruses, A, B, C, and D, influenza A viruses (IAV) and influenza B viruses cause epidemics of seasonal disease and respiratory infections. IAV type H1N1 and zoonotic avian IAV type H5N1 have both been associated with AnxA2 (LeBouder et al., 2008; Ma et al., 2017). IAV is an enveloped ssRNA virus that enters host cells via endocytosis, replicates in the nucleus, and buds from the plasma membrane for release (Salomon and Webster, 2009). It has been shown that AnxA2 and A2t are present on IAV H1N1 viral envelopes and that A2t, a plasminogen receptor, is responsible for the conversion of plasminogen to plasmin, a process involved in IAV replication (LeBouder et al., 2008). The authors demonstrated a reduced viral titer after inhibiting plasminogen activation, but did not tease out the precise involvement of AnxA2. An independent report investigated the role of AnxA2 in IAV H5N1 replication and found that silencing AnxA2 via siRNA inhibited viral protein expression and reduced progeny titer; the authors proposed a mechanism by which AnxA2 bridges the gap between NS1 and p53, extending the amount of time cells can produce new virions (Ma et al., 2017). These data support the hypothesis that AnxA2 is involved in viral replication or assembly but do not preclude AnxA2 involvement during the preceding steps.
Measles Virus (MV)
Measles is a highly contagious respiratory infection that is caused by MV -an enveloped ssRNA virus. After MV fuses with the host cell membrane, genome replication occurs in the cytoplasm and the virus is released by budding at the plasma membrane (Jiang et al., 2016). Knockdown of AnxA2 via shRNA in cervical epithelial cells (HeLa) caused reduced MV progeny virus generation 24 h post-infection, but did not affect MV entry and RNA replication (Koga et al., 2018). Normally, MV matrix protein (M protein) aids in connecting the viral capsid to the viral envelope and localizes to the plasma membrane where MV particles will form. Koga et al. (2018) went on to reveal that in the absence of AnxA2, M protein expression is decreased and mislocalized from the plasma membrane to the perinuclear space. Finally, the authors found that the observed M trafficking effect was due to monomeric AnxA2 versus A2t (Koga et al., 2018).
OTHER VIRUSES AND SPECIES WITH AnxA2 ASSOCIATIONS
In our mini review we have focused on epithelia-targeting viruses that cause disease in the human population and have all been shown to utilize AnxA2 or A2t in some capacity during their viral lifecycle. Outside of humans, AnxA2, A2t, or a combination of the two have been implicated in the attachment and entry of rabbit vesivirus (RaV) (González-Reyes et al., 2009), in the replication of porcine reproductive and respiratory syndrome virus (PRRSV; pig) and avian infectious bronchitis virus (IBV; chicken) (Kwak et al., 2011; Li et al., 2014; Chang et al., 2018), and in the assembly and release stages of classical swine fever virus (CSFV; pig) and bluetongue virus (BTV; livestock) (Beaton et al., 2002; Celma and Roy, 2011; Sheng et al., 2015; Yang et al., 2015). Additionally, there have been multiple reports of AnxA2-viral associations occurring within cell types beyond epithelial cells (e.g., alphaherpesviruses with neuronal cells (Koyuncu et al., 2013) and human immunodeficiency virus with macrophages (Ma et al., 2004; Ryzhova et al., 2006; Rai et al., 2010; Woodham et al., 2016)); however, addressing viral tropisms for non-epithelial cells and how different cell types utilize AnxA2/A2t does not fall within the scope of this mini review. Ultimately, a deeper understanding of how AnxA2 biology is manipulated during the course of viral infections may uncover novel treatment routes or expand our understanding of cellular biology in general. Figure 1 summarizes where and how different viruses exploit AnxA2 and A2t and serves as a visual representation of the complexity of this subject. It should be noted that nearly half of these studies fail to address whether the observed AnxA2-associated effects are due to monomeric AnxA2 or heterotetrameric A2t (indicated in Table 1). Synthesizing our current understanding of AnxA2 in viral infections will reveal similarities between pathogens, highlight deficiencies in our experimental approaches, and help us better understand the diversity in AnxA2 functionality.
PATTERNS AND POTENTIAL
Multiple reports implicating AnxA2 in virus attachment and penetration base their conclusions on experiments that combine studying AnxA2-virus-membrane interactions with infection readouts post-AnxA2 manipulation. This strategy merely indicates that AnxA2 plays a role somewhere between cell attachment and life cycle completion. In-depth analysis of the virus life cycle in the context of AnxA2 could reveal novel information about AnxA2-mediated intracellular trafficking.
This mini review has provided a general overview of the diverse ways AnxA2 is utilized by viruses in epithelial infections in order to shed light on broad mechanistic patterns and identify potential avenues for future research. We conclude that AnxA2-mediated endocytosis may represent a distinct trafficking pathway utilized by multiple viruses. For example, EV71 and HPV are both suggested to travel through an undefined dynamin-independent pathway related to AnxA2 (Schelhaas et al., 2012; Yuan et al., 2018). The preferred entry route of RSV is also debated and, as with HPV, has been related to macropinocytosis (Kolokoltsov et al., 2007; Krzyzaniak et al., 2013; Mastrangelo and Hegele, 2013). The entry mechanisms of RSV, HPV, and CMV are proposed to be complex two-step processes involving proteoglycans and dependent on EGFR activation. Moreover, HPV and RSV infection stimulate increased A2t translocation to the cell surface (Raff et al., 2013), a process which in itself is not well understood. Finally, both HPV and PRRSV infections involve vimentin, a protein that directly interacts with AnxA2. Together, these viruses may serve as models to study a unique AnxA2-dependent endocytic pathway.
Given the vast diversity in A2t functionality, it is probable that AnxA2 plays a highly complex and dynamic role in virus infection. As mentioned above, AnxA2 also has immunomodulatory effects (Woodham et al., 2014; Hajjar, 2015; Zhang et al., 2015), and it is therefore possible that AnxA2 expression promotes a permissive environment for infection through modulation of innate immune responses. Additionally, AnxA2 may play a role in initiating adaptive immune responses against infections. For example, exogenous addition of A2t to the antigen-presenting cells of the epithelium, Langerhans cells (LC), induced suppression of immune activation and reduced Th-1 cytokine production in vitro, suggesting that A2t may function as an immune modulator in the epithelium. The small molecule inhibitor against A2t that was used in the HPV16 studies was also able to reverse HPV-induced immune suppression of LC populations, further supporting the notion that a deeper understanding of AnxA2 biology may reveal novel avenues for treatment options (Woodham et al., 2014).
Future studies should implement a holistic research approach that investigates the interactions between the host cell, the pathogen, and the immunological environment in which the viral lifecycle takes place. Ultimately, multi-modal research approaches may provide a more comprehensive understanding: we can learn more about AnxA2 endocytosis by studying different viruses, and we can use AnxA2 endocytosis as a model to better understand viral infection.
AUTHOR CONTRIBUTIONS
JT and WK conceptualized and designed the work. JS and WK contributed critical interpretation and revision for intellectual content. JT designed the figures.
FUNDING
The research in our lab was supported by the National Institutes of Health (NIH) (R01 CA074397 to WK and F31 1F31AI136312-01 to JT) and the ARCS Foundation John and Edith Leonis Scholar Award to JT. Generous donations from The Netherlands American Foundation, Ella Selders, Yvonne Bogdanovich, Connie de Rosa, the USC Norris Comprehensive Cancer Center Auxiliary Women, and Sammie's Circle are gratefully acknowledged.
"Biology"
] |
Including the Z in an Effective Field Theory for dark matter at the LHC
An Effective Field Theory for dark matter at a TeV-scale hadron collider should include contact interactions of dark matter with the partons, the Higgs and the Z. This note estimates the impact of including dark matter-Z interactions on the complementarity of spin dependent direct detection and LHC monojet searches for dark matter. Their effect is small, because they are suppressed by electroweak couplings and the contact interaction self-consistency condition $C/\Lambda^2<4\pi/\hat{s}$. In this note, the contact interactions between the Z and dark matter are parametrised by derivative operators; this is convenient at colliders because such interactions do not match onto the quark-dark matter contact interactions.
Introduction
A diversity of cosmological observations implies that a quarter of the current mass of our Universe is unknown "dark matter" [1]. Various experiments attempt to detect the particle making it up. For instance, direct detection (DD) experiments [2,3,4,5] search for ∼ MeV energy deposits due to scattering of dark matter particles from the galactic halo on detector nuclei. And the Large Hadron Collider (LHC) searches [6,7] for dark matter pairs produced in multi-TeV pp collisions, which would materialise as an excess of events with missing energy and jets. The LHC and DD searches are at very different energy scales, so different Standard Model (SM) particles are present, and also the quantum interferences are different [8]. The expected rates can be compared in specific dark matter models [9], or, as in several recent studies [6,10,11,12,13,14], the LHC and DD sensitivities can be compared using a contact interaction parametrisation of the dark matter interactions with the Standard Model particles.
The LHC bounds obtained in this way are restrictive, and probe smaller couplings than direct detection experiments searching for "spin dependent" interactions between partons and dark matter [4]. These contact interaction studies are referred to as "Effective Field Theory" (EFT), and considered to be relatively model independent. However, the particle content is an input in EFT, and the restrictive LHC limits assume that the dark matter particle is the only new particle accessible at the LHC. Relaxing this assumption can significantly modify the experimental sensitivities [12,13,14]. This has motivated various simplified models for dark matter searches at the LHC [15,16,17]. Retaining this assumption, as will be done in this note, is only marginally consistent, because the contact interactions to which the LHC is sensitive would have to be mediated by strongly coupled particles. As recalled in the next section, this implies that colliders can exclude contact interactions of order their sensitivity, but not much larger.
Effective Field Theory (EFT) is supposed to be a recipe to get the correct answer in a simple way [18]. So this note attempts to compare LHC and DD constraints on dark matter, according to the prescriptions of [18]. From a "bottom-up" phenomenological perspective, an EFT for dark matter at the LHC should parametrise all possible SM-gauge invariant interactions of the dark matter with other on-shell particles. So first, contact interactions between the dark matter and the Higgs or Z should be included at the LHC. These can interfere with the contact interactions studied in previous analyses, but contribute differently at colliders than in direct detection, so the linear combinations of operator coefficients constrained at high and low energy will be different. Secondly, an EFT contains in principle a tower of operators [19] organised in increasing powers of the inverse cutoff scale 1/Λ, and higher orders can be neglected if they are "suppressed". The importance of higher dimensional operators will be left to a subsequent publication. This note focuses on the first point, and estimates analytically the consequences of including the lowest dimension operators allowing dark matter interactions with the Z. Section 2 outlines a peculiar choice of operators for the Z vertex; they are proportional to the momentum-transfer-squared. This choice appears convenient, because the effects of the Z are not mixed into the dark-matter-quark contact interactions. Section 3 estimates the impact of cancellations between Z exchange and dark matter contact interactions with quarks at the LHC, and section 4 recalls the direct detection bounds.
Assumptions, Operators, and EFT
The low energy consequences of New Physics from above a scale Λ can be parametrised by contact interactions of coefficient C/Λ^n. The New Physics cannot be more than strongly coupled, implying C < 4π, and the "low" energy scale must be below Λ. This means that an experiment can exclude

$$\left(\frac{C}{\Lambda^2}\right)_{\rm sensitivity} \;\lesssim\; \frac{C}{\Lambda^2} \;\lesssim\; \frac{4\pi}{\hat{s}} \qquad\qquad (1)$$

where $\hat{s}$ is the four-momentum-squared of the process. Low energy experiments, where $\hat{s} \to 0$, can therefore be taken to exclude everything above their sensitivity. However, the upper limit of eqn (1) is relevant for collider dark matter searches, where $\hat{s}$ is the invariant mass of the invisibles. This parametrisation in terms of contact interactions is reasonably model-independent, if a complete set is obtained by adding to the Lagrangian (below the scale Λ) all operators up to some order in 1/Λ, which can be constructed out of the fields present, consistently with the symmetries of the theory [18].
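As a rough numerical illustration of this exclusion window (our own arithmetic, not taken from the note; the value of ŝ and the quoted sensitivity are representative assumptions), the sketch below evaluates the self-consistency ceiling 4π/ŝ at ŝ ∼ (1 TeV)² and compares it to a monojet sensitivity of order C/Λ² ∼ 1 TeV⁻²:

```python
# Rough evaluation of the exclusion window of eqn (1); all numbers are
# representative assumptions, not official experimental inputs.
import math

sensitivity = 1.0 / 1000.0**2      # assumed collider reach, ~1 TeV^-2, in GeV^-2
s_hat = 1000.0**2                  # partonic energy-squared, ~(1 TeV)^2, in GeV^2
ceiling = 4.0 * math.pi / s_hat    # self-consistency bound C/Lambda^2 < 4*pi/s_hat

print(f"sensitivity : {sensitivity:.2e} GeV^-2")
print(f"ceiling     : {ceiling:.2e} GeV^-2")
print(f"window spans a factor of ~{ceiling / sensitivity:.0f}")
```

With these numbers the window spans only about one order of magnitude (a factor of ~13), which is why the note stresses that colliders can exclude contact interactions of order their sensitivity but not much larger.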
In this note, the dark matter is the only new "Beyond-the-Standard-Model" particle lighter than a TeV, and is taken to be a SM gauge singlet Dirac fermion χ with a conserved parity, and of mass m_χ ≥ m_Z/2 (maybe ≥ m_h/2), to avoid bounds on the coupling to the Z from the invisible width of the Z (and Higgs). So the particle content of the EFT for χ at the LHC should be χ, plus all relevant particles of the SM, which I take to be the partons, the Higgs, and the Z.
The operators of this EFT should be SM gauge invariant, to profit from our knowledge of the SM gauge sector. They are of dimension > 4, and should attach a χ̄χ pair to partons, to the Higgs, or to the Z. The quark operators will be generation diagonal; flavour-changing operators were considered in [20]. The quarks are chiral because the operators are SM gauge invariant, and also because opposite chiralities do not interfere at the LHC. The dark matter currents are taken in a vector, axial vector, etc. basis, because these do not interfere in direct detection, nor at the LHC in the limit where the χ mass is neglected, as done here. The scale Λ will be taken as 1-2 TeV, for reasons discussed above eqn (7). Experimental limits on contact interactions will therefore be presented as limits on the dimensionless coefficients C_x.
At dimension six, there are vector and axial vector χ currents coupled to quarks:

$$\frac{C_{QX,V}}{\Lambda^2}\,(\overline{Q_i}\gamma^\alpha P_X Q_i)(\bar\chi\gamma_\alpha\chi) \qquad\qquad (2)$$

$$\frac{C_{QX,A}}{\Lambda^2}\,(\overline{Q_i}\gamma^\alpha P_X Q_i)(\bar\chi\gamma_\alpha\gamma_5\chi) \qquad\qquad (3)$$

where the quarks Q_i are first generation SM multiplets {q_L, u_R, d_R}, and P_X is the appropriate chiral projector. These will be the operators of interest in this note, because they can interfere with the Z. Then at dimension seven, there are four-fermion operators of the form (Q̄ H d_R)(χ̄χ) (and similarly for u quarks, but with a charge conjugate Higgs field), and interactions with the gluons of the form G^{µν}G_{µν}(χ̄χ). These dimension seven operators, higher generation quarks, and the Higgs vertices listed below will be considered in a later publication. The contact interactions between the dark matter and the Z boson are taken as

$$\mathcal{O}_{Z,V} = \frac{C_{Z,V}}{\Lambda^2}\,\partial^\alpha B_{\alpha\mu}\,(\bar\chi\gamma^\mu\chi) \;\to\; \frac{s_w\,C_{Z,V}}{\Lambda^2}\,p_Z^2\,Z_\mu\,(\bar\chi\gamma^\mu\chi)$$

$$\mathcal{O}_{Z,A} = \frac{C_{Z,A}}{\Lambda^2}\,\partial^\alpha B_{\alpha\mu}\,(\bar\chi\gamma^\mu\gamma_5\chi) \;\to\; \frac{s_w\,C_{Z,A}}{\Lambda^2}\,p_Z^2\,Z_\mu\,(\bar\chi\gamma^\mu\gamma_5\chi) \qquad\qquad (4)$$

where to the right of the arrow is the resulting vertex, B_µ is the hypercharge gauge boson with coupling g′ = e tan θ_W ≡ e s_w/c_w, B_µν = ∂_µB_ν − ∂_νB_µ, and a term ∝ p_Z · Z was dropped after the arrow in O_{Z,A}, assuming the Z was produced by light quarks. There is in addition a "dipole moment" operator B_µν(χ̄σ^µνχ), which is neglected here because it also induces dark matter interactions with the photon [22], which are more interesting. The Z operators are chosen ∝ p²_Z so that they are relevant at the LHC, where the Z is an external leg of the EFT, but do not contribute in the low-energy scattering of DD. This choice should be acceptable, because the operator basis can always be reduced by using the equations of motion [23]. These are, neglecting gauge-fixing terms for B_µ [21],

$$\partial^\alpha B_{\alpha\mu} = g'\Big(\sum_\psi y_\psi\,\bar\psi\gamma_\mu\psi \;+\; y_H\,H^\dagger i\overleftrightarrow{D}_\mu H\Big)$$

where ψ is a SM fermion of hypercharge y_ψ. Usually [21], the derivative operators are dropped, and the operator proportional to the Higgs v.e.v. squared H² is retained. In this usual basis, the Z-χ vertex arises from the H² operator, and at low energy tree-level Z exchange between this vertex and the quarks looks like a quark-χ contact interaction. It should therefore be included in the quark-χ contact interaction used in direct detection, so the coefficients of the operators of eqn (3) would not be the same in direct detection as at the LHC.
For m_χ < m_Z/2, the invisible width of the Z (at "2σ") implies a bound on the Z-χ coefficient [27]. This is only marginally more restrictive than the LHC limits, despite the 0.1% precision on the invisible width of the Z, because the operator is ∝ p²_Z, which amplifies its coupling to very energetic off-shell Zs at the LHC.
At dimension seven, there are operators coupling χ̄χ to the Higgs field (eqn (5)), which can be reduced using the equation of motion for the Higgs field; here H = v = 174 GeV denotes the Higgs v.e.v., p_h is the four-momentum of the physical Higgs particle h, and after the arrow are the interactions induced by the operator. The operators of eqn (5) are interesting, because they give a Higgs coupling to dark matter ∝ p²_h, which has the desirable feature of being relevant at the LHC, where the Higgs is in the effective theory, but of not contributing at low energy. It is possible to use the equations of motion to replace two operators with one, because I am only interested in the h-χ-χ interaction induced by these operators. The linear combination of operators [µ² H†H − λ(H†H)²]χ̄χ, which is orthogonal to the combination in the equations of motion, gives a vanishing h-χ-χ interaction, due to the minimisation condition of the Higgs potential.
The dark matter interactions with W and Z pairs, given after the arrows in eqns (5), were studied in [24], who used U(1)_em × SU(3) invariant operators such that these contact interactions have dimension five with coupling 1/Λ_CHLR. They find that the 8 TeV LHC with luminosity 25 fb⁻¹ could probe Λ_CHLR up to ∼ TeV. This constrains the coefficients of the operators of eqn (5) to be ≲ 1/(TeV m²_W), which is not restrictive. A more significant limit, of 10 TeV⁻³, arises for m_χ < m_h/2 from requiring Γ(h → χχ) ≲ Γ(h → b̄b). This restriction should be reasonable [25], because the Higgs is observed to decay to b̄b.
Dark matter interactions with the Higgs are neglected in the rest of this note, because the operators of eqns (3) and (4) constitute a sufficient "toy model" in which to estimate the impact of including the Z. Since the LHC produces more Zs than Higgses, one could anticipate that the Z is more likely to have a significant effect on the LHC's sensitivity to dark matter.

Figure 1: Effective interactions contributing to q̄q → χ̄χ at the LHC. The coefficient of the four-fermion operator is C_{QX,A}/Λ², and the effective axial vector coupling of the Z to dark matter is s_w p²_Z C_{Z,A}/Λ².

Given the operators of eqns (3) and (4) at the LHC, the axial vector dark matter current can interact with quarks Q via the diagrams of figure 1, which can be written as a four-fermion interaction of coefficient

$$\frac{C^{\rm eff}_{QX,A}}{\Lambda^2} \;=\; \frac{C_{QX,A}}{\Lambda^2} \;+\; g_{Q_X}\,\frac{s_w\,C_{Z,A}}{\Lambda^2}\;\frac{p_Z^2}{p_Z^2 - m_Z^2} \qquad\qquad (6)$$

where g_{Q_X} is the Z coupling to the quark multiplet Q_X [27]. A similar expression can be obtained for the vector χ current. The Z exchange looks like a contact interaction for large p²_Z = M²_inv ≫ m²_Z, where M²_inv is the invariant mass-squared of the dark matter pair. This approximation will be used below, because the best limit arises at larger M²_inv.
Estimated limits from the LHC
Dark matter particles are invisible to the LHC detectors, so pair production of χs can be searched for in missing (transverse) energy ($\slashed{E}_T$) events, which can be identified by jet(s) radiated from the incident partons. The principal Standard Model background for such "monojet" searches is Z + jet production, followed by Z → ν̄ν. The 8 TeV LHC is sensitive to dark matter contact interactions with C/Λ² ∼ TeV⁻².
The aim here is to analytically estimate the invisible four-momentum-squared M²_inv, by comparing the partonic cross-sections for ν̄ν and χ̄χ production. I assume that the QCD part of the amplitude is identical in both cases, so it does not need to be calculated. This has the advantage of allowing for an arbitrary number of jets, which is more difficult to simulate [26] (the data frequently contains more than one jet [6]). In the matrix element for jets + ν̄ν, the Z-exchange factor (Q̄γ^α P_X Q)(ν̄γ_α P_L ν), weighted by the Z propagator, will appear, whereas for DM production via the χ̄γ^µγ_5χ current this is replaced by

$$\frac{C_{QX,A}}{\Lambda^2}\,(\overline{Q}\gamma^\alpha P_X Q)(\bar\chi\gamma_\alpha\gamma_5\chi)\ .$$

Then the full matrix element must be squared and integrated over the phase space of N jets and two invisible particles. The invisibles can be treated as a single particle of variable mass p² = M²_inv, using the identity

$$\int d\Phi_{N+2}(p;\,k_1,\dots,k_N,\,q_1,q_2) = \int \frac{dM^2_{\rm inv}}{2\pi}\int d\Phi_{N+1}(p;\,k_1,\dots,k_N,\,P)\int d\Phi_2(P;\,q_1,q_2)\ .$$

Neglecting spin correlations and the dark matter mass, the invisible phase space integral over the gamma-matrix trace for the invisible fermions gives M²_inv/(8π) for χs, and 3M²_inv/(16π) for neutrinos. For neutrinos in the final state, M²_inv = m²_Z, due to the delta-function-like behaviour of the Z propagator-squared. However, for dark matter, the dM²_inv phase space integral will privilege larger values of M²_inv. Treating the N jets of the event as a particle of negligible mass, the upper bound on M²_inv is ≳ 4$\slashed{E}_T^2$, where $\slashed{E}_T$ is the invisible transverse energy. The CMS study [6] uses the range 400 GeV ≤ $\slashed{E}_T$ ≲ TeV. Therefore most of the dark matter signal will come from M²_inv ≫ m²_Z, and the approximation (6) is consistent. Furthermore, the contact interaction approximation requires Λ² > M²_inv, which suggests Λ ≳ 1-2 TeV. The CMS collaboration obtains a limit Λ > 950 GeV for the sum of the operators of eqn (3), each with |C_{QX,A}| = 1. There is also an upper limit on the Cs which a collider can exclude, eqn (1), from requiring that the contact interaction approximation be self-consistent: C/Λ² < 4π/ŝ. It will be applied below for ŝ ∼ TeV². For the axial χ current with Λ = TeV, the CMS limit and eqn (1) give 3 independent bounds on {c_{qL,A}, c_{uR,A}, c_{dR,A}} (eqn (7)), where the first line is the summed contribution of u_L and d_L, the fractions are approximations to the Z couplings g_{Q_X} s_w/(2c_w), and the d to u pdf ratio is taken to be 1/2. Similar limits apply for the operators of eqn (2).
It can be seen already from eqn (7) that including the interactions with the Z will make little difference to the LHC limits on the C_{QX,A}: for the doublet quarks, the Z contribution cannot cancel simultaneously against the u_L and d_L contributions, and the Z contribution is irrelevant for the singlet quarks, because C_{Z,A} must also be ≲ 4π. The parameters ruled out by the first and second equations of (7) are represented as the central regions in figure 2.
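To see why one value of C_{Z,A} cannot cancel both doublet-quark contributions, a minimal numerical sketch helps (the normalisation of the couplings below is a toy assumption, not the note's exact eqn (6); only the relative sign of the u_L and d_L couplings matters for the argument):

```python
# Toy illustration: the Z couples to u_L and d_L with opposite-sign weak
# isospin, so tuning C_Z,A to cancel one contribution enhances the other.
SW2 = 0.23  # sin^2(theta_W), approximate

def z_coupling(t3, q):
    """Tree-level Z-fermion coupling, (T3 - Q sin^2 theta_W), toy normalisation."""
    return t3 - q * SW2

g_uL = z_coupling(+0.5, +2.0 / 3.0)  # ~ +0.35
g_dL = z_coupling(-0.5, -1.0 / 3.0)  # ~ -0.42

def c_eff(c_q, c_z, g_q):
    """Effective contact coefficient at p_Z^2 >> m_Z^2: contact + Z-exchange piece."""
    return c_q + g_q * c_z

c_q = 1.0
c_z = -c_q / g_uL  # tuned to cancel the u_L contribution
print(f"u_L: {c_eff(c_q, c_z, g_uL):+.2f}")  # ~ +0.00, cancelled
print(f"d_L: {c_eff(c_q, c_z, g_dL):+.2f}")  # ~ +2.22, enhanced instead
```

Moreover, since |C_{Z,A}| itself must stay below ∼ 4π by eqn (1), there is limited room to enlarge the Z coupling to compensate the singlet quarks, whose Z couplings are small.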
From the TeV to the MeV
In direct detection, the dark matter scatters non-relativistically off nuclei. Therefore, to translate the EFT from the TeV to the MeV, the Z must be removed, the effects of QCD loops in running the operator coefficients should be included, and the quarks must be embedded in the nucleons.
To remove the Z, the Green's function for two quarks and two χs in the effective theory with a Z should be matched to the same Green's function in the theory without a Z. Since the matching is performed at zero momentum for the fermion legs, the contact interactions of eqn (4) do not contribute, and the coefficients of the four-fermion operators of eqns (3,2) remain the same after the Z is "matched out". The Z vertices were taken ∝ p²_Z to obtain this. The light quark currents q̄γ^µ P_X q are conserved in QCD, so do not run. Also, since χ is a SM gauge singlet and the only dark sector particle below the TeV, I suppose that the operators with vector and axial vector χ currents do not mix below the TeV.
Finally, the quark currents can be embedded in nucleons N = {p, n} using identities [29] such as

$$\langle N|\bar{q}\gamma^\mu q|N\rangle = c^N_{V,q}\,\overline{N}\gamma^\mu N$$

where c^p_{V,u} = c^n_{V,d} = 2, and c^p_{V,d} = c^n_{V,u} = 1, because this current counts valence quarks in the nucleon. The axial quark current is proportional to the nucleon spin:

$$\langle N|\bar{q}\gamma^\mu\gamma_5 q|N\rangle = \Delta q_N\,\overline{N}\gamma^\mu\gamma_5 N$$

where the proportionality constants are measured [28] as Δu_p = Δd_n = 0.84, Δd_p = Δu_n = −0.43. In the zero-momentum-transfer limit of non-relativistic scattering, the dark matter can have spin-dependent interactions via the axial current, or spin-independent interactions via the first component of the vector current. The spin-independent scattering amplitude for χ on a nucleon is a coherent sum of vector and scalar interactions, for quarks of both chiralities and all flavours. The experimental limit on the cross-section per nucleon is σ_SI ≲ 10⁻⁴⁴ cm² for m_χ ∼ 100 GeV [3]. For the proton (uR ↔ dR for the neutron), with C_{qR,V} = ⅓(C_{dR,V} + 2C_{uR,V}), this gives [29]

$$\sigma_{SI} \simeq \frac{1}{\pi}\,\frac{\mu^2}{\Lambda^4}\left(c^p_{V,u}\,C_{u,V} + c^p_{V,d}\,C_{d,V} + \dots\right)^2$$

where µ is the reduced mass of the χ-nucleon system, and the +... denotes scalar contact interactions neglected in this note; for Λ = TeV, this translates into a bound on the vector coefficients. The spin-dependent cross-section per proton is [29]

$$\sigma_{SD} \simeq \frac{3\mu^2}{\pi\Lambda^4}\left(\sum_q \Delta q_p\, C_{q,A}\right)^2$$

where the experimental bound is for m_χ ∼ 100 GeV; for Λ = TeV, this gives the bound of eqn (9) on the axial coefficients. Comparing to eqn (7) shows that the contact interactions explored by SD direct detection experiments are mediated by physics which is not a contact interaction at the LHC, so are not excluded by the limits given in eqn (7). The limit (9) is represented in figure 2 as the vertical exclusions.
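As a numerical sanity check (our own arithmetic, using the textbook spin-dependent formula assumed above; the strange-quark contribution and the note's exact conventions are not included), the sketch below evaluates σ_SD per proton for Λ = 1 TeV and unit axial coefficients:

```python
# Hedged evaluation of the spin-dependent cross-section per proton,
# sigma_SD = 3 mu^2 / (pi Lambda^4) * (sum_q Delta_q * C_qA)^2 (assumed form).
import math

GEV2_TO_CM2 = 3.894e-28          # 1 GeV^-2 expressed in cm^2
m_chi, m_p = 100.0, 0.938        # GeV
mu = m_chi * m_p / (m_chi + m_p) # chi-proton reduced mass, ~0.93 GeV
lam = 1000.0                     # GeV, Lambda = 1 TeV

delta_p = {"u": 0.84, "d": -0.43}  # spin fractions quoted in the text
C_A = {"u": 1.0, "d": 1.0}         # illustrative unit axial coefficients

amp = sum(delta_p[q] * C_A[q] for q in delta_p)
sigma = 3.0 * mu**2 / (math.pi * lam**4) * amp**2  # GeV^-2
print(f"sigma_SD ~ {sigma * GEV2_TO_CM2:.1e} cm^2")  # ~ 5e-41 cm^2
```

An O(10⁻⁴¹) cm² cross-section for O(1) couplings at Λ = 1 TeV illustrates the scale of couplings that spin-dependent experiments probe.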
Discussion
From a bottom-up EFT point of view, it is important to include all operators which can interfere when computing experimental constraints. This is to allow for cancellations. Including several operators which do not interfere improves the bound, but is not so well motivated. In this note, operators with vector and axial vector currents for the dark matter fermion χ were presented as an example, which illustrates two points. First, the EFT at the LHC contains more particles than the light partons and dark matter that are relevant in direct detection. At the LHC, the Higgs and Z should also be included. Matching the high and low energy EFTs, as done in this note, suggests that the LHC constrains combinations of operator coefficients that are different from those probed in direct detection, as can be seen by comparing eqns (7) and (9). However, the contribution of the Z is relatively unimportant, because its couplings to singlet quarks are small, and it interferes with opposite sign with u_L and d_L. The LHC limits on the dark matter couplings to quarks and the Z are represented as the central exclusion areas of figure 2: the coupling to quarks is more constrained than the coupling to the Z, and arbitrary axial current dark matter interactions with quarks cannot be allowed by tuning the dark matter coupling to the Z. This is because there is a self-consistency upper bound on contact interaction coefficients at colliders, C/Λ² < 4π/ŝ (see eqn (1)). It is important to notice that this upper bound also implies that the LHC limits do not exclude the parameter space probed by spin dependent direct detection experiments.
Second, an interesting difference between direct detection and collider experiments is that quarks of different chirality and flavour interfere in direct detection, whereas the LHC can constrain the interactions of dark matter with each flavour and chirality of quark individually. This is related to the relative unimportance of the Z: it cannot cancel separately against the contributions of u_L, d_L, u_R and d_R.

Figure 2: LHC and direct detection exclusions on the dark matter couplings to the Z (see eqn (4)), and with u_R quarks in the left plot, and the doublet q_L in the right plot (see eqn (3)). Λ = TeV, and all other coefficients are zero. The upper limit of the LHC exclusions is estimated from eqn (1).
In summary, the rules of bottom-up Effective Field Theory say that one should include all operators up to some specified dimension. So to parametrise at dimension six the axial vector interactions of dark matter with quarks, one should include contact interactions of dark matter with the quarks and with the Z. Including interactions with the Z that are ∝ p²_Z, as done here, suggests that these are not crucial.
"Physics"
] |
How Ready Can You Be? Profiling Indonesian Teacher’s Preparedness for an Online Teacher Professional Development Program
Online teacher professional development has emerged as a promising approach to enhancing teachers' competencies and improving classroom experiences, ultimately leading to improved student academic achievements (de Kramer et al., 2012). However, an effective online teacher professional development program must be tailored to meet teachers' needs and account for the complexities of their teaching practices. This article reports on a need analysis conducted via surveys and focus group discussions with 1544 teachers from across Indonesia, aimed at designing a Massive Open Online Course (MOOC) for online teacher professional development. The need analysis focused on several key areas, including the participants' Information and Computer Technology (ICT) background, beliefs, abilities, and their prior experiences and needs in teacher professional development. The findings highlight the importance of carefully designing the MOOC's technical requirements, content, and delivery. Specifically, the MOOC must incorporate specific modes, materials, tools, and activities that are deemed beneficial for the teachers. In terms of content, the MOOC must aim to enhance teachers' skills in specific teaching activities, create a positive learning environment for students, and cater to different ability levels. Moreover, the MOOC delivery should include reflection and sharing activities to foster collaborative learning among the participants. These findings have significant implications for designing effective online teacher professional development programs that can meet the needs of teachers in diverse settings.
INTRODUCTION
Research has argued that online professional development has the potential to enhance teacher expertise and improve teacher retention (Erickson, Noonan, & McCall, 2012), as well as to reinforce learning and critical thinking (Şendağ & Odabaşı, 2009). Reviews of research on teacher professional development have pointed out the importance of understanding the nature of teaching and learning as contextual and complex (Opfer & Pedder, 2011) and have emphasised the importance of considering teachers' identity, their learning stage, and their learning needs and growth (Lay et al., 2020) in the design of a professional development program. When online professional development is designed appropriately, it may therefore have the potential benefit of developing and improving many elements of teacher competencies (Orleans, 2010).
The research presented here is part of a project to create a Massive Open Online Course (MOOC) that delivers modules for teachers' professional development, to meet the need to leverage secondary English teachers' capacity to integrate ICT into the classroom. As the MOOC needed to correspond to the needs of the project's potential participants, a need analysis was conducted on teachers in Indonesia, with the specific objectives of finding their ICT profiles and ICT readiness, pedagogical competences, their prior TPD experiences, and their expectations in using the MOOC. The research question posed, therefore, is: what are the teachers' ICT profiles and readiness, pedagogical competences, prior TPD experiences, and expectations in using the MOOC? Some of the notable practical benefits of online professional development are its potential to provide greater access in terms of coverage, ease, expenses, and loss of valuable classroom time for teachers (Alzahrani & Althaqafi, 2020; Dash et al., 2012; Lay et al., 2020; Masters et al., 2010; Reeves & Pedulla, 2011; Vu et al., 2014). This benefit is well suited to the situation of Indonesia, where geographical distance, Internet connection, and the vast coverage required are issues. Another practical benefit of online professional development involves the suitability of the learning pace for the teachers and the possibility of immediate implementation in day-to-day teaching situations (Mccall, 2018).
With regard to improvements in teacher competencies, online professional development has the potential to enhance teacher expertise and improve teacher retention (Erickson et al., 2012), reinforce learning and critical thinking (Şendağ & Odabaşı, 2009), and promote collaboration with peers, teacher reflection, and teacher confidence (Bragg et al., 2021; So et al., 2009). It is expected that these improvements and enhancements in teachers' knowledge and skills may lead to improved teaching practices, eventually resulting in improved student academic performance (De Kramer et al., 2012). Reviews of research on teacher professional development have pointed out the importance of understanding the nature of teaching and learning as contextual and complex (Opfer & Pedder, 2011) and emphasise "the importance of considering teachers-who they are, where they are in their learning, and what they need to move forward in their learning and growth" (Lay et al., 2020, p. 3) in the design. Therefore, conducting a need analysis on the teachers who become the subjects of a teacher professional development program is imperative to ensure that the program successfully meets the needs of the teachers and that the design is based on the contexts and complexities of teachers' teaching and learning.
The need analysis can portray the teachers' general attitude toward using technology in the classroom. Parasuraman (2000) postulated that technology readiness is "people's propensity to embrace and use new technologies for accomplishing goals in home life and at work." He mentioned four dimensions for measuring readiness to embrace new technology: 1) optimism, which shows people's positive attitude and belief that technology can provide users more control, flexibility, and efficiency in their lives; 2) innovativeness, which shows people's willingness to be pioneers and leaders in technology; 3) discomfort, which shows people's sense of lacking control and being overwhelmed by technology; and 4) insecurity, which shows people's distrust and scepticism toward the affordances of technology (Parasuraman, 2000).
METHOD
The need analysis was conducted as mixed-method research, employing both traditionally qualitative and quantitative research methods. The use of this approach was necessary as the need analysis investigated the same underlying phenomenon (teachers, their classroom practices, and past online TPD experiences), for which both quantitative and qualitative data are needed (Leech & Onwuegbuzie, 2009): 1) to involve participants in the program; 2) to develop, implement and evaluate a program; and 3) to obtain more complete and corroborated results (Creswell & Plano Clark, 2018). The rationale for using this approach is twofold: 1) the online TPD program was aimed at teachers from all over Indonesia, and thus sufficient coverage of information from the population in the form of descriptive statistics was necessary; 2) specific information on teachers' background, interests, and classroom practices needed to be scrutinised in detail, and such information could only be sought qualitatively.
The data for the quantitative research were collected through an internet survey delivered via Microsoft Teams, using a snowball sampling recruitment approach through the Indonesia Technology-Enhanced Language Learning (iTELL) association of teachers, and analysed with descriptive statistics. One thousand five hundred sixty-three (1563) participants filled out the survey, but upon scrutiny, only data from one thousand five hundred forty-four (1544) participants were used in the analysis. Reasons for data exclusion included multiple submissions and incomplete answers. Upon completing the survey, 687 participants were invited to attend the Focus Group Discussions (FGDs), and 310 participants agreed to be involved in the FGDs. Two data collectors then conducted 11 FGDs. The data from the interviews in the FGDs were qualitative and were analysed using thematic analysis.
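A minimal sketch of the screening step described above, assuming the survey is exported as a flat table (the file name and column names such as 'respondent_id' are hypothetical; the study does not specify its data schema):

```python
# Illustrative screening of survey responses: drop multiple submissions
# and incomplete answer sets, as described in the text.
import pandas as pd

raw = pd.read_csv("survey_responses.csv")  # hypothetical export

# Keep only each respondent's first submission
deduped = raw.drop_duplicates(subset="respondent_id", keep="first")

# Drop rows with any required item left unanswered
complete = deduped.dropna(how="any")

print(len(raw), "->", len(complete))  # the study went from 1563 to 1544
```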
The themes and questions posed in the survey and FGDs focused on seeking information on the participants' demographic background, ICT background, ICT beliefs, pedagogical competences, ICT abilities, and their TPD interests, experiences, and needs.
FINDING AND DISCUSSION
ICT Background
The information on ICT background aimed to provide recommendations on designing the MOOC platform, particularly in its provision and delivery mode, ensuring that teachers in Indonesia could access the MOOC platform and materials easily. The data gathered covered the speed of the teachers' internet connection and the devices they owned.
Regarding internet connection speed, the findings mirror the distribution of teachers in Indonesia. Teachers living on Java Island mostly enjoyed high and medium internet connection speeds (39% and 16%, respectively), while other major islands or island groups experienced medium internet connection speeds. Most notably, teachers in Maluku and Papua only had the option of low internet connection speed (3%). Overall, most teachers in Indonesia had the option of medium internet speed (63%), followed by 19% with low and 18% with high internet speed. In the case of devices, almost all teachers had access to handphones (98%) and laptops (96%), and more than half of the participants could also access printers (57%). Some also used scanners (13%) and tablets (13%).
Data from Datareportal (Kemp, 2021) show that there was a significant increase in internet users (16%) between January 2020 and January 2021. However, similar to the uneven access to internet connections in the findings above, although internet penetration reached 73.7% as of January 2021 (Kemp, 2021), the internet has not been widely accessible in all regions. Intersecting the information on internet connection speed and the devices owned by the participants, Figure 1 below shows that most teachers used laptops/computers or handphones with medium internet connection speed.
Figure 1. The Intersection of Internet Connection Speed and Device Owned
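The intersection in Figure 1 is the kind of summary a simple cross-tabulation produces; a sketch is given below (column names are hypothetical, since the survey schema is not published in this article):

```python
# Cross-tabulate device ownership against internet speed band,
# expressed as a percentage of all respondents (as in Figure 1).
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # hypothetical export

crosstab = pd.crosstab(df["device"], df["internet_speed"], normalize="all") * 100
print(crosstab.round(1))
```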
The results suggested that the MOOC should be accessible via laptops/computers or handphones, with materials and activities that require low to medium internet connection speed. They also suggested that asynchronous and audio- or text-based materials or exchanges were preferable, as these do not require high internet connection speed. This choice of mode, materials, and activities would also allow those from areas with low internet connection speed to access the MOOC, ensuring its accessibility in all regions of Indonesia.
ICT Beliefs, Pedagogical Competence, and ICT Abilities
The data on ICT beliefs and TPD interests were necessary to ensure that the MOOC design would receive positive responses from the target teachers. By discovering the pedagogical competences, ICT abilities, and TPD needs of the participants, the MOOC could be designed to match the level of competences and abilities of the target teachers and to cover the topics/skills still lacking in, or needed by, the target teachers.
ICT Beliefs
In general, teachers believed that technology was useful for teaching and learning. Specifically, they agreed on the usefulness of technology in easing their job (94.04%) and improving their performance (99.68%). For learning, almost all participants agreed on technology's potential to improve student performance (93.39%). At a glance, the survey results show overwhelmingly positive beliefs toward using technology in teaching and learning, with agreement rates above 90%.
However, the FGDs revealed further in what sense technology was perceived as beneficial or problematic for the teaching and learning processes. On the students' side, the success of learning was expressed through attitudes toward the teaching-learning process, such as being active, interactive, engaged, enthusiastic, etc. Some also mentioned students achieving specific abilities, such as the ability to collaborate, to learn independently, etc. On the teachers' side, success was expressed through the ability to carry out certain teaching activities.
It may be concluded from the survey results and the FGDs that teachers' attitudes toward technology were generally positive. To be specific, among the positive benefits of technology, the participants of the FGDs mentioned the usefulness of technology for their work in terms of the opportunities to teach innovatively and creatively, to access easily (even unlimited) varied/innovative materials and learning resources, to conduct scoring and assessment processes that are well-structured and well-documented, to upgrade their technological knowledge, and to communicate easily with students and parents. They also noticed that using technology gave students more intrinsic motivation. As decision-makers in the classroom, teachers should have positive beliefs about using technology. Zhao et al. (2006) mentioned two related beliefs critical in enabling teachers to use technology for meaningful learning: 1) that using technology will bring certain benefits, and 2) that technology is compatible with existing practice.
However, the participants also revealed the opposing side, the problems of using technology in their work, from the teacher, student, and technical perspectives. From the teachers' side, using technology was viewed as time-consuming because it disregarded regular working hours. Teachers also felt they had insufficient time to provide feedback, monitor students (mainly to prevent plagiarism/cheating), and explore ICT tools. In addition, some teachers expressed their lack of ICT skills, particularly in creating engaging/innovative lessons using technology. From the students' side, the teachers in the FGDs saw a lack of student-student interaction and participation when technology was used. Interestingly, there were conflicting answers regarding students' motivation, as the teachers held the view that technology could both increase students' intrinsic motivation and demotivate them. In the case of technical problems, the teachers in the FGDs revealed various problems in using technology: frequent blackouts, unstable internet networks, slow/limited internet connections, the cost of buying internet quota, and the unavailability of devices or limited storage/memory on their devices, to name a few.
Teachers' beliefs about technology's usefulness are mainly affected by their prior experiences and attitudes (Borg, 2002). Such experiences can be obtained through interactions with peers or experts, experiencing successes and failures when trying specific technology tools, and revisiting their pedagogical approach based on their specific teaching context (Arnold & Ducate, 2015). Different teachers will have different beliefs depending on their prior knowledge, teaching context, attitudes, and general pedagogical knowledge, which influence how they identify the affordances of technology and how they react to challenges (Haines, 2015; Liu & Kleinsasser, 2015; O'Dowd, 2015).
Pedagogical Competences
In preparation for a MOOC course, it is necessary to determine the teachers' initial pedagogical abilities and ICT skills, and their confidence should they be involved in a digital course. The participants mostly perceived themselves as having sufficient pedagogical competence in conducting teaching-learning processes, with more than 60% claiming that they could identify students' linguistic problems, support and facilitate students' learning and interaction through the provision of learning activities, teaching approaches, and appropriate materials, and assess the students in various ways. Around 16%-23.70% even went as far as being confident about these abilities. However, 6.67%-17.6% still felt that their pedagogical competences were lacking.
ICT Abilities
Regarding ICT skills, the survey asked the participants about the tools/applications they were familiar with, had used in their teaching, and/or had helped other teachers use. Microsoft products, such as PowerPoint, Word, and Excel, are listed as the ones that the participants used for both personal and professional purposes (over 40%), followed by online discussion forums/social media and Learning Management Systems (LMS) (31.09%), Google Apps (28.82%), online assessments (23.90%), media editing (image: 22.60%, audio: 16.26%, video: 15.48%), and online polling and stickies (9.59% and 8.35%, respectively).
The FGD results added an interesting finding: Google Classroom and WhatsApp were the most used tools/apps. In the case of Google Classroom, it was widely used because the government provided teachers with accounts for Google Education products as a solution for teaching during the pandemic. As for WhatsApp, the familiarity and use of this chat application is no surprise, as the finding was similar to the report of Datareportal in January 2021, which put WhatsApp second in the list of most-used social media platforms in Indonesia, after YouTube.
The FGDs also explored the typical activities that used technology in administrative, teaching, and learning processes. The results reveal that the participants mostly used technology in creating, providing, and delivering learning activities, followed by activities for assessment for/of learning, providing learning sources, administrative purposes, creating materials, collaborating, and lastly, as a teaching platform. In response to these findings, setting teacher professional development objectives through the MOOC can entail several possible routes.
The first route is the provision of opportunities for teachers to gain new knowledge of tools/applications or skills in using technology for the activities that they do least, under the assumption that they did not do these activities because they lacked knowledge of potential technology for the activities or the skills to use technology in doing them. If this route is taken, the MOOC should focus on potential teaching platforms, collaboration activities, or material creation activities, as these were mentioned the least.
Another route is to provide additional knowledge of tools/applications or skills in using technology for the activities that they mostly do, in this case, creating, providing, and delivering learning activities, followed by activities for assessment for/of learning, providing learning sources, and administration, thereby upgrading the teachers' current knowledge and skills in using technology for their job.
In the attempt to provide technology-focused TPD, three aspects need to be incorporated into the TPD program. The first introduces emerging technologies and their instructional functions, and the second discusses how to use the technology to create genuine interaction, increase cooperation, and promote students' creativity (Wu & Wang, 2015). Therefore, helping teachers to integrate technology should not be conducted through a one-shot event or isolated programs. Teachers need a series of interconnected, situated, and sustained experiences to construct new practices through experimentation and reflection (Zhao, 2022). When utilising technology, Koehler et al. (2011) highlighted that teachers frequently lack the skills and dispositions to play around and experiment with technology tools. To create transformative learning experiences, teachers need to draw upon their creativity, find different approaches to educational technology, and be willing to experiment with technology and ideas.
Teacher Professional Development (TPD) Prior Experiences and Needs
The findings on prior TPD experiences and TPD confidence helped in designing a platform with delivery means and features likely to be familiar to the target teachers. The results on ICT confidence serve to understand the teachers' intended actions after the TPD. The results show that around 60% of the survey participants had undertaken TPD training previously, and that 28.95% had done this training both synchronously and asynchronously, while 31.93% had done it predominantly in synchronous mode. During the FGDs, the participants shared the topics of TPD that they had experienced. Most topics touched upon equipping teachers with specific skills in using particular tools/applications/LMS, creating learning materials using technology, or more general topics of online learning or technology integration. These topics, albeit useful for the teachers, do not indicate whether, within the training, teachers were introduced to or reflected on implementing certain pedagogical principles when using technology for learning.
To establish whether topics relevant to the implementation of pedagogical principles were of interest or needed by the teachers, a list of TPD needs surrounding such topics was presented to the survey participants. They were asked to rank the ones they needed the most for their work. The participants' answers aligned with their typical activities in using technology, i.e., creating, providing, and delivering learning activities. Thus, the top three most needed TPD training topics were related to these activities, i.e., designing interactive classroom activities, developing engaging teaching materials using technology, and employing innovative approaches to teaching English using technology. The following three activities in the list focused on students, in which teachers expressed the need to create learning environments that nurture autonomy, promote collaboration among students, and be able to identify students' problems. The rest of the needs were related to the use of specific technology for learning.
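A minimal sketch of how such ranked needs can be aggregated into a "top three" list (the item labels and the averaging-of-ranks rule are assumptions; the article does not state the exact aggregation used):

```python
# Aggregate ranked TPD needs: a lower mean rank indicates a greater need.
import pandas as pd

ranks = pd.read_csv("tpd_need_ranks.csv")  # hypothetical: one column per need,
                                           # each cell a respondent's rank (1 = top)
mean_rank = ranks.mean().sort_values()
print(mean_rank.head(3))  # top three needs, as reported in the text
```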
The last part of the survey was on the participants' confidence in undertaking a TPD program should the opportunity arise. This part covered the possible stages of a typical TPD program: being confident in doing the program, collaborating with other program participants, trialling the newly acquired competence from the program, and reporting the trial results back to the program, as it was expected that the participants of the MOOC would not only engage and collaborate within the MOOC but also take the competence into practice and reflect on that practice. Generally, the survey participants responded positively to the stages of the online TPD program, with percentages nearing or over 70%.
The most interesting finding is in the stage of reporting the trial back to the online program. The percentages of those who felt slightly confident and not confident in reporting back are the highest among the stages (21.57% and 3.76%, respectively). This result can be interpreted as reluctance to share their application experiences with the TPD in order to reflect on their practices. Consequently, if reflection on practice is an important activity in TPD, the design of the MOOC activities will need to provide a safe environment for the participants to reflect without being judged.
RECOMMENDATIONS FOR MOOC DESIGN AND DELIVERY
MOOC Technical Requirements
Based on the internet connection results and the ownership of devices, it is suggested that the MOOC be accessible via laptops/computers or handphones, with materials and activities that require low to medium internet connection speed. Also, to avoid teachers' low motivation or reluctance to be involved in the MOOC, several potential technical issues must be addressed by creating/providing network-, access- and cost-friendly materials. Therefore, asynchronous and audio- or text-based materials or exchanges are preferable, as they do not require high internet connection speed. As for the tools/applications provided via the MOOC, using MS Office, Google apps, and WhatsApp is advisable, as these tools/applications are familiar, widely used, and available to teachers. As time was also mentioned as a discouraging factor in using technology, the activities in the MOOC need to be kept at a reasonable and feasible length for the teachers, with high flexibility of access in terms of time. The choice of these modes, materials, tools/applications, and activities will also allow those from areas with low internet connection speed to access the MOOC, ensuring its accessibility in all regions of Indonesia.
MOOC Content
Juxtaposing the findings on teachers' ICT beliefs, pedagogical competences, ICT abilities, prior TPD experiences, and TPD needs, the following recommendations are offered for designing the MOOC content/topics. Firstly, the findings suggest that the MOOC the teachers prefer is one that provides opportunities for teachers to gain the ability to perform certain teaching activities, provides a learning environment that enables students to express positive attitudes toward the teaching-learning processes, and helps students reach a certain level of ability. Accordingly, the top three most needed TPD training topics are related to the ability to perform certain teaching activities, i.e., designing interactive classroom activities, developing engaging teaching materials using technology, and innovative approaches to teaching English using technology. The following three activities focus on the students, in which teachers expressed the need to create learning environments that nurture autonomy and promote collaboration among students, and to be able to identify students' problems. The rest of the needs are related to the use of specific tools/applications in the teaching-learning process. Secondly, considering the lack of competences, the MOOC should provide specific topics on (1) how to use various teaching approaches, especially TPACK; (2) how to support students' interaction; (3) how to identify students' linguistic problems; (4) how to facilitate students' learning through various activities; (5) how to assess students; and (6) how to select appropriate teaching materials.
This finding is consistent with past research on online teacher professional development, in which the content of the MOOC needs to meet learner needs effectively (Farris, 2015), contain practical examples, group projects, and video demonstrations to get teachers' attention and enhance confidence (Qian et al., 2018), and include practical, hands-on activities and real-life observations (O'Dwyer et al., 2007; Yeh et al., 2011) that encourage participants to reflect on learning and receive feedback (Desimone, 2009; Qian et al., 2018).
MOOC Delivery
The recommendations for the MOOC delivery incorporate the findings on the teachers' prior TPD experiences and TPD confidence. The complete stages of delivery of the proposed MOOC are 1) being involved in the program; 2) collaborating with other program participants; 3) trialling the newly acquired competence from the program; and 4) reporting the trial results back to the program. However, the findings on prior TPD experiences show that the training needs to include topics that enable teachers to reflect on and consider the pedagogical principles in implementing specific technology in their teaching, as their prior training focused more on improving their competencies in integrating technology into their teaching without considering the pedagogical principles that lie behind the integration.
Therefore, in light of the findings on TPD confidence in the stages of a proposed online TPD, the stage that includes reflection on the application of technology in the actual context needs to be highlighted and emphasized; in this case, Stage 4, reporting the trial results back to the program as a means of reflection. Also, since the teachers are mostly reluctant to share their application experiences back with the TPD, Stage 4 is intended to provide a safe environment for the teachers to reflect without being judged, as reflection on teachers' instruction and beliefs (Liu, 2012) is one of the crucial elements of effective online teacher professional development.
CONCLUSIONS
Research has argued that online professional development has the potential to enhance teacher expertise and improve teacher retention. As reviews of research on teacher professional development have pointed out the importance of understanding the nature of teaching and learning as contextual and complex, and have emphasised the importance of considering teachers' identity, their learning stage, and their learning needs and growth in the design of a professional development program, the need analysis in this paper is crucial for designing a MOOC that successfully meets the needs of the teachers. The design is also based on the contexts and complexities of teachers' teaching and learning.
The need analysis results show that the MOOC needs to be designed carefully in terms of its technical requirements, content, and delivery. The choice of specific modes, materials, tools/applications, and activities for the MOOC is crucial in setting the basic technical requirements for the MOOC to be useful for teachers. The MOOC design should also consider content that enables teachers to perform certain teaching activities, provides a learning environment in which students express positive attitudes toward the teaching-learning processes, and helps students reach a certain level of ability. In the delivery of the MOOC, the delivery stages need to be carefully considered. They must include activities in which teachers reflect on their training and share the results of their training at the end of the delivery period.
With the increased use of online modes in delivering teacher professional development programs after the pandemic, the question is no longer whether online professional development programs are more effective or successful than traditional face-to-face ones. Instead, more research should be conducted to find strategies that maximise the benefits of such programs for teachers, eventually improving the quality of students' learning and performance. In addition to research on need analyses, studies that reflect on and evaluate the implementation of the MOOC program following the need analysis also have the potential to be conducted.
"Education",
"Computer Science"
] |
Utilization of E-learning by Agricultural Students of Public Higher Institutions in Southwest of Nigeria
This study aimed at determining the utilization of e-learning by agricultural students of public higher institutions in the Southwest of Nigeria. A multistage sampling procedure was used to select 300 respondents from the higher institutions. Students were mostly female, with a mean age of 22.4 years, and the majority were enrolled in undergraduate programmes. Respondents had a low usage of e-learning due to low awareness and knowledge levels, the complexity of the technology, and inadequate e-learning infrastructure in their schools. A significant relationship existed between constraints, awareness, knowledge, and utilization of e-learning. The regression analysis carried out in this study resulted in an R-square of .957. These findings indicate that about 95.7% of the variance in the level of usage of e-learning is explained by awareness, knowledge, and the different constraints. The study recommends that efforts should be geared towards encouraging students to integrate e-learning usage into their academic activities by providing e-learning infrastructure and easy access through competent e-learning personnel. Keywords— Agricultural students, E-Learning, Higher institution, Utilization
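A minimal sketch of the kind of OLS regression summarised above (the file and variable names are hypothetical, and the study's actual scale coding is not given in this article):

```python
# Regress e-learning utilization on awareness, knowledge, and constraints,
# reporting R-squared as in the study's analysis (R^2 = .957 reported).
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("elearning_survey.csv")  # hypothetical data export

X = sm.add_constant(df[["awareness", "knowledge", "constraints"]])
y = df["utilization"]

model = sm.OLS(y, X).fit()
print(f"R-squared: {model.rsquared:.3f}")
print(model.params)
```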
I. INTRODUCTION
Technology has inevitably become the most powerful tool in almost every aspect of humans' daily lives and is regarded as the precursor of major revolutions in various aspects of human endeavour, including education (Lee et al., 2018). The use of Information Technology (IT) is the new paradigm of learning in the 21st century that allows people to easily access and gather data, and to analyse and transfer knowledge (Darling-Hammond et al., 2020). This makes it possible for technology to function as teacher and study mate and, more importantly, as a tool to improve the entire teaching and learning process. This current development relating to the role of IT and the Internet shows that the face of the whole educational system has changed. Information and communication technologies (ICTs) hold immense capability to gradually transform and remould education (Assar, 2015). The ultimate impact of ICTs may be seen in the structure, content, and outcomes of learning, both inside and outside of school. Bader and Kottstorfer (2013) reported trends in technology's influence on education and knowledge management, noting that e-learning is gradually becoming important in higher institutions of learning, which provide e-learning resources by allowing more students to register profiles on the internet. E-learning is an aspect and/or manifestation of e-readiness; it is the general term for using computers and other electronic technologies to promote teaching and learning, and may include the use of the technologies as part of conventional or traditional teaching where learners and teachers may never meet face to face (Khvilon & Patru, 2018; Lakshmi et al., 2020). The technology includes not just computers and the networks that connect them but also software such as email, online databases, and CD-ROMs, and peripherals such as video cameras and interactive whiteboards (Anderson, 2010). E-learning is also the use of ICT to promote the acquisition of more efficient and effective learning materials and results, facilitate the accessibility of research findings and educative write-ups, allow greater student access to information, and make researchers more accountable to students and the general public (Eze et al., 2018). The concept of e-learning has brought about changes in knowledge management and human resources development (Subramanian, 2016). The increasing capacity of ICT has further been empowered by the growth of a global network of computer networks known as the internet (Teng et al., 2020). It has impacted the way business is conducted, facilitated learning and knowledge sharing, generated global information flows, and empowered citizens and communities in ways that have re-defined governance, and it has created significant wealth and economic growth, resulting in a global information society (Cascio & Montealegre, 2016).
Agricultural education in tertiary institutions is one of the ways of preparing for sustainable agricultural development through the training of farmers and allied professionals as well as agricultural extension practitioners (Kozicka, 2018). Agricultural education implies training people to be futuristic in their exploitation of nature, with adequate consideration for environmental, societal, and economic factors in a balanced way, in the pursuit of development and improved quality of life (Smith & Rayfield, 2016). Many of today's major challenges, including energy security, national security, human health, and climate change, are closely tied to the global food and agriculture enterprise (Islam & Kieu, 2020). Academic institutions with programmes in agriculture are in a perfect position to foster the next generation of leaders and professionals needed to address these challenges (Ikehi et al., 2014). However, to keep pace with changing times, undergraduate agricultural education needs a new focus. Agriculture is affected by many factors, and its participants must always be prepared to react, adapt, and think ahead (Alawa et al., 2014).
Tertiary institutions with undergraduate programmes in agriculture must undergo a significant transformation to foster the agricultural workforce of tomorrow. Such institutions must position themselves at the cutting-edge and offer students the opportunity to learn about the complexities of agriculture, grapple with its emerging challenges, and find their opportunity to contribute as leaders and participants (Carlisle et al., 2019). Keeping up with the evolving nature of the agricultural enterprise is not a simple task. It requires a much more dynamic approach to the curriculum and teaching methods than most academic institutions have developed (Njura et al., 2020).
Increased awareness of agriculture's important role in addressing major societal problems can help to raise the profile of the field and attract more students (FAO, 2014). Transforming and sustaining education in agriculture requires an ongoing commitment, with strong leadership and interest in agriculture (Ameyaw et al., 2019). Investments in undergraduate education will play an important role in shaping the future of agriculture in meeting the challenges of the 21st century and beyond. In achieving sustainable agriculture, e-learning is one of the ways of improving agricultural education in the world (Ra et al., 2019).
According to Apuke and Iyendo (2018), the educational sector is yet to tap into this technology to deliver services, especially in tertiary institutions, due to the high cost of internet access and the lack of reliable and permanent sources of power, causing a loss of interest among students who believed in e-learning as a good way of imparting knowledge. Poor telecommunication penetration hinders access to e-learning facilities in many towns and cities in the country, and the cost of owning a personal computer is a major hurdle for most Nigerian students (Atanda, 2014).
ICTs hold great potential for supporting and augmenting existing education as well as national development efforts (Idowu & Esere, 2013). The functionality of ICTs in agricultural education delivery faces enormous challenges in the nation's higher institutions, including inadequate ICT infrastructure, high cost of bandwidth access, lack of skilled manpower to manage available systems, and inadequate training facilities for ICT education at the tertiary level (Yushau & Nannim, 2018). Moreover, resistance among students and academic staff to change from traditional pedagogical methods to innovative, technology-based teaching and learning methods remains an important issue (Howard & Mozejko, 2015). The overall educational system is grossly under-funded, and available funds are therefore used by the institutions to solve more urgent and basic needs (Adeniran et al., 2019). In addition, the overdependence of educational institutions on government has limited tertiary institutions' ability to collaborate with the private sector or seek alternative funding sources for ICT educational initiatives; moreover, ineffective coordination of the various ICT-for-education initiatives gives room to substandard programmes mushrooming across the nation's educational institutions (Gana, 2017). These challenges force many tertiary institutions to operate beyond capacity.
Addressing agricultural students' usage of e-learning in tertiary institutions across the country is paramount in order to gauge progress and proactively address stumbling blocks. It is important to conduct a study on the utilization of e-learning among agricultural students, as it is crucial for the country's participation in the information society and a promising strategy for improving academic performance and agricultural transformation in the country. Thus, this study aimed at examining the utilization of e-learning by agricultural students of public higher institutions in the Southwest of Nigeria. Specifically, the objectives to be achieved are as follows: 1. describe the personal characteristics of agricultural students; 2. determine the level of awareness of e-learning facilities available to agricultural students; 3. examine the knowledge level of the agricultural students on the use of e-learning; 4. determine the level of usage of e-learning by agricultural students; 5. identify constraints to the use of e-learning by agricultural students.
Hypotheses of the study
1. Ho: There is no significant relationship between personal characteristics of agricultural students and their usage of e-learning.
2. Ho: There is no significant relationship between agricultural students' levels of awareness, knowledge, constraints and their level of usage of e-learning.
Study area
The study was carried out in the Southwest zone of Nigeria. The Southwest lies between latitudes 6°30'N and 9°N and longitudes 3°0'E and 5°30'E. It is majorly a Yoruba-speaking area, although there are different dialects even within the same state. The weather conditions vary between the two distinct seasons in Nigeria: the rainy season (March-November) and the dry season (November-February). During the dry season, Harmattan dust and cold dry winds blow from the northern deserts into the southern regions.
Study population
The research population comprised all agricultural students in public tertiary institutions in Southwestern Nigeria.
Sampling procedure and sample size
A multistage sampling procedure was used to select 300 students for the study.
Data collection and analysis
Data were collected using a structured questionnaire administered as an interview schedule made up of well-structured open- and close-ended questions. To test the stated hypotheses, the data were analyzed using descriptive and inferential statistics, namely Pearson Product Moment Correlation (PPMC) and regression analysis. The personal characteristics of the respondents were summarized using frequency counts, percentages, and means.
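To make the analysis pipeline concrete, the sketch below shows how a PPMC and OLS regression of this kind could be run in Python with pandas, SciPy, and statsmodels; the column names and simulated responses are illustrative stand-ins, not the study's actual data or software.

```python
# Hypothetical sketch of the PPMC and regression analysis described above;
# the column names and simulated scores are illustrative, not the study's dataset.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300  # sample size used in the study
df = pd.DataFrame({
    "awareness": rng.integers(0, 23, n),    # score range 0-22
    "knowledge": rng.integers(0, 15, n),    # score range 0-14
    "constraints": rng.integers(0, 30, n),  # illustrative constraint score
})
# Illustrative usage score loosely driven by the predictors
df["usage"] = (0.3 * df["awareness"] + 0.6 * df["knowledge"]
               + 0.05 * df["constraints"] + rng.normal(0, 1, n))

# Pearson Product Moment Correlation between each predictor and usage
for col in ["awareness", "knowledge", "constraints"]:
    r, p = pearsonr(df[col], df["usage"])
    print(f"PPMC {col} vs usage: r={r:.3f}, p={p:.4f}")

# OLS regression of usage on the predictors (reports R-square and t-values)
X = sm.add_constant(df[["awareness", "knowledge", "constraints"]])
print(sm.OLS(df["usage"], X).fit().summary())
```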
Personal characteristics of the respondents
The findings presented in Table 1 show that more than half (55.3%) of the respondents were female, while 44.7% were male. This implies that females were more involved in agricultural education in the study area.
Obayelu & Fadele (2019) supported this finding that across tertiary institutions, the ratio of female to male respondents studying Agricultural Science is often skewed towards females, emphasizing that they are more willing to study agriculture in tertiary institutions than their male counterparts. Most of the respondents were Christians, as manifested in the existence of churches in the area. This shows that education is accepted by the two prevailing religions; the implication is that there is no religious taboo against the acquisition of tertiary education in the study area. The majority (92.0%) of the respondents were single, while 8.0% were married. This indicates that single people can be more focused, and can better build their capacity to seek and utilize information and excel in their academic pursuits, than married people, as they carry fewer family responsibilities (Moses et al., 2020).
Regarding the category of students' programme, the majority (82.0%) were undergraduate students, 11.0% were preliminary students, and 7.0% were postgraduate students. This implies that the undergraduate programme is the foundation and mainstay of tertiary education in the study area, attracting lower fees compared to postgraduate study, which is pursued by fewer respondents because it requires more money, dedication, and a broad knowledge of ICT applications to complete satisfactorily. Table 1 further revealed that 53.3% of the students were university students, 26.7% were in polytechnics, and 20.0% were Colleges of Education students. This implies that more students were studying agriculture in universities than in other higher institutions because university education is more popular, recognized, and professionalized than other forms of tertiary education in the study area. This result is consistent with the studies of Yang et al. (2015) and Chankseliani et al. (2021), which affirmed that universities play major roles not only nationally but also, increasingly, in regional economic development, in the delivery of life-long learning, and in the development of civic culture.
Also, as revealed in Table 1, 87.6% of the respondents were between the 100 and 500 levels, while few were between the 600 and 800 levels (postgraduate). This implies that the higher the students go in academics, the tougher the agricultural programmes become; fewer students survive to the higher levels, and they can only pursue higher studies in agriculture-based courses through the university. On the respondents' monthly stipend per semester, the results indicated that almost all the respondents earned small incomes as monthly stipends from parents, guardians, or self-labour to cope with their academic demands, and this could negatively affect their information-seeking behaviour and their utilization of e-learning facilities. This finding is consistent with Okoro (2021), who confirmed that inadequate funding is a major constraint on the use of e-learning facilities during the teaching and learning process.
Level of awareness of e-learning facilities available to agricultural students
As an indication of the level of awareness of available e-learning facilities, the respondents' awareness scores shown in Table 2 range from 1 to 22, with a mean of 10.9. To determine the level of awareness, the scores were grouped into two categories: low (0-11) and high (12-22). The analysis revealed that the majority (71.0%) of the respondents had a low level of awareness of available e-learning facilities, while 29.0% had a high level. The respondents' low level of awareness was attributable to the inadequate provision of the needed e-learning facilities on campuses by the various institutions, which is not conducive to making students aware of new technology or willing and interested to explore it in searching for and upgrading their knowledge (Suresh et al., 2018).
Knowledge level of the agricultural students on the use of e-learning
As shown in Table 3, the students' knowledge scores on the use of e-learning range from 1 to 14, with a mean score of 5.4. To determine the level, the respondents' scores were grouped into two categories: low (0-7) and high (8-14). More than half (53.0%) of the respondents had a low level of knowledge, while 47.0% indicated a high level of knowledge of the use of e-learning facilities. The implication is that many of the respondents did not have a good knowledge of the use of e-learning facilities, which might hinder their readiness for its usage and invariably lead to low readiness for e-learning. Eze et al. (2018) validated this finding by attributing some of the reasons for low e-learning adoption in higher institutions to the lack of the specialized and social aptitudes required for the execution of e-learning, and to teachers' and students' lack of knowledge and know-how in using e-learning platforms.
Level of usage of e-learning facilities by agricultural students
The respondents' usage scores in Table 4 have a mean of 8.8. To determine the level of usage of e-learning, the respondents' scores were grouped into two categories: low (0-9) and high (10-18). The results indicated that more than half (56.0%) of the respondents had a low level of usage of e-learning facilities, while 44.0% indicated a high level. This implies that the extent to which the students use e-learning facilities is still very low, which may be a result of their not being knowledgeable enough about its usage. In line with this, Almaiah et al. (2020) found that academic staff knowledge of learning technologies, student knowledge, and technical infrastructure were significant factors in facilitating the successful acceptance and usage of e-learning in universities.
Hypotheses testing
The result in Table 6 clearly reveals that there was no significant relationship between selected personal characteristics of the students, namely gender (t = 0.857, p>0.05), age (t = -0.007, p>0.05), category of student programme (t = 1.136, p>0.05), level of programme (t = -1.600, p>0.05), and monthly stipend per semester (t = 0.609, p>0.05), and their usage of e-learning. This result implies that the students' usage of e-learning does not depend on their personal characteristics but on their level of awareness and knowledge of the use of e-learning. In related studies, Fleming et al. (2017) and Bączek et al. (2021) reiterated that age, gender, and level of study are not significant factors impacting the use of e-learning among students.
Furthermore, the result in Table 6 shows that a significant relationship existed between students' levels of awareness (t = 14.672, p<0.05), knowledge (t = 31.903, p<0.05), and constraints (t = 2.555, p<0.05) and their level of e-learning usage. The predicted relationship has been established, meaning that for students to use e-learning facilities effectively, their levels of awareness and knowledge need to be increased, and certain constraints to usage, such as the poor electricity and power supply identified as a serious constraint in this study, need to be reduced (Oyediran et al., 2020). This result might stem from the fact that the students were not previously exposed to e-learning facilities due to inadequate awareness and knowledge (Olayemi et al., 2021). Similar results were obtained by Ngampornchai and Adams (2016) on students' acceptance of and readiness for e-learning in Northeastern Thailand and in Nigeria; they revealed that usage of e-learning was low among students as a result of low awareness and knowledge levels, the complexity of the technology, and inadequate e-learning infrastructure in the schools. Almaiah et al. (2020) also reported similar results: a lack of awareness and knowledge leads to students not taking responsibility for their e-learning utilization.
The regression analysis carried out in this study yielded an R-square of 0.957, indicating that about 95.7% of the variance in the level of usage of e-learning is explained by awareness, knowledge, and constraints. The significant variables are shown in Table 6. Since the p-value is less than 0.05, there is a significant relationship between the level of awareness, knowledge, and constraints and the utilization of e-learning.
IV. CONCLUSION
The study investigated e-learning utilization among agricultural students in public higher institutions in the Southwest of Nigeria. It can be concluded that there is low utilization of e-learning in the study area. The study found a significant association between the constraints faced by students in the utilization of e-learning, students' awareness of its use, and their knowledge of e-learning. The major constraints affecting respondents' utilization of e-learning include poor electricity and power supply, complexity of the technology, insufficient financial resources for technology integration, and inadequate e-learning infrastructure in their various higher institutions.
V. RECOMMENDATIONS
Based on the findings of the study, it is recommended that: 1. Tertiary institutions should concentrate on sustainable, internally generated power through the use of solar panels or independent power projects, for example by converting dams in schools that have them into power-generation centres. This could be supported by sensitizing and lobbying the public through the mass media on the importance of a stable electricity supply for concrete and quality research, so that government accedes to this demand and reliable policy formulations can be attained.
2. Higher Institutions should enhance institutional rewards for high quality teaching and research development among lecturers.
3. Adequate attention should be given to keeping e-learning facilities up to date and accessible in public higher institutions, to broaden the horizons of agricultural students with relevant information and exchange of ideas that will improve their knowledge, skills, and attitudes towards the proper use of e-learning facilities for effective learning.
4. Agricultural students should be encouraged to integrate e-learning into their daily academic and non-academic endeavours in order to internalize its use and become more acquainted with it.
5. E-learning facilities should be made more available, accessible, and affordable to students, with less bureaucracy, so that students enjoy a more viable, robust, reliable, efficient, effective, and cost-beneficial educational experience.
6. Tertiary Institutions should increase student opportunities to participate in the outreach and extension activities during the farm practical training year programmes.
7. Staff and students of the university need to be well informed about the content and provisions of available ICT policies so that all stakeholders are adequately aware of them.
8. There is a need for higher institutions to seek private-sector collaboration as an alternative means of funding ICT and e-learning educational initiatives, as their availability in schools is pertinent for effective teaching and learning.
"Agricultural and Food Sciences",
"Education",
"Computer Science"
] |
Protein Subcellular Localization with Gaussian Kernel Discriminant Analysis and Its Kernel Parameter Selection
Kernel discriminant analysis (KDA) is a dimension reduction and classification algorithm based on the nonlinear kernel trick, which can be newly applied to treat high-dimensional and complex biological data before classification processes such as protein subcellular localization. Kernel parameters have a great impact on the performance of the KDA model. Specifically, for KDA with the popular Gaussian kernel, selecting the scale parameter is still a challenging problem. Thus, this paper introduces the KDA method and proposes a new method for Gaussian kernel parameter selection, based on the fact that the differences between the reconstruction errors of edge normal samples and those of interior normal samples should be maximized for suitable kernel parameters. Experiments with standard data sets for protein subcellular localization show that the overall accuracy of protein classification prediction with KDA is much higher than that without KDA. Meanwhile, the kernel parameter of KDA has a great impact on its efficiency, and the proposed method can produce an optimum parameter, which makes the new algorithm not only perform as effectively as the traditional ones, but also reduce the computational time and thus improve efficiency.
Introduction
Some proteins can play their role only in one specific place in the cell, while others can play roles in several places [1]. Generally, a protein can function correctly only when it is localized to the correct subcellular location [2]. Therefore, protein subcellular localization prediction is an important research area of proteomics. It is helpful for predicting protein function as well as for understanding the interaction and regulation mechanisms of proteins [3]. Many methods have been used to determine protein subcellular location experimentally, such as green fluorescent protein labeling [4] and mass spectrometry [5]. However, these traditional experimental methods usually have many technical limitations, resulting in high costs in time and money. Thus, prediction of protein subcellular location based on machine learning has become a research focus in bioinformatics [6][7][8].
When we use machine learning methods to predict protein subcellular location, we must extract features from protein sequences. We obtain vectors after feature extraction and then use a classifier to process these vectors. However, these vectors are usually complex due to their high dimensionality and nonlinear properties. In order to improve the prediction accuracy of protein subcellular location, an appropriate nonlinear method for reducing data dimension should be used before classification. Kernel discriminant analysis (KDA) [9] is a nonlinear dimension reduction algorithm based on the kernel trick that has been used in many fields such as facial recognition and fingerprint identification. The KDA method not only reduces data dimensionality but also makes use of the classification information. This paper introduces the KDA method to predict protein subcellular location. The KDA algorithm first maps sample data to a high-dimensional feature space by a kernel function, and then executes linear discriminant analysis (LDA) in that feature space [10], which means that kernel parameter selection significantly affects the algorithm's performance.
There are some classical algorithms for selecting the parameter of a kernel function, such as the genetic algorithm and the grid-searching algorithm. These methods have high calculation precision but require large amounts of computation. In an effort to reduce computational complexity, Xiao et al. recently proposed a method based on the reconstruction errors of samples and used it to select the parameters of Gaussian kernel principal component analysis (KPCA) for novelty detection [11]. Their method was applied to toy data sets and UCI (University of California, Irvine) benchmark data sets to demonstrate its correctness. However, their innovation in the KPCA method aims at dimension reduction rather than discriminant analysis, which leads to unsatisfactory classification prediction accuracy. Thus, it is necessary to improve the efficiency of the method in [11], especially for complex data such as biological data.
In this paper, an improved algorithm for selecting the Gaussian kernel parameter in KDA is proposed to analyze complex protein data and predict subcellular location. By maximizing the difference between the reconstruction errors of edge normal samples and those of internal normal samples, the proposed method not only shows the same effect as the traditional grid-searching method, but also reduces the computational time and improves efficiency.
Results and Discussion
In this section, the proposed method (Section 3.4) and the grid-searching algorithm (Section 4.4) are both applied to predict protein subcellular localization. We use two standard data sets as the experimental data. The two feature expressions used are generated from the PSSM (position-specific scoring matrix) [12]: PsePSSM (pseudo-position-specific scoring matrix) [12] and PSSM-S (AAO + PSSM-AAO + PSSM-SAC + PSSM-SD = PSSM-S) [13]. Here AAO means consensus-sequence-based occurrence, PSSM-AAO means evolutionary-based occurrence or semi-occurrence of PSSM, PSSM-SD is the segmented distribution of PSSM, and PSSM-SAC is the segmented auto-covariance of PSSM. The k-nearest neighbors (KNN) algorithm is used as the classifier, with the Euclidean distance adopted as the distance between samples. The flow of the experiments is as follows.
• First, for each standard data set, we use the PsePSSM algorithm and the PSSM-S algorithm to extract features, respectively. In total we obtain four sample sets: GN-1000 (Gram-negative with PsePSSM, which contains 1000 features), GN-220 (Gram-negative with PSSM-S, which contains 220 features), GP-1000 (Gram-positive with PsePSSM, which contains 1000 features) and GP-220 (Gram-positive with PSSM-S, which contains 220 features).
• Second, we use the proposed method to select the optimum kernel parameter for the Gaussian KDA model and then use KDA to reduce the dimension of the sample sets. The same procedure is also carried out for the traditional grid-searching method to form a comparison with the proposed method.
• Finally, we use the KNN algorithm to classify the reduced-dimensional sample sets and use several criteria to evaluate the results and report the comparisons.
Some detailed information about the experiments is as follows. For every sample set, we choose the class that contains the most samples to form the training set [8]. Let S = [0.1, 0.2, 0.3, 0.4, 1, 2, 3, 4] be a candidate set of the Gaussian kernel parameter, proposed at random. When we use the KDA algorithm to reduce dimension, the number of retained eigenvectors must be less than or equal to C − 1 (C is the number of classes). Therefore, for sample sets GN-1000 and GN-220, the number of retained eigenvectors, denoted as d, can range from 1 to 7; for the sample sets GP-1000 and GP-220, d can be 1, 2, or 3. As far as the parameter u is concerned, good classification can be achieved when it is 5-8% of the average number of samples [14]. Besides, we demonstrate the robustness of the proposed method with the variation of u in Section 2.2, so here we simply pick a general value for u, say 8. To sum up, in the following experiments, when certain parameters need to be fixed, their default values are as follows: the value of d is 7 for sample sets GN-1000 and GN-220, and 3 for GP-1000 and GP-220; the value of u is 8; and the k value in the KNN classifier is 20.
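For reference, the experimental defaults described above can be collected in a small configuration sketch; the dictionary layout and names below are illustrative only (the class counts follow the four Gram-positive and eight Gram-negative locations given in Section 4.1).

```python
# Illustrative configuration for the experiments described above (names are ours)
CANDIDATE_S = [0.1, 0.2, 0.3, 0.4, 1, 2, 3, 4]  # candidate Gaussian kernel parameters

EXPERIMENTS = {
    # sample set: number of features, retained eigenvectors d, number of classes C
    "GN-1000": {"n_features": 1000, "d": 7, "classes": 8},
    "GN-220":  {"n_features": 220,  "d": 7, "classes": 8},
    "GP-1000": {"n_features": 1000, "d": 3, "classes": 4},
    "GP-220":  {"n_features": 220,  "d": 3, "classes": 4},
}
U_NEIGHBOR = 8  # u parameter used when computing the neighborhood radius
KNN_K = 20      # k in the KNN classifier
```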
The Accuracy Comparison between the Proposed Method and the Grid-Searching Method
In this section, the proposed method and the grid-searching method are first used in the prediction of protein subcellular localization with different d values. The experimental results are presented in Figure 1. For all four sample sets, Figure 1 suggests that when we use the KDA algorithm to reduce dimension, the larger the number of retained eigenvectors, the higher the accuracy. The overall accuracy of the proposed method is always the same as that of the grid-searching method, regardless of the value of d. The proposed method is thus effective for selecting the optimal Gaussian kernel parameter.
Then, in the analyses and experiments, we find that a key advantage of the proposed method is its low runtime, as demonstrated in Table 1 and Figure 2.
In Table 1, t1 and t2 are the runtimes of the proposed method and the grid-searching method, respectively. The overall accuracy and the ratio of t1 to t2 are presented in both Table 1 and Figure 2, from which we can see that, for each sample set, the accuracy of the proposed method is always the same as that of the grid-searching method; meanwhile, the runtime of the former is about 70-80% of that of the latter, indicating that the proposed method has a higher efficiency than the grid-searching method.
The Comparison between Methods with and without KDA
In this experiment, we compare the overall accuracies between the cases of using KDA algorithm or not, with k values of the KNN classifier varying from 1 to 30. The experimental results are shown in Figure 3.
For each sample set, Figure 3 shows that the accuracy with the KDA algorithm used to reduce dimension is higher than that without it. However, the kernel parameter has a great impact on the efficiency of the KDA algorithm, and the proposed method can be used to select the optimum parameter that makes KDA perform well. Therefore, accuracy can be improved by using the proposed method to predict protein subcellular localization.
The Robustness of the Proposed Method
In the proposed method, the value of u has an impact on the radius of the neighborhood, and therefore it can affect the number of selected internal and edge samples. Figure 4 shows the experimental results when the value of u ranges from 6 to 10, giving the overall accuracies of the proposed method and the grid-searching method. It is easily seen from Figure 4 that the accuracy remains unchanged for different u values.
The number of selected internal and edge samples thus has little effect on the performance of the proposed method. Therefore, the method proposed in this paper has good robustness.
Evaluating the Proposed Method with Some Regular Evaluation Criteria
In this subsection, we compute the values of some regular evaluation criteria with the proposed method for the two standard data sets, as shown in Tables 2 and 3, respectively. In Table 3, "-" denotes an infinite value, corresponding to the cases when the denominator of MCC is 0. Tables 2 and 3 show that the values of the evaluation criteria are close to 1 for the proposed method; hence, selecting the kernel parameter with the proposed method benefits protein subcellular localization.
Protein Subcellular Localization Prediction Based on KDA
To improve the localization prediction accuracy, it is necessary to reduce the dimension of high-dimensional protein data before subcellular classification. The flow of protein subcellular localization prediction is presented in Figure 5. First, for a standard data set, features of protein sequences such as PSSM-based expressions are extracted to form the sample sets; the specific feature expressions used in this paper are discussed in Section 4.2. Second, the kernel parameter is selected within an interval, based on the sample sets, to reach its optimal value for the KDA model. Third, with this optimal value, KDA is used to reduce the dimension of the sample sets. Lastly, the low-dimensional data is processed by a classifier to realize the classification and the final prediction. In the whole process of Figure 5, dimension reduction with KDA is very important; within it, kernel selection is a key step and constitutes the research focus of this paper. Kernel selection includes the choice of the type of kernel function and the choice of the kernel parameters. In this paper, the Gaussian kernel function is adopted for KDA because of its good properties, learning performance, and generality. The emphasis of this study is therefore on deciding the scale parameter of the Gaussian kernel, which plays an important role in the process of dimensionality reduction and has a great influence on the prediction results. We put forward a method for selecting the optimum Gaussian kernel parameter, taking as a starting point the reconstruction error idea in [15].
Algorithm Principle
The kernel method constructs a subspace in the feature space via the kernel trick, which makes normal samples locate in or near this subspace, while novel samples lie far from it. The reconstruction error is the distance of a sample in the feature space from the subspace [11], so the reconstruction errors of normal samples should differ from those of novel samples. In this paper, we use Gaussian KDA as the dimension reduction algorithm. Since the values of the reconstruction errors are influenced by the Gaussian kernel parameter, the reconstruction errors of normal samples should be differentiated from those of novel samples by suitable parameters [11].
In the input space, we usually call the samples on the boundary edge samples and those within the boundary internal samples [16,17]. The edge samples are much closer to novel samples than the internal samples are, while the internal samples are much closer to normal states than the edge samples are [11]. We use the internal samples as the normal samples and the edge samples as the novel samples, since there are no real novel samples in the data sets. Therefore, the principle is that the optimal kernel parameter makes the reconstruction errors show a reasonable difference between the internal samples and the edge samples.
Kernel Discriminant Analysis (KDA) and Its Reconstruction Error
KDA is obtained by applying the kernel trick to linear discriminant analysis (LDA). LDA is a linear dimensionality reduction algorithm combined with classification discrimination, which aims to find a direction that maximizes the between-class scatter while minimizing the within-class scatter [18]. In order to extend the LDA theory to nonlinear data, Mika et al. proposed the KDA algorithm, which makes nonlinear data linearly separable in a much higher-dimensional feature space [9]. The principle of the KDA algorithm is as follows.
Suppose the N samples in X can be divided into C classes and the i-th class $X_i$ contains $N_i$ samples. The between-class scatter matrix $S_b^{\phi}$ and the within-class scatter matrix $S_w^{\phi}$ of X in the feature space are defined in the following equations, respectively:

$$S_b^{\phi} = \sum_{i=1}^{C} N_i \left(m_i^{\phi} - m^{\phi}\right)\left(m_i^{\phi} - m^{\phi}\right)^T \quad (1)$$

$$S_w^{\phi} = \sum_{i=1}^{C} \sum_{x \in X_i} \left(\phi(x) - m_i^{\phi}\right)\left(\phi(x) - m_i^{\phi}\right)^T \quad (2)$$

where $m_i^{\phi} = \frac{1}{N_i}\sum_{x \in X_i}\phi(x)$ is the mean of the i-th class and $m^{\phi} = \frac{1}{N}\sum_{i=1}^{N}\phi(x_i)$ is the total mean of X. To find the optimal linear discriminant, we need to maximize J(W) as follows:

$$\max J(W) = \frac{\left|W^T S_b^{\phi} W\right|}{\left|W^T S_w^{\phi} W\right|} \quad (3)$$

where $W = [w_1, w_2, \cdots, w_d]$ is a projection matrix, and $w_k\ (k = 1, 2, \cdots, d)$ is a column vector. Through certain algebra, it can be deduced that W is made up of the eigenvectors corresponding to the top d eigenvalues of $(S_w^{\phi})^{-1} S_b^{\phi}$. Also, the projection vector $w_k$ can be represented by a linear combination of the samples in the feature space:

$$w_k = \sum_{j=1}^{N} a_{kj}\,\phi(x_j) \quad (4)$$

where $a_{kj}$ is a real coefficient. The projection of a sample x onto $w_k$ is given by:

$$w_k^T \phi(x) = \sum_{j=1}^{N} a_{kj}\, K(x_j, x) \quad (5)$$

Let $a = [a_1, a_2, \cdots, a_d]^T$ be the coefficient matrix, where $a_k = [a_{k1}, a_{k2}, \cdots, a_{kN}]^T$ is the coefficient vector. Combining Equations (1)-(5), we can obtain the linear discriminant by maximizing the function J(a):

$$\max J(a) = \frac{a^T M a}{a^T L a} \quad (6)$$

where $M = \sum_{i=1}^{C} N_i (M_i - M_*)(M_i - M_*)^T$ with $(M_i)_j = \frac{1}{N_i}\sum_{x \in X_i} K(x_j, x)$ and $(M_*)_j = \frac{1}{N}\sum_{k=1}^{N} K(x_j, x_k)$, and $L = \sum_{i=1}^{C} K_i (I - \mathbf{1}_{N_i}) K_i^T$, in which $K_i$ is the $N \times N_i$ kernel matrix of the i-th class, I is the $N_i \times N_i$ identity matrix, and $\mathbf{1}_{N_i}$ is the $N_i \times N_i$ matrix whose elements are all $1/N_i$ [9]. Then, the projection matrix a is made up of the eigenvectors corresponding to the top d eigenvalues of $L^{-1} M$.
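As a minimal illustration of Equations (1)-(6), the sketch below implements the multi-class Gaussian KDA training step from scratch in NumPy/SciPy. The function names are ours, not from the paper's code, and a small ridge term is added to L purely for numerical stability (an implementation choice, not part of the derivation).

```python
import numpy as np
from scipy.linalg import eigh  # generalized symmetric eigenproblem M a = lambda L a

def gaussian_kernel(X, Y, sigma):
    """Gaussian kernel matrix K[i, j] = exp(-||x_i - y_j||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kda_fit(X, y, sigma, d, ridge=1e-8):
    """Return the N x d coefficient matrix `a` maximizing a^T M a / a^T L a."""
    N = X.shape[0]
    K = gaussian_kernel(X, X, sigma)          # N x N kernel matrix, Eq. (7)
    M_star = K.mean(axis=1)                   # (M_*)_j averaged over all samples
    M = np.zeros((N, N))
    L = np.zeros((N, N))
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        Ni = len(idx)
        K_i = K[:, idx]                       # N x N_i block for class c
        M_i = K_i.mean(axis=1)                # (M_i)_j class mean in kernel space
        diff = (M_i - M_star)[:, None]
        M += Ni * diff @ diff.T               # between-class term of Eq. (6)
        center = np.eye(Ni) - np.full((Ni, Ni), 1.0 / Ni)
        L += K_i @ center @ K_i.T             # within-class term of Eq. (6)
    L += ridge * np.eye(N)                    # numerical stabilizer (our choice)
    vals, vecs = eigh(M, L)                   # eigenvalues in ascending order
    return vecs[:, np.argsort(vals)[::-1][:d]]  # top-d eigenvectors as columns

def kda_transform(a, X_train, X_new, sigma):
    """Project new samples per Eq. (5): t_k(x) = sum_j a_{kj} K(x_j, x)."""
    K_new = gaussian_kernel(X_new, X_train, sigma)  # n_new x N
    return K_new @ a                                 # n_new x d
```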
According to the KDA algorithm principle in (3) or (6), besides the Gaussian kernel parameter s, the number of retained eigenvectors d also affects the algorithm performance. In this paper, the proposed method is mainly used to screen an optimum s under a predetermined d value.
The Gaussian kernel function used in this paper is defined as:

$$K(x, y) = \exp\left(-\frac{\|x - y\|^2}{2\sigma^2}\right) \quad (7)$$

where σ is the scale parameter, which is generally estimated by s. Note that $\|\phi(x)\|^2 = K(x, x) = 1$. The kernel-based reconstruction error is then defined as:

$$RE(x) = \|\phi(x)\|^2 - \|t(x)\|^2 = 1 - \|t(x)\|^2 \quad (8)$$

where t(x) is the vector obtained by projecting φ(x) onto the projection matrix a.
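Under the reconstruction of Equation (8) given above, the reconstruction error can be sketched as follows, reusing the hypothetical kda_transform helper from the previous block.

```python
import numpy as np

def reconstruction_error(a, X_train, X_eval, sigma):
    # Eq. (8): RE(x) = ||phi(x)||^2 - ||t(x)||^2 = 1 - ||t(x)||^2 for the
    # Gaussian kernel, with t(x) the projection onto the KDA subspace
    t = kda_transform(a, X_train, X_eval, sigma)
    return 1.0 - (t ** 2).sum(axis=1)  # one error per evaluated sample
```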
The Proposed Method for Selecting the Optimum Gaussian Kernel Parameter
The method of kernel parameter selection relies on the reconstruction errors of the internal samples and the edge samples. Therefore, we first introduce a method to select the edge samples and the internal samples, and then propose the method for selecting the Gaussian kernel parameter.
The Method for Selecting Internal and Edge Samples
Li and Maguire presented a border-edge pattern selection method (BEPS) to select edge samples based on local geometric information [16]. Xiao et al. [11] modified the BEPS algorithm so that it can select both edge and internal samples. However, their algorithm runs the risk of making all samples in the training set edge samples. For example, when all samples are distributed on a spherical surface in three-dimensional space, every sample in the data set will be selected as an edge sample, since its neighbors are all located on one side of its tangent plane. In order to solve this problem, this paper combines the ideas in [19,20] to select the internal and edge samples, respectively, in a way that does not depend on local geometric information. The main principle is that an edge sample is usually surrounded by samples belonging to other classes, while an internal sample is usually surrounded by samples belonging to its own class. Further, edge samples are usually far from the centroid of their class, while internal samples are usually close to the centroid. So, a sample is selected as an edge sample if it is far from the centroid of its class and there are samples around it that belong to other classes; otherwise it is selected as an internal sample.
Specifically, suppose the i-th class $X_i = \{x_1, x_2, \cdots, x_{N_i}\}$ in the sample set X is picked out as the training set. Denote $c_i$ as the centroid of this class:

$$c_i = \frac{1}{N_i}\sum_{j=1}^{N_i} x_j \quad (9)$$

We use the median value m of the distances from all samples in a class to its centroid to measure whether a sample is far from the centroid of the class. A sample is considered to be far from the centroid of its class if its distance to the centroid is greater than the median value; otherwise, the sample is considered to be close to the centroid.
Denote $dist(x_i, x_j)$ as the distance between any two samples $x_i$ and $x_j$, and $N_\varepsilon(x)$ as the ε-neighborhood of x:

$$N_\varepsilon(x) = \{\, y \in X : dist(x, y) \le \varepsilon,\ y \ne x \,\} \quad (10)$$

The value of the neighborhood radius ε is given as follows. Let u be a given number that satisfies $0 < u < N_i$. $Density_u(X_i)$ is the mean radius of neighborhood of $X_i$ for the given number u:

$$Density_u(X_i) = \frac{1}{N_i}\sum_{j=1}^{N_i} dist_u(x_j) \quad (11)$$

where $dist_u(x_j)$ is the distance from $x_j$ to its u-th nearest neighbor. So, $Density_u(X_i)$ is used as the value of ε for the training set $X_i$. The flow for the selection of the internal and edge samples (Algorithm 1) is shown in Table 4.

Input: $X = \{X_1, X_2, \cdots, X_C\}$, the training set $X_i = \{x_1, x_2, \cdots, x_{N_i}\}$ $(1 \le i \le C)$.
1. Calculate the radius of neighborhood ε using Equation (11).
2. Calculate the centroid $c_i$ of the i-th class according to Equation (9).
3. Calculate the distances $dist_j$ $(j = 1, 2, \cdots, N_i)$ from all samples in the training set to $c_i$, respectively, and their median value m.
4. For each training sample $x_j$ of the set $X_i$:
• Calculate $N_\varepsilon(x_j)$ according to Equation (10).
• If $dist_j > m$ and there are samples in $N_\varepsilon(x_j)$ belonging to other classes, $x_j$ is selected as an edge sample.
• If $dist_j < m$ and no sample in $N_\varepsilon(x_j)$ belongs to other classes, $x_j$ is selected as an internal sample.
Output: the selected internal sample set $\Omega_{in}$, the selected edge sample set $\Omega_{ed}$.
In Table 4, a sample x is considered an edge sample when the distance from x to the centroid is larger than the median m and there are samples in $N_\varepsilon(x)$ belonging to other classes; x is considered an internal sample when the distance from x to the centroid is less than m and all samples in $N_\varepsilon(x)$ belong to its own class.
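A possible NumPy rendering of Algorithm 1 (Table 4) is sketched below; the function and variable names are ours, and samples that satisfy neither condition are simply left unassigned, as in the table.

```python
import numpy as np
from scipy.spatial.distance import cdist

def select_internal_edge(X, y, target_class, u=8):
    """Sketch of Table 4: split one training class into internal and edge samples."""
    Xi = X[y == target_class]
    Ni = len(Xi)
    D_all = cdist(Xi, X)                    # distances from class samples to all samples
    D_ii = cdist(Xi, Xi)                    # within-class distances
    # Eq. (11): eps = mean distance to the u-th nearest same-class neighbor
    dist_u = np.sort(D_ii, axis=1)[:, u]    # column 0 is the zero self-distance
    eps = dist_u.mean()
    # Eq. (9) centroid and the median rule
    c = Xi.mean(axis=0)
    dist_c = np.linalg.norm(Xi - c, axis=1)
    m = np.median(dist_c)
    internal, edge = [], []
    for j in range(Ni):
        # Eq. (10): neighbors within eps, excluding the sample itself
        nbr = np.where((D_all[j] <= eps) & (D_all[j] > 0))[0]
        other = np.any(y[nbr] != target_class)
        if dist_c[j] > m and other:
            edge.append(j)
        elif dist_c[j] < m and not other:
            internal.append(j)
    return Xi[np.array(internal, dtype=int)], Xi[np.array(edge, dtype=int)]
```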
The Proposed Method
In order to select the optimum kernel parameter, it is necessary to propose a criterion that distinguishes the reconstruction errors of the edge samples from those of the internal samples. A suitable parameter not only maximizes the difference between the reconstruction errors of the internal samples and those of the edge samples, but also minimizes the variance (or standard deviation) of the reconstruction errors of the internal samples [11]. According to this rule, an improved objective function is proposed in this paper, and the optimal Gaussian kernel parameter s is selected by maximizing this objective function:

$$f(s) = \frac{\left\|RE(\Omega_{ed})\right\|_{\infty} - \left\|RE(\Omega_{in})\right\|_{\infty}}{std\left(RE(\Omega_{in})\right)} \quad (12)$$

where $\|\cdot\|_{\infty}$ is the infinite norm, which computes the maximum absolute component of a vector, and std(·) is the standard deviation function. Note that in the objective function f(s), our key improvement is to use the infinite norm to compute the size of the reconstruction error vector, since it leads to a higher accuracy than many other measurements, which has been verified by a series of our experiments. The reason is probably that the maximum component is more reasonable for evaluating the size of a reconstruction error vector than alternatives such as the 1-norm, the p-norm (1 < p < +∞), and the minimum component used in [11]. According to (8), when the number of retained eigenvectors is determined, we can select the optimum parameter s from a candidate set using the proposed method. The optimum parameter ensures that the Gaussian KDA algorithm performs well in dimensionality reduction, which improves the accuracy of protein subcellular location prediction. The proposed method for selecting the Gaussian kernel parameter (Algorithm 2) is presented in Table 5.

Input: a reasonable candidate set $S = \{s_1, s_2, \cdots, s_m\}$ for the Gaussian kernel parameter; $X = \{X_1, X_2, \cdots, X_C\}$; the training set $X_i = \{x_1, x_2, \cdots, x_{N_i}\}$ $(1 \le i \le C)$; the number of retained eigenvectors d.
1. Get the internal sample set $\Omega_{in}$ and the edge sample set $\Omega_{ed}$ from the training set $X_i$ using Algorithm 1.
2. For each parameter $s_i \in S$, $i = 1, 2, \cdots, m$:
• Calculate the kernel matrix K using Equation (7).
• Reduce the dimension of K using the Gaussian KDA algorithm.
• Calculate $RE(\Omega_{ed})$ and $RE(\Omega_{in})$ using Equation (8).
• Calculate the value of the objective function $f(s_i)$ using Equation (12).
3. Select the optimum parameter $s = \operatorname{argmax}_{s_i \in S} f(s_i)$.
Output: the optimum Gaussian kernel parameter s.
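Putting Equations (8) and (12) together, Algorithm 2 (Table 5) can be sketched as below, reusing the hypothetical kda_fit, reconstruction_error, and select_internal_edge helpers from the earlier blocks; note that objective_f follows our reconstruction of Equation (12).

```python
import numpy as np

def objective_f(re_edge, re_internal):
    # Eq. (12) as reconstructed above: infinite-norm gap between edge and
    # internal reconstruction errors, penalized by the internal spread
    gap = np.abs(re_edge).max() - np.abs(re_internal).max()
    return gap / np.std(re_internal)

def select_sigma(X, y, target_class, candidates, d, u=8):
    """Sketch of Table 5: pick the Gaussian kernel parameter maximizing f(s)."""
    omega_in, omega_ed = select_internal_edge(X, y, target_class, u=u)
    best_s, best_f = None, -np.inf
    for s in candidates:
        a = kda_fit(X, y, sigma=s, d=d)                  # KDA model at this parameter
        re_in = reconstruction_error(a, X, omega_in, s)  # RE over internal samples
        re_ed = reconstruction_error(a, X, omega_ed, s)  # RE over edge samples
        f = objective_f(re_ed, re_in)
        if f > best_f:
            best_s, best_f = s, f
    return best_s
```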
To close this section, we summarize once more the position of the proposed method in protein subcellular localization. First, two kinds of regularization forms of the PSSM are used to extract features from protein amino acid sequences. Then, the KDA method is applied to the extracted features for dimension reduction and discriminant analysis, according to the KDA algorithm principle in Section 3.3 with formulas (1)-(6). During the KDA procedure, the novelty of our work is a new method for selecting the Gaussian kernel parameter, summarized in Table 5. Finally, we choose the k-nearest neighbors (KNN) algorithm as the classifier to classify the dimension-reduced data after KDA.
Materials
In this section, we introduce the processes in Figure 5 other than the KDA model and its parameter selection, which are the necessary materials for the whole experiment.
Standard Data Sets
In this paper, we use two standard datasets that have been widely used in the literature for Gram-positive and Gram-negative subcellular localizations [13], whose protein sequences all come from the Swiss-Prot database.
For the Gram-positive bacteria, the standard data set found in the literature [13,14,21] is publicly available at http://www.csbio.sjtu.edu.cn/bioinf/Gpos-multi/Data.htm. There are 523 locative protein sequences in the data set, distributed across four different subcellular locations. The number of proteins in each location is given in Table 6. For the Gram-negative bacteria, the standard data set of subcellular localizations is presented in the literature [13,22] and can be downloaded freely from http://www.csbio.sjtu.edu.cn/bioinf/Gneg-multi/Data.htm. The data set contains 1456 locative protein sequences located in eight different subcellular locations. The number of proteins in each location is shown in Table 7.
Feature Expressions and Sample Sets
In the prediction of protein subcellular localization with machine learning methods, feature expressions are important information extracted from protein sequences by proper mathematical algorithms. There are many efficient algorithms for extracting features of protein sequences, two of which, PsePSSM [12] and PSSM-S [13], are used in this paper. The two methods rely on the position-specific scoring matrix (PSSM), which is obtained by using the PSI-BLAST algorithm to search the Swiss-Prot database with an E-value of 0.01. The PSSM is defined as follows [12]:

$$PSSM = \begin{bmatrix} M_{1\to1} & M_{1\to2} & \cdots & M_{1\to20} \\ M_{2\to1} & M_{2\to2} & \cdots & M_{2\to20} \\ \vdots & \vdots & \ddots & \vdots \\ M_{L\to1} & M_{L\to2} & \cdots & M_{L\to20} \end{bmatrix} \quad (13)$$

where $M_{i\to j}$ represents the score created in the case when the i-th amino acid residue of the protein sequence is transformed to the amino acid type j during the evolutionary process [12]. Note that multiple alignment methods are usually used to calculate the PSSM, whose chief drawback is being time-consuming. The reasons why we select the PSSM instead of simple multiple alignment in this paper to form the total normalized information content are as follows. First, since our focus is to demonstrate the effectiveness of the dimension reduction algorithm, we need to construct high-dimensional feature expressions such as PsePSSM and PSSM-S, whose dimensions are as high as 1000 and 220, respectively. Second, the PSSM has many advantages, such as those described in [23]; as far as information features are concerned, the PSSM has produced the strongest discriminative features between fold members of protein sequences. In spite of the time-consuming nature of constructing a PSSM for a new sequence, the extracted feature vectors are so informative that they are worth the cost of their preparation [23]. Besides, for a new protein sequence, we only need to construct the PSSM once; it can be reused in the future for producing new normalization forms such as PsePSSM and PSSM-S.
Pseudo Position-Specific Scoring Matrix (PsePSSM)
Let P be a protein sample; its PsePSSM is defined as follows [12]:

$$P_{PsePSSM} = \left[\overline{M}_1, \cdots, \overline{M}_{20}, G_1^1, \cdots, G_{20}^1, \cdots, G_1^{\xi}, \cdots, G_{20}^{\xi}\right]^T \quad (14)$$

with

$$\overline{M}_j = \frac{1}{L}\sum_{i=1}^{L} M_{i\to j} \quad (j = 1, 2, \cdots, 20) \quad (15)$$

$$G_j^{\xi} = \frac{1}{L-\xi}\sum_{i=1}^{L-\xi}\left[M_{i\to j} - M_{(i+\xi)\to j}\right]^2 \quad (j = 1, 2, \cdots, 20;\ \xi < L) \quad (16)$$

where L is the length of P and $G_j^{\xi}$ is the correlation factor obtained by coupling the ξ-most contiguous scores [22]. According to the definition of PsePSSM, a protein sequence can be represented by a 1000-dimensional vector (20 + 20 × 49 components when ξ ranges from 1 to 49).
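Assuming the PSSM scores have already been normalized into an L × 20 array, Equations (14)-(16) can be computed as in the sketch below; the zero-padding for sequences shorter than the lag is our assumption, not specified in the text.

```python
import numpy as np

def pse_pssm(pssm, xi_max=49):
    """Compute the PsePSSM vector of Eqs. (14)-(16) from an L x 20 (normalized)
    PSSM array: 20 column averages plus 20 correlation factors per lag xi."""
    L = pssm.shape[0]
    feats = [pssm.mean(axis=0)]                  # Eq. (15): 20 average scores
    for xi in range(1, xi_max + 1):
        if xi >= L:
            feats.append(np.zeros(20))           # padding for short sequences (assumption)
            continue
        diff = pssm[:L - xi] - pssm[xi:]         # M_{i->j} - M_{(i+xi)->j}
        feats.append((diff ** 2).mean(axis=0))   # Eq. (16)
    return np.concatenate(feats)                 # 20 * (1 + xi_max) = 1000 features

# Example: a random stand-in for a length-200 protein's normalized PSSM
vec = pse_pssm(np.random.rand(200, 20))
assert vec.shape == (1000,)
```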
Sample Sets
For the two benchmark data sets, PsePSSM and PSSM-S are used to extract features, respectively. Finally, we obtain the four experimental sample sets GN-1000, GN-220, GP-1000 and GP-220, shown in Table 8.
Evaluation Criteria
To evaluate the performance of the proposed method, we use Jackknife cross-validation, which has been widely used in predicting protein subcellular localization [13]. The Jackknife test is the most objective and rigorous cross-validation procedure for examining the accuracy of a predictor and has been used increasingly by investigators to test the power of various predictors [24,25]. In the Jackknife test (also known as leave-one-out cross-validation), every protein is removed one-by-one from the training dataset, and the predictor is trained on the remaining proteins; the isolated protein is then tested by the trained predictor [26]. Let X be a sample set with N samples: each sample is used in turn as the test data, and the remaining N − 1 samples are used to construct the training set [27]. In addition, we use the following criteria to assess the experimental results [12]:

$$Sen = \frac{TP}{TP + FN} \quad (17)$$

$$Spe = \frac{TN}{TN + FP} \quad (18)$$

$$Q = \frac{TP + TN}{TP + TN + FP + FN} \quad (19)$$

$$MCC = \frac{TP \times TN - FP \times FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}} \quad (20)$$

where TP is the number of true positives, TN the number of true negatives, FP the number of false positives, and FN the number of false negatives [12]. The value of MCC (Matthews correlation coefficient) varies between −1 and 1, indicating when the classification effect goes from bad to good. The values of specificity (Spe), sensitivity (Sen), and overall accuracy (Q) all vary between 0 and 1; the classification effect is better when their values are closer to 1 and worse when they are closer to 0 [13].
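The Jackknife test and the per-class criteria above can be sketched as follows; this is an illustrative implementation with names of our choosing, and the infinite-MCC convention of Table 3 is reproduced in the comment.

```python
import numpy as np
from collections import Counter

def jackknife_knn(X, y, k=20):
    """Leave-one-out cross-validation with a KNN classifier (Euclidean distance)."""
    N = len(y)
    preds = np.empty(N, dtype=y.dtype)
    for i in range(N):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                                 # exclude the held-out sample
        nbr = np.argsort(d)[:k]
        preds[i] = Counter(y[nbr]).most_common(1)[0][0]
    return preds

def per_class_metrics(y_true, y_pred, cls):
    """Sen, Spe, overall accuracy Q, and MCC (Eqs. 17-20) for one location class."""
    tp = np.sum((y_true == cls) & (y_pred == cls))
    tn = np.sum((y_true != cls) & (y_pred != cls))
    fp = np.sum((y_true != cls) & (y_pred == cls))
    fn = np.sum((y_true == cls) & (y_pred != cls))
    sen = tp / (tp + fn) if tp + fn else 0.0
    spe = tn / (tn + fp) if tn + fp else 0.0
    q = (tp + tn) / (tp + tn + fp + fn)
    denom = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else float("inf")  # "-" cases in Table 3
    return sen, spe, q, mcc
```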
The Grid Searching Method Used as Contrast
In this section, we introduce a standard algorithm for searching s, the grid-searching algorithm, which is used as a contrast to the proposed algorithm of Section 3.4.
The grid-searching method is usually used to select the optimum parameter; for the candidate parameter set S, its steps are as follows [28].
• Compute the kernel matrix K for each parameter $s_i \in S$, $i = 1, 2, \cdots, m$.
• Use the Gaussian KDA to reduce the dimension of K.
• Use the KNN algorithm to classify the reduced-dimensional samples.
• Calculate the classification accuracy.
• Repeat the above steps until all parameters in S have been traversed. The parameter corresponding to the highest classification accuracy is selected as the optimum parameter.
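For comparison, the grid-searching baseline can be sketched as below, reusing the hypothetical kda_fit, kda_transform, and jackknife_knn helpers from the earlier blocks.

```python
import numpy as np

def grid_search_sigma(X, y, candidates, d, k=20):
    """Sketch of the grid-searching baseline: try every candidate parameter and
    keep the one with the best jackknife KNN accuracy after KDA reduction."""
    best_s, best_acc = None, -1.0
    for s in candidates:
        a = kda_fit(X, y, sigma=s, d=d)        # Gaussian KDA at parameter s
        Z = kda_transform(a, X, X, sigma=s)    # reduced-dimensional samples
        preds = jackknife_knn(Z, y, k=k)       # KNN classification
        acc = float(np.mean(preds == y))
        if acc > best_acc:
            best_s, best_acc = s, acc
    return best_s, best_acc
```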
Conclusions
Biological data are usually high-dimensional. As a result, it is necessary to reduce their dimension to improve the accuracy of protein subcellular localization prediction. Kernel discriminant analysis (KDA) based on the Gaussian kernel function is a suitable algorithm for dimension reduction in such applications. As is well known, the selection of the kernel parameter affects the performance of KDA, and thus it is important to choose a proper parameter that makes this algorithm perform well. To handle this problem, we propose a method for optimum kernel parameter selection, which relies on the reconstruction error [15]. First, we select the edge and internal samples of the training set. Second, we compute the reconstruction errors of the selected samples. Finally, we select the optimum kernel parameter that maximizes the objective function.
The proposed method is applied to the prediction of protein subcellular locations for Gram-negative bacteria and Gram-positive bacteria. Compared with the grid-searching method, the proposed method gives higher efficiency and performance.
Since the performance of the proposed method largely depends on the selection of the internal and edge samples, future studies may pay more attention to selecting more representative internal and edge samples from biological data sets to improve the prediction accuracy of protein subcellular localization. Besides this, it is also meaningful to research how to further extend the proposed method to make it suitable for selecting the parameters of other kernels.
"Biology",
"Computer Science"
] |
TFOFinder: Python program for identifying purine-only double-stranded stretches in the predicted secondary structure(s) of RNA targets
Nucleic acid probes are valuable tools in biology and chemistry and are indispensable for PCR amplification of DNA, RNA quantification and visualization, and downregulation of gene expression. Recently, triplex-forming oligonucleotides (TFO) have received increased attention due to their improved selectivity and sensitivity in recognizing purine-rich double-stranded RNA regions at physiological pH by incorporating backbone and base modifications. For example, triplex-forming peptide nucleic acid (PNA) oligomers have been used for imaging a structured RNA in cells and inhibiting influenza A replication. Although a handful of programs are available to identify triplex target sites (TTS) in DNA, none are available that find such regions in structured RNAs. Here, we describe TFOFinder, a Python program that facilitates the identification of intramolecular purine-only RNA duplexes that are amenable to forming parallel triple helices (pyrimidine/purine/pyrimidine) and the design of the corresponding TFO(s). We performed genome- and transcriptome-wide analyses of TTS in Drosophila melanogaster and found that only 0.3% (123) of total unique transcripts (35,642) show the potential of forming 12-purine-long triplex-forming sites that contain at least one guanine. Using minimization algorithms, we predicted the secondary structure(s) of these transcripts, and using TFOFinder, we found that 97 (79%) of the identified 123 transcripts are predicted to fold to form at least one TTS for parallel triple helix formation. The number of transcripts with potential purine TTS increases when the strict search conditions are relaxed by decreasing the length of the probe or by allowing up to two pyrimidine inversions or a 1-nucleotide bulge in the target site. These results are encouraging for the use of modified triplex-forming probes for live imaging of endogenous structured RNA targets, such as pre-miRNAs, and inhibition of target-specific translation and viral replication.
In addition, we are including one PDF copy and one DOCX source file of the manuscript. We would also like to thank the reviewers for their time and valuable assessment.
Sincerely, Atara Neugroschl and Irina Catrina
>> The RNAMotif and TFOFinder programs are not redundant; RNAmotif can be used to identify potential RNA targets in a large dataset, which can then be analyzed using TFOFinder. However, while RNAmotif can be used to identify any regions that may contain a purine duplex in RNA targets of interest, it does not take into consideration the target structure.
This was briefly discussed in the manuscript, and at Reviewer #1's recommendation we expanded this section to include more information in the paragraph first introducing TFOFinder, as shown below:
>> Line clean file 94 & tracking file 77: Yes, "frame" here means backbone, and we did change it, as the reviewer implied it will make it clearer. <<
• Line 81, "in a greater mismatch discrimination" should be "with a greater mismatch discrimination."
>> Line clean file 98 & tracking file 81: This was corrected as indicated. <<
• Line 100, "duple-formation" should be "duplex-formation."
>> Line clean file 117 & tracking file 100: This was corrected as indicated. <<
• Line 134, FRET should be defined.
>> Line clean file 144 & tracking file 127: This was defined in the text only, but it was not included in the list of abbreviations, as it only appears once: "FRET (Fluorescence/Förster Resonance Energy Transfer)". <<
• Line 133, this paragraph seems to be out of place. The authors provided introductory material and then introduced their new tool. This paragraph of introductory material sits between two paragraphs that discuss the tool. Is there a better location in the introduction to move this paragraph?
We added a reference for this number, which we defined in a previous publication (reference [47]), and we included the following information immediately following the sentence first mentioning "ss-count": "The ss-count fraction indicates the extent to which a sequence is predicted to be single-stranded in the predicted MFE and/or SO structures. The larger the value of the ss-count fraction, the more likely it is that the sequence will have a single-stranded character, where 1 = fully single-stranded and 0 = fully double-stranded. The ss-count fraction was calculated by dividing the sum of the ss-count numbers of the individual bases in the TTS by the product of the probe length and the number of total structures (MFE and SO structures) in the input file. The ss-count number represents the number of structures, of the total structures, in which a base is predicted to be single-stranded, and the ss-count file is one of the output files obtained when predicting RNA structure using mfold." <<
• Line 309, "one highlighted in the red box for the of the 12th 310 SO structure" does not make sense and needs editing.
At the reviewer's request, we checked the above reference and found that it was imported in EndNote using the citation file provided by the Journal, and we believe it is complete. If the reviewer has a specific observation regarding this reference, we would be happy to check it again. <<
Reviewer #2: General Comment
The authors have presented a potentially valuable and innovative tool for designing TFOs targeting RNA in the model species D. melanogaster (genome and transcriptome) and the vRNA8 of influenza A. They have searched for double-stranded fragments of a user-defined length (4-30 nt) composed of consecutive purines within predicted secondary structures of the RNA target of interest.
The literature review and description of the methods employed by the authors are clear and concise, and the rationale for the study is evident. We appreciate the authors providing the link to the Github repository containing the TFOFinder python code.
While we believe that the wider scientific and bioinformatics community can benefit from this work, we suggest the authors consider applying the FAIR (Findable, Accessible, Interoperable, and Reusable) principles to the manuscript to ensure reproducibility and reusability of the codebase. It would be helpful if the authors could provide test data to demonstrate the usage of the provided scripts and how they integrate with other tools used in the complete study.
>> We apologize for not including examples with our scripts. At Reviewer #2's recommendation we updated the GitHub repository to include examples of TFOFinder input and output files for the 67th RNA identified by our D. melanogaster transcriptome search (ove-RE mRNA, S1 Table). We also wrote and uploaded to GitHub a tutorial file describing in detail the TFOFinder example mentioned above. We will make available upon request any of the files mentioned in our manuscript.
The GitHub repository was updated by making a new folder for all TFOFinder-related files, the GitHub link was updated, and the following text was included in the revised manuscript:
Line clean file 463 & tracking file 468: The link to the GitHub repository was updated: "(https://github.com/icatrina/TFOFinder)"
Lines clean file 472-475 and tracking file 477-480: "A tutorial file can be found in the above-mentioned GitHub repository. This tutorial provides details for the download and installation
• PCOMBIOL-D-23-00670_with_tracking_071223.pdf reflects all changes we made to our original submission, marked as follows: insertions, deletions, changed lines (grey outside border), moved from, moved to, and format changes (side bubble).
• PCOMBIOL-D-23-00670_clean_071223reftxt.docx is the clean version of the revised manuscript.
>>
Lines clean file 340-345 & tracking file 343-349: The legend of Fig 5 was edited as follows: "Fig 5. Secondary structures for two ncRNAs, predicted with mfold. (A) Full MFE structure (mfold) of the shortest transcript (CR44598-RA) identified to contain one TTS, which is highlighted in the red box and also shown magnified (right). (B) A longer ncRNA (CR44619-RA, 1,023-nt) containing several TTS; one TTS containing a mispair and 1-nt bulge in the MFE structure (left, red arrows) is highlighted in the red boxes for the MFE and 12th SO structure (mfold; right)." <<
• Line 377, define NCBI.
>> Line clean file 404 & tracking file 409: This was defined in the text at the first mention, but it was not included in the list of abbreviations, as it only appears three times in the manuscript: "NCBI (National Center for Biotechnology Information)". <<
• Ref 14 doesn't look complete.
>> Lines clean file 540-543 & tracking file 560-563: "14. Gupta P, Zengeya T, Rozners E. Triple helical recognition of pyrimidine inversions in polypurine tracts of RNA by nucleobase-modified PNA. Chem Commun (Camb). 2011;47(39):11125-7. doi: 10.1039/c1cc14706d. PubMed PMID: 21909545; PubMed Central PMCID: PMCPMC3757498."
Text. Example of descriptors used for the RNAmotif searches." <<
>> Line clean file 84 & tracking file 67: This was corrected as indicated. <<
Lines clean file 170-176 & tracking file 153-159: "The TFOFinder program takes into consideration the predicted secondary structure(s) of an RNA target of interest and designs the corresponding TFO probes, features that are not implemented in RNAmotif. However, when large-scale
• Line 77, what is meant by "frame"? Backbone? | 2,010.4 | 2023-04-26T00:00:00.000 | ["Biology", "Chemistry"] |
Fortran code for generating random probability vectors, unitaries, and quantum states
The usefulness of generating random configurations is recognized in many areas of knowledge. Fortran was born for scientific computing and has been one of the main programming languages in this area ever since. Several ongoing projects targeting its betterment indicate that it will keep this status in the decades to come. In this article, we describe Fortran codes produced, or organized, for the generation of the following random objects: numbers, probability vectors, unitary matrices, and quantum state vectors and density matrices. Some matrix functions are also included and may be of independent interest.
Perhaps because of its intuitive syntax and variety of well-developed and optimized tools, Fortran, which stands for Formula Translation, is the programming language of choice for many scientists. There are several nice initiatives indicating that it will be continuously and consistently improved in the future [26,27], which places Fortran as a good option for scientific programming. It is thus somewhat surprising to notice that Fortran does not appear in Quantiki's list of "quantum simulators" [28]. For more details about codes under active development in other programming languages, see e.g. Refs. [29][30][31][32][33][34][35][36][37]. In this article, with the goal of starting the development of a Fortran Library for QIS, we explain (free) Fortran codes produced, or organized, as generators of random numbers, probability vectors, unitary matrices, and quantum state vectors and density matrices. Some examples of free-software [38] programming languages with which it would be interesting to develop similar tools are Python, Maxima, Octave, C, and Java.
This article is structured as follows. We begin (in Sec. II) by recapitulating some concepts and definitions utilized in the remainder of the article. In Sec. III, the general description of the code is provided. Reading this section, and the readme file, should be enough for a black-box use of the generators. More detailed explanations of each one of them, and of the related options, are given in Sections IV, V, VI, VII, and VIII. In Sec. IX we summarize the article and comment on some tests of the generators.
II. SOME CONCEPTS AND DEFINITIONS
In Quantum Mechanics (QM) [39,40], we associate to a system a Hilbert space H. Every state of that system corresponds to a unit vector in H. Observables are described by Hermitian operators $O = \sum_j o_j |o_j\rangle\langle o_j|$, i.e., $o_j \in \mathbb{R}$ and the $|o_j\rangle$ form an orthonormal basis. Born's rule bridges theory and experiment, stating that if the system is prepared in the state $|\psi\rangle = \sum_j c_j |o_j\rangle$ and O is measured, then the probability for the outcome $o_j$ is $p_j = |c_j|^2 = |\langle o_j|\psi\rangle|^2$. We recall that a set of numbers $p_j$ is regarded as a discrete probability distribution if all the numbers $p_j$ in the set are non-negative (i.e., $p_j \geq 0$) and if they sum up to one (i.e., $\sum_j p_j = 1$). In QM, preparations and tests involving incompatible observables lead to quantum coherence and uncertainty, and to the consequent necessity for the use of probabilities.
When we lack information about a system's preparation, a complex positive semidefinite matrix ($\rho \geq 0$) with unit trace ($\mathrm{Tr}(\rho) = 1$), dubbed the density matrix, is the mathematical object used to describe its state [39,40]. In these cases, if the pure state $|\psi_j\rangle$ is prepared with probability $p_j$, all measurement probabilities can be computed in a succinct way using the density operator $\rho = \sum_j p_j |\psi_j\rangle\langle\psi_j|$. The ensemble $\{p_j, |\psi_j\rangle\}$ leading to a given $\rho$ is not unique. But, as $\rho$ is a Hermitian matrix, we can write its unique eigen-decomposition $\rho = \sum_{j=1}^{d} r_j |r_j\rangle\langle r_j|$, with the $r_j$ forming a probability distribution and the $|r_j\rangle$ an orthonormal basis. We observe that the set of vectors with properties equivalent to those of $(r_1, \cdots, r_d)$, which are dubbed here probability vectors, defines the unit simplex.
The mixedness of the state of a system also follows when it is part of a bigger, correlated system. Let us assume that a bipartite system was prepared in the state $|\psi\rangle_{ab}$. All the probabilities of measurements on system a can be computed using the (reduced) density matrix obtained by taking the partial trace over system b [41]: $\rho_a = \mathrm{Tr}_b(|\psi\rangle_{ab}\langle\psi|)$.
Up to now, we have discussed some of the main concepts of the kinematics of QM. For our purposes here, it will be sufficient to consider the quantum-mechanical closed-system dynamics, which is described by a unitary transformation [39,40]. If the system is prepared in state $|\psi\rangle$, its evolved state is given by $|\psi\rangle_t = U|\psi\rangle$, with $UU^\dagger = I$, where I is the identity operator in H. The unitary matrix U is obtained from the Schrödinger equation $i\hbar\,\partial U/\partial t = HU$, with H being the system Hamiltonian at time t. Between preparation and measurement (reading of the final result), a quantum computation (in the circuit model) is nothing but a unitary evolution, which is tailored to implement a certain algorithm.
III. GENERAL DESCRIPTION OF THE CODE
The code is divided into five main functionalities, which are: the random number generator (RNG), the random probability vector generator (RPVG), the random unitary generator (RUG), the random state vector generator (RSVG), and the random density matrix generator (RDMG). Below we describe in more detail each one of these generators and the related available options.
A module named meths is used in all calling subroutines for these generators in order to share your choices of the method to be used for each task. A short description of the methods and the corresponding options, opt_rxg (with x being n, pv, u, sv, or dm), is included in that module. To call any one of these generators, include call rxg(d,rx) in your program, where d is the dimension of the vector or square matrix rx, which is returned by the generator. If you want, for example, a random density matrix generated using a "standard method", just call rdmg(d,rdm); the same holds for the other objects. If, on the other hand, you want to choose which method is to be used in the generation of any one of these random variables, add use meths after your (sub)program heading, declare opt_rxg as character(10), and add opt_rxg = "your_choice" to the executable-statement section of your program.
IV. RANDOM NUMBER GENERATOR
Beforehand, we have to initialize the RNG with call rng_init(); remember to do that also after changing the RNG. As rn is a one-dimensional double-precision array, if you want only one random number (RN), just set d = 1. As the standard pseudo-random number generator (pRNG), we use the Fortran implementation, by Jose Rui Faustino de Sousa, of the Mersenne Twister algorithm introduced in Ref. [42]. This pRNG has been adopted in several software systems and is highly recommended for scientific computations [43]. As less hardware-demanding alternatives, we have also included GNU's standard pRNG KISS [44] and Petersen's lagged Fibonacci pRNG [45], which is available on Netlib. The options opt_rng for these three pRNGs are, respectively, "mt", "gnu", and "netlib". The components of rn provided by these pRNGs are uniformly distributed in [0, 1]. Because of their use in the other generators, we have also implemented the subroutines rng_unif(d,a,b,rn), rng_gauss(d,rn), and rng_exp(d,rn), which return d-dimensional vectors of random numbers with independent components possessing, respectively, uniform in [a, b], Gaussian (standard normal), and exponential probability distributions (see examples in Fig. 1).
V. RANDOM PROBABILITY VECTOR GENERATOR
Once the RNG is selected, it can be utilized, for instance, for sampling uniformly from the unit simplex. That is to say, we want to generate random probability vectors (RPVs) with $p_j \geq 0$ and $\sum_{j=1}^{d} p_j = 1$, and the sampled points p should have uniform density on the unit simplex. In the following, we briefly describe some methods that may be employed to accomplish (approximately) this task.
The normalization method (opt_rpvg = "norm") starts from the defining properties of a probability vector and uses the RNG to draw the components uniformly, subject to the normalization constraint; at last, we use shuffling of the components of p to obtain an unbiased RPV [47]. A somewhat related method, which is used here as the standard one for the RPVG, was proposed by Życzkowski, Horodecki, Sanpera, and Lewenstein (ZHSL) in Appendix A of Ref. [48]; so opt_rpvg = "zhsl". The basic idea is to consider the volume element $\prod_{j=1}^{d-1} d(p_j^{d-j})$ and d − 1 uniform random numbers $r_j$, from which the components $p_j$ are defined recursively [48]. Another possible approach is to take d independent and identically distributed uniform random numbers $r_j$ (thus opt_rpvg = "iid") and just normalize the distribution, i.e., $p_j := r_j/\sum_{k=1}^{d} r_k$ [47]. A related sampling method was put forward in Ref. [49] by Devroye (opt_rpvg = "devroye"); see also Appendix B of Ref. [50]. The procedure is similar to the previous one, with the change that the random numbers $r_j$ are drawn with an exponential probability density. Yet another way to create an RPV, due to Kraemer (opt_rpvg = "kraemer") [51] (see also Refs. [52,53]), is to take d − 1 random numbers uniformly distributed in [0, 1], sort them in nondecreasing order, set $r_0 = 0$ and $r_d = 1$, and then define $p_j = r_j - r_{j-1}$ for $j = 1, \cdots, d$. For sorting, we adapted an implementation of the Quicksort algorithm from the Rosetta Code Project [54].
[Fig. 1 caption fragment: probability densities of RPV components generated using the method indicated in the figure (refer to the text for more details); the inset shows the 2D scatter plot of the first two components (p1, p2) of five thousand RPVs with d = 3, produced using the ZHSL (red) or the normalization (green) method (in the last case the points are (1 − p1, 1 − p2)).]
With the exception of iid, all these methods lead to fairly good samples. With regard to the similarity of the probability distributions of the components of the generated RPVs, one can separate the methods into two groups: (a) ZHSL, Kraemer, and Devroye, and (b) trigonometric and normalization. Concerning the choice of method, it is worth mentioning that for moderately large dimensions of the RPV, group (a) excludes the possibility of values of $p_j$ close to one. This effect, which may have unwanted consequences for random quantum state generation, is less pronounced for the methods in group (b), although there the problem is a high concentration of points around the corners $p_j = 0$ (see Fig. 1).
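Three of these recipes are simple enough to state in a few lines. The following NumPy sketch is an illustrative re-implementation (not the paper's Fortran) of the Kraemer, iid, and Devroye methods exactly as described above:

import numpy as np
rng = np.random.default_rng()

def rpv_kraemer(d):
    r = np.sort(rng.uniform(size=d - 1))   # d-1 sorted uniform deviates
    r = np.concatenate(([0.0], r, [1.0]))  # set r_0 = 0 and r_d = 1
    return np.diff(r)                      # p_j = r_j - r_{j-1}

def rpv_iid(d):
    r = rng.uniform(size=d)                # iid uniforms, then normalize
    return r / r.sum()

def rpv_devroye(d):
    r = rng.exponential(size=d)            # exponential deviates, then normalize
    return r / r.sum()

p = rpv_kraemer(4)
print(p, p.sum())  # non-negative components summing to 1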
If R(N) is the computational complexity (CC) of generating N random numbers and O(N) is the CC of N scalar additions, then for d ≫ 1 the CC of each of these RPVGs can be estimated in terms of R(d) and O(d), with the explicit expression depending on the method.
VI. RANDOM UNITARY GENERATOR
A complex matrix U is unitary, i.e., $UU^\dagger = U^\dagger U = I$ with I being the identity matrix, if and only if its column vectors form an orthonormal basis. So, starting with a complex matrix whose independent random elements have identical Gaussian (standard normal) probability distributions, we can obtain a random unitary matrix (RU) via the QR factorization (QRF) [55,56]. We implemented it using the modified Gram-Schmidt orthogonalization (opt_rug = "gso") [57,58], which is our standard method for generating random unitaries. We also utilized LAPACK's implementation of the QRF via Householder reflections (opt_rug = "hhr"); for this you will need to have LAPACK installed [59]. Random unitaries can also be obtained from a parametrization of U(d). We have implemented a RUG in this way using the Hurwitz parametrization (opt_rug = "hurwitz"); for details see Refs. [60,61]. Here, a rough estimate of the computational complexity can again be given in terms of R and O.
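As a companion to the Fortran implementation, the QR route can be sketched in a few lines of NumPy; the phase correction on the diagonal of R is the usual step that makes the resulting distribution uniform (Haar). This is an illustrative sketch, not the library's code:

import numpy as np
rng = np.random.default_rng()

def random_unitary(d):
    # Ginibre matrix: independent standard-normal real and imaginary parts
    G = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    Q, R = np.linalg.qr(G)
    phases = np.diag(R) / np.abs(np.diag(R))  # unit-modulus diagonal phases
    return Q * phases                          # equivalent to Q @ diag(phases)

U = random_unitary(4)
print(np.allclose(U @ U.conj().T, np.eye(4)))  # True: U is unitary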
VII. RANDOM STATE VECTOR GENERATOR
Pure states of d-dimensional quantum systems are described by unit vectors in $\mathbb{C}^d$. The computational basis $|j\rangle = (\delta_{1j}, \delta_{2j}, \cdots, \delta_{dj})$ can be used to write any one of these vectors as
$$|\psi\rangle = \sum_{j=1}^{d} c_j |j\rangle, \qquad (3)$$
which is guaranteed to be normalized if $\sum_{j=1}^{d} |c_j|^2 = 1$. A simple way to create random state vectors (RSVs) is by using normally distributed real numbers to generate the real and imaginary parts of the complex coefficients in Eq. (3), and afterwards normalizing $|\psi\rangle$ (opt_rsvg = "gauss").
[Fig. 2 caption fragment. Top: the continuous line is 1/d; the inset shows the probability density of the eigen-phases and of their spacings (divided by the average) for ten thousand 20x20 random unitary matrices. Bottom: probability of finding a positive-partial-transpose bipartite state of dimension $d = d_a d_b$, with $d_a = 2$, for ten thousand random density matrices produced for each value of d; the continuous lines are the exponential fits $p = \alpha e^{-\beta d}$, with (α, β) being (1.81, 0.26), (18.77, 1.08), and (265.21, 2.08) for, respectively, the std, ginibre (ptrace), and bures methods. The inset shows the average $L_1$-norm quantum coherence $C_{l_1}(\rho) = \sum_{j\neq k} |\rho_{j,k}|$ (divided by $\log_2 d$) and the relative entropy of quantum coherence $C_{re}(\rho) = S(\rho_{\mathrm{diag}}) - S(\rho)$, with $S(\rho) = -\mathrm{Tr}(\rho \log_2 \rho)$ being von Neumann's entropy and $\rho_{\mathrm{diag}}$ obtained from ρ by erasing its off-diagonal matrix elements in the basis $|j\rangle$ ($10^4$ samples were produced for each value of d).]
Using the polar form for the coefficients in Eq. (3), $c_j = |c_j|e^{i\phi_j}$, and noticing that $|c_j|^2$ is a probability distribution, we arrive at our standard method (opt_rsvg = "std") for generating RSVs. We proceed by defining $|c_j|^2 =: p_j$ and writing $|\psi\rangle = \sum_{j=1}^{d} \sqrt{p_j}\, e^{i\phi_j} |j\rangle$. Then we utilize the RPVG to get $p = (p_1, \cdots, p_d)$ and the RNG to obtain the phases $(\phi_1, \cdots, \phi_d)$, with $\phi_j$ uniformly distributed in [0, 2π]. Using these probabilities and phases, we generate a RSV; see examples in Fig. 2. For these first two methods, when d ≫ 1, the CC is set by the underlying RNG and RPVG calls. In addition to these procedures, we have included yet another RSVG, which uses the first column of a RU (opt_rsvg = "ru").
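An illustrative NumPy sketch of this "std" recipe (probabilities from an RPV, phases uniform in [0, 2π]) follows; a Devroye-style RPV is used here for brevity, and this is not the paper's Fortran code:

import numpy as np
rng = np.random.default_rng()

def random_state_vector(d):
    r = rng.exponential(size=d)
    p = r / r.sum()                              # random probability vector
    phi = rng.uniform(0.0, 2.0 * np.pi, size=d)  # uniform phases
    return np.sqrt(p) * np.exp(1j * phi)         # sum_j sqrt(p_j) e^{i phi_j} |j>

psi = random_state_vector(4)
print(np.isclose(np.vdot(psi, psi).real, 1.0))  # normalized by construction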
VIII. RANDOM DENSITY MATRIX GENERATOR
Our standard method (opt_rdmg = "std") for random density matrix (RDM) generation (see, e.g., Refs. [48,62]) starts from the eigen-decomposition $\rho = \sum_{j=1}^{d} r_j |r_j\rangle\langle r_j|$ and creates the eigenvalues $r_j$ and the eigenvectors $|r_j\rangle = U|j\rangle$ using, respectively, the RPVG and the RUG described before. So, in this case, CC(RDMG) ≈ CC(RPVG) + CC(RUG). We can also produce RDMs by normalizing matrices with independent, normally distributed complex entries, named Wishart or Ginibre matrices (opt_rdmg = "ginibre"): $\rho = GG^\dagger/\lVert G\rVert^2$, where $\lVert G\rVert^2 = \mathrm{Tr}(G^\dagger G)$ is the Hilbert-Schmidt norm [63,64]. A related method, which produces RDMs with the Bures measure (opt_rdmg = "bures"), uses $\rho = (I+U)GG^\dagger(I+U^\dagger)/\mathrm{Tr}[(I+U)GG^\dagger(I+U^\dagger)]$, with U being a random unitary [65]. At last, one can also generate RDMs by partial-tracing a random state vector $|\psi\rangle_{ab}$ [66]: $\rho_a = \mathrm{Tr}_b(|\psi\rangle_{ab}\langle\psi|)$. There are two issues arising from Fig. 2 that instantiate the utility of the numerical tool described in this article. The first one regards quantum coherence quantification, which has been rediscovered and formalized in the last few years [67,68]. We see that, while the average relative entropy of coherence concentrates around a certain value, the $L_1$-norm coherence keeps growing with the dimension d. Such a qualitative difference, promptly identified in a simple numerical experiment, points towards a path that can be taken in order to identify physically and/or operationally relevant coherence quantifiers. The other issue refers to the too-fast concentration of measure reported in Ref. [62], which gains more physical appeal with the too-entangled state space reached by the last three RDMGs described above.
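The "ginibre" recipe, for instance, is nearly a one-liner once a Ginibre matrix is at hand. The following NumPy sketch (illustrative, not the paper's Fortran) also checks the defining properties of the output:

import numpy as np
rng = np.random.default_rng()

def random_density_matrix(d):
    G = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    W = G @ G.conj().T                 # positive semidefinite by construction
    return W / np.trace(W).real        # unit trace after normalization

rho = random_density_matrix(4)
print(np.isclose(np.trace(rho).real, 1.0),
      np.all(np.linalg.eigvalsh(rho) >= -1e-12))  # True True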
It seems legitimate to regard the most random ensemble of quantum states as the one leading to minimal knowledge, which can, in turn, be identified with maximal symmetry [69]. Thus, for pure states we require such an ensemble to be invariant under unitary transformations (UTs), which implies no preferential direction in the Hilbert space. An ensemble of pure states drawn with a probability density invariant under UTs is said to be generated with the Haar measure. The same is the case for random unitaries [56]. We observe that all random unitary generators and random state vector generators described here produce Haar-distributed random objects.
In the general case of density matrices, invariance under UTs only warrants ignorance about the direction in state space, but implies nothing with respect to the eigenvalue distribution. In this regard, different metrics generally lead to distinct probability densities, which are then used to construct methods for creating random density matrices accordingly. Therefore, as advanced in Ref. [69], this situation calls for the application of physical or conceptual motivations when choosing a RDMG. In this sense, we think that the too-fast concentration of measure issue, in conjunction with the well-known difficulty of preparing entangled states in the laboratory, favors the standard random density matrix generator described above.
IX. CONCLUDING REMARKS
To summarize, in this article we described Fortran codes for the generation of random numbers, probability vectors, unitary matrices, and quantum state vectors and density matrices. Our emphasis here was more on ease of use than on sophistication of the code, for this is the starting point for the development of a Fortran Library for Quantum Information Science. In addition to including new capabilities in the generators described here and optimizing the code, we expect to develop this work in several directions in the future. Among the intended extensions are the inclusion of entropy and distinguishability measures, non-classicality and correlation quantifiers, simulation of quantum protocols, and remote access to quantum random number generators. Besides, in order to mitigate the explosive growth in complexity that we face in general when dealing with quantum systems, $d = \dim H \propto \exp(\text{no. of parties})$, it would be fruitful to parallelize the code whenever possible.
We performed some simple tests and calculations to verify the code's basic functionalities. Some of the results are reported in Figs. 1 and 2. The code used for these and other tests is also included and commented, but we shall not explain it here. Several matrix functions are provided in the files matfun.f90 and qnesses.f90. For instructions on how to compile and run the code, see the readme file. In our tests, we used BLAS 3.6.0, | 4,376.4 | 2015-12-16T00:00:00.000 | ["Computer Science"] |
Using machine learning to improve the accuracy of genomic prediction of reproduction traits in pigs
Background Recently, machine learning (ML) has become attractive in genomic prediction, but its superiority in genomic prediction over conventional (ss)GBLUP methods and the choice of optimal ML methods need to be investigated. Results In this study, 2566 Chinese Yorkshire pigs with reproduction trait records were genotyped with the GenoBaits Porcine SNP 50 K and PorcineSNP50 panels. Four ML methods, including support vector regression (SVR), kernel ridge regression (KRR), random forest (RF) and Adaboost.R2, were implemented. Through 20 replicates of fivefold cross-validation (CV) and one prediction for younger individuals, the utility of ML methods in genomic prediction was explored. In CV, compared with genomic BLUP (GBLUP), single-step GBLUP (ssGBLUP) and the Bayesian method BayesHE, ML methods significantly outperformed these conventional methods. ML methods improved the genomic prediction accuracy of GBLUP, ssGBLUP, and BayesHE by 19.3%, 15.0% and 20.8%, respectively. In addition, ML methods yielded smaller mean squared error (MSE) and mean absolute error (MAE) in all scenarios. ssGBLUP yielded an improvement of 3.8% on average in accuracy compared to that of GBLUP, and the accuracy of BayesHE was close to that of GBLUP. In genomic prediction of younger individuals, RF and Adaboost.R2_KRR performed better than GBLUP and BayesHE, while ssGBLUP performed comparably with RF; ssGBLUP yielded slightly higher accuracy and lower MSE than Adaboost.R2_KRR in the prediction of the total number of piglets born, while for the number of piglets born alive, Adaboost.R2_KRR performed significantly better than ssGBLUP. Among the ML methods, Adaboost.R2_KRR consistently performed well in our study. Our findings also demonstrated that optimal hyperparameters are useful for ML methods. After tuning hyperparameters in CV and in predicting genomic outcomes of younger individuals, the average improvement was 14.3% and 21.8% over those using default hyperparameters, respectively. Conclusion Our findings demonstrated that ML methods had better overall prediction performance than conventional genomic selection methods and could be new options for genomic prediction. Among the ML methods, Adaboost.R2_KRR consistently performed well in our study, and tuning hyperparameters is necessary for ML methods. The optimal hyperparameters depend on the characteristics of the traits, datasets, etc. Supplementary Information The online version contains supplementary material available at 10.1186/s40104-022-00708-0.
Background
Genomic selection (GS) has been widely recognized and successfully implemented in animal and plant breeding programs [1][2][3]. It has been reported that the breeding costs of dairy cattle using GS were 92% lower than those of traditional progeny testing [4]. At present, the annual rate of genetic gain for yield traits of US Holstein dairy cattle has increased from approximately 50% to 100% [5]. The accuracy of GS is impacted by a number of factors, such as the analytical methods of genomic prediction, reference population size, marker density, and heritability. Currently, parametric methods are most commonly used for livestock and poultry genomic selection, mainly including genomic BLUP (GBLUP) [6], single-step GBLUP (ssGBLUP) [7,8], ridge regression (RR) [9], least absolute shrinkage and selection operator (LASSO) [10], and Bayesian regression models [11,12], with the differences mainly depending on the prior distribution of marker effects. Nevertheless, these linear models usually take into account only additive inheritance and ignore the complex nonlinear relationships that may exist between markers and phenotypes (e.g., epistasis, dominance, or genotype-by-environment interactions). In addition, parametric methods usually provide limited flexibility for handling nonlinear effects in high-dimensional genomic data, resulting in large computational demands [13]. However, studies have shown that considering nonlinearity may enhance the genomic prediction ability for complex traits [14]. Therefore, new strategies should be explored to more accurately estimate genomic breeding values.
Driven by applications in intelligent robots, self-driving cars, automatic translation, face recognition, artificial-intelligence games and medical services, machine learning (ML) has gained considerable attention in the past decade. Some characteristics of ML methods make them potentially attractive for dealing with high-order nonlinear relationships in high-dimensional genomic data: they allow the number of variables to be larger than the sample size [15], they are capable of capturing the hidden relationship between genotype and phenotype in an adaptive manner, and they impose few or no specific distributional assumptions about the predictor variables, unlike GBLUP and Bayesian methods [16,17].
Studies have shown that random forest (RF), support vector regression (SVR), kernel ridge regression (KRR) and other machine learning methods have advantages over GBLUP and BayesB [18][19][20]. Ornella et al. compared the genomic prediction performance of support vector regression, random forest regression, reproducing kernel Hilbert space (RKHS) regression, ridge regression, and Bayesian Lasso in maize and wheat datasets with different trait-environment combinations, and found that RKHS and random forest regression were the best [21]. González-Camacho et al. reported that the support vector machine (SVM) with a linear kernel performed the best in comparison with other ML methods and linear models in the genomic prediction of rust resistance in wheat [20]. Additionally, ML methods have also been widely used in the fields of gene screening, genotype imputation, and protein structure and function prediction [22][23][24][25], demonstrating their superiority as well. However, one challenge for ML is choosing the optimum ML method, as a series of ML methods have been proposed and each has its own characteristics and shows different prediction abilities on different datasets and traits.
Therefore, the objectives of this study were to 1) assess the performance of ML methods in genomic prediction in comparison with the existing prevailing methods GBLUP, ssGBLUP, and BayesHE, and 2) evaluate the efficiency of different ML methods to explore the ideal ML method for genomic prediction.
Ethics statement
The whole procedure for blood sample collection was carried out in strict accordance with the protocol approved by the Animal Care and Use Committee of China Agricultural University (Permit Number: DK996).
Population and phenotypes
A purebred Yorkshire pig population from DHHS, a breeding farm in Hebei Province, China, was studied. Animals from this farm were descendants of Canadian Yorkshires, and they were reared under the same feeding conditions. A total of 2566 animals born between 2016 and 2020 were sampled; their 4274 reproductive records of the total number of piglets born (TNB) and the number of piglets born alive (NBA), with delivery dates ranging from 2017 to 2021, were available, and 3893 animals were traced back to construct the pedigree relationship matrix (A matrix). The numbers of full-sib and half-sib families were 339 and 301, respectively. A single-trait repeatability model was used to estimate the heritability. The fixed effects included herd-year-season, and the random effects included additive genetic effects, random residuals, and permanent environment effects of sows (environmental effects affecting litter size across parities of sows). The information on animals, phenotypes and genetic components, as well as the estimated heritabilities, is listed in Table 1. The estimated heritabilities of TNB and NBA were both 0.12.
Derivation of corrected phenotypes
To avoid double counting of parental information, the corrected phenotypes ($y_c$) derived from the estimated breeding values (EBVs) were used as response variables in genomic prediction. Pedigree-based BLUP with a single-trait repeatability model was used to estimate the breeding values for each trait separately:
$$y = Xb + Z_a a + Z_{pe} pe + e,$$
where y is the vector of raw phenotypic values; b is the vector of fixed effects, including herd-year-season, in which season consisted of four levels (1st = December to February; 2nd = March to May; 3rd = June to August; 4th = September to November); a is the vector of additive genetic effects; pe is the vector of permanent environment effects of sows; and e is the vector of random errors. X, $Z_a$, and $Z_{pe}$ are the incidence matrices linking b, a and pe to y. The random effects were assumed to be normally distributed as follows: $a \sim N(0, A\sigma_a^2)$, $pe \sim N(0, I\sigma_{pe}^2)$, and $e \sim N(0, I\sigma_e^2)$, where A is the pedigree-based relationship matrix; I is the identity matrix; and $\sigma_a^2$, $\sigma_{pe}^2$, and $\sigma_e^2$ are the variances of additive genetic effects, permanent environment effects of sows, and residuals, respectively. A total of 3893 individuals were traced to construct matrix A. Their EBVs were calculated using the DMUAI procedure of the DMU software [26]. The $y_c$ were calculated as EBV plus the average estimated residuals over the multiple parities of a sow, following Guo et al. [27].
Genotype data and imputation
Two kinds of 50 K SNP panels, the PorcineSNP50 BeadChip (Illumina, CA, USA) and the GenoBaits Porcine SNP 50 K (Molbreeding, China), were used for genotyping. A total of 1189 sows were genotyped with the PorcineSNP50 BeadChip, which included 50,697 SNPs across the genome, and 1978 individuals were genotyped using the GenoBaits Porcine SNP 50 K with 52,000 SNPs. There were 30,998 common SNPs between these two SNP panels, and 601 individuals were genotyped with both panels; therefore, 2566 genotyped individuals were finally used for further analysis, including 1189 animals with the PorcineSNP50 BeadChip and 1377 pigs with the GenoBaits Porcine SNP 50 K. The animals genotyped with the GenoBaits Porcine SNP 50 K were imputed to the PorcineSNP50 BeadChip using Beagle 5.0 [28]. The reference population size for genotype imputation was 3720. Imputation accuracy was assessed by the dosage R-squared measure (DR2), which is the estimated squared correlation between the estimated allele dose and the true allele dose. The genotype correlation (COR) and the genotype concordance rate (CR) were also calculated based on the 601 overlapping animals to evaluate the imputation accuracy. After imputation, quality control of the genotypes was carried out using PLINK software [29]. SNPs with a minor allele frequency (MAF) lower than 0.01 or a call rate lower than 0.90 were removed, and individuals with call rates lower than 0.90 were excluded. Finally, all animals and 44,922 SNPs on autosomes remained for further analysis.
GBLUP
The GBLUP model was
$$y_c = 1\mu + Zg + e,$$
in which $y_c$ is the vector of corrected phenotypes of genotyped individuals, $\mu$ is the overall mean, 1 is a vector of 1s, g is the vector of genomic breeding values, e is the vector of random errors, and Z is an incidence matrix allocating records to g. The distributions of the random effects were $g \sim N(0, G\sigma_g^2)$ and $e \sim N(0, I\sigma_e^2)$, where G is the genomic relationship matrix (G matrix), and $\sigma_g^2$ and $\sigma_e^2$ are the additive genetic variance and the residual variance, respectively.

ssGBLUP
ssGBLUP had the same expression as GBLUP, except that it used the $y_c$ of both genotyped and nongenotyped individuals by combining the G matrix and the A matrix. It was assumed that g followed a normal distribution $N(0, H\sigma_g^2)$. The inverse of matrix H was
$$H^{-1} = A^{-1} + \begin{bmatrix} 0 & 0 \\ 0 & G_w^{-1} - A_{22}^{-1} \end{bmatrix},$$
where $A_{22}$ is the pedigree relationship matrix of the genotyped animals. To prevent the problem that a singular matrix cannot be inverted, $G_w = (1-w)G_a + wA_{22}$ was used, with w equal to 0.05 [30].
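As an illustration of the blending step (not the study's implementation, which used the DMU software), the H-inverse block update can be sketched in NumPy; the toy matrices below are placeholders, and the genotyped animals are assumed to occupy the last block of A:

import numpy as np

def h_inverse(A_inv, A22_inv, G_a, A22, w=0.05):
    G_w = (1 - w) * G_a + w * A22            # weighted blend avoids a singular G
    n, m = A_inv.shape[0], A22.shape[0]      # genotyped animals in the last block
    H_inv = A_inv.copy()
    H_inv[n - m:, n - m:] += np.linalg.inv(G_w) - A22_inv
    return H_inv

A, A22 = np.eye(5), np.eye(2)                # toy relationship matrices
G_a = np.array([[1.0, 0.1], [0.1, 1.0]])     # toy genomic relationship matrix
print(h_inverse(np.linalg.inv(A), np.linalg.inv(A22), G_a, A22))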
BayesHE
BayesHE was developed by Shi et al. [31]; it is based on global-local priors to increase the flexibility and adaptability of the Bayesian model. In this study, the first form of BayesHE (BayesHE1) was used, and the Markov chain Monte Carlo (MCMC) chain was run as described in [31]. The DMUAI procedure implemented in the DMU software [26] was used for the GBLUP and ssGBLUP analyses.
Support vector regression
Support vector machine (SVM) is based on statistical learning theory. SVR is the application of SVM to regression, i.e., to quantitative responses; it uses a linear or nonlinear kernel function to map the input space (the marker dataset) to a higher-dimensional feature space [32] and performs modelling and prediction in that feature space. In other words, we can build a linear model in the feature space to deal with regression problems. The model formulation of SVR can be expressed as
$$f(x) = h(x)^T\beta + \beta_0,$$
in which h(x) is the feature map induced by the kernel, $\beta$ is the vector of weights, and $\beta_0$ is the bias. Generally, the formalized SVR is given by minimizing the following restricted loss function:
$$\min_{\beta,\beta_0}\; C\sum_{i=1}^{n} V_\varepsilon\big(y_i - f(x_i)\big) + \frac{1}{2}\lVert\beta\rVert^2,$$
in which $V_\varepsilon(r)$ is the ε-insensitive loss and C ("cost parameter") is the regularization constant that controls the trade-off between prediction error and model complexity; y is the quantitative response, and ||·|| is the norm in Hilbert space. After optimization, the final form of SVR can be written as
$$f(x) = \sum_{i=1}^{n} \alpha_i K(x, x_i) + \beta_0,$$
where $K(x, x_i) = \langle h(x), h(x_i)\rangle$ is the kernel function. In this research, a grid search was used to find the best kernel function and the optimal hyperparameters C and gamma. An internal fivefold cross-validation (5-fold CV) strategy was performed to tune the hyperparameters during the grid search.
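Since the study states that scikit-learn and a grid search with an internal 5-fold CV were used, a minimal sketch of that tuning loop might look as follows; the grid values and toy data are illustrative, not the study's:

import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

X = np.random.randint(0, 3, size=(200, 500)).astype(float)  # toy 0/1/2 genotypes
y = np.random.normal(size=200)                               # toy corrected phenotypes

grid = {"kernel": ["linear", "rbf"], "C": [0.1, 1, 10], "gamma": ["scale", 0.01, 0.001]}
search = GridSearchCV(SVR(), grid, cv=5, scoring="neg_mean_squared_error")
search.fit(X, y)
print(search.best_params_)  # the combination selected in this fold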
Kernel ridge regression
Kernel ridge regression (KRR) is a nonlinear regression method that can effectively discover the nonlinear structure of the data [33]. KRR uses a nonlinear kernel function to map the data to a higher-dimensional kernel space and then builds a ridge regression model so that the data become linearly tractable in this kernel space. The linear function in the kernel space is selected according to the ridge-regularized mean squared error loss [33]. The final KRR prediction model can be written as
$$\hat{y}(x_i) = k' (K + \lambda I)^{-1} y,$$
where $\lambda$ is the regularization constant, I is the identity matrix, and K is the Gram matrix with entries $K_{jk} = K(x_j, x_k)$; thus, for n training samples, the obtained kernel matrix K is n × n, and $k' = \big(K(x_i, x_1), \cdots, K(x_i, x_n)\big)$ collects the kernel evaluations between the test sample $x_i$ and the n training samples $x_j$, $j = 1, 2, 3, \cdots, n$. A grid search was used to find the most suitable kernel function and λ in this study, and an internal 5-fold CV strategy was used for tuning the hyperparameters.
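The closed-form predictor above can be written out directly. The following NumPy sketch (illustrative, with an RBF kernel and arbitrary λ and γ) implements the prediction k'(K + λI)^{-1} y:

import numpy as np

def rbf_kernel(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # squared distances
    return np.exp(-gamma * d2)

def krr_fit_predict(X_train, y_train, X_test, lam=1.0, gamma=0.01):
    K = rbf_kernel(X_train, X_train, gamma)              # Gram matrix K_jk = K(x_j, x_k)
    alpha = np.linalg.solve(K + lam * np.eye(len(y_train)), y_train)
    k_test = rbf_kernel(X_test, X_train, gamma)          # k' rows for the test samples
    return k_test @ alpha

Xtr, ytr = np.random.randn(50, 10), np.random.randn(50)
print(krr_fit_predict(Xtr, ytr, Xtr[:3]))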
Random forest
Random forest (RF) is an ML method that uses the vote or the average of multiple decision trees to determine the class or predicted value of new instances [34]. A random forest is essentially a collection of decision trees, each slightly different from the others. Random forest reduces the risk of overfitting by averaging the prediction results of many decision trees [20]. Random forest regression can be written in the following form:
$$\hat{y} = \frac{1}{M}\sum_{m=1}^{M} t_m\big(\psi_m(y : X)\big),$$
in which $\hat{y}$ is the predicted value of the random forest regression, $t_m(\psi_m(y : X))$ is an individual regression tree, and M is the number of decision trees in the forest. A prediction is obtained by passing the predictor variables down the flowchart of each tree, and the corresponding estimated value at the terminal node is used as the predicted value. Finally, the predictions of the trees in the RF are averaged to calculate the final prediction for unobserved data. A grid search was used to find the most suitable hyperparameter M and the maximum depth of the trees, and an inner 5-fold CV was performed to tune the hyperparameters.
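A sketch of the corresponding scikit-learn tuning over M (n_estimators) and the maximum tree depth follows; grid values and toy data are illustrative, not the study's:

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

X = np.random.randint(0, 3, size=(150, 200)).astype(float)  # toy 0/1/2 genotypes
y = np.random.normal(size=150)                               # toy corrected phenotypes

grid = {"n_estimators": [50, 100], "max_depth": [5, None]}   # M and max tree depth
search = GridSearchCV(RandomForestRegressor(random_state=0), grid, cv=5)
search.fit(X, y)
print(search.best_params_)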
Adaboost.R2
Adaboost.R2 is an ad hoc modification of Adaboost.R and an extension of Adaboost.M2 created to deal with regression problems. It repeatedly uses a regression tree (or another regressor) as a weak learner, increasing the weights of incorrectly predicted samples and decreasing the weights of correctly predicted samples. It builds a "committee" by integrating multiple weak learners [35], making its prediction better than those of the individual weak learners. The final Adaboost.R2 prediction is the weighted median of the weak learners' predictions $f_t(x)$, where the weight of the t-th weak learner is proportional to $\ln(1/\varepsilon_t)$; here $\varepsilon_t$ is the error rate of $f_t(x)$, $\varepsilon_t = L_t/(1 - L_t)$, and $L_t$ is the average loss, $L_t = \sum_{i=1}^{m} L_t(i) D_t(i)$, with $L_t(i)$ the error between the actual observation and the prediction for the i-th individual. The sample weights are updated as
$$D_{t+1}(i) = \frac{D_t(i)\,\varepsilon_t^{\,1 - L_t(i)}}{Z_t},$$
in which $Z_t$ is a normalization factor chosen such that $D_{t+1}(i)$ is a distribution. In the current study, SVR and KRR were used as weak learners of Adaboost.R2. For all four ML methods, the vectors of genotypes (coded as 0, 1, 2) were the input independent variables, the corrected phenotype $y_c$ was used as the response variable, and the Sklearn package for Python (v0.22) was used for genomic prediction. We sought the optimal hyperparameter combination from a grid of values, and the combination with the highest Pearson correlation was selected as the optimal hyperparameter set in each fold (grid search). The optimal hyperparameters for SVR, KRR, RF and Adaboost.R2 in CV, obtained by the grid search, are shown in Table 2.
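scikit-learn's AdaBoostRegressor implements the AdaBoost.R2 algorithm and accepts an arbitrary base regressor, so the SVR-boosted variant used in the study can be sketched as follows (toy data and settings are illustrative):

import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from sklearn.svm import SVR

X = np.random.randint(0, 3, size=(200, 500)).astype(float)  # toy 0/1/2 genotypes
y = np.random.normal(size=200)                               # toy corrected phenotypes

# AdaBoostRegressor is an AdaBoost.R2 implementation; SVR is the weak learner here
model = AdaBoostRegressor(SVR(kernel="rbf", C=1.0), n_estimators=50, loss="linear")
model.fit(X, y)
print(model.predict(X[:5]))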
Accuracy of genomic prediction
Fivefold cross-validation (5-fold CV) was used to estimate the accuracy of genomic prediction, in which the 2566 individuals were randomly split into five groups with 513 individuals each. For each CV, four of the five groups were defined as the reference population, and the remaining group was treated as the validation population. The genotyped reference and validation sets in each replicate of 5-fold CV were the same for all methods; it should be noted that nongenotyped individuals were added to the reference population in ssGBLUP. For all methods, the accuracy of genomic prediction was calculated as the Pearson correlation of y_c (corrected phenotypes) and PV (predicted values). In addition, the prediction unbiasedness was calculated as the regression of y_c on PV in the validation population. The 5-fold CV scheme was repeated 20 times, and the overall prediction accuracy and unbiasedness were the averages of the 20 replicates. The Hotelling-Williams test [36] was performed to compare the prediction accuracies of the different methods after parameter optimization.
Meanwhile, prediction ability metrics, e.g., mean squared error (MSE) and mean absolute error (MAE), were also used to evaluate the performance of the regression models in the present study. MSE takes both prediction accuracy and bias into account [37], and the smaller the value of MSE, the better the model describes the experimental data. The MAE better reflects the actual magnitude of the prediction errors. Their formulas can be written as follows:
$$MSE = \frac{1}{m}\sum_{i=1}^{m}\big(f_i - y_i\big)^2, \qquad MAE = \frac{1}{m}\sum_{i=1}^{m}\big|f_i - y_i\big|,$$
where m represents the number of animals in each CV test fold of the 5-fold CV, f is the vector of predicted values (PV) and y is the vector of observed values (y_c). The final MSE and MAE were the averages of the 20 replicates. In addition, to be more in line with the actual situation of genomic selection, we compared ML methods and traditional genomic selection methods in using early-generation animals to predict the performance of animals of later generations. Therefore, the younger animals born after January 2020 were chosen as the validation population, and the population sizes of the reference and validation sets were 2222 and 344, respectively. The accuracy of genomic prediction was evaluated as r(y_c, PV), the Pearson correlation between corrected phenotypes y_c and predicted values PV.
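The evaluation itself reduces to four quantities per validation set. A minimal NumPy sketch (illustrative) of the accuracy r(y_c, PV), the unbiasedness slope, the MSE and the MAE:

import numpy as np

def evaluate(y_c, pv):
    r = np.corrcoef(y_c, pv)[0, 1]     # prediction accuracy
    slope = np.polyfit(pv, y_c, 1)[0]  # unbiasedness: regression of y_c on PV
    mse = np.mean((pv - y_c) ** 2)
    mae = np.mean(np.abs(pv - y_c))
    return r, slope, mse, mae

y_c = np.random.normal(size=100)                        # toy corrected phenotypes
pv = 0.5 * y_c + np.random.normal(scale=0.5, size=100)  # toy predicted values
print(evaluate(y_c, pv))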
Results
Genotype imputation accuracy
Figure 1 illustrates the accuracy of imputing GenoBaits Porcine SNP 50 K to PorcineSNP50 BeadChip across minor allele frequency (MAF) intervals and chromosomes. DR2, CR and COR were not sensitive to MAF, except that COR was lower when the MAF was less than 0.05 and in the range of 0.45 to 0.5 (Fig. 1a). DR2, CR and COR on each chromosome were 0.978~0.988, 0.984~0.988 and 0.957~0.972, respectively, and no significant differences were observed in DR2, CR and COR between chromosomes (Fig. 1b). In the same scenarios, the COR values were smaller than those of DR2 and CR. The averaged DR2, CR and COR across all variants were 0.984, 0.985 and 0.964, respectively, indicating that the imputation was sufficiently accurate to analyse the two SNP panels together.
Accuracy of genomic prediction in cross-validation
Comparison of ML methods with (ss)GBLUP and BayesHE
Meanwhile, GBLUP, ssGBLUP and BayesHE had similar performance, and no significant differences in prediction accuracy were found among them. Nevertheless, ssGBLUP produced an average improvement of 3.7% compared with GBLUP (1.2% for TNB; 6.3% for NBA), while less bias was observed for GBLUP in all scenarios. BayesHE yielded similar accuracy to GBLUP (0.243 and 0.248 for TNB; 0.207 and 0.208 for NBA), but the unbiasedness of BayesHE was much closer to 1 (1.015 for TNB; 1.009 for NBA).
[Fig. 1 caption: Imputation accuracy. Imputation accuracy of GenoBaits Porcine SNP 50 K to PorcineSNP50 BeadChip at different minor allele frequency (MAF) intervals (a) and chromosomes (b). DR2, the estimated squared correlation between the estimated allele dose and the true allele dose; genotype concordance rate (CR), the ratio of correctly imputed genotypes; genotype correlation (COR), the correlation coefficient between the imputed variants and the true variants.]
On the other hand, the mean squared error (MSE) and mean absolute error (MAE) were also used to assess the performance of the different methods. As shown in Table 4, after tuning the hyperparameters, the ML methods were generally superior to GBLUP, ssGBLUP and BayesHE in terms of MSE and MAE. For the two reproduction traits, TNB and NBA, all ML methods yielded lower MSE and MAE than GBLUP, ssGBLUP and BayesHE. The performance of GBLUP, ssGBLUP and BayesHE was very close, and ssGBLUP produced slightly lower MSE (5.26 for TNB; 3.95 for NBA) and MAE (1.748 for TNB; 1.532 for NBA) among these three methods, although these values were still higher than those obtained from RF, which performed the worst among the four ML methods and generated MSE of 5.212 and 3.901 and MAE of 1.747 and 1.527 for TNB and NBA, respectively.
Comparison among ML methods
Tables 3 and 4 indicate that the ML methods performed better than GBLUP, ssGBLUP and BayesHE. They also show that RF had the lowest accuracy among the ML methods, even though no significant differences were observed among the ML methods in this study. After tuning the parameters, the accuracies of SVR, KRR, Adaboost.R2_SVR and Adaboost.R2_KRR were improved by an average of 5.8%, 6.2%, 5.5% and 6.1% compared to RF, ranging from 8.1% to 9.3% for TNB and from 2.4% to 4.0% for NBA. For TNB, SVR and KRR showed the highest accuracies (0.295 for both), and Adaboost.R2_KRR yielded the highest accuracy for NBA (0.258). In the comparison of unbiasedness, SVR produced the lowest genomic prediction bias, with a regression coefficient close to 1.0, while the Adaboost.R2 method with either base learner (SVR or KRR) produced larger bias, reflecting a trade-off between accuracy and unbiasedness.
[Table 3: Accuracies and unbiasedness of genomic prediction on TNB and NBA from seven methods in 20 replicates of 5-fold CV.]
It should be noted that the better performance of the ML methods was acquired by tuning the hyperparameters (Tables 2 and 3). Compared with using the default hyperparameters, the accuracy of the ML methods with optimal hyperparameters was improved by 14.3% on average; the optimal hyperparameters improved the genomic prediction accuracies of SVR, KRR, RF and Adaboost.R2 for TNB by 15.7%, 11.7%, 9.8% and 15.0%, respectively, and for NBA the improvements were 13.4%, 15.3%, 10.2% and 23.4%, respectively. For unbiasedness, except for SVR on TNB, the unbiasedness of all ML methods using the default parameters was lower than that using the optimal parameters. On the other hand, Tables 3 and 4 indicate that ML methods with default hyperparameters did not yield advantages over GBLUP, ssGBLUP and BayesHE.
Table 5 presents the accuracy and MSE of genomic prediction on TNB and NBA when applying the different methods to predict younger animals. On the one hand, a similar trend was obtained for GBLUP, BayesHE and ssGBLUP as in CV. GBLUP performed comparably with BayesHE, while ssGBLUP yielded higher accuracies and lower MSE than GBLUP and BayesHE for both traits. On the other hand, different from the results in CV, the superiority of the ML methods with optimal hyperparameters was not significant in predicting younger animals, although they still improved the accuracies and reduced the MSE compared with the outcomes obtained using the default hyperparameters. Table 5 indicates that Adaboost.R2_KRR and RF still outperformed GBLUP and BayesHE, as was demonstrated in the CV; ssGBLUP performed comparably with RF, and ssGBLUP yielded slightly higher accuracy and lower MSE than Adaboost.R2_KRR in the prediction of TNB; in contrast, for NBA, Adaboost.R2_KRR performed significantly better than ssGBLUP. Meanwhile, after tuning the parameters, RF and KRR obtained higher accuracies and lower MSE than GBLUP and BayesHE, respectively. The performance of RF was significantly improved, and it performed better than KRR and SVR. In the prediction of younger animals, SVR with either default or optimal hyperparameters performed the worst, which was different from its performance in the CV.
Computing time
The average computation time to complete each fold in CV for each genomic prediction method is shown in Table 6. The running time of the methods was measured in minutes on an HP server (CentOS Linux 7.9.2009, 2.5 GHz Intel Xeon processor and 515 GB total memory). Among all methods, KRR was the fastest algorithm; it took an average of 1.16 min per fold of CV to complete the analysis, requiring considerably less time than GBLUP (2.07 min) and ssGBLUP (3.23 min). The computing efficiency of SVR (5.28 min) and Adaboost.R2_KRR (5.16 min) was comparable to that of KRR, GBLUP and ssGBLUP. However, RF (53.45 min) and Adaboost.R2_SVR (85.34 min) ran slowly among the ML methods. Adaboost.R2 based on KRR (Adaboost.R2_KRR) was much more time-saving than Adaboost.R2_SVR. Since the MCMC algorithm requires more iterations to reach convergence, BayesHE was the slowest, as expected; it took 226.12 min for each fold of CV.
[Table footnotes: Optimal hyperparameters: the optimal hyperparameters of each machine learning method obtained by grid search. Different superscripts on accuracy indicate significant differences by the Hotelling-Williams test.]
Discussion
Our results elucidated that ssGBLUP performed better than GBLUP in terms of accuracy in all scenarios investigated, which was consistent with previous studies [27,[38][39][40]. This could be explained by the fact that GBLUP utilized phenotypic information only from genotyped individuals, while ssGBLUP simultaneously used information from both genotyped and nongenotyped individuals to construct a genotype-pedigree relationship matrix (H matrix). Since nongenotyped individuals were related to individuals in the validation population on the pedigree, ssGBLUP took advantage of the phenotypic information of the whole population to obtain better prediction. However, in our research utilizing 5-fold CV and predicting younger animals, ssGBLUP produced only slightly higher accuracies for the two reproduction traits.
The smaller improvement of ssGBLUP may be due to the following reasons. (I) Weak relationships between the nongenotyped reference population and the genotyped candidates in the pedigree. In our study, only 143 of the 789 nongenotyped reference animals used by ssGBLUP had pedigree information, and only 46 sires and 40 dams were represented among the 2566 genotyped individuals, indicating that the relationship between nongenotyped reference animals and genotyped candidates was weak, making a small contribution to the genomic prediction. Li et al. [39] showed that the improvement of ssGBLUP over GBLUP in accuracy was almost entirely contributed by nongenotyped close relatives of the candidates. It can also be observed in Additional file 1: Fig. S1 that the greater the weight of the A matrix, the lower the accuracy, indicating that the information obtained from the pedigree is limited, preventing ssGBLUP from fully exerting its advantages. (II) The low heritability of TNB and NBA. In this study, the heritabilities of the two traits were both 0.12, which is generally consistent with other reports [27,41,42]; therefore, sufficient accuracy could not be achieved with the pedigree information. This was also confirmed by other studies, which showed that a certain improvement can be achieved by adding a smaller reference population for traits with medium or high heritability [2,43].
In this study, we investigated the performance of ML methods in genomic prediction and demonstrated their superiority compared to the classical GBLUP, ssGBLUP and Bayesian methods. Generally, the following characteristics of ML methods make them potentially attractive for genomic prediction. (I) Although ML methods generally require moderate fine-tuning of hyperparameters, the default hyperparameters usually do not perform poorly [34]. According to our results, ML methods gained advantages after tuning the parameters compared with using the default hyperparameters; in addition, without tuning hyperparameters, almost all ML methods in CV, and Adaboost.R2_KRR in predicting younger animals, performed better than GBLUP and BayesHE (Tables 3, 4, 5). (II) ML methods can handle situations where the number of parameters is larger than the sample size, and they are very efficient in the case of high-density genetic markers for GS [44]. (III) ML methods do not make distributional assumptions about the genetic determinism underlying the trait, enabling us to capture the possible nonlinear relationships between genotype and phenotype in a flexible way [44]; this differs from GBLUP and Bayesian methods, which assume that all marker effects follow the same normal distribution or apply different classes of shrinkage to different SNP effects. In addition, ML methods can take the correlation and interaction of markers into account as well, while linear models based on pedigree and genomic relationships may not provide a sufficient approximation of the genetic signals generated by complex genetic systems [16]. Consequently, for traits with a fully additive architecture, conventional linear models outperform ML models [45], but when traits are affected by nonadditive effects, especially epistasis, ML methods can achieve more accurate predictions [25]. These properties give ML methods a large advantage over GBLUP and BayesHE even though they use only genotyped animals.
In our experiments with 5-fold CV, our results showed that ML methods improved the prediction accuracy of the reproduction traits in the Chinese Yorkshire pig population. SVR, KRR, RF and Adaboost.R2 reflected the superiority of the ML methods, with average improvements over GBLUP of 20.5%, 21.0%, 14.1% and 20.5%, respectively. In predicting younger animals, our results also indicated that RF and Adaboost.R2_KRR gained advantages over the conventional methods. A previous study [46] also pointed out that, compared with SVR, KRR, and RF, Adaboost possessed the most potent prediction ability in the genomic prediction of economic traits in Chinese Simmental beef cattle. Abdollahi-Arpanahi et al. [47] reported that the gradient boosting method yielded the best prediction performance in comparison with GBLUP, BayesB, RF, convolutional neural networks (CNN) and multilayer perceptron (MLP) in the genomic prediction of the sire conception rate (SCR) of Holstein bulls. Azodi et al. [48] compared the performance of six linear and five nonlinear ML models using data on 18 traits from six plant species and found that no one algorithm performed best across all traits, while ensemble learning performed consistently well.
In 5-fold CV, Adaboost.R2 and RF did not show the expected advantages of ensemble learning over the single learners (SVR and KRR). For Adaboost.R2, this was mainly because SVR and KRR already exert strong prediction abilities, which may limit the benefit of boosting; in addition, owing to the slow tuning process of Adaboost.R2, we did not precisely tune its hyperparameters, resulting in lower prediction accuracy than SVR and KRR. For RF, prediction accuracy is mainly affected by the number and maximum depth of the decision trees [46], but weighing the practical feasibility of RF, it was impractical to precisely tune the number of trees due to the slow tuning process. We obtained only approximate hyperparameters, so the most ideal RF model was not trained, further compromising its performance. In predicting younger animals, Adaboost.R2 and RF were precisely tuned based on the hyperparameter ranges from CV, resulting in their dramatic improvement compared with SVR and KRR. Our results imply that ensemble learning is helpful to improve genomic prediction. Recently, another type of ensemble learning based on a hierarchical model has also demonstrated advantages in genomic selection: Liang et al. [49] developed a stacking ensemble learning framework (SELF) that integrated SVR, KRR, and ENET to perform genomic prediction and showed excellent performance.
Our results indicated that tuning hyperparameters is necessary for ML methods, confirming that ML algorithms are sensitive to user-defined parameters during the training phase [37]. After tuning the hyperparameters, the average improvement over the default hyperparameters was 14.3% in CV and 21.8% in genomic prediction of younger individuals. The ML methods with optimal hyperparameters generally outperformed GBLUP and the Bayesian methods, whereas with default hyperparameters they performed only comparably with GBLUP and BayesHE. On the other hand, our results also showed that the optimal hyperparameters depend on the characteristics of the traits, datasets, etc. When the optimal hyperparameters obtained in CV were used in predicting younger animals, the prediction accuracies of all ML methods decreased compared with their performance with default parameters (Additional file 1: Table S1). In CV, many replicates were used for tuning, and the optimal hyperparameters were easily obtained for SVR and KRR due to their fast computation, while in predicting younger individuals the hyperparameters were tuned based on only one genomic prediction, which may not be sufficient to exert the generalization performance of SVR and KRR, leading to their relatively poorer prediction ability.
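The tuning step discussed above can be expressed as an ordinary cross-validated grid search. The sketch below shows this for KRR; the grids, data dimensions and simulated phenotype are hypothetical and only illustrate the procedure, not the grids actually used in the study.

```python
# Sketch of hyperparameter tuning for KRR on genotype data (hypothetical grids).
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(1)
X = rng.integers(0, 3, size=(300, 1000)).astype(float)   # toy 0/1/2 genotypes
y = X[:, :20] @ rng.normal(size=20) + rng.normal(size=300)

grid = {
    "alpha": [0.01, 0.1, 1.0, 10.0],   # shrinkage (the lambda discussed in the text)
    "gamma": [1e-4, 1e-3, 1e-2],       # RBF kernel width
}
search = GridSearchCV(KernelRidge(kernel="rbf"), grid, cv=5, scoring="r2")
search.fit(X, y)
print("best hyperparameters:", search.best_params_)
print("best CV R^2:", round(search.best_score_, 3))
```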
Moreover, our results indicated that optimal hyperparameters may reduce the risk of overfitting (Tables 3, 4 and 5), which is a key element for the quality of the final predictions [50]. In this study, the different ML models control overfitting with different parameters. For example, SVR mainly balances model complexity against fault tolerance through the regularization parameter C, thereby controlling the degree of overfitting. KRR mainly tunes the hyperparameter λ that controls the amount of shrinkage to reduce noise, thereby controlling overfitting. For RF, the tendency to overfit can be reduced by adding decision trees, owing to bagging and random feature selection, and the bias can be reduced by increasing the depth of the decision trees. Adaboost is an iterative algorithm in which each iteration reweights the samples according to the results of the previous iteration; thus, as the iterations continue, the bias of the model is continuously decreased. Accordingly, the tuning process highlights the flexibility of ML and increases the advantages of ML methods over conventional genomic selection methods.
Therefore, it is crucial to fine-tune the hyperparameters during the training phase whenever the dataset changes [16, 37, 48]. Meanwhile, it should be noted that, as discussed above, the default hyperparameters usually do not perform poorly, whereas failure to find suitable hyperparameters may greatly reduce the prediction performance of ML methods [46]. If hyperparameter optimization can be automated during ML operation, it will greatly improve the efficiency of tuning and greatly broaden the application of ML methods in genomic prediction.
Conclusions
In this study, we compared four ML methods (SVR, KRR, RF and Adaboost.R2) with GBLUP, ssGBLUP and BayesHE to explore their efficiency in genomic prediction of reproduction traits in pigs. We compared the prediction accuracy, unbiasedness, MSE, MAE and computation time of the different methods through 20 replicates of 5-fold CV and genomic prediction of younger animals. Our results showed that ML methods possess significant potential to improve genomic prediction over that obtained with GBLUP and BayesHE. In 5-fold CV, ML methods outperformed the conventional methods in all scenarios, yielding higher accuracy and smaller MSE and MAE, while in genomic prediction of younger animals, RF and Adaboost.R2 performed better than GBLUP and BayesHE; ssGBLUP was comparable with RF, and Adaboost.R2_KRR was overall better than ssGBLUP. Among the ML methods, Adaboost.R2_KRR consistently performed well in our study. Our findings also demonstrated that tuning hyperparameters is necessary for ML methods, and that the optimal hyperparameters depend on the characteristics of the traits, datasets, etc.
"Agricultural and Food Sciences",
"Computer Science"
] |
Parameter-free hybrid functional based on an extended Hubbard model: DFT+U+V
In this article, we propose an energy functional at the level of DFT+U+V that allows us to compute self-consistently the values of the on-site interactions, the Hubbard U and Hund J, as well as the intersite interaction V. This functional extends the previously proposed ACBN0 functional [Phys. Rev. X 5, 011006 (2015)]. We show that this ab initio and self-consistent pseudo-hybrid functional yields improved electronic properties for a wide range of materials, ranging from sp materials to strongly-correlated materials. This functional can also be seen as an alternative, general and systematic way to construct parameter-free hybrid functionals, based on the extended Hubbard model and a selected set of Coulomb integrals, and might be used to propose novel approximations. By extending the DFT+U method to materials where strong local and nonlocal interactions are relevant, this work opens the door to the ab initio study of the electronic, ionic and optical properties of a larger class of strongly correlated materials, in and out of equilibrium.
I. INTRODUCTION
During the last few decades, density functional theory (DFT) has emerged as one of the most reliable and efficient numerical methods to simulate a wide range of materials. However, it is well known that the most commonly employed local and semilocal functionals suffer from many problems, in particular the so-called "delocalization problem", which prevents using DFT with the usual functionals on materials where strong local electron-electron interactions are at play [1,2]. More advanced functionals, such as some based on the meta-generalized gradient approximation, or hybrid functionals, can solve some of these problems, but are still not ideal for strongly-correlated systems. In order to overcome this problem, one well-established method is to perform dynamical mean-field theory (DMFT) calculations [3,4], possibly using the outcome of a DFT calculation, a method usually referred to as DFT+DMFT, which has become the state-of-the-art method to treat strongly correlated materials [5]. In this framework, the effective electronic parameter describing the local interaction, the Hubbard U, is computed within the framework of the constrained random-phase approximation (cRPA) [6-9].
As an alternative effective approach, the DFT+U method, originally proposed by V. Anisimov, A. Lichtenstein, and coworkers [1,10-12], has become a well-established and successful way to improve the treatment of correlated solids upon DFT, without the numerical burden of DFT+DMFT or GW+DMFT. In order to correct the over-delocalization of the electrons, it was proposed to include an energy penalty U for the localized 3d or 4f orbitals, in the spirit of the mean-field Hubbard model [1,2,10-12]. The success of the DFT+U method mainly originates from the simplicity of the method, its relatively low computational cost, and the fact that it can predict the proper magnetic ground state of charge-transfer and Mott insulators [1]. Of course, due to the strong simplifications involved in the DFT+U method, it has some intrinsic deficiencies, such as yielding infinite lifetimes for quasi-particles, or opening the bandgap only by inducing long-range order in magnetic materials. The DFT+U approach is also not applicable to some really strongly-correlated systems, and for these systems one needs to go beyond DFT+U and use DMFT or cluster-DMFT frameworks. In this work, we focus on materials with not-so-strong correlations, but correlated enough that the standard functionals do not capture the right correlation effects. The DFT+U approach remains very attractive when it comes to the calculation of larger systems, such as twisted bilayer systems with small twist angles [13], or for out-of-equilibrium situations, using real-time TDDFT+U [14,15]. It also improves the description of the optical properties of some correlated materials within linear response [16].
Here, we develop an efficient numerical approach tailored towards materials for which not only local correlations are important, but also nonlocal correlations, i.e., a strong interaction between neighboring localized electrons. One example of such systems are charge-ordering insulators, such as Fe3O4 [17], for which an electron is delocalized over two sites and hence the Mott-Hubbard localization cannot occur. Nonlocal interactions also play a key role in low-dimensional systems, such as ad-atoms on Si(111) surfaces [18], or in sp electron systems such as graphite and graphene [19-21]. Finally, the role of the intersite interaction is strongly debated for high-Tc superconductors and dictates many of their properties [22]. In all these systems, the intersite interaction plays a decisive role, and the related self-interaction error contained in (semi-)local functionals of DFT crucially hampers the capability of these functionals. The low-energy physics of these systems is well described by an extended Hubbard model. In particular, if we only account for the charge interaction between neighboring sites, the extended Hubbard Hamiltonian reads

$$\hat{H} = -\sum_{\langle i,j\rangle,\sigma} t_{ij}\, \hat{c}^{\dagger}_{i\sigma}\hat{c}_{j\sigma} + U \sum_{i} \hat{n}_{i\uparrow}\hat{n}_{i\downarrow} + \frac{1}{2}\sum_{i\neq j,\,\sigma\sigma'} V_{ij}\, \hat{n}_{i\sigma}\hat{n}_{j\sigma'},$$

where σ denotes the spin index, t_ij are the hopping matrix elements, U represents the on-site interaction, and V_ij are the nonlocal Coulomb matrix elements between the neighboring sites i and j.
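To make the competition between U and V concrete, the sketch below diagonalizes a two-site, two-electron extended Hubbard model in the Sz = 0 sector; the values of t, U and V are illustrative only, and the matrix signs follow one consistent fermion-ordering convention (the spectrum is convention-independent).

```python
# Two-site extended Hubbard model in the Sz = 0, two-electron sector (toy values).
import numpy as np

t, U, V = 1.0, 4.0, 1.0   # hopping, on-site and intersite interaction (arbitrary units)

# Basis: |up+down, 0>, |0, up+down>, |up, down>, |down, up>
H = np.array([
    [U,   0.0, -t,  -t ],
    [0.0, U,   -t,  -t ],
    [-t,  -t,  V,   0.0],
    [-t,  -t,  0.0, V  ],
])

print("eigenvalues:", np.round(np.linalg.eigvalsh(H), 3))
# In the atomic limit (t = 0), doubly occupied configurations cost U and singly
# occupied ones cost V; as V approaches U they become degenerate, illustrating
# why a strong intersite interaction competes with Mott-Hubbard localization.
```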
Our goal is to derive the expression of an energy functional containing at the same time U and J, describing the multi-band on-site interaction, and the intersite interaction V, describing the charge interaction between the different atomic sites. The functional can be seen as a hybrid functional in which the Kohn-Sham orbitals are expanded in a basis of atomic orbitals, including the on-site terms and some of the intersite terms. At variance with most of the proposed hybrid functionals, we do not use a mixing parameter to determine the weight of the exchange interaction, but base our approach on the extended Hubbard model and the related DFT+U+V scheme, which allows us to propose a fully ab initio and self-consistent scheme yielding an estimate of the U, J, and V effective electronic parameters. Moreover, we use an approximate double-counting term, as commonly done for DFT+U, which does not require any parameter to be adjusted. The self-consistency in the calculation of the Hubbard U can be crucial in the case of transition-metal complexes [23], and we expect this to be equally true for the intersite interaction. In our approach, the on-site Hubbard U, Hund J, and the intersite interaction V are all evaluated at the same time, ab initio and self-consistently, to ensure the consistency of our approach. It is important to stress that, except for the additional cost of computing more Coulomb integrals at the beginning of the calculation, our approach does not represent a major extra cost compared to the usual DFT+U and the ACBN0 functional [16,24]. This makes our method very attractive. Another important advantage of the method is the possibility to directly extend it to the time-dependent case, assuming only the adiabatic approximation, or to couple it to other degrees of freedom, such as phonons.
This paper is organized as follows. First, we briefly review the DFT+U and DFT+U+V methods in Sec. II. Then we present our generalization of the ACBN0 functional, the extended ACBN0 functional, in Sec. III. We then test our new functional on different systems and compare our results with prior works. Finally, we draw our conclusions in Sec. V.
II. DFT+U AND DFT+U+V
The DFT+U total energy can be written as

$$E_{\mathrm{DFT}+U} = E_{\mathrm{DFT}} + E_{ee} - E_{dc},$$

where E_ee is the electron-electron interaction energy, and E_dc accounts for the double counting of the electron-electron interaction already present in E_DFT. Although an exact form of the double-counting term was recently proposed in the context of DFT+DMFT [25], this double-counting term is not known in the general case, and several approximate forms have been proposed over the years. The E_ee and E_dc energies depend on the density matrix of a localized-orbital basis set {φ^{I,n,l}_m}, where the orbitals are attached to the site I. In the following, we refer to the elements of the density matrix of the localized basis as occupation matrices, and we denote them {n^{I,σ}_{mm'}}. Combining these two expressions, we obtain the E_U energy to be added to the DFT total energy, which depends only on an effective Hubbard U parameter,

$$E_U = \sum_{I,\sigma} \frac{U^{I}}{2}\, \mathrm{Tr}\!\left[ n^{I,\sigma}\left(1 - n^{I,\sigma}\right) \right],$$

where I is an atom index, n, l and m refer to the principal, azimuthal, and angular quantum numbers, respectively, and σ is the spin index. In the case of a periodic system, the occupation matrices n^{I,n,l,σ}_{mm'} are given by

$$n^{I,n,l,\sigma}_{mm'} = \sum_{n',\mathbf{k}} w_{\mathbf{k}}\, f^{\sigma}_{n'\mathbf{k}}\, \langle \psi^{\sigma}_{n'\mathbf{k}} | \hat{P}^{I,n,l}_{mm'} | \psi^{\sigma}_{n'\mathbf{k}} \rangle,$$

where w_k is the k-point weight and f^σ_{n'k} is the occupation of the Bloch state |ψ^σ_{n'k}⟩. Here, |φ^{I,n,l}_m⟩ are the localized orbitals that form the basis used to describe electron localization, and P̂^{I,n,l}_{mm'} is the projector associated with these orbitals, usually defined as P̂^{I,n,l}_{mm'} = |φ^{I,n,l}_m⟩⟨φ^{I,n,l}_{m'}|. The definition of the orbitals and of the projector will be discussed in more detail below. In the following, we omit the principal quantum number n for conciseness.
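As a toy numerical illustration of this rotationally invariant (Dudarev-type) correction, the sketch below evaluates E_U = (U/2) Tr[n(1 − n)] for a hypothetical 5×5 d-shell occupation matrix; the matrix and the U value are made up, and this is not the Octopus implementation.

```python
# Toy evaluation of the Dudarev-type DFT+U energy from an occupation matrix.
import numpy as np

rng = np.random.default_rng(2)
U_eff = 4.0  # effective Hubbard U in eV (illustrative)

def e_u(n_sigma, u):
    """E_U = (u/2) * Tr[n (1 - n)] for one spin channel; zero for idempotent n."""
    return 0.5 * u * np.trace(n_sigma @ (np.eye(len(n_sigma)) - n_sigma))

# Hypothetical Hermitian occupation matrix for a d shell, eigenvalues in (0, 1)
A = rng.normal(size=(5, 5)); A = (A + A.T) / 2
w, v = np.linalg.eigh(A)
occ = (v * (1 / (1 + np.exp(-w)))) @ v.T   # squash eigenvalues into (0, 1)

print("E_U per spin channel (eV):", round(e_u(occ, U_eff), 4))
# Fully occupied or empty orbitals (idempotent n) give E_U = 0: the penalty
# pushes fractional occupations toward integer filling.
```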
Recently, an extension of the DFT+U method was proposed by V. Leiria Campo Jr and M. Cococcioni [27] in order to account for the intersite electronic interaction V, in the spirit of the extended Hubbard Hamiltonian [22], accounting only for the charge interaction between neighboring sites. In a similar spirit, Belozerov and coworkers proposed an LDA+DMFT+V approach that they applied to the monoclinic phase of VO2 [28]. The Hubbard U is usually defined as the average of the on-site interactions of the localized orbitals,

$$U^{I} = \frac{1}{(2l+1)^2} \sum_{m,m'} (\phi^{I}_{m}\phi^{I}_{m'} | V_{ee} | \phi^{I}_{m}\phi^{I}_{m'}),$$

where V_ee is the screened Coulomb interaction. In a similar way, the most natural definition of the averaged intersite interaction V is [27]

$$V^{IJ} = \frac{1}{(2l_I+1)(2l_J+1)} \sum_{m,m'} (\phi^{I}_{m}\phi^{J}_{m'} | V_{ee} | \phi^{I}_{m}\phi^{J}_{m'}).$$

For conciseness, we omit below the quantum numbers in our notation and refer in the following to this quantity as V^{IJ}. The corresponding E_ee and E_dc expressions for DFT+U+V [27] involve sums Σ*_{IJ}, where the asterisk indicates that for each atom I the sum runs over its neighboring atoms J, and use a generalization of the occupation matrix,

$$n^{IJ,\sigma}_{mm'} = \sum_{n,\mathbf{k}} w_{\mathbf{k}}\, f^{\sigma}_{n\mathbf{k}}\, \langle \psi^{\sigma}_{n\mathbf{k}} | \phi^{J}_{m'} \rangle \langle \phi^{I}_{m} | \psi^{\sigma}_{n\mathbf{k}} \rangle,$$

for which it is clear that n^{II,σ}_{mm'} is the usual occupation matrix n^{I,σ}_{mm'} defined above. Combining the previous expressions, we arrive at the energy E_UV that must be added to the DFT energy,

$$E_{UV} = \sum_{I,\sigma} \frac{U^{I}}{2}\, \mathrm{Tr}\!\left[ n^{II,\sigma}\left(1 - n^{II,\sigma}\right) \right] - \sum_{IJ,\sigma}^{*} \frac{V^{IJ}}{2}\, \mathrm{Tr}\!\left[ n^{IJ,\sigma} n^{JI,\sigma} \right].$$

This energy is the expression for DFT+U+V proposed in Ref. 27, and it is invariant under rotation of the orbitals of the same atomic site. It is a generalization of the work of Dudarev et al. [26], in which the double-counting expression generalizes the fully-localized-limit (FLL) double counting of DFT+U. The motivation for this specific expression for the intersite interaction was given in Ref. 27 and is therefore not discussed here. Below, we show how it is possible to extend the work of Agapito et al. [24] to evaluate the average intersite interaction V ab initio and self-consistently, in the form of a pseudo-hybrid calculation. In Ref. 24, an approximation to the electron interaction energy named the ACBN0 functional was proposed, allowing for an efficient ab initio evaluation of the DFT+U energy, which can be seen as a screened Hartree-Fock evaluation of the on-site U, or equally as a pseudo-hybrid functional in which the (screened) Hartree-Fock energy is included only on a selected localized subspace. We propose here an extension of this approach, which includes not only the on-site interaction but also the charge exchange between two sites. Below, we refer to this new functional as the extended ACBN0 functional.
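The averages above are simple means over orbital pairs. As a rough numerical sketch (the Coulomb tensors and shell sizes are hypothetical stand-ins for the integrals computed from the pseudo-atomic orbitals), one can form them as follows:

```python
# Averaged on-site U and intersite V from a (hypothetical) Coulomb-integral tensor.
import numpy as np

rng = np.random.default_rng(3)
n_m_I, n_m_J = 5, 3   # e.g. a d shell on atom I and a p shell on a neighbor J

# W_onsite[m, m'] = (phi^I_m phi^I_m' | V_ee | phi^I_m phi^I_m'), toy values in eV
W_onsite = 3.0 + 0.5 * rng.random((n_m_I, n_m_I))
# W_inter[m, m'] = (phi^I_m phi^J_m' | V_ee | phi^I_m phi^J_m')
W_inter = 1.0 + 0.2 * rng.random((n_m_I, n_m_J))

U_I = W_onsite.mean()    # 1/(2l+1)^2 times the sum over m, m'
V_IJ = W_inter.mean()    # 1/((2l_I+1)(2l_J+1)) times the sum over m, m'
print(f"U_I = {U_I:.2f} eV, V_IJ = {V_IJ:.2f} eV")
```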
In our generalized functional, the electron interaction energy [Eq. (9)] is restricted to the charge interaction between two sites, neglecting other two-site interactions as well as three- and four-site interactions, since the retained term is most likely the largest contribution to the energy [27]. In Eq. (9), the renormalized occupation matrices n̄^{I,σ}_{mm'} and renormalized occupations N̄^{I,σ}_{ψ_nk} are respectively given by

$$\bar{n}^{I,\sigma}_{mm'} = \sum_{n,\mathbf{k}} w_{\mathbf{k}}\, f^{\sigma}_{n\mathbf{k}}\, \bar{N}^{I,\sigma}_{\psi_{n\mathbf{k}}}\, \langle \psi^{\sigma}_{n\mathbf{k}} | \phi^{I}_{m'} \rangle \langle \phi^{I}_{m} | \psi^{\sigma}_{n\mathbf{k}} \rangle, \qquad \bar{N}^{I,\sigma}_{\psi_{n\mathbf{k}}} = \sum_{I'} \sum_{m} \langle \psi^{\sigma}_{n\mathbf{k}} | \phi^{I'}_{m} \rangle \langle \phi^{I'}_{m} | \psi^{\sigma}_{n\mathbf{k}} \rangle,$$

where the sum over I' runs over all orbitals of the system having the quantum numbers n and l and attached to atoms of the same type as the atom I; this quantity is similar to the Mulliken charge of atom I [24]. We further define a generalized renormalized occupation matrix,

$$\bar{n}^{IJ,\sigma}_{mm'} = \sum_{n,\mathbf{k}} w_{\mathbf{k}}\, f^{\sigma}_{n\mathbf{k}}\, \bar{N}^{I,\sigma}_{\psi_{n\mathbf{k}}} \bar{N}^{J,\sigma}_{\psi_{n\mathbf{k}}}\, \langle \psi^{\sigma}_{n\mathbf{k}} | \phi^{J}_{m'} \rangle \langle \phi^{I}_{m} | \psi^{\sigma}_{n\mathbf{k}} \rangle,$$

in which we introduce an ansatz similar to the one of the original ACBN0 paper, namely a degree of screening in this n̄^{IJ,σ}_{mm'}. It is clear that for the on-site case this expression reduces to the expression of Ref. 24. Let us comment on the motivation for this renormalization of the generalized occupation matrix. In the case of the on-site interaction, the original motivation was that if a wavefunction is fully delocalized, i.e., not occupying the localized subspace, it should give a U of zero, and the ACBN0 functional should reduce to the (semi-)local functional used to describe the itinerant electrons. By including the weighting coefficient, which is the Mulliken charge of the set of orbitals, this effect is enhanced [24]. Here, as we consider an intersite interaction, the idea is the opposite. In the atomic limit, for which the wavefunctions are not delocalized over different sites, there should be no intersite interaction. This is well seen from the fact that for a Coulomb integral of the form (φ^I_m φ^I_m | φ^J_m φ^J_m) to be nonzero, there must be an overlap of the charge densities originating from the two atomic sites. Hence, if a wavefunction is not delocalized over the two considered sites, the product N̄^{I,σ}_{ψ_nk} N̄^{J,σ}_{ψ_nk} vanishes, and the wavefunction does not contribute to the intersite V. Importantly, thanks to this weighting coefficient, a wavefunction fully localized on one site does not screen the interaction. In the context of cRPA, the screening is given by the "rest" of the system, i.e., the electrons in the localized subspace do not contribute to the screening. This property is preserved by the renormalization factor of the ACBN0 functional, as well as in our extended ACBN0 functional, which at least partly explains the success of this approach.
From Eq. (9), the effective intersite V̄^{IJ}, for J a neighbor of the atom I, follows in direct analogy with the effective U of the ACBN0 functional. This expression is the main result of this section. With it, one can evaluate the intersite interaction ab initio and self-consistently, similarly to what is done for the effective U in the ACBN0 functional. We also derived the expression of the extended ACBN0 functional for the case of noncollinear spins, as presented in Appendix A.
So far, we have remained elusive about the orbitals used to define the localized subspace. In the Octopus code [16], we construct the localized orbitals {φ^{I,n,l}_m} by taking the radial part of the pseudo-atomic wavefunctions given by the pseudopotential files and multiplying it by the usual spherical harmonics, in order to obtain pseudo-atomic orbitals. More precisely, in the case of periodic solids, we use in all the above equations not the isolated localized orbitals but the Bloch sums of the localized orbitals, which read

$$\phi^{I,n,l}_{m,\mathbf{k}}(\mathbf{r}) = \frac{1}{\sqrt{N}} \sum_{\mathbf{R}} e^{i\mathbf{k}\cdot\mathbf{R}}\, \phi^{I,n,l}_{m}(\mathbf{r}-\mathbf{R}),$$

which amounts to introducing a phase factor when the sphere on which the atomic orbital is defined crosses the border of the real-space simulation box. Here, N corresponds to the number of unit cells forming the periodic crystal, i.e., the number of k-points of the simulation, and R runs over the lattice vectors. The projection onto Kohn-Sham states selects a single momentum, which explains why the k-point index was not specified in the above equations. Also note that the normalization factor in the Bloch sum drops out when we use the periodicity of the crystal in computing the sum over the entire crystal, which reduces to a sum over the unit cell without the normalization factor.
In order to be able to treat various types of solids, including weakly correlated solids such as Si, we also implemented the Löwdin orthonormalization procedure, which transforms the set of non-orthogonal localized orbitals {φ^{I,n,l}_m} into an orthonormal set of localized orbitals,

$$\tilde{\phi}_{i,\mathbf{k}} = \sum_{j} \left( S_{\mathbf{k}}^{-1/2} \right)_{ji} \phi_{j,\mathbf{k}},$$

where the i and j indices run over all the considered orbitals, and the overlap matrix for the set of considered orbitals is (S_k)_{ij} = ⟨φ_{i,k} | φ_{j,k}⟩. Importantly, using these orthogonalized orbitals, the trace of the on-site occupation matrix is consistent with the Mulliken population analysis and is exactly the same as that obtained with the dual projector defined in Ref. 29 for non-orthogonal basis sets. Due to the periodicity of the Bloch sums of the localized orbitals, we only need to compute projections on the orbitals of the atoms inside the simulation box, irrespective of the number of neighboring atoms considered. This is also the case for a simpler DFT+U calculation, making a DFT+U+V calculation only mildly more expensive than a more standard DFT+U calculation, at variance with the original formulation of DFT+U+V, which requires the construction of larger supercells to include further neighbors [27]. We stress here that the orthonormal orbitals given by this Löwdin transformation are not the ones we use for computing the Coulomb integrals, as they are periodic orbitals, and Coulomb integrals computed using them would contain both on-site and intersite overlaps, which is not what we want here. For this reason, the Coulomb integrals are computed before performing the orthonormalization procedure, from the atomic orbitals of the pseudopotential. More precisely, each orbital is evaluated on a portion of the grid, and the Coulomb integrals are computed on the union of the two corresponding spheres, using a non-periodic Poisson solver. This is sketched in Fig. 1, in which the violet points correspond to the grid points obtained from the union of the two spherical meshes centered on two atoms (indicated by green crosses). These are the points we used in our implementation. Another possible choice would be a single large sphere centered at the midpoint between the two atoms, indicated in red in Fig. 1. This choice obviously leads to more grid points, and we found no difference between the two choices, as the atomic orbitals decay rapidly away from the atomic centers. Finally, we note that a formulation in terms of Wannier functions would look very much the same as the one presented here.
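A minimal numerical sketch of the Löwdin step, assuming a single k-point and real-valued orbitals sampled on a toy grid (the grid size and orbital count are arbitrary), reads:

```python
# Löwdin orthonormalization sketch: apply S^(-1/2) to a non-orthogonal orbital set.
import numpy as np

rng = np.random.default_rng(4)
phi = rng.normal(size=(200, 6))   # 6 localized orbitals on a toy real-space grid
S = phi.T @ phi                   # overlap matrix S_ij = <phi_i | phi_j>

w, v = np.linalg.eigh(S)                    # S is Hermitian positive definite
S_inv_sqrt = v @ np.diag(w ** -0.5) @ v.T
phi_ortho = phi @ S_inv_sqrt                # Löwdin-orthonormalized orbitals

print(np.allclose(phi_ortho.T @ phi_ortho, np.eye(6)))  # True: orthonormal set
```

The Löwdin choice (as opposed to, e.g., Gram-Schmidt) is the symmetric orthogonalization: it yields the orthonormal set closest, in a least-squares sense, to the original atomic orbitals, which is why the resulting occupations remain close to a Mulliken-type analysis.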
A. ZrSiSe
Nodal-line semimetals have received a lot of attention recently. Due to the vanishing density of states at the Dirac or Weyl points, the screening of the Coulomb interaction is altered, and the long-range Coulomb interaction is a crucial ingredient in the description of these materials [30]. Hence, nodal-line semimetals exhibit strong nonlocal correlations [31,32], which are not captured by a local Hubbard U as used in DFT+U. This therefore represents an interesting potential application of our extended ACBN0 functional. In order to benchmark our functional, we decided to investigate ZrSiSe, using hybrid-functional results as a reference for comparison. In Refs. 31 and 33, it was shown that hybrid-functional calculations with a fraction of exact exchange of 7% best reproduce the experimental results, compared with the standard fraction of 25% used in the HSE06 functional [34]. We employed the experimental lattice constant of 3.623 Å, sampled real space using a spacing of 0.3 bohr, and sampled the Brillouin zone using a 7 × 7 × 3 k-point grid. We considered localization of the d orbitals of Zr, included the interaction with the first nearest neighbors, and employed the local-density approximation for the DFT exchange-correlation functional; the resulting band structures are compared in Fig. 2. We found that DFT+U+V yields almost the same band dispersion as the hybrid functional close to the Weyl point, showing the validity of our functional in describing this material. We recently applied our functional to reproduce the measured angle-resolved photoemission spectrum of ZrSiSe [33], including spin-orbit coupling. Formulae for noncollinear spins are presented in Appendix A.
B. Graphene and graphite
Low-dimensional sp materials have received a lot of attention recently, as they display strong local and nonlocal Coulomb interactions [20,21]. In Ref. 20, the authors reported cRPA calculations of the on-site and intersite interactions for graphene at half-filling. Similar calculations were performed for graphene and graphite in Ref. 21. In order to illustrate the flexibility and robustness of our implementation, we compute the intersite interactions for graphene, treated here with mixed periodic boundary conditions, and graphite, which is a fully periodic material. We use a 15 × 15 k-point grid to sample the two-dimensional Brillouin zone of graphene and a 12 × 12 × 4 grid in the case of graphite. We employ an in-plane lattice constant of 2.47 Å and an out-of-plane constant of 6.708 Å for graphite. The real-space grid is sampled with a spacing of 0.45 bohr, and we employed the norm-conserving Pseudodojo pseudopotentials [35]. We compare the previously reported values to the ones obtained with our functional in Tab. I. A major difference is that in the present calculations all p orbitals are considered, whereas prior studies only considered p_z orbitals. As a result, both the on-site and intersite interactions are found to be smaller in our case than in previous works [20,21], which is fully compatible with considering more orbitals in the localized subspace, which, in the language of cRPA, reduces the screening from the rest of the system.

Table I. Calculated values of the on-site (U) and intersite (V_0i) interactions for graphite, graphene and benzene. Values are given in eV. The notation V_0i denotes the intersite interaction between an atom in the unit cell and its i-th neighbor.
In the case of graphite, two values are indicated corresponding to the two sublattices of the system.
Already in the 1950s, in the context of π-conjugated systems, Pariser, Parr, and Pople proposed a one-band model with nearest-neighbor hopping and intersite interaction. This model has been widely studied, and a few expressions have been proposed to interpolate the Coulomb interaction between the 1/r behavior at long distance and the short-range on-site value, which is the Hubbard U. In order to get more physical insight into the values obtained by our method, we compare them to the popular Ohno interpolation formula, which reads

$$V_{ij} = \frac{U}{\sqrt{1 + \left( \dfrac{\epsilon\, U\, r_{ij}}{e^2/(4\pi\varepsilon_0)} \right)^{2}}}.$$

In this expression, the intersite interaction between atoms i and j separated by the distance r_ij is estimated from the on-site interaction U and an effective dielectric constant ε, interpolating between U at short range and the screened 1/r behavior at long range.
The fact that we can fit the values of the intersite interaction with the Ohno potential indicates that the Coulomb integrals are properly computed. The most interesting point is that the effective dielectric constant ε = 2.15 that fits the calculated intersite interaction values agrees reasonably well with the effective dielectric constant of graphite of 2.5 found experimentally or from cRPA calculations [21]. This shows that the functional correctly describes the screening at play in graphite. Fig. 4 shows the band structure of graphene using the extended ACBN0 functional, including both s and p orbitals, orthogonalized using the Löwdin orthonormalization, as explained above. The comparison of the band dispersion close to the K point (right panel of Fig. 4) shows that the extended ACBN0 functional yields a Fermi velocity quite close to the GW one around the K point, compared with the LDA calculation, demonstrating the improvement brought by our functional. We also tested our functional on bulk silicon, taking the lattice constant to be the experimental one of a = 5.431 Å. Overall, our results are found to be in reasonable agreement with the values of Ref. 27. It is worth noting that the band structure of silicon computed from LDA+U+V almost perfectly matches the one obtained from linear-response values. This shows that, whereas the values of U and V are not fully transferable from one implementation to another, the observables obtained from the two approaches, the extended ACBN0 functional and linear response, are very similar. We checked that using the generalized gradient approximation instead of LDA for the exchange-correlation part does not lead to a significant change in the calculated values of the effective electronic parameters.
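As a sketch of how such a fit can be performed, the snippet below extracts an effective dielectric constant from intersite-interaction values with the Ohno form given above; the on-site U, the neighbor distances and the synthetic V values are all illustrative stand-ins, not the numbers computed in this work.

```python
# Fit an effective dielectric constant to intersite interactions via the Ohno form.
import numpy as np
from scipy.optimize import curve_fit

E2 = 14.3996  # e^2 / (4 pi eps0) in eV * Angstrom

def ohno(r, eps, U):
    """Ohno interpolation: U at r = 0, screened 1/r behavior at long range."""
    return U / np.sqrt(1.0 + (eps * U * r / E2) ** 2)

U0 = 6.0                                    # hypothetical on-site value in eV
r = np.array([1.42, 2.46, 2.84, 3.75])      # carbon neighbor distances (Angstrom)
V = ohno(r, 2.15, U0) + 0.05 * np.random.default_rng(5).normal(size=r.size)

(eps_fit,), _ = curve_fit(lambda rr, eps: ohno(rr, eps, U0), r, V, p0=[2.0])
print(f"fitted effective dielectric constant: {eps_fit:.2f}")
```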
V. CONCLUSION
In conclusion, we presented in this work an efficient method to compute ab initio and self-consistently the effective electronic parameters U, J, and V. We implemented DFT+U+V and our novel energy functional in the real-space TDDFT code Octopus. We presented ground-state calculations showing that our implementation yields results in good agreement with those previously reported in the literature. We applied our functional to a nodal-line semimetal, ZrSiSe, showing that it produces results very similar to those obtained from a far more expensive hybrid-functional calculation. Applied to low-dimensional sp compounds, our functional gives results in qualitative agreement with cRPA calculations. Finally, we tested our functional on bulk silicon and found that it reproduces well the linear-response supercell results of Ref. 27. Let us comment on the choice of localized orbitals. In this work, we employed pseudo-atomic orbitals obtained from the pseudopotentials; however, our approach is not limited to pseudopotential-based codes and can straightforwardly be used with any type of localized orbitals, such as, for instance, Wannier orbitals. Finally, we note that in this work we followed Ref. 27 and only considered specific intersite interactions. Determining how reliable and general this approximation is will require further investigation and would require extending the presented energy functional to include other intersite interactions. The method presented here is, in this respect, general enough that one could easily extend it to include other interaction terms. The extension of this functional to the time-dependent case, or to compute forces and vibrational properties of solids, will be investigated in future work.
Appendix A: Interaction energy for noncollinear spin systems. For the intersite interaction, the double-counting term is constructed in analogy with the collinear case; putting everything together, one obtains the rotationally-invariant form of the noncollinear extended ACBN0 energy.
"Physics"
] |
Heterogeneous virulence of pandemic 2009 influenza H1N1 virus in mice
Background Understanding the pathogenesis of influenza infection is a key factor in the prevention and control of future outbreaks. Pandemic 2009 influenza H1N1 infection, although frequently mild, led to a severe and fatal form of disease in certain cases, making its virulence debatable. Much effort has been made toward explaining the determinants of disease severity; however, no definitive explanation has been established. Results This study presents the heterogeneous virulence of clinically similar strains of pandemic 2009 influenza virus in human alveolar adenocarcinoma cells and mice. The viruses were obtained from patients who were admitted to a local hospital in China with a similar course of infection and recovered. The A/Nanchang/8002/2009 and A/Nanchang/8011/2009 viruses showed efficient replication and high lethality in mice, while infection with A/Nanchang/8008/2009 was not lethal, with impaired viral replication, minimal pathology and modest proinflammatory activity in the lungs. Sequence analysis revealed prominent differences between the polymerase subunits (PB2 and PA) of the viral genomes that might correlate with their different phenotypic behavior. Conclusions The study confirms that biological heterogeneity, linked with the extent of viral replication, exists among pandemic H1N1 strains and may serve as a benchmark for future investigations of influenza pathogenesis.
Background
Following the emergence of the initial few cases in Mexico and California in 2009, the world faced another pandemic caused by the novel influenza A H1N1 virus (pdm H1N1 hereafter), which carried a unique combination of gene segments from four different lineages [1]. The virus spread so rapidly that within two months of the first confirmed report, the World Health Organization (WHO) declared a level VI global emergency alert. Epidemiologic observations affirm the presence of seasonal flu imprints in pandemic H1N1 strains, such as a high attack rate with mild presentation and self-limiting infection in the majority of human cases [2]; however, some cases led to severe respiratory illness and eventually death [3,4]. The absence of known virulence markers, such as a lysine (K) residue at position 627 in PB2 and the multi-basic cleavage site in hemagglutinin (HA), as well as the truncated PB1-F2 and NS1 proteins [1], supports the modest morbidity profile of pandemic H1N1 viruses. In addition, several in vivo studies conducted in ferrets and mice confirm the subtle disease profile of pdm H1N1 despite its efficient replication in the lower respiratory tract of the host, coupled with increased levels of innate and adaptive immune mediators [5-7]. Severe and fatal human cases are reasonably explained by the presence of underlying host illness and bacterial co-infections that dysregulate host immune functions and consequently weaken the host's ability to control viral replication [8-10]. Although one can invoke the important role of host-associated factors in disease outcome, the reason pdm H1N1 behaved differently in different humans remains elusive. Recently, Safronetz et al. reported the diversified behavior of pdm H1N1 strains of Mexican origin in cynomolgus macaques, indicating a possible link between viral heterogeneity and the degree of disease severity [11].
Several laboratory animals, including mice, ferrets, cotton rats and nonhuman primates, have been successfully used as models of influenza infection [12]. Among them, ferrets are considered the best because of their natural susceptibility to the virus and a pathogenesis similar to that in humans [13]; however, their use in large-scale screening is not feasible. Small laboratory animals, particularly mice, have shown promising potential for virological studies. We have previously described the infection of the prototypic pdm H1N1 strain A/Mexico/4108/2009 in mice, with significant viral replication and marked lung pathology [14].
The high magnitude of the 2009 pandemic and the potential risk of future outbreaks necessitate the evaluation of newer viral strains to resolve ambiguities about the severity of infection. In this study, we evaluated three different H1N1 influenza viruses that were isolated from adult patients admitted to a local hospital in the southern part of China during the second pandemic wave. Interestingly, these strains exhibited mild to severe pathogenic potential in terms of viral replication, disease progression, and induction of proinflammatory responses in vitro and in vivo. Sequence analysis reveals that mutations in the polymerase subunits (PB2 and PA) might correlate with the phenotypic traits of the viruses. This study documents the co-circulation of heterogeneous pdm H1N1 strains during this period, a factor that cannot be neglected in evaluating the pathogenesis of 2009 pandemic influenza infection.
Differential response of pdm H1N1 strains in A549 cells
To better understand the pathogenesis profile of local isolates, we randomly selected three strains of pdm H1N1, namely A/Nanchang/8002/2009 (NC2), A/Nanchang/8008/2009 (NC8), and A/Nanchang/8011/2009 (NC11). All were isolated from adult patients with severe clinical profiles and without underlying illnesses (Table 1). First, we evaluated the replication and inflammatory response of these strains in human adenocarcinoma alveolar epithelial (A549) cells inoculated with each viral strain at a multiplicity of infection (MOI) of 2. Of the three, two pdm H1N1 strains (NC2 and NC11) exhibited severe cytopathic effects while, unexpectedly, NC8 caused only mild infection in A549 cells. Viral titration of culture supernatants was performed in MDCK cells at different time points. Kinetics studies showed that NC2 and NC11 grew to high titers within 24 h, which further increased by 48 h, while NC8 showed poor replication capacity throughout the study period (P < 0.0001). Impaired replication of NC8 was also verified by significantly lower viral mRNA and protein levels over the 24 h post infection. Confocal laser fluorescence microscopy showed that NC8 was capable of infecting cells, but its replication was delayed compared with that of NC2 (Figure 1). Transcriptional analysis of major inflammatory mediators was performed by real-time PCR at four time points over the 48 h post infection. We observed that NC8 was unable to mount an efficient inflammatory response as a consequence of its poor replication. As shown in Figure 2, CXCL10 expression was muted after NC8 infection compared with NC2 and NC11 throughout the study period (P < 0.0001). In addition, cellular interferon responses, including interferon (IFN) γ, IFN αA2 (P < 0.0001), and IL29 (P = 0.0068), were also weak after NC8 infection compared with NC2 and NC11; however, in the case of IFN-γ, a significant difference was observed only at 24 h (P < 0.05). A similar difference was observed for IL6 at 24 h post infection (P = 0.0176). Conclusively, immune mediators peaked between 8 and 24 h after NC2 and NC11 infection, while minimal variations in gene expression were observed in NC8-infected cells.
The above-mentioned results indicated the relatively poor replication ability of the NC8 strain in mammalian cells, coupled with a weak inflammatory response, which prompted us to scrutinize how NC8 behaves in an avian environment. In contrast with A549 cells, a different situation was observed in embryonated chicken eggs, in which the virus titers of NC8 were higher than those of NC2 and NC11 (Table 1).
Differential pathogenesis of pdm H1N1 strains in mice
We next evaluated the pathogenesis of the NC2, NC8 and NC11 viruses in C57/BL6 mice (Figure 3). Each group of animals was infected intranasally with the same dose of influenza virus and observed for weight loss and lethality for up to 14 days. Most strikingly, these viral strains, with apparently the same clinical profile in humans, behaved differently in C57/BL6 mice in terms of lethality (P = 0.0007) and weight loss (P = 0.0007). Using 20% weight loss as the humane endpoint, NC2 was 100% lethal within 4 d.p.i. at 10^5 EID50. At the same viral dose, NC11 caused 90% lethality within 8 d.p.i., although no significant difference was noted in median death day (MDD), number of survivors or weight-loss kinetics between the two viral strains. On the other hand, the same infection dose of NC8 did not cause death in animals, while the virus at a 10-fold higher concentration (10^6 EID50) resulted in only 30% lethality (P < 0.001), coupled with a significant dichotomy in the clinical course of infection within the NC8-infected group. Weight losses were milder and delayed compared with those of the other groups (P < 0.001). The weight-loss kinetics of individual animals infected with each virus are given in Additional file 1. Taken together, these wild-type pdm H1N1 strains showed heterogeneous behavior in C57/BL6 mice.
To confirm the heterogeneous nature of NC2 and NC8, viral infection was further established in C57/BL6 and BALB/C mice in a dose-dependent manner. In addition, the survival curves of C57/BL6 and BALB/C mice were compared, since different mouse strains might differ in susceptibility to wild-type strains. C57/BL6 mice appeared to be more susceptible to viral infection than BALB/C; however, the trend was clearer at low viral doses. In general, NC2-infected mice faced 100% lethality irrespective of viral dose; however, they exhibited dose-dependent kinetics in weight loss and survival, with an extension of the MDD from day 3 to day 7 at high (10^5 EID50) and low (10^3 EID50) viral inocula, respectively. In contrast, NC8 was unable to mount a lethal infection irrespective of viral inoculum and animal strain, except for 30% lethality with modest weight loss in C57/BL6 mice infected with 10^6 EID50 (Figure 4). These observations clearly demonstrate the presence of two different virulence phenotypes of pdm H1N1 strains, of which one seems to be better adapted to mammalian hosts than the other.
Altered replication of viral strains in mice
We also determined the replication and organ distribution (viral loads) of the pdm H1N1 strains in different body tissues by cell culture. Lung homogenates of infected (10^5 EID50) C57/BL6 mice mimicked the viral replication profile of A549 cells at 1 d.p.i., with slightly enhanced viral titers for NC11 (Figure 5a); however, all three viral strains grew at the same rate at 3 d.p.i. (data not shown). The results indicate delayed replication of NC8, which might provide a chance for the host immune system to overcome the infection, eventually resulting in a non-lethal infection. Comparison of viral replication of NC2 in the lungs of BALB/C and C57/BL6 mice showed that NC2 at 10^5 EID50 replicated well in both mouse strains. To further understand the growth patterns of the NC2 and NC8 strains, we titrated lung homogenates of BALB/C mice infected with a dose series of the viruses. We found that NC8 was not able to replicate at day 1. Although day 3 showed signs of NC8 replication in the lungs, the viral titers were still significantly lower than those of NC2 (P < 0.002) (Figure 5). No extrapulmonary viral spread was evidenced. These results endorse the above observations and confirm that early viral replication contributes to the pathogenesis of pandemic H1N1 infection.

Figure 4. Log-rank sum tests show significant differences in mortality curves between NC2 and NC8 irrespective of inoculum and mouse strain (P < 0.0001). (a, c) Inter-group comparisons of weight loss show significant differences between the NC2 and NC8 groups (P < 0.0001) and between C57/BL6 and BALB/C: NC2_10^4 (P < 0.0001), NC2_10^5 (P < 0.0001). (b, d) Survival differences are also significant between C57/BL6 and BALB/C after NC2 infection: 10^4 (P < 0.0028), 10^5 (P = 0.0005), 10^6 (P = 0.0669).
Lung pathology
The extent of alveolar damage caused by NC2 and NC8 was assessed by histology over time. Figure 6 presents the comparison of haematoxylin and eosin (H&E)-stained infected lung tissues. In C57/BL6 mice, NC2 caused mild to moderate cellularity in the interstitial space at 1 d.p.i., whereas NC8 did not show any sign of inflammation (Figure 6a and b). Severe interstitial inflammation with damaged alveolar structure, moderate cellular exudates, and hemorrhage in the lumen and peribronchial spaces was noticed three days after NC2 infection (Figure 6c and d). Alveolar edema and distortion of the respiratory epithelium were also observed in NC2-infected BALB/C mice (Figure 6e). In contrast, inflammatory scores were much lower three days after NC8 infection, as shown in Figure 6f. The histological observations support the heterogeneous nature of NC2 and NC8 infection, consistent with the survival and viral replication experiments.
Evaluation of host immune responses
First, we compared the ability of the NC2, NC11 and NC8 viruses to induce host inflammatory immune responses in animals. C57/BL6 mice were infected with equal amounts of the viruses, while mock infection with HBSS was administered to the blank group. We also verified that the weight loss and viral replication patterns were similar to those in the above-mentioned experiments. As expected, the gene expression of major immune mediators, including CXCL10, IFNβ, TNFα, IL29 and IL6, was more pronounced in NC2- and NC11-infected mice than in those infected with NC8. At 1 d.p.i., the expression of the major inflammatory mediators CXCL10 and IFNβ was upregulated in the NC2- and NC11-infected groups, while the highest increases in the expression of TNFα and IL28A were observed for NC11 and NC2, respectively (Figure 7). On the other hand, transcriptional analysis of NC8-infected C57/BL6 mice showed an attenuated response compared with NC2 and NC11 (P < 0.0001), mimicking the data obtained from A549 cells, probably due to inefficient viral replication.

Figure 7. Evaluation of host immune mediators. Transcriptional data for major host immune response genes (b-f) and viral mRNA levels (a) are shown at 1 d.p.i. in C57/BL6 mice infected with 10^5 EID50 of the various strains of pandemic H1N1 influenza virus (NC2, NC8 and NC11); mock infection with HBSS was given to blank animals. Results are presented as mean ± SEM of mRNA levels normalized to the mouse β-actin gene. Statistical differences were calculated by the Mann-Whitney U test. ** P < 0.001, * P < 0.01.
On the basis of the above-mentioned data, we chose the NC2 infection model to compare the kinetic response in BALB/C and C57/BL6 mice by real-time RT-PCR analysis. More robust gene expression was observed in C57/BL6 mice than in BALB/C mice, which showed comparatively attenuated responses throughout the experiment. In C57/BL6 mice, the proinflammatory response was marked by significant induction of early immune mediators such as CXCL10, IFNβ and TNFα. The trend was similar for IL6, indicating the classical switching between the innate and adaptive arms. The highest level of expression was reached at 1 d.p.i. in each case, with the exception of IL28A, which increased progressively over time (Additional file 2).
Genetic characterization
The whole genomes of these viral strains were further sequenced to evaluate genetic mutations that might explain their biological behavior in cells and mice. Several mutations were found in each gene segment with respect to the prototypic pdm H1N1 strains A/California/07/2009 and A/California/04/2009. Comparison of the NC2, NC8 and NC11 genomes revealed that NC8 differed from NC2 and NC11 at three positions in the polymerase subunits: PB2-V227I, R299K and PA-E243G. HA analysis also showed a substitution of alanine at position 409 (A409V) in NC2 and NC11 that was not present in NC8 (Table 1). The experimental data have already shown that NC2 and NC11 are more virulent than NC8 owing to efficient viral replication; possibly, these amino acid residues in the PB2, PA and HA genes have an important role in host adaptation and the virulence of pdm H1N1 influenza virus. However, additional studies are required to probe the biological relevance of these amino acid changes.
Genetic characterization of NA, PB1, NP, NS1 and M2 also revealed various mutations in these segments; however, none of them clearly explained the differing pathogenesis of these pdm H1N1 strains (Additional file 3).
Discussion
Here we present the heterogeneous virulence, in human adenocarcinoma cells (A549) and mice, of three different strains of influenza H1N1 that were isolated from clinically similar human cases in South China in December 2009. Two different patterns of biological behavior were observed. First, two strains (NC2 and NC11) showed efficient viral replication and consequent effects on tissue histology, induction of a proinflammatory response, and lethality in mice, although their behavior was not totally identical and some minor differences in disease kinetics were observed. Second, NC8 showed delayed replication that eventually led to a non-lethal infection and a muted inflammatory response in mice. These results are relevant to previously published epidemiological reports that associate effective viral replication and delayed clearance with disease severity in humans [9,15,16]. Most previous studies agree that pdm H1N1 causes a homogeneous and modest infection, albeit with efficient pulmonary viral replication, in mice [17]; however, its pathogenesis exceeds that of seasonal strains [5,18] and is subdued compared with the 1918 pandemic and other swine-origin influenza viruses [17,19]. Nonetheless, the virus has been shown to increase in virulence upon expression of truncated viral proteins by reverse-genetic tools and after mouse adaptation [20]. In addition to the heterogeneous nature of these strains, we also demonstrate that C57/BL6 mice are more susceptible to pdm H1N1 infection than BALB/C mice; however, variation in disease kinetics did not change the infectivity ratios, as observed previously [14].
In this study, the NC2 and NC11 viruses were able to induce proinflammatory cytokines effectively, whereas immune responses were muted in the case of NC8 infection. It is interesting to note that pdm H1N1 strains also display a differential cytokine response, which may or may not be linked with viral growth. Previous studies have shown robust gene expression of innate immune response genes with a delayed switch to adaptive immunity after pdm H1N1 infection; overall responses are considered to be higher than those of contemporary seasonal strains [7,17].
Clinically, the 2009 influenza pandemic caused mild, self-resolving disease in the vast majority of patients, while only a small group of patients developed serious respiratory complications [21,22]. No explanation has yet been offered for why the clinical profile varies from one patient to another. The published studies interrogating host markers and viral pathogenesis in vitro and in vivo are mostly limited to the characterization of a narrow range of prototypic pdm H1N1 strains [5,17-19]. Consequently, important aspects of the disease may remain unexplored. Pandemics provide a greater chance for influenza viruses to mutate; however, unveiling the impact of such mutations on viral pathogenicity is an enduring goal that can be achieved only by continuous surveillance. Laboratory investigation of newer strains might provide valuable information about pathogenesis that could be missing from initial studies.
In the present study, although all three viral strains were isolated from patients who finally recovered, the viruses produced biological heterogeneity in mice, which challenges the common paradigm of evaluating influenza pathogenesis solely on the basis of the clinical profile and disease outcome of patients. Such attributes have previously been observed in clinically relevant influenza H5N1 strains [23,24]. In humans, viral heterogeneity may have specific effects on individuals with different genetic backgrounds and demographics; therefore, infection with such viruses might result in a variable clinical course of infection. However, it is also important to remember that treatment strategies, immunocompetence, and clinical management influence disease severity and outcome and consequently mask the true picture of viral pathogenesis.
Virulence and interspecies transmission of influenza virus are often considered polygenic phenomena [25-28]. The triple-reassortant pandemic 2009 influenza virus stands out from ancestral pandemic and reassorted strains because it transmitted rapidly to humans despite the absence of any traditional virulence markers, for example, the C-terminal PDZ ligand domain of NS1 [26], a functional PB1-F2 protein, and PB2-K627 [1]. Therefore, efforts have been made to determine other possible virulence determinant(s). Recent laboratory investigations conducted with mouse-adapted pdm H1N1 strains suggest roles for the HA (D131E and S186P) [29] and PB2 genes, such as glutamate-to-glycine substitution at 158 [30], aspartate-to-asparagine at 701 [31], threonine-to-alanine at 271 [32] and a second-site suppressor mutation [33], in viral replication and mouse adaptation, although none of these was demonstrated in wild-type strains. In this study, genetic characterization showed that the non-lethal NC8 strain contained three mutations (PB2-V227I, R299K and PA-E243G) in the polymerase subunits compared with the virulent strains. Previous studies have reported that the PB2 and PA genes are genetically linked with each other [34]; furthermore, N-terminal mutations in these genes might lead to an intermediate or complete loss of viral RNA transcription [35]. Therefore, we might speculate that these mutations are interlinked and collectively responsible for the altered replication of the NC8 strain. On the other hand, this virus (NC8) strikingly replicated to titers more than one log10 higher than those of the other viruses in embryonated chicken eggs, indicating ease of growth in an avian environment. Here it is important to consider that the 2009 pandemic virus contains polymerase subunits PB2 and PA of North American avian lineage. We do not know whether these substitutions in NC8 are remnants of ancestral avian strains, but upon sequence analysis of global pdm H1N1 isolates, we found that these amino acid residues (PB2-I227, K299, PA-G243) are conserved in pdm H1N1 strains, raising the possibility that collectively they play some role in adaptation to the mammalian host and might be linked to the heterogeneity of pdm H1N1. However, in vivo studies with mutant strains are required to prove this hypothesis. In the case of the HA gene, NC2 and NC11 contained the A409V mutation compared with NC8 and the prototype California strains. It is worthwhile to note that NC2 also exhibited HA-E391K, which has recently been identified as a fast-growing mutation with the ability to destabilize the HA oligomerization process, thus modifying the membrane-fusion properties of the pandemic influenza virus [36,37]. However, no association with virulence or disease progression has been established yet. Taken together, we hypothesize that these mutations in the PB2, PA and HA genes might have no relevance to human disease, but in the case of zoonotic transmission of influenza viruses to humans, they may yield more pathogenic viruses.
Conclusions
In conclusion, this study provides evidence of the heterogeneous replication and virulence of clinically relevant pandemic influenza H1N1 viruses in mice and human alveolar adenocarcinoma cells. Replication efficiencies might be linked with the notable mutations in the viral polymerase complex genes PB2 and PA. The heterogeneous virulence that the viruses displayed in cells and mice may not be linked with human disease; however, it provides a background for understanding differences in symptomatology, immune responses, and viral dynamics in clinically relevant cases. The study mandates a more comprehensive analysis of 2009 pandemic influenza H1N1 strains and of the factors that might be responsible for their different phenotypic behavior in humans.
Viral strains
A total of three pandemic influenza H1N1 strains, namely A/Nanchang/8002/2009 (NC2), A/Nanchang/8008/2009 (NC8) and A/Nanchang/8011/2009 (NC11), were used for in vitro and in vivo studies. All were isolated from nasopharyngeal (NP) swabs of adult patients who were admitted to a local hospital in Nanchang, Jiangxi province of China, in December 2009. Samples were collected before initiation of antiviral treatment in each case. These patients had similar courses of infection in terms of viral shedding and disease severity, and none had underlying illnesses (Table 1). All patients eventually recovered. Viral isolation was performed in 9- to 11-day-old embryonated eggs as described previously [38], with the exception of incubation at 33 °C. Samples with a hemagglutination titer > 1:2 were considered positive and further confirmed by real-time RT-PCR for pdm H1N1 virus using a pandemic H1N1 influenza diagnostic kit (Liferiver, Shanghai, China) based on World Health Organization and US CDC protocols [39]. The viral stocks were titrated by 50% egg infectious dose (EID50) and used for in vitro and in vivo assays without further passage.
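EID50 endpoints of this kind are conventionally computed with the Reed-Muench method, although the text does not state which endpoint formula was used. The sketch below illustrates that calculation on hypothetical egg-infection counts; none of the numbers are from this study.

```python
# Reed-Muench 50% endpoint titration, a minimal sketch.
# The dilution/infection counts below are hypothetical.

# log10 dilution -> (infected eggs, eggs inoculated)
wells = {-4: (6, 6), -5: (5, 6), -6: (2, 6), -7: (0, 6)}
dils = sorted(wells, reverse=True)  # -4 (strongest) ... -7 (weakest)

# Cumulative infected include all weaker dilutions; cumulative
# uninfected include all stronger dilutions (Reed & Muench, 1938).
cum_inf = {d: sum(wells[e][0] for e in dils if e <= d) for d in dils}
cum_uninf = {d: sum(wells[e][1] - wells[e][0] for e in dils if e >= d)
             for d in dils}
pct = {d: 100 * cum_inf[d] / (cum_inf[d] + cum_uninf[d]) for d in dils}

# Interpolate between the two dilutions bracketing 50% infection
# (assumes a 10-fold dilution series).
above = min(d for d in dils if pct[d] > 50)
below = above - 1
pd = (pct[above] - 50) / (pct[above] - pct[below])
log10_eid50 = above - pd
print(f"titre ~= 10^{-log10_eid50:.2f} EID50 per inoculated volume")
```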
Sequencing
Whole viral genome sequencing was performed for each strain. RNA was extracted from NP swabs using TRIzol (Invitrogen), followed by reverse transcription with a high-capacity cDNA RT kit (Applied Biosystems, Foster City, USA) and PCR using primers specific for each viral gene segment. Purified PCR products (Promega, Madison, USA) were sequenced by Invitrogen (Guangzhou, China). Sequences were aligned and assessed with the ClustalW multiple alignment tool. Comparisons were made with the prototype strains A/California/04/2009 and A/California/07/2009.
Infection in human adenocarcinoma alveolar epithelial (A549) cells
An in vitro infection model was developed in human adenocarcinoma alveolar epithelial (A549) cells (ATCC, USA). Briefly, A549 cells freshly seeded in 24-well plates were infected with the three pdm H1N1 strains (NC2, NC8, NC11) at an MOI of 2 in vHAM's F12 medium (M & C Gene Technology) containing 1 μg/ml TPCK-trypsin. MOI was calculated from EID50 titers. After 2 h of adsorption, cell supernatants were replaced with fresh medium, followed by incubation at 37 °C. Uninfected (blank) cells received identical treatment except for the virus. Each point was performed in six replicate wells, and the experiment was repeated three times.
For kinetic studies, samples were collected at different time points: 8 h post infection, 1 day post infection (d.p.i.) and 2 d.p.i. For viral loads, supernatants were collected and titrated in MDCK cells (ATCC). For the determination of immune mediators, RNA was extracted using the SV total RNA isolation system (Promega) and reverse transcribed with the high-capacity cDNA RT kit (Applied Biosystems, Foster City, USA), followed by amplification using SYBR Green master mix (Invitrogen). Relative gene expression was calculated after normalization to the human β-actin gene.
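The text does not state the exact quantification formula; the widely used 2^-ΔΔCt method (Livak and Schmittgen, 2001) is one standard way to express β-actin-normalized expression relative to uninfected cells. A minimal sketch with made-up Ct values:

```python
# 2^-(ddCt) relative expression, a minimal sketch.
# All Ct values below are hypothetical.

def relative_expression(ct_target, ct_actin, ct_target_blank, ct_actin_blank):
    """Fold change of a target gene vs. the uninfected (blank) control."""
    d_ct_infected = ct_target - ct_actin          # normalize to beta-actin
    d_ct_blank = ct_target_blank - ct_actin_blank
    return 2 ** -(d_ct_infected - d_ct_blank)

# e.g. a cytokine transcript in infected vs. blank wells
print(relative_expression(24.1, 17.8, 28.5, 17.9))  # ~20-fold induction
```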
Confocal laser fluorescence microscopy
A549 cells seeded on 24-well plates containing cover glasses were infected with the viral strains at an MOI of 2 for 1 h at 37 °C, followed by washing three times with HEPES (Sigma) and the addition of vHAM's F12 medium (without TPCK-trypsin). Cells were incubated at 37 °C for different time intervals, fixed with 2% paraformaldehyde and blocked with 5% bovine serum albumin (BSA) (Sigma). Viral staining was performed with an influenza A nucleoprotein antibody (Southern Biotech) for 16 h at 4 °C. Alexa Fluor 555 goat anti-mouse IgG (H + L) (Beyotime), diluted 1:500 in PBS containing 0.05% Tween 20 and 3% BSA, was used as the secondary antibody, while DNA was stained with 4′,6-diamidino-2-phenylindole (DAPI) (Sigma) diluted 1:1000 in PBS. Slides were observed with a confocal laser fluorescence microscope (Olympus Fluoview FV1000). Data are representative of three independent experiments.
Animal experiments
Female C57BL/6 and BALB/c mice (8-10 weeks of age) were obtained from Vital River Laboratory (Beijing, China) and maintained on a standard diet in an SPF facility with controlled temperature and humidity. Initially, to compare the virulence and pathogenesis of the viral strains (NC2, NC8, NC11), C57BL/6 mice (n = 10) were infected intranasally with 10^5 EID50 in a final volume of 50 μl. Because NC8 infection was non-lethal, a higher dose of 10^6 EID50 was further compared with NC2 and NC11. To investigate the detailed virulence profile of the pdm H1N1 strains, MLD50 experiments were set up in C57BL/6 and BALB/c mice. Animals were grouped (n = 10) and infected with 10-fold serially diluted pdm H1N1 strains ranging from 10^6 to 10^3 EID50 per mouse. Healthy controls received mock infection with HBSS. Animals were observed daily for weight loss and mortality up to 14 d.p.i. A loss of more than 20% of original body weight was considered the humane endpoint for mortality.
Viral loads
Three animals from each group were euthanized at days 0, 1 and 3 p.i., and their organs (lungs, liver and brain) were collected. Organ homogenates were prepared in vDMEM (10% w/v) and assayed for viral loads in MDCK cells, with a detection limit of 10 TCID50/ml, as described previously [14].
Histopathology
On days 0, 1 and 3 p.i., animals were euthanized and lung tissues were removed and fixed in 10% buffered formalin. Fixed tissues were processed for paraffin wax-embedded sectioning, and 5 μm sections were stained with hematoxylin and eosin (H & E) and examined; pictures were taken with a Nikon Eclipse 80i microscope (Nikon).
Measurement of cytokines by quantitative PCR (qPCR)
For the measurement of the host immune response, lungs of virus-infected animals (n = 5/group) were collected in RNAlater (Ambion Inc.) at different time intervals. Expression of immune response genes was studied by real-time qPCR performed with 0.5 pmol/μl of forward and reverse primers targeting the gene of interest. Reactions were run in duplicate, and mean values were normalized to β-actin gene expression. Primer pairs and PCR conditions will be provided upon request.
Statistics
Statistical analyses were performed using PASW Statistics 18 (SPSS Inc., Chicago, IL, USA). Fisher's exact and chi-square tests were used for comparison of categorical data, and the two-tailed t-test was applied to continuous variables. Survival analyses were performed by the Kaplan-Meier method, and significant differences were assessed by the log-rank test. Contingency analysis was applied to assess the number of survivors in each group. Significant differences in viral loads, cytokine measurements, weight loss and hazard ratios were analyzed by Student's t-test.
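The study used PASW/SPSS; as an illustration only, the same survival comparison could be run in Python with the third-party lifelines package. The day/event arrays below are made up for the sketch.

```python
# Kaplan-Meier curves plus a log-rank test, a minimal sketch.
# Survival times (days) and event flags below are hypothetical.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

days_a = [6, 7, 7, 8, 14, 14, 14, 14, 14, 14]  # group A, 14-day window
dead_a = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]        # 1 = death, 0 = censored
days_b = [14] * 10                              # group B: all survived
dead_b = [0] * 10

kmf = KaplanMeierFitter()
kmf.fit(days_a, event_observed=dead_a, label="strain A")  # survival curve

result = logrank_test(days_a, days_b, event_observed_A=dead_a,
                      event_observed_B=dead_b)
print(result.p_value)
```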
Ethics statement
This study was approved by the ethical committees of Shantou University Medical College, Shantou, China (permit number SUMC 2011-058) and Infectious Disease Hospital, Nanchang University, Nanchang 9th Hospital (permit number 2009-02). Written consents were obtained from all participants involved in the study. | 6,729.8 | 2012-06-06T00:00:00.000 | [
"Biology",
"Medicine"
] |
Using castor oil to separate microplastics from four different environmental matrices†
The detection of environmental microplastics (MP) is limited by the need to rigorously separate polymers from the surrounding sample matrix. Searching for an affordable, low-risk and quick separation method, we developed a protocol to separate microplastics (size range: 0.3-1 mm; virgin polymers: PP, PS, PMMA and PET-G) from suspended surface solids (marine and fluvial) as well as soil and sediment using castor oil. We demonstrate effective separation of the four polymers in a spike-recovery experiment. The mean ± SD MP spike-recovery rate was 99 ± 4% with an average matrix reduction of 95 ± 4% (dry weight, n = 16). The protocol was validated by separating non-spiked environmental Rhine River suspended solids samples, recovering 74 ± 13% of MP. There, PS comprised 76% of the non-retrieved MP, and additional H2O2 digestion was needed to sufficiently reduce the highly abundant natural matrix. This castor oil lipophilicity-based protocol (i) achieves high MP recovery rates as a function of its environmental matrix reduction ability and (ii) provides environmentally friendly, non-hazardous and resource-efficient separation of MP from four different, typically investigated environmental compartments using one and the same method. Based on the Rhine River sample validation, the protocol is a potent replacement for traditional density separation techniques. Samples with high biogenic concentrations may require additional digestion.
Introduction
Microplastics (MP; <5 mm) are an emerging contaminant with planetary boundary implications.1,2 MP have been widely reported in marine,3 freshwater4,5 and terrestrial6,7 environments, and their rising environmental concentrations, high bioavailability and ecotoxicological potential have led to escalating global concern regarding the effects of MP.8 Surveys on environmental MP are published nearly every week; however, vast inconsistencies prevail in the methodologies used for sampling, purification, quantification and chemical analysis.9 To minimise the interference of natural residues during chemical analysis (for instance Fourier transform infrared [FTIR] or Raman spectroscopy10), it is vital to rigorously separate synthetic polymers from the environmental matrix. A number of separation and purification processes are known in preparatory protocols for MP analysis, including electro-11 and density separation based on NaCl, ZnCl2 or NaI, and purification protocols using NaClO, HNO3, H2O2 or KOH.12,13 However, many of these techniques are complex, time consuming and require extensive sample manipulation, and are thus prone to contamination and loss.12 Moreover, some reagents used for density separation or chemical purification of MP samples (e.g. ZnCl2,14 NaI,15 HCl,16 or H2O2 (ref. 17)) pose a threat to health and/or the environment if mismanaged.19-21 Apart from their ecological hazards, many of these reagents are costly, may consume precious resources that could be better used elsewhere, or, depending on the economic resources or geographical location of the investigating unit, may not be easily available.22 Due to the increasing, widespread demand for monitoring and risk assessment of environmental MP, simplification and democratisation of standardized methods for MP sampling, purification and quantification are urgently required.9,23 A generally applicable, efficient, accurate, rapid, cheap purification method could aid the compilation of valuable extensive datasets on the distribution of MP in different environments.
In this study, we aimed to develop a method that enables the separation of some of the most common types of MP particles found in the environment from a diverse set of typically investigated matrices. We based our approach on an oil extraction protocol for separating MP from sediments that employs canola oil.22 Here, we report a simple, rapid, cheap extraction protocol based on castor oil, which has a higher molecular weight than canola oil (933.45 vs. 876.6 g mol^-1), for efficient and accurate recovery of various MP particles (PP, PS, PMMA and PET-G) from four typical environmental sample matrices: fluvial suspended surface solids (FSS), marine suspended surface solids (MSS), marine beach sediments (MBS) and agricultural soil (AS). In order to validate the microplastic recovery as well as the matrix reduction potential on non-spiked environmental samples, the castor oil separation protocol was applied to five Rhine River FSS samples. These were collected at different locations between Switzerland and the German-Dutch border in order to capture a variety of microplastic and biogenic residue abundances as well as diversity in polymers and states of polymer degradation.24,25
Environmental sample collection
Fluvial suspended surface solids (FSS) and marine suspended surface solids (MSS) were collected using a 0.3 mm neuston net mesh (see Table S1† for details of all hereinafter mentioned materials and instruments), and marine beach sediments (MBS) and agricultural soil (AS) were collected using a stainless-steel spoon (Table S2, Fig. S1†). The samples were fractionated (0.3-1.0 mm for FSS and MSS and 0.063-1 mm for MBS and AS) using geological sieves and stored at 7 °C. Prior to analysis, all samples were dried at 40 °C for 24 h. Each environmental matrix was divided into four replicates with specific target dry weights to the nearest mg: 1.0 g for FSS and MSS and 10.0 g for MBS and AS. Due to the formation of aggregates in the MSS and FSS after drying, these samples were disaggregated by adding 100 mL distilled water (aq. dest.) and stirring at 400 rpm and 60 °C for 15 min prior to the separation protocol. Five additional Rhine River FSS samples were collected to examine whether non-spiked field samples potentially containing MP could be efficiently separated using the oil separation protocol. The Rhine River samples were collected at different locations (Table S2†). Each sample was collected over 10 min from the centre of the river cross-section using a Manta trawl (mesh: 300 μm), resulting in a mean (±SD) filtered volume of 84.2 ± 8.7 m^3 (ESI, Table S2†).
Microplastics for spiking
The environmental samples were spiked with synthetic polymer particles to assess the MP recovery rate of the protocol. Selection of the polymers was based on (i) global production volumes26 and the frequency of reported identification in the environment27 and (ii) the inclusion of a range of polymer densities from below to above the specific density values of fresh water and saltwater.
We used fragments of four common polymer types: polypropylene (PP; specific density ρ = 0.84 g cm^-3, Table S4, Fig. S5†), polystyrene (PS; 1.05 g cm^-3, Table S5, Fig. S7†), polymethyl methacrylate (PMMA; 1.19 g cm^-3, Table S6, Fig. S9†) and glycol-modified polyethylene terephthalate (PET-G; 1.27 g cm^-3, Table S7, Fig. S11†). PP, PS and PET are among the six most commonly produced plastics (together with PE, PVC and PUR).28 PP and PS are typically two of the three most commonly identified polymers in environmental plastic studies (the other is PE).27 PMMA was additionally selected as a major representative MP identified in the Rhine River.24 Particle sizes ranged from 0.3-1 mm.
As the oil separation protocol involves an aqueous phase with a specific density (ρ) of ~1 g cm^-3, polymers covering the specific density range of 0.84-1.27 g cm^-3 were deliberately selected. This density-range coverage is also important for separation of MP from heavier matrices, such as MBS (ρ ~ 2.6 g cm^-3), as the aqueous phase of the water-matrix mixture in every separation process will have a specific density (ρ) of ~1 g cm^-3 after the heavier solid fraction has settled.
Particles were mechanically fragmented and sieved into small (0.3-0.5 mm) and large (0.5-1.0 mm) fractions (see Table S1† for material and instrument details). Each fraction was numerically quantified using a stereomicroscope and chemically analysed by attenuated total reflection (ATR)-FTIR. Spectra were compared against a reference spectral library using Opus 7.5 software.
Castor oil microplastic and environmental matrix separation protocol
For the four environmental matrices, the four replicates of each pre-weighed residue were transferred into a separation funnel (n = 16; see Table S1† for all material and instrument details), suspended in 100 mL aq. dest. water and spiked with 100 MP particles (15 small and 10 large fragments of each polymer type, resulting in n = 1600 MP particles for the entire experiment).
The sealed funnels were shaken for 30 s by hand to ensure thorough mixing of the spiked samples. Ten mL of castor oil (Table S1†) was added to each replicate. To guarantee that the entire sample made contact with the castor oil, the separation funnel was inverted and shaken for 1 min by hand. For this, the separation funnel was firmly held with two hands at the top and the bottom, respectively, and vigorously shaken and rotated at shoulder level. Subsequently, the separation funnels were rotated back to their upright position, and the walls and lid of the funnel were rinsed with 400 mL aq. dest. water to ensure that any remaining residue and oil droplets were returned to the mixture. Thereafter, the MBS and AS samples were left to settle for 15 min and the MSS and FSS samples for 45 min, according to previous experience (please refer to Fig. S2† for a schematic diagram of the entire procedure).
The lower aqueous and solid phase was then drained from the separation funnel into a clean glass jar, sealed and stored at 7 °C. The remaining oil phase was drained and vacuum-filtered onto ash-less hardened cotton/cellulose filter paper (pore size: 25 μm), and the filter was washed with 100 mL ethanol (EtOH, 96%). Before and during draining, the lid and walls of the funnel were thoroughly rinsed with an additional 100 mL EtOH to transfer all residue onto the filter. The filter paper was transferred to a glass Petri dish, sealed with parafilm and stored at 7 °C for visual polymer spike-recovery and further FTIR analysis.
For visual spike-recovery, the filters containing the separated oil fraction filtrates were dried at 40 °C for 24 h and weighed to define the dw reduction (in %, Table S3†). Finally, the extracted spiked polymer particles on the filter (Fig. S3†) were picked by hand, quantified using a stereomicroscope and then chemically analysed by FTIR (as described in the section "Microplastics for spiking").
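The two headline metrics of the protocol reduce to simple ratios. A minimal sketch of the arithmetic, with made-up counts and weights:

```python
# Spike-recovery rate and dry-weight (dw) matrix reduction.
# The input values below are illustrative, not measured data.

def recovery_rate(recovered_mp, spiked_mp=100):
    """Percentage of spiked MP particles retrieved from the oil phase."""
    return 100 * recovered_mp / spiked_mp

def dw_reduction(initial_dw_g, oil_fraction_residue_dw_g):
    """Percentage of the matrix (dry weight) kept out of the oil fraction."""
    return 100 * (1 - oil_fraction_residue_dw_g / initial_dw_g)

print(recovery_rate(99))         # 99.0 % of 100 spiked particles
print(dw_reduction(10.0, 0.25))  # 97.5 % matrix reduction
```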
The five Rhine River FSS samples collected to examine non-spiked environmental microplastic recovery rates and matrix reduction efficiency (Table S2†) were subjected to the same castor oil separation protocol. Subsequently, the separated sample fractions were rinsed with aq. dest. and dried. All resulting fractions (the oil phase as well as the water and solid phase) were visually assessed for MP within the size range of 0.3-1 mm. In total, 978 putative environmental microplastic particles were detected in the oil and the water and solid phases of all five Rhine River samples combined, of which 40% were chemically investigated using FTIR. After the oil separation, the further dw matrix reduction potential of these five FSS samples was investigated by subjecting the oil-extracted, rinsed and dried fractions to H2O2. For this, the pre-weighed dry sample residues from the upper oil phase were placed in separate glass Petri dishes (diameter 6 cm), covered with 10 mL of H2O2 (30%) and incubated at 50 °C for 18 hours (adapted from ref. 17). Subsequently, the sample residues were rinsed on a 300 μm mesh using aq. dest., transferred back to the Petri dishes, dried for 6.5 h at 60 °C and weighed (dw).
Quality control and protection against contamination
The effect of EtOH rinsing on preventing castor oil-FTIR interference22 was quantified. For this, spectral hit quality indices (HQI) of PS and PP MP (size range, longest axis: 0.5-1.2 mm) were compared after four different treatments in triplicate: (i) untreated, pure MP; (ii) MP submerged in EtOH (96%) for 30 min; (iii) MP submerged in castor oil for 30 min; and (iv) MP submerged in castor oil for 30 min and subsequently submerged in EtOH (96%) for 5 min. MP from treatment (iv) reached significantly higher HQIs than treatment (iii) (Fig. S13†). Interestingly, MP submerged in EtOH (ii) reached higher HQIs than untreated pure MP (i) (Fig. S13†), indicating a general benefit of EtOH treatment for MP spectroscopy, also outside the application of the presented protocol. To prevent contamination of the samples, glassware was used whenever possible. Containers, such as Petri dishes, were always covered with a lid or aluminium foil when not in use. Where the use of plastic materials for processing was unavoidable (e.g. the PTFE stopcock in the separation funnel), the item was thoroughly rinsed before use with deionised water and EtOH (70%). White lab coats (100% cotton) were worn in the laboratory at all times. Nitrile gloves were worn whenever the operator's hands came into close contact with samples and glassware. To prevent cross-contamination between instruments or receptacles, all used items were thoroughly washed with warm water and labware detergent. Procedural blanks were run during the visual sample examination phase (~4 h) to assess the laboratory atmosphere contamination potential (adapted from ref. 29). For this, three thoroughly rinsed glass Petri dishes (diameter: 13 cm) were placed uncovered on the laboratory bench during the entire visual sample examination phase. Subsequently, they were rinsed and drained onto cotton/cellulose filter paper, and the filter paper was visually examined under a super-lighted stereomicroscope. No MP fragments were recorded in any of the blanks.
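HQI definitions vary between spectroscopy packages; a common convention, assumed here purely for illustration (the metric implemented in Opus may differ), is the squared correlation between the sample and reference spectra:

```python
# Correlation-based hit quality index (HQI), a minimal sketch on
# synthetic spectra; real spectra would come from the FTIR instrument.
import numpy as np

def hqi(sample, reference):
    """Squared Pearson correlation between two spectra, in [0, 1]."""
    s = sample - sample.mean()
    r = reference - reference.mean()
    return float((s @ r) ** 2 / ((s @ s) * (r @ r)))

rng = np.random.default_rng(0)
reference = rng.random(1800)                      # mock library spectrum
measured = reference + rng.normal(0, 0.05, 1800)  # noisy measurement
print(hqi(measured, reference))                   # close to 1 = good match
```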
Statistical analysis
Statistical analyses were performed using GraphPad Prism 7.03 for Windows (GraphPad Software, La Jolla, CA, USA). A Kruskal-Wallis test was performed to evaluate differences between the four dw matrix reduction rates in the spike-recovery experiment. A Friedman test was run to assess differences in total microplastic recovery rates between the different matrices in the spike-recovery experiment. Both tests were followed by Dunn's multiple comparison test to evaluate where the differences lay. To compare matrix dw reduction of the five non-spiked Rhine River samples (i) after oil separation and (ii) after additional H2O2 treatment, a Kolmogorov-Smirnov test was applied. To compare HQIs of PS and PP particles after the four different treatments described in "Quality control and protection against contamination", unpaired t tests were carried out after a Shapiro-Wilk normality test.
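For readers working outside Prism, the same nonparametric battery can be reproduced in Python with scipy and the third-party scikit-posthocs package; the reduction rates below are placeholders, not the study's data.

```python
# Kruskal-Wallis, Friedman and Dunn's tests, a minimal sketch with
# placeholder dw-reduction values (n = 4 replicates per matrix).
from scipy import stats
import scikit_posthocs as sp

as_ = [98, 97, 99, 98]   # agricultural soil
mbs = [97, 96, 98, 97]   # marine beach sediments
mss = [94, 93, 95, 94]   # marine suspended surface solids
fss = [91, 87, 95, 92]   # fluvial suspended surface solids

h, p = stats.kruskal(as_, mbs, mss, fss)   # dw reduction across matrices
print(p)

chi2, p = stats.friedmanchisquare(as_, mbs, mss, fss)  # paired recovery rates
print(p)

# Dunn's multiple-comparison follow-up with Bonferroni adjustment
print(sp.posthoc_dunn([as_, mbs, mss, fss], p_adjust="bonferroni"))
```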
Environmental matrix reduction and recovery of spiked microplastics
The oil separation protocol reduced the irrelevant part of the environmental matrices by a mean of 95 ± 4% (±SD, dw, n = 16). The highest matrix reduction was achieved for AS (98 ± 1%), followed by MBS (97 ± 1%), MSS (94 ± 1%) and FSS (91 ± 4%, n = 4 each). AS dw reduction was significantly higher than that of FSS (p < 0.01). The mean recovery rate for all four synthetic polymers (PP, PS, PMMA and PET-G) over all sample replicates was 99 ± 4%. Spiked MP with a large diameter (0.5-1 mm) were recovered at a rate of 100 ± 2%, and those with a small diameter (0.3-0.5 mm) at a rate of 98 ± 4%. The highest spike recovery rate was observed for the MSS samples, from which 100 ± 2% of spiked MP (of all polymer types and sizes) were recovered, followed by 99 ± 3% for the AS replicates, 99 ± 3% for the FSS replicates and 97 ± 5% for the MBS replicates. PP (both size fractions) had the highest recovery rate from all four environmental matrices (99 ± 3%), followed by PS (99 ± 3%), PMMA (99 ± 4%) and PET-G (98 ± 5%; Fig. 1, Table S3†).
A before-and-after comparison using ATR-FTIR spectroscopy confirmed that the spiked polymers were not chemically altered during treatment (Fig. S6, S8, S10 and S12†). The non-destructive nature of the protocol is an important factor for bias-reduced environmental analysis; some published protocols involve potentially plastic-modifying steps such as acidic or alkaline purification30,31 or ultrasonication.32
Fluvial suspended solid samples (FSS) turned out to be the hardest to separate (Fig. 1), due to their high proportions of low-density biogenic particulate matter. Therefore, five non-spiked Rhine River FSS samples were subjected to the oil separation protocol for validation. Through the castor oil separation, the environmental matrix could initially be reduced by 51 ± 11% dw. Subsequent H2O2 treatment of the remaining residues resulted in a significantly higher final dw matrix reduction of 82 ± 6% (Fig. 2). Clearing up the environmental matrix will generally also have a positive effect on visual microplastic recovery rates, as sample insight improves upon removal of non-plastic particles.13 We identified a total of 978 synthetic particles, distributed very heterogeneously among the five samples. This large range of MP concentrations represents the highly variable pollution levels of the different Rhine River stretches.24,25 Using the castor oil separation protocol, a mean of 74 ± 13% of environmental MP could be retrieved in the upper oil phase (in total, 773 MP particles retrieved from the upper oil phase, Fig. 2). Of the 205 MP particles retrieved from the lower aqueous and solid phase after oil separation, the non-spiked Rhine River samples exhibited a large proportion of PS (76%, Fig. S4†). This was unexpected, as the overall recovery rate for PS particles in the spike-recovery experiment was 99%. PS opaque microbeads (33%) and PS foam (29%) were the largest contributors to the aqueous and solid phase PS abundance (Fig. S4†). The microbeads in the aqueous and solid phase (n = 67) stemmed from only one of the five environmental Rhine River samples. In an earlier study of the Rhine River, such microbeads were identified as most likely being ion exchange resin (IER) beads.25 Possibly, the ion-active surface of IER beads weakens their lipophilicity, thus resulting in their relatively low recovery rate upon oil separation (67%). The other dominant shape category found in the lower aqueous and solid phase was foams (68 of 205 MP, of which 60 were PS). Remarkably, despite their very low density (0.01-0.45 g cm^-3 (ref. 33)), only 70% of foamed PS was retained in the oil phase after separation. Possibly, their rough, scraggly surface promotes heteroaggregation with ambient, denser solid environmental particles and thus causes their settling below the oil phase.34 Nevertheless, the Rhine River sample findings demonstrate the applicability of the method to genuine field samples, albeit not quite yielding the same high level of recovery as in the spiking experiment. Between the oil phase and the aqueous and solid phase of the Rhine River samples after separation, there was a high congruence of MP category abundance (solid fragments, foams, spherules, etc., Fig. S4†). However, the variety and abundance of different polymer types were distinctly poorer in the aqueous and solid phase (Fig. S4†). Fibres were present in both fractions but were not accounted for in this investigation, as their sound environmental and polymer identification is reportedly highly bias-afflicted.32,35 Due to the vast heterogeneity of MP abundance among the five Rhine River samples (11-692 MP), a statistical investigation of shape-related separation efficiency was not possible. The two samples with the highest MP abundances (692 and 235 MP) largely influence the shape-related MP separation patterns depicted in Fig. S4.† This test reveals that weathered environmental MP, potentially clustered in compact heteroaggregates with non-plastic suspended solids, are not as easily separable from the surrounding matrix as the spiked MP.
Characteristics and composition of environmental matrices such as FSS may vary strongly depending on the season of collection, yielding varying abundances of, e.g., lipophilic chitin crustacean exoskeletons.17 Furthermore, weathered environmental MP are prone to alterations in their visual and chemical characteristics.36 Such potential modifications were apparent in the retrieved environmental microplastics in the form of, e.g., fading colours and surface cracks. This finding alludes to the potential need for better disaggregation of environmental FSS samples prior to, and additional purification after, oil separation.
Properties and advantages of a castor oil-based separation approach
In this study, we present a rapid, reliable method to extract commonly found MP with various polymeric characteristics from four typical environmental matrices (FSS, MSS, MBS and AS) that yields high polymer recovery and matrix reduction rates. In contrast to separation and purification protocols involving numerous treatment and sample transfer steps (e.g. ref. 17), the presented oil separation is non-toxic and performed practically entirely within a closed system, which is a great advantage in terms of reducing the risk of sample contamination. Furthermore, there is minimal need for sample transfer between containers during the protocol, and hence the risks of sample loss or contamination during transfer exposure are minimised. We show, however, that depending on the quality of the environmental matrix, additional purification after oil separation can be highly beneficial (e.g. using H2O2). Therefore, this oil separation protocol may, depending on the matrix at hand, serve as a valuable alternative to density separation, but not always as a replacement for further purification measures. Application of this single protocol to an array of distinct matrices fosters the potential for comparative research on MP pollution across different environmental compartments,9 while avoiding the need to use expensive and potentially hazardous reagents such as ZnCl2 (ref. 14) and NaI15 in density-based separation protocols or H2O2 (ref. 32) in enzymatic/oxidative purification protocols. The efficient isolation of any sample residue offered by the oil separation protocol represents an immensely important factor for the success of both automatic and manual spectroscopic MP assessment techniques.17 Previously reported separation techniques achieved matrix reduction rates of up to 80% for MBS using fluidisation and flotation15 and 98% for MSS using enzymatic digestion.17 In comparison, the matrix reduction rate of this oil separation protocol (95 ± 4%) lies in the upper range of these other techniques. The presented castor oil separation protocol achieved very high MP spike-recoveries from four different matrices, almost identical to an earlier published oil extraction protocol22 that was tested only on sediments (99 ± 4 vs. 99 ± 1.4%). Previous work on MBS using fluidisation and flotation reported spike recovery rates of 18-100%37 and 91-99%.15 Spiked MP were recovered from MSS at a rate of 84 ± 3% using an enzymatic digestion protocol17 and 96-100% using the Munich Plastic Sediment Separator (MPSS).14 In comparison, the protocol presented here achieved spike recovery rates of 99 ± 4% over all tested matrices.
Chemical considerations and background to the lipophilic castor oil approach
The natural castor oil employed in this protocol consists of approximately 99% long-chain C18 fatty acids (~90% ricinoleic acid, C18H34O3).38 The high molecular weight of these long-chain, aliphatic hydrocarbon-dominated fatty acids enables stable attraction between the non-polar lipophilic component of the fatty acid molecules and the non-polar lipophilic hydrocarbon surface of synthetic polymer fragments (e.g. PP, [C3H6]n) in a quasi-micellar manner. Furthermore, castor oil has one of the highest viscosities of the natural plant oils (>300 cP vs. <200 cP for canola oil), allowing the formation of a thick oil layer around the polymer fragments. The oil-polymer clusters have a lower overall density than water, even for high-density plastics such as PET-G (1.27 g cm^-3). Therefore, these clusters move to the top of the separation funnel, where they merge with the castor oil and become separated from the lower aqueous environmental matrix phase. Due to the presence of a hydroxyl group on its twelfth C atom, ricinoleic acid is sufficiently polar to dissolve easily in EtOH following the oil separation procedure.
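The flotation argument can be made concrete with a back-of-envelope mixing model (our simplification, not the authors' calculation): the volume-weighted density of a particle plus its oil envelope must fall below that of water. For the densest polymer tested, this implies an oil envelope several times the particle volume, consistent with particles merging into the bulk oil layer rather than floating as free droplets.

```python
# Volume-weighted density of an oil-coated particle; a hypothetical
# mixing model used only to illustrate the buoyancy argument.
def cluster_density(rho_particle, rho_oil, oil_to_particle_volume):
    k = oil_to_particle_volume
    return (rho_particle + rho_oil * k) / (1 + k)

# Minimum relative oil volume for a PET-G fragment (1.27 g cm^-3)
# to reach neutral buoyancy in water (1.00 g cm^-3), with castor
# oil at ~0.96 g cm^-3:
k_min = (1.27 - 1.00) / (1.00 - 0.96)             # ~6.8x particle volume
print(k_min, cluster_density(1.27, 0.96, k_min))  # density -> 1.00
```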
In contrast to density-based separation approaches the here presented castor oil based microplastic separation protocol relies on the lipophilic and at the same time hydrophobic properties of synthetic hydrocarbon polymers. 39,40Within the separation funnel, separation and stratication of the liquid water/matrix and oil phases are driven by both chemical and gravitational forces.Hence, suspended solids with a specic density lower than water ($1 g cm À3 ) but higher than castor oil ($0.96 g cm À3 ) settle in the top layer of the water and solid phase, just below the oil phase.This phenomenon presents a challenge to precise manual separation of the water and oil phases while handling the polytetrauorethylene (PTFE) stopcock during sample release for ltration.Especially during the separation of FSS, a large residue settled at the oil-water interface, which ultimately increased the total mass of solids oil-extracted from the matrix, thus limiting the matrix reduction rate for this matrix.In a previously published canola oil separation method, an additional enzymatic digestion step was applied when excess biomass was encountered during separation. 22Indicated by the strong reduction rates of the nonoleophilic matrix, as presented in our manuscript, it is most probable that highly biofouled MP, where contact between the castor oil and the polymer is inhibited, would not be separated as efficiently as unfouled MP.For samples where MP are strongly biofouled 41,42 we would recommend applying sample digestion 17 prior to oil separation to remove excess biogenic material from polymers.Further research is needed concerning the castor oil recovery potential specically of biofouled, weathered, smaller (<0.3 mm) and denser MP (e.g.polytetra-uorethylene [PTFE] $ 2.2 g cm À3 (ref.43)).We suggest repeating the separation process for the lower lying watermatrix phase in a series of further separation run-throughs.In our hands, a series of further separation run-throughs led to further separation and enhanced the rates of recovery, but residue reduction and MP recovery clearly depended on the environmental matrix and the characteristics of the MP particles (such as size, tendency to form aggregates, etc.).
Conclusions
Even today, after more than a decade of intensive research on microplastics worldwide, at least two major handicaps prevail: (i) there is a downright lack of uniformity in sampling, processing and analysis within the scientific community, and (ii) the inevitable separation and purification of environmental samples prior to the identification and analysis of potential plastics is more often than not enormously time- and material-consuming as well as prone to sample manipulation. Every methodology (e.g. oil, density, electro or visual separation) ultimately quantifies a spectrum of the possible variables. Besides developing uniform protocols to guarantee the comparability of environmental data, knowing the limits of each method is crucial, as it will help identify the most appropriate approach for any given case. Here, we were able to present a separation protocol for microplastics from environmental matrices that is highly simple and extremely efficient regarding the investment of time and material resources and the health/environmental risk. The very same procedure was successfully demonstrated on four different types of environmental samples from the hydro- and lithosphere where the anthroposphere overlaps with or affects them. This advance could lead to a breakthrough in improving methodical homogeneity across the field and accelerate the accumulation of much-needed data for moving to the next crucial steps in microplastics science, namely evaluating their ecological impact and finding mitigation and solution measures.
Fig. 1 Mean matrix dry weight (dw) reduction and spike recovery rates (both in %) for the four polymer fragment types (PP, PS, PMMA and PET-G) from four environmental matrices: marine suspended surface solids (MSS), fluvial suspended surface solids (FSS), marine beach sediments (MBS) and agricultural soil (AS). Error bars indicate the standard deviation (n = 4 for every data point plotted). The small spiked polymer particles had a diameter of 0.3-0.5 mm (n = 15 per polymer and replicate), and the large spiked particles were 0.5-1 mm in diameter (n = 10 per polymer and replicate; resulting in a total of 4 × 25 MP particles = 100 spikes for each of the 16 experimental replicates). There was no significant difference in total microplastic recovery rates between the tested environmental matrices (p > 0.05). The dw matrix reduction rates of the FSS and AS samples differed significantly (** = p < 0.01).
Fig. 2 Mean percentage (+SD) of recovered microplastics (MP) from the five non-spiked Rhine River fluvial suspended surface solid samples (FSS; red and blue hatched column, left). The centre and right-hand columns show the mean percentage (+SD) of dw matrix reduction after the oil separation and after subsequent H2O2 treatment, respectively (n = 5). The dw matrix reduction was significantly higher after H2O2 treatment (** = p < 0.01).
"Physics"
] |
RANTES levels in peripheral blood, CSF and contused brain tissue as a marker for outcome in traumatic brain injury (TBI) patients
Traumatic brain injury (TBI) causes activation of several neurochemical and physiological cascades, leading to neurological impairment. We aimed to investigate the level of the chemokine RANTES in plasma, cerebrospinal fluid (CSF) and contused brain tissue in traumatic brain injury patients and to correlate the expression of this chemokine with the severity of head injury and neurological outcome. This longitudinal case-control study was performed on 70 TBI patients over a period of 30 months. The Glasgow coma scale (GCS) and Glasgow outcome score were used to assess the severity of head injury and clinical outcome. RANTES was quantified in plasma (n = 60), CSF (n = 10) and contused brain tissue (n = 5). Alterations in plasma levels on the 1st and 5th days following TBI were assessed. Patients were categorized as severe head injury (SHI; GCS ≤ 8) or moderate and mild head injury (MHI; GCS > 8-14). Fifteen healthy volunteers served as the control group. The median plasma RANTES levels were 971.3 (88.40-1931.1), 999.2 (31.2-2054.9) and 471.8 (370.9-631.9) pg/ml for SHI, MHI and healthy controls, respectively, and showed statistically significant variation (p = 0.005). There was no statistical difference between the mean 1st and 5th day RANTES levels in the SHI group. However, admission RANTES levels were significantly higher in patients who died than in those who survived (p = 0.04). Also, RANTES levels were significantly higher in plasma than in contused brain tissue and CSF (p = 0.0001). This is the first study of its kind to show a significant correlation between admission RANTES levels and early mortality. Another interesting finding was the significant upregulation of RANTES expression in plasma, compared to CSF and contused brain tissue, following severe TBI.
Background
Traumatic brain injury leads to a complex cascade of pathophysiological and neurochemical events. The influx of neuroinflammatory mediators triggered following the primary injury results in a secondary insult to the brain.
Regulated upon activation, normal T cell expressed and secreted (RANTES) is a C-C β chemokine (68 amino acids) and a selective chemoattractant of human monocytes and lymphocytes; it induces the migration of monocytes, eosinophils, T cells, NK cells, mast cells and basophils to sites of inflammation and infection [1]. It is released from multiple sources, predominantly CD8+ T cells, platelets, macrophages, eosinophils, fibroblasts and monocytes [2][3][4].
RANTES stimulates T cells via two discrete pathways: the first is a transient Ca2+ mobilization through a GPCR-mediated pathway leading to cell polarization and migration; the second is a sustained Ca2+ surge dependent on a protein tyrosine kinase (PTK)-mediated pathway resulting in multiple cellular responses, including T cell proliferation or apoptosis and release of interleukin 2 (IL-2), IL-5, interferon γ (IFN-γ) and MIP-1β. Other chemokines do not produce these responses. Thus, in addition to inducing chemotaxis, RANTES can act as an antigen-independent activator of T cells in vitro [4,5].
RANTES and its receptor CCR5 have been linked to numerous pathological conditions in the brain and to neurodegenerative diseases [4]. The role of RANTES in leukocyte infiltration is established; recently, the RANTES-mediated systemic inflammatory response has been associated with chronic infection and augmented microvascular injury in the brain, suggesting the therapeutic utility of targeted modulation of RANTES-dependent pathways [6].
Injury and the resultant inflammation lead to breakdown of the blood-brain barrier (BBB), increasing its permeability to circulating immune cells. Production of inflammatory mediators, including complement activation proteins, by astrocytes, neurons and microglia in response to pathological challenge has previously been reported [1]. Subsequent to brain injury, chemokines initiate integrin clustering, recruit lymphocytes to injury sites and steer them into the brain, after which these lymphocytes, together with neuronal cells, participate in proinflammatory cytokine-mediated stimulation of endothelial activation and chemokine secretion [7].
RANTES has been reported to play a role in inflammatory brain diseases such as cerebral malaria [8] and scrapie [9]. Because chemokines can be both neurotoxic and neuroprotective, targeted chemokine-specific therapies for TBI that address the underlying causes have not yet been developed. Previous studies have reported elevated expression of RANTES in peripheral blood after brain injury in animal models; however, whether the plasma level of RANTES can predict the severity of brain injury in critically injured trauma patients remains unknown [10]. There have been very few studies in human subjects on the role of RANTES in the pathogenesis of human TBI.
Clinicians desire reliable biomarkers that reflect immunologic status after acute TBI. Such biomarkers could help guide personalized therapies; however, deciding where to measure (blood vs. CSF vs. tissue), as well as when after injury to measure the marker, remains a challenge [11]. This study aimed to investigate the level of the chemokine RANTES in plasma, cerebrospinal fluid (CSF) and contused brain tissue of traumatic brain injury patients within 24 h and on day 5 of injury, and to correlate the expression of this chemokine with the severity of head injury and the clinical outcome of the patient.
Setting and design
We conducted a prospective longitudinal case-control study (STROBE criteria followed) in a level 1 trauma care centre over a duration of 30 months (December 2010-May 2013). Seventy isolated traumatic brain injury patients (age 16-65 years) were included in the study and categorized into four groups (n = 15 each): (i) severe head injury (SHI; GCS ≤ 8) patients who died within 5 days of injury, (ii) SHI patients who survived beyond 5 days of injury, (iii) moderate and mild head injury (MHI; GCS > 8-14) patients who were discharged within 5 days of injury, and (iv) MHI patients who were hospitalized for more than 5 days after injury, following assessment of injury using tools such as the Glasgow coma scale and computed tomography (CT) findings.
Patients with isolated skull fracture, immunocompromised patients, and those with pre-existing medical problems (diabetes/hypertension/hepatitis) were excluded from the study. Patients admitted ≥24 h after injury and those referred from other institutes were also excluded (Fig. 1).
Fifteen age- and gender-matched healthy controls (HC) were included in the study.
Parameters such as hospital length of stay (HLOS), ICU length of stay (ILOS) and Glasgow outcome score (GOS) at discharge, as well as the development of sepsis (blood culture positivity) and cerebral meningitis (CSF culture positivity) during the hospital stay, were recorded.
Peripheral blood was drawn on days 1 and 5 after injury for measurement of the chemokine RANTES using standard laboratory techniques.
Five contused brain tissue samples were also collected for chemokine analysis from SHI patients who survived beyond 5 days of injury, at the time of surgery (within 24 h of injury), from the site of evacuation.
Additionally, cerebrospinal fluid samples were taken, only when clinically indicated, from 10 separate SHI patients as per the standard scheme of neurosurgical management.
Diagnosis of traumatic brain injury
TBI was diagnosed based on admission head CT findings [12].
Sample size calculation
Assuming a common mean ± SD (standard deviation) of 82,840 ± 400 pg/ml, a one-way ANOVA would have 90% power to detect, at the 5% significance level, a difference in mean RANTES levels with a sample size of 10 in each of three groups (viz. no TBI; mild and moderate TBI; severe TBI) [12]. We therefore proposed 15 subjects per group.
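As an illustration only, this kind of power calculation can be reproduced with statsmodels; the Cohen's f below is derived from the quoted mean/SD figures under the simplifying assumption that a single group mean is shifted by 5%.

```python
# One-way ANOVA power check, a minimal sketch of the stated
# calculation (assumes one group mean shifted by 5%).
import numpy as np
from statsmodels.stats.power import FTestAnovaPower

mean, sd = 82840.0, 400.0
means = np.array([mean, mean, mean * 1.05])             # hypothetical group means
f = np.sqrt(((means - means.mean()) ** 2).mean()) / sd  # Cohen's f

# nobs is the *total* sample size, i.e. 10 per group x 3 groups
power = FTestAnovaPower().power(effect_size=f, nobs=30,
                                alpha=0.05, k_groups=3)
print(power)   # ~1.0: n = 10 per group is amply powered for this effect
```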
Day 5 samples were studied similarly, but only among moderate and severe head injury patients whose hospital stay exceeded 5 days. For CSF, 500 μl of cerebrospinal fluid was collected. Samples were centrifuged at 3000 rpm for 10 min to remove cellular debris, and the supernatant was decanted and stored at −20 °C until analysis.
Contused brain tissue
Tissues were removed, placed in cold (2-4 °C) phosphate-buffered saline (PBS) and stored at −80 °C until analysis. Tissue was homogenized as previously described by Hulse et al. [13].
Briefly, the tissue sample was rinsed once with cell wash buffer from the Bio-Plex™ Cell Lysis Kit (catalog #171-304012, Bio-Rad; Hercules, CA). Tissue was cut into 3 × 3 mm pieces. A 500 mM phenylmethylsulfonyl fluoride (PMSF) stock was prepared by adding 0.436 g PMSF (#P-7626, Sigma, St. Louis, MO, USA) to 5 ml dimethyl sulphoxide (DMSO; #D2650, Sigma, St. Louis, MO, USA) and stored in 0.5 ml aliquots at −20 °C. Lysing solution (10 ml) was prepared by mixing the other contents of the Cell Lysis Kit (#171-304012, Bio-Rad) as per the manufacturer's instructions, vortexed gently and set aside on ice, after which 40 μl of the 500 mM PMSF was added. The tissue sample was added to 500 μl of lysing solution, and tissue disruption was accomplished by drawing the sample up and down 20 times through a 1 ml pipette tip (cut back to a 2 mm opening), followed by centrifugation at 4500g for 15 min at 4 °C; the supernatant was collected.
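As a quick arithmetic check of the stated stock concentration (using a PMSF molecular weight of ~174.2 g/mol):

```python
# Verify that 0.436 g PMSF in 5 ml DMSO gives ~500 mM.
grams, mw_g_per_mol, volume_l = 0.436, 174.19, 0.005
molarity_mM = grams / mw_g_per_mol / volume_l * 1000
print(round(molarity_mM))   # ~501 mM, matching the stated 500 mM stock
```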
Statistical analysis
Statistical analysis was performed for the comparison of RANTES levels between the groups. Quantitative variables were summarized as mean ± SD or as median (range). Categorical data was expressed as frequency (%) and analyzed using Pearson chi square test. One-way analysis of variance (ANOVA) was applied for comparison between three groups. A p value of ≤0.05 was considered to be statistically significant.
Results
The study included 30 severe head injury patients (average age 35.1 ± 12.5 years; 87.9% male; 71.1% road traffic accidents), of whom fifteen had subdural hematoma (SDH), five epidural hematoma (EDH), five subarachnoid hemorrhage (SAH) and five multiple contusions. Nineteen patients with mild head injury (GCS 14) and eleven with moderate head injury (GCS 9-13) formed the moderate and mild head injury group.
On comparing the baseline parameters for the study groups (severe, moderate and mild head injury and healthy controls) age and gender frequency did not vary. Total leucocyte count was comparatively higher for the SHI group (p < 0.0001).
Nine (30%) SHI patients developed sepsis, of whom two died within 5 days of injury; 4 (13.3%) moderate and mild head injury patients developed sepsis.
Glasgow outcome scale (GOS) at discharge was worse in severe than in moderate and mild head injury group (p < 0.001) ( Table 1).
Of the 10 SHI patients (average age 32.7 ± 14.9 years) from whom CSF samples were collected, five had SDH, two EDH, one SAH and two multiple contusions (Table 2). The median plasma RANTES levels (pg/ml) were 971.3 (88.40-1931.13), 999.2 (31.23-2054.96) and 471.8 (370.9-631.9) for severe head injury, moderate and mild head injury, and healthy controls, respectively. Overall, these variations were statistically significant (p = 0.005); however, further analysis revealed no significant variation in plasma RANTES levels between the severe and the moderate and mild head injury groups (p = 0.85), and significant elevation of RANTES levels in the SHI group versus healthy controls and in the moderate and mild head injury group versus healthy controls (p ≤ 0.001 and 0.01, respectively) (Fig. 2).
Plasma RANTES levels correlation with severity of head injury
On admission, the plasma RANTES level (pg/ml) of severe head injury patients who died within 5 days of injury, 1155.37 (88.40-1931.13), was comparatively higher than that of SHI patients who survived to discharge, 742.22 (223.87-1523.66) (p = 0.04).
Only one patient died on the 7th day following injury, with an admission RANTES level of 803.99 pg/ml that rose to 1481.8 pg/ml on day 5.
RANTES alteration in the plasma at 1st and 5th day following TBI
To evaluate the trend in plasma RANTES levels following TBI, we quantified the levels on admission and on the 5th day post injury for 15 patients each in the severe and the moderate and mild head injury groups. The median (min-max) levels on admission were 742.22 (223.87-1523.6) and 1251 (77.35-1895.58) pg/ml, and on day 5 they were 694.18 (83.51-1804.73) and 1148.05 (25.63-1718.82) pg/ml, for the severe and the moderate and mild head injury groups respectively (Fig. 3).
Both the day 1 and day 5 plasma levels were slightly lower in the severe group than in the moderate and mild head injury group; however, this difference was not statistically significant (Table 3).
The difference between the mean day 1 and day 5 RANTES levels was calculated for the SHI group; a statistically insignificant decline of 34.7 pg/ml was observed. A similar insignificant decline occurred in the moderate and mild head injury group (Table 4).
RANTES in cerebrospinal fluid and cerebral contusion in TBI patients
RANTES levels were significantly higher in plasma, lower in contused brain tissue and lowest in CSF. The variation in chemokine levels was statistically significant across the three groups and within the groups (p = 0.0001) (Table 5). Median RANTES levels of the 5 SHI patients were compared between plasma and contused brain tissue; however, the variation was not statistically significant.
Of the ten patients whose CSF was analyzed for chemokine RANTES
Discussion
Secondary brain injury results from a complex cascade of events, such as edema, ischemia, excitotoxicity and inflammation, that follow the initial injury and last throughout acute hospitalization. There is a paucity of clinical studies corroborating the contribution of inflammation to secondary TBI, which has been well established under experimental conditions.
The following are the major findings of our study on the levels of regulated upon activation, normal T cell expressed and secreted (RANTES), a member of the β-chemokine subfamily, following traumatic brain injury.
On admission, the plasma RANTES level was almost twice as high in traumatic brain injury patients as in healthy controls, irrespective of the severity of head injury. SHI patients who died within 5 days of injury had higher RANTES levels than those who survived. A decline in plasma RANTES levels by day 5 was observed in severe head injury patients who survived. In SHI patients, plasma levels of RANTES were three times higher than those in contused brain tissue within 24 h of injury; however, CSF levels of RANTES were significantly lower than those in contused tissue and plasma. We observed altered RANTES levels in every readily measured compartment, including plasma, CSF and brain tissue, after traumatic brain injury. Our results demonstrate the contribution of neuroinflammation to the exacerbation of neurologic injury and to augmented morbidity and mortality rather than to facilitating repair. Similar to our study, Lumpkins et al. [12] reported a significantly higher day 1 RANTES level in severe TBI compared with a non-TBI group (mean 1339 vs. 708 pg/ml, p = 0.046). However, they reported no significant difference by severity of injury. A similar lack of significance for RANTES levels between SHI and MHI was observed in our study; discordantly, we report higher RANTES levels in the MHI group than in severe TBI. One possible explanation for this lower cytokine activation could be dilution of the chemokine in the plasma pool via fluid and blood product transfusion. A decline in the 5th day RANTES levels, from 923.1 (409.3-1291.0) on admission to 786.2 (83.5-1804.7) pg/ml, was observed in the severe head injury group, likely due to the onset of sepsis, as the association of downregulated RANTES levels with infection is well established in pediatric and adult populations [14][15][16].
A low plasma RANTES level has been established as an independent predictor of mortality in myocardial infarction [17] and cerebral malaria [18]; conversely, we observed a statistically significant association between elevated admission RANTES levels following severe traumatic brain injury and mortality.
Lee et al. [9] stated that central nervous system cells are primarily responsible for the increased chemokine gene expression; reactive glial cells were the reported sources of RANTES in scrapie, being triggered to release chemokines and cytokines.
In addition to chemokines, studies also report elevated mRNA and protein levels of chemokine receptors following injury. C-C chemokine receptor type 5 (CCR5) mRNA levels were upregulated in response to the elevated chemokine levels after injury. CCR5 plays a role in microglial migration towards the lesion site after focal brain injury [19]. Trauma-induced activation of astrocytes and microglial cells may be the predominant cause of the upregulation of chemokine levels in the brain following injury [20].
The patterns of markers of inflammation observed in peripheral blood tend to be echoed in CSF. Using cerebral microdialysis, Helmy et al. [21] demonstrated acutely elevated CSF levels of RANTES after severe TBI. They reported a stereotyped temporal peak, at least twice the median value of RANTES over the monitoring period, on day 1 of cortical injury. CSF RANTES levels in human immunodeficiency virus-infected subjects with cognitive impairment were reported to be 95.4 (<5-1442) pg/ml [22].
Sustained elevation of CCL2 [chemokine (C-C motif) ligand 2], of the same β-chemokine family as RANTES, was detected in the CSF of severe head injury patients for 10 days after trauma, and in cortical homogenates of mice, peaking at 4-12 h after closed head injury, confirming the significant role of CCL2 in mediating post-traumatic secondary brain damage [23]. We observed a CSF RANTES level of 34.81 (5.43-226.80) pg/ml. Erikson et al. [24] used multianalyte technology to simultaneously determine the responses of 13 cytokines and chemokines in brain and blood to injections of lipopolysaccharide, and path analysis to determine the major relations among these analytes. They reported a peak in RANTES levels in brain and serum and concluded that the immune response in the brain is latent compared to that in the periphery, and that expression of these cytokines in the brain likely requires initiation of signaling pathways and transcriptional events within the central nervous system, as previously observed by Tonelli and Postolache [25].
Terao et al. [26] demonstrated a significant elevation in brain tissue, but not plasma, levels of RANTES in wild-type (WT) mice subjected to focal cerebral ischemia-reperfusion (I/R); in contrast, we report elevations of both plasma (837.36 pg/ml) and brain tissue (237.39 pg/ml) RANTES levels following injury. BBB dysfunction induced by cerebral I/R was greatly attenuated in RANTES−/− mice, suggesting that RANTES directly or indirectly increases BBB permeability.
The results of their study suggested that RANTES plays a major role in the recruitment of both leukocytes and platelets into the cerebral microvasculature after brain I/R. They also reported that circulating blood cells are a likely source of the RANTES that mediates the I/R-induced cerebral responses. They observed a persistent elevation of brain tissue RANTES in WT mice transplanted with RANTES(−/−) bone marrow (RANTES−/− > WT), which suggests that non-blood cells (endothelial cells, vascular smooth muscle cells and/or glial cells) likely account for the majority of the I/R-induced elevation of brain tissue RANTES, and concluded that, of the total RANTES detected in brain tissue of WT-I/R mice, approximately 40% is derived from blood cells while 60% is derived from non-blood cells.
Hu et al. [27] reported that microglia obtained from fetal and adult brain specimens produced comparable amounts of RANTES, suggesting that the capacity to produce this chemokine is acquired early in brain development. Astrocytes, the major glial cell type within the CNS, were found to be less capable of producing RANTES, and anti-inflammatory cytokines regulate the production of RANTES.
Limitations
The major limitation of this study was that the cerebrospinal fluid samples were not taken from the same patients from whom plasma and contused brain tissue were obtained; the CSF of ten separate severe TBI patients was analyzed instead. Secondly, the Glasgow outcome score of the TBI patients was assessed at discharge and not 1 year after severe TBI.
Conclusion
This is the first study of its kind to show a significant correlation between RANTES levels within 24 h of injury and early mortality in isolated severe TBI patients. Plasma RANTES was significantly higher in TBI patients, irrespective of the severity of injury, in comparison to healthy controls. RANTES levels were significantly upregulated in plasma compared to brain tissue, suggesting an inflammatory response to TBI on a local and on a systemic (plasma) level. Our data emphasize the role of neuroinflammation in the escalation of the secondary insult which ultimately results in mortality. The pathophysiology underlying these results should stimulate future clinical trials targeted at alteration of RANTES levels, mitigating secondary brain injury to improve TBI outcomes.
Authors' contributions VA: carried out the acquisition of data, immunoassays, analysis and interpretation of data; drafted the manuscript and revised it critically for important intellectual content; AS: participated in its design, data analysis and interpretation, revised it critically for important intellectual content; DA: participated in the design of the study, patient recruitment, data interpretation, was involved in revising it critically for important intellectual content; SB: was involved in patient recruitment, revising it critically for important intellectual content; PP: conceived of the study, and participated in its design and coordination and helped to draft the manuscript; AKM: was involved in revising it critically for important intellectual content. All authors read and approved the final manuscript. | 4,761.4 | 2017-03-24T00:00:00.000 | [
"Biology",
"Medicine"
] |
Optimal pricing and inventory policies for non-instantaneous deteriorating items with permissible delay in payment: Fuzzy expected value model
Article history: Received 25 October 2011; Accepted 3 February 2012; Available online 18 February 2012. This study investigates optimal pricing and inventory policies for non-instantaneous deteriorating items with permissible delay in payment. The demand rate is a known, continuous, and differentiable function of price, while the holding cost rate, interest paid rate, and interest earned rate are characterized as independent fuzzy variables rather than fuzzy numbers as in previous studies. Under these general assumptions, we first formulate a fuzzy expected value model (EVM) and then derive some useful theoretical results to characterize the optimal solutions. An efficient algorithm is designed to determine the optimal pricing and inventory policy for the proposed model. The algorithmic procedure is demonstrated by means of numerical examples. © 2012 Growing Science Ltd. All rights reserved
Introduction
According to the modern view, uncertainty is considered essential to science; it is not only an unavoidable phenomenon but has, in fact, great utility in real-world applications. In essence, uncertainty occurs not only due to a lack of information but also as a result of ambiguity (impreciseness) in the semantic statements made by experts. In the context of inventory management, experts usually make interval-valued or linguistic statements about the time parameters and relevant data of the inventory system. These interval-valued or linguistic statements lead to non-stochastic uncertainties. Fuzzy set theory was developed to model uncertainties in this non-stochastic sense.
During the last two decades, several researchers have investigated various types of inventory problems in fuzzy environments to model uncertainties in a non-stochastic sense (e.g., Park, 1987; Chen et al., 1996; Roy and Maiti, 1997; Chang and Yao, 1998; Lee and Yao, 1999; Kao and Hsu, 2002; Chen and Ouyang, 2006; De and Goswami, 2008). In the aforementioned studies, the common feature is that the parameters (demand, cost coefficients, etc.) were assumed to be triangular or trapezoidal fuzzy numbers. A literature survey shows that few studies have considered the parameters to be fuzzy variables. For instance, Wang et al. (2007) constructed an EVM for the EOQ model without backordering by characterizing the holding cost and ordering cost as fuzzy variables. Wang and Tang (2009) considered an EVM for the EPQ problem with backorder in which the setup cost, the holding cost, and the backorder cost are characterized as fuzzy variables. Recently, Soni and Shah (2011) developed a fuzzy expected value production model by characterizing demand and production preparation time as fuzzy variables.
In recent years, researchers have studied inventory problems for non-instantaneous deteriorating items under different conditions. For example, Ouyang et al. (2006) studied an inventory model for non-instantaneous deteriorating items with permissible delay in payments. Geetha and Uthayakumar (2010) extended Ouyang et al.'s model by incorporating a time-dependent backlogging rate. However, both models consider a constant demand rate and a cost-minimization objective. The assumption of constant demand is quite impractical in reality; it would be more realistic to consider the demand as selling-price dependent. The basic idea is that price setting will influence the demand and the potential profit. Therefore, we consider demand to be price sensitive.
Based on the above discussion, we consider that the time parameters, the holding cost rate, and the interest paid/earned rates in the Geetha and Uthayakumar (2010) model may vary slightly owing to uncertainties in a non-stochastic sense or uncontrolled environments. In addition, instead of a constant demand rate, we assume the demand rate to be a known, continuous, and differentiable function of price. By incorporating the above concepts, we solve the new inventory model in the fuzzy sense. The main purpose of this study is to extend the paper of Geetha and Uthayakumar (2010) with a view to making the model more relevant and practically applicable.
The rest of the paper is organized as follows. In Section 2, the assumptions and notations used throughout the article are presented. In Section 3, the fuzzy expected value model to maximize the total profit is formulated. The solution methodology, comprising some useful theoretical results and an algorithm to find the optimal solution, is given in Section 4. Numerical examples are provided in Section 5 to illustrate the theory and the solution procedure. Finally, we draw conclusions in Section 6.
Assumptions and Notations
The following notations and assumptions are used in developing the mathematical model in this article. Π(p, t₂, t₃): the total profit per unit time of the inventory system.
Assumptions
(1) The inventory system involves a single non-instantaneous deteriorating item.
(2) The demand rate D(p) is a non-negative, continuous, decreasing function of the selling price.
(3) During the fixed period t₁, the product has no deterioration. After that, the on-hand inventory deteriorates at a constant rate θ, where 0 < θ < 1. For simplicity, we assume that t₁ is a given constant and t₁ ≤ t₂.
(4) There is no replacement or repair of deteriorated units during the period under consideration.
(5) Shortages are allowed and partially backlogged. We assume the fraction of shortages backordered is B(x) = 1/(1 + δx), where x is the waiting time up to the next replenishment and δ, 0 ≤ δ ≤ 1, is the backlogging parameter. This function has been utilized by many researchers (e.g., Abad (1996, 2001), Dye (2007), Geetha and Uthayakumar (2010)).
(6) During the trade credit period M, the account is not settled; the revenue is deposited in an interest-bearing account. At the end of the period, the retailer pays off the items ordered and starts to pay the interest charged on the items in stock.
(7) The replenishment rate is infinite and the lead time is zero.
(8) The system operates for an infinite planning horizon.
(9) The holding cost rate, interest paid rate, and interest earned rate are imprecise in nature and are assumed to be non-interactive fuzzy variables defined on the credibility space (X, P(X), Cr).
The crisp inventory model
The inventory system evolves as follows: Q₁ units of the item arrive at the inventory system at the beginning of each cycle. The inventory level declines due to demand alone over the time interval [0, t₁], and is reduced to zero owing to demand and deterioration during the time interval [t₁, t₂]. After that, the inventory level becomes zero and shortages begin to accumulate during [t₂, T]. The process is then repeated.
Based on the above description, the status of the inventory level I(t) at any instant of time is governed by dI/dt = −D(p) for 0 ≤ t ≤ t₁, dI/dt = −D(p) − θI(t) for t₁ ≤ t ≤ t₂, and dI/dt = −D(p)/(1 + δ(T − t)) for t₂ ≤ t ≤ T. The ordering quantity over the replenishment cycle can then be determined by integrating these equations (a numerical sketch of these dynamics is given after the cost components below). The profit of the inventory system consists of the following components.
1. The ordering cost (C_o) is A.
2. The inventory holding cost (C_h) per cycle.
3. The shortage cost (C_s) per cycle due to backlog.
4. The opportunity cost (C_l) per cycle due to lost sales.
6. The sales revenue (R).
Next, based on the parameter values t₁, t₂, and M, there are three cases to be explored.
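A small numerical sketch of the inventory dynamics described above may help: it integrates the reconstructed piecewise ODEs backward from the stock-out condition I(t₂) = 0 to recover the order quantity Q₁. The ODE forms and all parameter values here are illustrative assumptions, not data from this paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not taken from the paper)
D, theta, delta = 100.0, 0.08, 0.5   # demand rate, deterioration rate, backlogging
t1, t2, T = 0.2, 0.8, 1.2            # no-deterioration point, stock-out time, cycle end

def dIdt(t, I):
    if t <= t1:
        return [-D]                          # depletion by demand only
    if t <= t2:
        return [-D - theta * I[0]]           # demand plus deterioration
    return [-D / (1.0 + delta * (T - t))]    # partially backlogged demand

# Integrate backward from the stock-out point I(t2) = 0 to obtain Q1 = I(0)
sol = solve_ivp(dIdt, (t2, 0.0), [0.0], dense_output=True, max_step=1e-3)
Q1 = sol.sol(0.0)[0]
print(f"Q1 ≈ {Q1:.2f}")  # matches (D/theta)*(exp(theta*(t2-t1)) - 1) + D*t1 ≈ 81.46
```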
Fuzzy Expected Value inventory model
In this article, we have considered the holding cost rate, interest paid rate, and interest earned rate as fuzzy variables to capture reality more effectively. When the parameters h, i_p, and i_e (as per assumption (9)) are treated as fuzzy variables, the above inventory expressions become fuzzy, and thereby the total profit per unit time becomes a fuzzy variable on the credibility space (X, P(X), Cr). If the decision maker wants to determine the optimal pricing and inventory policy such that the fuzzy expected value of the total profit is maximal, a fuzzy EVM can be constructed accordingly (Eq. (5)). The next section carries out the solution methodology for the fuzzy EVM, along with theoretical results to identify the global optimal solution for (p, t₂, t₃).
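For readers unfamiliar with the expected value operator E used in the EVM, the following sketch evaluates Liu's credibility-based expected value, E[ξ] = ∫₀^∞ Cr{ξ ≥ r} dr − ∫_{−∞}^0 Cr{ξ ≤ r} dr, with Cr = (Pos + Nec)/2 obtained from the membership function recalled at the end of this paper. The triangular fuzzy variable and its parameters are an illustrative assumption, not the paper's data.

```python
import numpy as np

a, b, c = 0.02, 0.05, 0.10           # illustrative triangular fuzzy variable

def mu(x):                           # membership function of (a, b, c)
    return np.maximum(np.minimum((x - a)/(b - a), (c - x)/(c - b)), 0.0)

xs = np.linspace(a - 0.5, c + 0.5, 20001)
ms = mu(xs)

def cr_geq(r):
    # Cr{xi >= r} = (Pos{xi >= r} + Nec{xi >= r}) / 2
    #             = (sup_{x>=r} mu(x) + 1 - sup_{x<r} mu(x)) / 2
    return 0.5 * (ms[xs >= r].max(initial=0.0)
                  + 1.0 - ms[xs < r].max(initial=0.0))

# xi >= 0 here, so only the first integral of E[xi] contributes.
rs = np.linspace(0.0, c + 0.2, 2001)
E = np.trapz([cr_geq(r) for r in rs], rs)
print(E, (a + 2*b + c) / 4)          # both ≈ 0.055 for a triangular variable
```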
Solution Methodology
Using the linearity of the expected value operator E, the fuzzy EVM given by Eq. (5) can be reduced to the following single-objective crisp problem.
Case 1: 0 < M ≤ t₁. From Eq. (6), the expected value of the total profit per unit time during the replenishment cycle can be written accordingly. To maximize the expected total profit per unit time, it is necessary to solve the first-order conditions ∂E[Π(p, t₂, t₃)]/∂t₂ = 0 and ∂E[Π(p, t₂, t₃)]/∂t₃ = 0 simultaneously. In order to identify the optimal solution for (p, t₂, t₃), we first prove that for any given p the optimal pair of values (t₂, t₃) not only exists but is also unique. Once this is done, we derive the existence of the optimal p for the pair (t₂, t₃).
From Eqs. (8) and (9) we obtain Eqs. (11) and (12), respectively. Equating the right-hand sides of Eqs. (11) and (12), we obtain Eq. (13); for convenience, the resulting relation defining t₃ is written as Eq. (14). Thus, t₃ is a function of t₂ and p.
Now, we substitute t₃ = t₃(t₂, p) from Eq. (14) into Eq. (11) and, after some algebraic manipulation, we obtain Eq. (15). Motivated by Eq. (15), we define an auxiliary function F₁(t₂) on [t₁, ∞), where t₃ is given as in Eq. (14). Differentiating F₁(t₂) with respect to t₂, and using the relations in Eqs.
(13) and (14), we obtain the sign behavior of F₁ for t₂ ∈ [t₁, ∞), and the behavior of F₁ as t₂ grows can be characterized. Now, the optimal value of t₂ depends on the sign of F₁(t₁), so we examine two sub-cases as follows. Sub-case 1.1: let F₁(t₁) ≤ 0. Hence, the optimal value occurs at the point t₂ = t₁, and the corresponding optimal value of t₃ can be found from Eq. (14). Summarizing the above arguments, we obtain the following result.
Next, we analyze the condition under which the optimal selling price also exists and is unique. It follows that there exists a unique optimal selling price p₁* that satisfies Eq. (10). Note that the lower bound of the optimal selling price (say p_l) is the solution of the corresponding first-order condition. Case 2: t₁ < M ≤ t₂. From Eq. (6), the expected value of the total profit during the replenishment cycle per unit time can be written as follows.
From Eqs. (23) and (24) we obtain, respectively, the analogous relations; thus, t₃ is again a function of t₂ and p.
Summarizing the above arguments, and as discussed earlier in Case 1, we can obtain the following result. If (t₂*, t₃*) denotes the optimal value of (t₂, t₃) for Case 2, then the expected total profit per unit time is concave in (t₂, t₃) and attains its global maximum at (t₂*, t₃*). Proof: analogous to Theorem 4.1.
Next, the condition for the existence and uniqueness of the optimal selling price can be derived analogously as in Case 1. Consequently, there exists a unique optimal selling price, denoted by p₂*, that satisfies the corresponding first-order condition. Case 3: M > t₂. From Eq. (6), the expected value of the total profit during the replenishment cycle per unit time can be written as follows.
Hence, the optimal value occurs at the point t₂ = t₁, and the corresponding optimal value of t₃ can be found from Eq. (35). If (t₂*, t₃*) denotes the optimal value of (t₂, t₃) for Case 3, then the expected total profit per unit time is concave in (t₂, t₃) and attains its global maximum at (t₂*, t₃*). Proof: similar to Theorem 4.1.
Next, the condition for the existence and uniqueness of the optimal selling price can be obtained in a similar manner as in Case 1. Therefore, there exists a unique optimal selling price, denoted by p₃*, that satisfies the corresponding first-order condition. Based on the concavity of the objective function with respect to the decision variables, the following algorithmic procedure was developed to identify the global optimal solution for (p, t₂, t₃).
Algorithm 4.1:
Step 1: Input the values of all parameters. Select membership functions for holding cost rate, interest paid rate and interest earned rate with appropriate parametric values.
Step 2: Set k = 1 and initialize the value of p. Step 3: Compare the values of M and t₁. If M ≤ t₁, then go to Step 4; otherwise go to Step 5.
Step 4: Calculate F₁(t₁) by Eq. (16) and execute the corresponding sub-case (4.1) or (4.2). Step 5: Proceed analogously for Cases 2 and 3. To show the efficiency of the proposed computational Algorithm 4.1, we run the algorithm with a starting value of p = 360. The graph (Fig. 1) shows a clearly concave profit function of t₂ and t₃ for the given p. Consequently, the obtained solution is a global maximum; a minimal sketch of this procedure is given after Fig. 1.
Fig. 1. Profit function (Example 4) with respect to t₂ and t₃.
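To make the flow of Algorithm 4.1 concrete, here is a minimal optimization skeleton. The profit surrogate `expected_profit`, the grid resolution, and all numbers are illustrative assumptions; only the structure (outer price search, inner concave maximization over (t₂, t₃), case selection by comparing M with t₁) mirrors the algorithm above.

```python
import numpy as np
from scipy.optimize import minimize_scalar

t1, M = 0.2, 0.15                    # illustrative: M <= t1, i.e., Case 1 applies

def expected_profit(p, t2, t3):
    # Toy concave surrogate for E[Pi(p, t2, t3)]; the paper's Case 1-3
    # expressions would be substituted here.
    return 40.0 - (p - 300.0)**2 / 50.0 - (t2 - 0.8)**2 - (t3 - 1.2)**2

def inner_opt(p):
    # Step 3 branch: which case's profit expression applies depends on M vs t1
    # (and, for Case 3, on M vs t2); the toy surrogate ignores this distinction.
    grid = [(expected_profit(p, t2, t3), t2, t3)
            for t2 in np.linspace(t1, 1.5, 60)
            for t3 in np.linspace(t1, 2.0, 60)]
    return max(grid, key=lambda z: z[0])

res = minimize_scalar(lambda p: -inner_opt(p)[0],
                      bounds=(250.0, 450.0), method='bounded')
profit, t2s, t3s = inner_opt(res.x)
print(f"p* ≈ {res.x:.1f}, t2* ≈ {t2s:.2f}, t3* ≈ {t3s:.2f}, profit ≈ {profit:.2f}")
```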
Conclusion
Based on the model of Geetha and Uthayakumar (2010), a new fuzzy EVM with generalized price-sensitive demand is formulated. In contrast to previous studies, we characterized the holding cost rate, interest paid rate, and interest earned rate as independent fuzzy variables to capture reality more effectively. A solution methodology, with some useful theoretical results followed by an efficient computational algorithm, is developed to determine the optimal pricing and inventory decisions. The extended model is more effective as it can help the decision maker in subjective decisions with control of the selling price. In future research on this problem, it would be interesting to consider other parameters, such as a variable demand rate or the partial backlogging rate, as fuzzy or fuzzy-stochastic.
Let ξ be a fuzzy variable defined on the credibility space (X, P(X), Cr). Then its membership function µ is derived from the credibility measure through µ(x) = (2Cr {ξ = x}) ∧ 1, x ∈ R. | 3,231.2 | 2012-04-01T00:00:00.000 | [
"Computer Science"
] |
Linked Reactivity at Mineral-Water Interfaces Through Bulk Crystal Conduction
The semiconducting properties of a wide range of minerals are often ignored in the study of their interfacial geochemical behavior. We show that surface-specific charge density accumulation reactions combined with bulk charge carrier diffusivity create conditions under which interfacial electron transfer reactions at one surface couple with those at another via current flow through the crystal bulk. Specifically, we observed that a chemically induced surface potential gradient across hematite (α-Fe2O3) crystals is sufficiently high and the bulk electrical resistivity sufficiently low that dissolution of edge surfaces is linked to simultaneous growth of the crystallographically distinct (001) basal plane. The apparent importance of bulk crystal conduction is likely to be generalizable to a host of naturally abundant semiconducting minerals playing varied key roles in soils, sediments, and the atmosphere.
The chemical behavior of mineral-water interfaces is central to aqueous reactivity in natural waters, soil evolution, and atmospheric chemistry and is of direct relevance for maintaining the integrity of waste repositories and remediating environmental pollutants. Traditionally, explorations of fundamental reactions at these interfaces have probed the interaction of water and relevant dissolved ions with crystallographically well-defined mineral surfaces. The pursuit so far has been dominated by the assumption that distinct surfaces of any given crystal behave independently of each other. Except by diffusion through the solution phase or across surface planes, exchange of mass or electron equivalents between sites of differing potential energy at different locations on any given crystal is typically assumed to be negligible. This assumption is nonetheless questionable for the widespread group of minerals that are electrical semiconductors. For example, iron oxides often have moderate to low electrical resistivity (1) and have been studied as electrode materials for decades (2-4). Iron oxide crystal surfaces are chemically reactive with water and ions, leading to solution-dependent charging behavior that differs from one surface type to the next; differing points of zero charge for proton adsorption are but one example (5,6). This difference should give rise to a surface electric potential gradient (Δψ₀) across any crystal that has two or more structurally distinct faces exposed to solution. In principle, this gradient can bias the diffusion of charge carriers (7,8). Hence, conditions could exist when the gradient across a single crystal is sufficiently large and the electrical resistivity of the material sufficiently low that interfacial electron transfer reactions at one surface couple with those at another by a current flowing spontaneously through the crystal bulk. The situation is analogous to galvanic metal corrosion, but instead of spatially disordered anodic and cathodic electron transfer sites, the anode and cathode are spatially confined to crystallographically distinct surface planes and are therefore physically separable for measurement. We demonstrate operability of these conditions for iron oxide, uncover their effects on the surface chemical behavior, and make the case that, in nature, different surfaces of certain abundant crystals are inextricably linked.
We examined hematite (α-Fe₂O₃) because it is a wide band gap semiconductor (band gap 1.9 to 2.3 eV) (1,3) and the most stable form of iron oxide under dry oxidizing conditions; it is extremely common in nature (9). It has the corundum structure type based on hexagonal close-packed oxygen planes in which 2/3 of the available octahedral cavities are occupied by Fe³⁺. This structure gives rise to anisotropic electrical resistivity that is higher in the basal plane than along the trigonal axis by up to four orders of magnitude (10,11); reported bulk resistivities range from 10² to 10⁶ ohm·m (1). When hematite is subjected to oxygen-limited aquatic environments, particularly in acidic conditions, it can be reductively dissolved according to Eq. 1, α-Fe₂O₃ + 6H⁺ + 2e⁻ → 2Fe²⁺(aq) + 3H₂O, which has a standard reduction potential E° ≈ 0.7 V (8,12). The fundamental reaction central to the overall process is the interfacial electron transfer step of Eq. 2. Sorbed Fe²⁺ from the aqueous phase is capable of reducing hematite Fe³⁺ in this system (13-15), yielding an iron redox cycle in which no net reduction occurs. Introduction of dicarboxylic acids such as oxalate causes net dissolution by chelating surface Fe³⁺ (ligand-assisted dissolution); it also enhances reduction, possibly through the formation of ternary surface complexes with Fe²⁺(aq), for example (16). This collective chemistry is a good test case for our main hypothesis because it involves a source of electron equivalents from Fe²⁺ in solution, a range of potential-determining ions, electron transfer across hematite-solution interfaces, and the possibility of moving electron equivalents through the crystal bulk.
Development of a potential gradient Δψ₀ of significant magnitude across the crystal requires selective interaction between potential-determining ions and specific hematite surfaces. We focus here on the roles of protons (low pH) and oxalate as a representative dicarboxylic acid. The hematite (001) basal surface is structurally distinct from any edge surfaces. In water, the (001) surface is terminated predominantly by doubly coordinated hydroxyls (17-19) that are relatively inert to the protonation and deprotonation reactions needed for charge accumulation. Smaller populations of more reactive singly coordinated and triply coordinated hydroxyls, capable of positive charge accumulation, are associated with terminal Fe groups (19). Terminal Fe groups with low coordination to the underlying surface can be easily chelated by oxalate anions to form negatively charged mononuclear bidentate inner-sphere surface complexes (20,21). In contrast, many remaining low-index surfaces of hematite crystals such as (012) are dominated by higher-coordinated Fe (17,22). This higher-coordinated Fe is more difficult to chelate yet bears singly and/or triply coordinated hydroxyls for charge accumulation. In general, therefore, we expect that, relative to other hematite surfaces, the (001) surface should show weaker pH-dependent charge accumulation, an observation increasingly confirmed by recent data and theory (23,24), and stronger interaction with oxalate anions that increases with decreasing pH.
We performed several experiments to potentiometrically measure Δψ₀ for specific hematite surfaces and to determine its effect on their behavior in Fe²⁺ and oxalate solutions. We chose a large natural specular hematite crystal with well-defined surfaces that could be isolated for study. The crystal was low in impurity content (25), a natural n-type semiconductor (1), and had a room temperature electrical resistivity of 10⁵ ohm·m as measured by the four-point probe method. Generating replicate samples required cutting specific crystallographic surfaces from the crystal as rectangular prism-shaped specimens, which also yielded a vicinal surface type along cut edges. For example, we prepared millimeter-sized oriented prisms exposing two (001) surfaces on the top and bottom of the prism and four orthogonal (hk0) vicinal sides (25). Annealing in air under conditions where hematite is the only stable iron oxide effectively cleans and organizes the surfaces without modifying the bulk electrical conductivity. This procedure yields highly organized (001) surfaces, with accompanying (hk0) vicinal surfaces that are microfaceted with stable edge terminations (Fig. 1, A and B). Similarly, prism specimens bearing (012) and (113) surfaces (Fig. 1, C and D) with accompanying vicinal surfaces were prepared.
To determine the magnitude of Δψ₀ and the roles of Fe²⁺ and oxalate solution components, we measured the open-circuit potential (E_OCP) in four solution types. The E_OCP is the electrode rest potential relative to a standard reference electrode. Changes in the E_OCP are directly related to changes in ψ₀ (26,27), which in turn is sensitive to surface complexation reactions with our potential-determining ions H⁺, Cl⁻, Fe²⁺, and oxalate species (28). The measurements were performed at room temperature at effectively constant ionic strength under anaerobic conditions (25). We focused these measurements on the (001) and the accompanying (hk0) vicinal surface type. The observed approximately linear pH dependence, with predominantly negative slopes, is consistent with the accumulation of positive surface charge with decreasing pH (Fig. 2). As expected, in pure electrolyte solution the (001) surface showed a less-negative slope relative to that of the (hk0) surface, consistent with a lower density of charge accumulation sites on the (001) surface (Fig. 2A). In contrast, oxalate anions bind preferentially to the (001) surface with decreasing pH, even to the point of sign reversal in the slope (Fig. 2B). Addition of Fe²⁺ to either solution shows that its primary effect is to lower the overall potential for both the (001) and (hk0) surfaces without substantially modifying the pH dependence (Fig. 2, A and B). Taking E(hk0) − E(001) as an estimate of Δψ₀, in the presence of oxalate and irrespective of the presence of Fe²⁺(aq) we found that the potential gradient is large and positive, on the order of tenths of volts below pH = 3 (Fig. 2C). Under these conditions, we expect that mobile electrons acting as majority carriers in hematite would be directed by Δψ₀ from the (001) surface to the (hk0) surface. E_OCP measurements directly between identical surface types [e.g., E(001) − E(001)] (25) showed no significant voltage.
To examine the effects of Dy 0 of this sign and magnitude on the surface chemical behavior, we examined surfaces of the oriented prisms from the same sample by using atomic force microscopy (AFM) before and after anaerobic reaction with Fe 2+ -oxalate solutions. Thermostated batch vessels were used with temperatures ranging from room temperature to 75°C and pH ranging from 2 to 3 (25). Fe 2+ -oxalate concentrations consistent with previously published experiments that establish net dissolution in terms of Fe 3+ (aq) release on fine-grained powders were used (16,29,30). Collectively, these conditions were selected in keeping with Eq. 1 while also accelerating surface transformations into a more easily observable time frame. Light was excluded in all cases to avoid oxalate acting as a reductant. Equilibrium thermodynamic calculations along with Eh measurements at run conditions confirm that all our reaction conditions lie within the Fe 2+ (aq) stability field (25). Hematite was the limiting reactant; total dissolution would retain undersaturation with respect to any possible iron oxide phases.
AFM examination of (001) surfaces after reaction runs showed remarkable features. In every case, for both natural and synthetic samples, (001) surfaces were overgrown with a hexagonal pseudo-pyramidal morphology of uniform orientation. Images at early stages show the island growth of these features on the initially flat (001) surface (Fig. 1E). After 12 hours, the reaction yielded merged pyramid-covered (001) surfaces with peak-to-valley heights averaging 200 nm and pyramid bases approaching a micrometer in width, imparting a distinct matte appearance to the reacted (001) surface visible to the naked eye. Transmission electron microscopy (TEM) and selected area diffraction measurements along [001] transects of this sample type (fig. S1), along with x-ray photoelectron spectroscopy, x-ray diffraction, and energy dispersive x-ray spectroscopy, confirmed that the grown material is structurally and compositionally α-Fe₂O₃ of identical orientation as the underlying material without detectable impurities. The line of intersection of apparent pyramid "facets" with the (001) surface maintains a common orientation, but the interplanar angle varies with run duration and does not correspond to low-index planes in hematite. The large size, morphologic symmetry, and mutual orientation of these pyramids require homoepitaxy, that is, growth of additional hematite on hematite.
In contrast, all other surfaces examined show features characteristic of dissolution. For example, the four (hk0) vicinal sides of prism samples bearing (001) surfaces on top and bottom exhibited fine-scale pitting and roughening (Fig. 1F). The (012) and (113) surfaces of prism samples show development of etch pits at various length scales and with symmetry corresponding to crystallographic orientation (Fig. 1, G and H). We observed identical behavior under the same conditions with use of synthetic tabular hematite crystals bearing primarily (001) and (012) surfaces, in which case no surface preparation by annealing was required (fig. S2). The conclusion is that the hematite (001) surface grows under our conditions whereas all other surfaces sampled dissolve, as represented schematically in Fig. 3A.
We designed experiments using the prism samples to test whether or not pyramid islands are deposited on the (001) surface by precipitation of trace Fe³⁺ from solution (16). In these experiments, two prism samples were used in the reaction vessel instead of one. Four (hk0) vicinal sides of one crystal were sealed with an inert epoxy (25), leaving two (001) surfaces exposed, whereas on the other crystal the two (001) surfaces were sealed, leaving four vicinal surfaces exposed. Collectively, the two crystals expose the same six kinds of surfaces to solution as in the runs above with one crystal, in the same relative proportion and surface area, but they involve (001) surfaces that are physically separated from the (hk0) surfaces (Fig. 3B). In this case, the results of reaction runs show only dissolution features on all exposed surfaces, including (001) (fig. S3A). The (001) pyramidal morphology does not develop in this separated two-crystal configuration. The same experiment performed on samples in which the pyramidal morphology had already been grown on (001) before sealing its (hk0) sides showed dissolution of the pyramidal overgrowths on the (001) surface (fig. S3B). Therefore, the (001) pyramidal overgrowths do not form by precipitation of ferric iron. Furthermore, we deduce that chemical processes at the (001)-solution interface causing pyramidal growth during reaction are facilitated by solid contact between the (001) and (hk0) surfaces; that is, these surfaces must be on the same crystal.
The behavior strongly suggests that bulk charge transport provides the link between the two types of surfaces. As a further test, we again prepared two crystals with partial exposure of (001) on one and (hk0) surfaces on the other, except this time with an electrical connection between them (Fig. 3C). A crystal exposing only (001) surfaces was connected to a crystal beneath exposing only (hk0) surfaces by electrically conductive colloidal Ag paste, which was subsequently cured, sealed off from contact with solution using additional epoxy, and tested for ohmic behavior by resistivity measurements. In this design, the crystals are effectively wired together by the (001)-Ag-(001) junction between them. Reaction in this wired two-crystal configuration proceeds as if the crystals were one; pyramidal hematite grows on the exposed (001) surface of the upper crystal (fig. S3C), whereas the four (hk0) sides of the lower crystal dissolve (Fig. 3C). Therefore, the nature of the interaction between the (001) and vicinal surfaces that gives rise to the pyramidal growth of hematite (001) during reaction derives from bulk charge transport. Surface diffusion along the hematite-solution interface was ruled out by painting a ring of sealant on a (001) surface so that only bulk transport could access the circumscribed region, and within that region hematite island growth also occurred (fig. S4).
The collective behavior of the system is therefore suggestive of two distinct but coupled interfacial processes: growth at (001) by oxidative adsorption of Fe²⁺ (Eq. 3, 2Fe²⁺(aq) + 3H₂O → α-Fe₂O₃ + 6H⁺ + 2e⁻) and dissolution of edge surfaces, for example, (hk0) surfaces, by reductive release of Fe²⁺ (Eq. 4, α-Fe₂O₃ + 6H⁺ + 2e⁻ → 2Fe²⁺(aq) + 3H₂O), with coupling mediated by charge transport from (001) to (hk0) surfaces through the crystal bulk. The process involves preferential net oxidative adsorption of Fe²⁺(aq) at the (001)-solution interface and valence interchange with structural Fe³⁺ at that surface (Fig. 4). At temperatures of interest (room temperature and higher), bulk charge transport is sufficiently facile to support a small current through the bulk. Net electron equivalents injected into the (001) surface follow an electrically biased random walk through the crystal to (hk0) surfaces. At (hk0) exit points, internal reduction of Fe³⁺ to Fe²⁺ solubilizes and releases iron into solution. This circuit is driven by the Δψ₀ gradient generated across the crystal from divergent charge accumulation at structurally distinct surface types. The sign and magnitude of Δψ₀, the conductivity of the natural crystal, and the growth rates of the pyramidal islands are all mutually consistent. For example, taking Δψ₀ = 0.2 V at pH = 2, a temperature-adjusted electrical resistivity of 10⁴ ohm·m for 75°C (31,32), and an electron transport path length of 1 mm, the maximum amount of additional hematite expected on the (001) surface in 12 hours is a layer ~100 nm thick, the same order of magnitude as that observed. Surface potential-driven charge carrier diffusivity has been invoked qualitatively to explain microscopic oxide transformation processes before (13,33,34) but not on the length scale examined here nor with surface specificity. Given the observation that the (001) surface continues to grow beyond the coalescence of the pyramidal islands, at the atomic scale the pyramidal (001) morphology must retain the essential structural and therefore chemical characteristics that give rise to the potential of the initial (001) surface. Furthermore, the observed process does not preclude traditionally held spatially localized dissolution in the hematite system. Rather, the evidence suggests that the processes operate in parallel and that the behavior based on the electrical circuit through the crystal dominates when chemical requirements that establish a large enough surface electric potential gradient are met.
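The order-of-magnitude estimate above can be reproduced directly from the quoted numbers. The only inputs not stated in the text are the Faraday constant and hematite's molar volume (handbook values), plus the 2-electron stoichiometry per Fe₂O₃ taken from the growth reaction written above.

```python
# Back-of-envelope check of the ~100 nm growth estimate quoted above,
# using the numbers given in the text (0.2 V, 1e4 ohm·m at 75 °C, 1 mm
# path, 12 h) and standard constants.
F_const = 96485.0            # Faraday constant, C/mol
V_molar = 30.4e-6            # molar volume of hematite, m^3/mol (M/rho, handbook)

dpsi     = 0.2               # surface potential gradient, V
rho_elec = 1.0e4             # electrical resistivity, ohm·m
L_path   = 1.0e-3            # electron transport path length, m
t_run    = 12 * 3600.0       # run duration, s

J = dpsi / (rho_elec * L_path)        # current density, A/m^2
n_Fe2O3 = J * t_run / (2 * F_const)   # mol of Fe2O3 per m^2 (2 e- per formula unit)
thickness = n_Fe2O3 * V_molar         # grown layer thickness, m
print(f"{thickness*1e9:.0f} nm")      # ≈ 136 nm, same order as the quoted ~100 nm
```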
The finding provides insight into the reductive transformation of iron oxides, which is important in the biogeochemical cycling of iron in nature and the removal of iron oxide films in industry. Because this finding can be easily generalized to a host of naturally abundant semiconducting transition metal oxide and sulfide minerals capable of dominating the interfacial surface area in soils, sediments, and among atmospheric particles, its implications are fairly widespread. Of immediate impact is the concept that the reactivity of any given surface on such materials can be coupled to that of another surface, with a dependence on crystal morphology as a whole. This phenomenon should apply to natural crystals in the environment as well as those selectively cut, broken, or otherwise prepared for laboratory study.
Fig. 4. Schematic diagram depicting the inferred coupled interfacial electron transfer process operative under our conditions for the hematite single crystals. The chemically self-induced surface potential gradient across the crystal directs current flow through the bulk. The current is facilitated by sufficiently low electrical resistivity in a process that is fed by net injection of electron equivalents at (001) surfaces and net release of electron equivalents at (hk0) surfaces. | 4,382 | 2008-04-11T00:00:00.000 | [
"Geology"
] |
Darboux polynomials, balances and Painlevé property
For a given polynomial differential system we provide different necessary conditions for the existence of Darboux polynomials using the balances of the system and the Painlevé property. As far as we know, these are the first results which relate the Darboux theory of integrability, first, to the Painlevé property and, second, to the Kovalevskaya exponents. The relation of these last two notions to general integrability has been intensively studied in recent years.
INTRODUCTION AND STATEMENT OF THE MAIN RESULTS
The Painlevé property appears in studying the general solutions of differential equations viewed as functions of complex time. More precisely, when the solutions are single-valued on their maximum domain of analytic continuation, we say that the system has the Painlevé property. In other words, a differential system has the Painlevé property if its general solution has no movable critical singularities; for more details see [3]. This property imposes such strong conditions that, although it has not been proved, one believes that a system satisfying it is integrable. However, there is no precise algorithm to decide whether a system has the Painlevé property, and only necessary conditions can be obtained, called the Painlevé test. Most systems do not satisfy the Painlevé test, but there is a lot of information concerning the global behavior of a system that we can obtain from the local analysis around the singularities in complex time, and the lack of meromorphicity can be used to prove the nonintegrability of the system with meromorphic first integrals.
For more than half a century after its development, the Painlevé theory for differential equations was considered an interesting and important, but perhaps slightly old-fashioned, part of the theory of special functions, and little attention was paid to it until the early 1980s, when its relation to soliton theories was discovered. Since then there has been a huge amount of work relating the Painlevé property to different branches of differential systems, such as the integrability of PDEs and the rational and polynomial integrability of ODEs. However, very little is known about its relation to the Darboux theory of integrability for polynomial differential systems. The main aim of this paper is to focus on the connections between the existence of Darboux polynomials, the Painlevé property, and the Kovalevskaya exponents (introduced by Sophia Kowalevskaya to compute the Laurent series solutions of rigid body motion).
In order to state the main results of the paper, we consider a polynomial differential system of the form
ẋ = P(x),   (1.1)
where x = (x₁, …, xₙ), P(x) = (P₁(x), …, Pₙ(x)) and Pᵢ ∈ C[x₁, …, xₙ] for i = 1, …, n. As usual, C denotes the set of complex numbers, and C[x₁, …, xₙ] denotes the polynomial ring over C in the variables x₁, …, xₙ. Here t can be real or complex. The maximum of the degrees of the polynomials Pᵢ for i = 1, …, n is called the degree of the polynomial differential system (1.1). Assume that there exists a solution of the form
x = α t^p, i.e., xᵢ(t) = αᵢ t^{pᵢ} for i = 1, …, n,   (1.2)
where p = (p₁, …, pₙ), α = (α₁, …, αₙ) ≠ (0, …, 0), αᵢ ∈ C, and the pᵢ ∈ R are given by one of the nonvanishing solutions of the algebraic equation
pᵢ αᵢ = Pᵢ(α), i = 1, …, n.   (1.3)
For a given p there may exist different sets of values of α, called balances. The Kowalevskaya matrix associated to a balance α is
M = DP(α) − diag(p₁, …, pₙ),   (1.4)
where, as usual, DP(α) denotes the Jacobian matrix of P evaluated at α and diag(p₁, …, pₙ) denotes the matrix whose diagonal is equal to (p₁, …, pₙ) with zeroes elsewhere. The eigenvalues of the matrix M are called the Kovalevskaya exponents of the balance α and are denoted by ρ = (ρ₁, …, ρₙ). It can be shown that there always exists a Kowalevskaya exponent equal to −1, related to the arbitrariness of the origin of the parameterization of the solution by the time. The eigenvector associated to the eigenvalue ρ₁ = −1 is pα = (p₁α₁, …, pₙαₙ). For more details see [3] or [2]. In Section 2 we recall how to compute the solutions of the form (1.2) when the polynomial differential system is quasi-homogeneous.
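To make this computation concrete, here is a minimal sympy sketch for a textbook quasi-homogeneous example (ẋ₁ = x₂, ẋ₂ = 6x₁², which is not one of the systems studied in this paper); it solves the balance equation (1.3) and computes the eigenvalues of the Kowalevskaya matrix (1.4), recovering the ever-present exponent −1.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
a1, a2 = sp.symbols('alpha1 alpha2')

# Quasi-homogeneous system x1' = x2, x2' = 6*x1**2 with weight exponent
# s = (2, 3) and weight degree r = 2, hence p = s/(1 - r) = (-2, -3).
P = sp.Matrix([x2, 6*x1**2])
p = sp.Matrix([-2, -3])

# Balance equation (1.3): p_i * alpha_i = P_i(alpha)
eqs = [sp.Eq(p[i] * [a1, a2][i], P[i].subs({x1: a1, x2: a2})) for i in range(2)]
balances = sp.solve(eqs, [a1, a2], dict=True)

for bal in balances:
    if all(v == 0 for v in bal.values()):
        continue  # skip the trivial (vanishing) solution
    DP = P.jacobian([x1, x2]).subs({x1: bal[a1], x2: bal[a2]})
    M = DP - sp.diag(*p)           # Kowalevskaya matrix (1.4)
    print(bal, M.eigenvals())      # balance {alpha=(1, -2)}, exponents {-1, 6}
```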
A polynomial F ∈ C[x₁, …, xₙ] is a Darboux polynomial of system (1.1) if
∇F · P = K F,
where K ∈ C[x₁, …, xₙ] is called the cofactor and has degree at most m − 1 if m is the degree of the polynomial differential system (1.1). As usual, ∇F denotes the gradient of the function F. We recall that F is a Darboux polynomial if and only if F(x) = 0 is an invariant hypersurface of system (1.1), i.e., if a solution of system (1.1) has a point on the hypersurface F(x) = 0, then the whole solution is contained in this hypersurface. A polynomial F is said to be weight-homogeneous if there exist d ∈ Q and s = (s₁, …, sₙ) ∈ Qⁿ such that for an arbitrary positive real a we have
F(a^{s₁}x₁, …, a^{sₙ}xₙ) = a^d F(x₁, …, xₙ).
Here d is called the weight degree of the polynomial F and s is the weight exponent of F.
The first result of this paper is the following.
Theorem 1.
Assume that the polynomial differential system (1.1) admits the particular solution x = αt^p and has a weight-homogeneous Darboux polynomial F such that ∇F(α) ≠ 0. Then its cofactor K cannot be constant.
The proof of Theorem 1 is given in Section 4. Theorem 1 is closely related to Theorem 5.4 of [3], which is due to Yoshida (see [7,8]) and states: under the assumptions of Theorem 1, if I(x) is a weight-homogeneous first integral of weight degree d of system (1.1) satisfying ∇I(α) ≠ 0, then d is a Kowalevskaya exponent of the matrix M(α) given in (1.4).
For the second result we need some more definitions and notation. We can write the polynomial Pᵢ(x) for i = 1, …, n in the form
Pᵢ(x) = Pᵢ^(0)(x) + Pᵢ^(1)(x) + ⋯,   (1.5)
where each Pᵢ^(j)(x) is a weight-homogeneous polynomial of weight exponent p ∈ Qⁿ, with Pᵢ^(0) collecting the terms of highest weight degree. Assume that ẋ = P^(0)(x) has the solution x = α t̄^p, where t̄ = t − t* for some complex t*, and α ∈ Cⁿ with |α| = |α₁| + ⋯ + |αₙ| ≠ 0. Then we say that the polynomial differential system (1.1) admits a dominant balance {α, p}.
We note that any balance of system (1.1) is a dominant balance, obtained by taking Pᵢ^(0) = Pᵢ. An example of a polynomial differential system with a dominant balance can be found in Section 3. Now we study the relation between the Kovalevskaya exponents and Darboux polynomials.
Theorem 2. Assume that the polynomial differential system (1.1) admits a dominant balance {α, p} such that the Kovalevskaya matrix diagonalizes. Then the following statements hold.
Moreover, we explore some connections between the Painlevé property and Darboux polynomials. The first result connecting the Painlevé property with the Darboux polynomials is the following.
Theorem 3. Assume that the polynomial differential system (1.1) satisfies the Painlevé property. Then if the system has a Darboux polynomial, its cofactor K must satisfy K(α) ∈ Z for all balances α of the system.
We also consider a kind of converse result of Theorem 3.
Theorem 4.
Assume that the polynomial differential system (1.1) admits a dominant balance {α, p} and has a Darboux polynomial with cofactor K such that K^(0)(α) ∉ Z. Then system (1.1) cannot satisfy the Painlevé property.
The proofs of Theorems 2, 3 and 4 are given in Section 5. See Section 6 for some examples of systems satisfying the conditions of Theorems 1, 2, 3 and 4 and the concluding section for some comments on our results.
QUASI-HOMOGENEOUS POLYNOMIAL DIFFERENTIAL SYSTEMS
The polynomial differential system (1.1) is quasi-homogeneous if there exist s = (s₁, …, sₙ) ∈ Nⁿ and r ∈ N such that for an arbitrary positive real a we have Pᵢ(a^{s₁}x₁, …, a^{sₙ}xₙ) = a^{sᵢ−1+r} Pᵢ(x₁, …, xₙ) for i = 1, …, n. We call s = (s₁, …, sₙ) the weight exponent of system (1.1) and r the weight degree with respect to the weight exponent s. In particular, in the case that s = (1, …, 1) we say that system (1.1) is a homogeneous polynomial differential system of degree r. If a polynomial differential system (1.1) is quasi-homogeneous with weight exponent s and weight degree r > 1, then the system is invariant under the change of variables (x₁, …, xₙ, t) → (a^{s₁}x₁, …, a^{sₙ}xₙ, a^{1−r}t). This fact implies that in this case there exist solutions of the form (1.2) with pᵢ = sᵢ/(1 − r) for i = 1, …, n and the coefficients αᵢ satisfying (1.3). The integrability of quasi-homogeneous polynomial differential systems has been investigated by several authors; see, for instance, [2,3,5,6,8-10].
DOMINANT BALANCES
Consider the polynomial differential system (1.1). We select a suitable weight change of variables X = (X₁, …, Xₙ) = a^s x = (a^{s₁}x₁, …, a^{sₙ}xₙ), where sᵢ ∈ N for i = 1, …, n, such that (1.1) becomes a system Ẋ whose leading part in a recovers Pᵢ^(0)(x), where m = degree(P) and each Pᵢ,ₘ₋ⱼ appearing in the expansion, j = 1, …, r, is a weight-homogeneous polynomial. In this case p = (s₁, …, sₙ). Consider the Lorenz system
ẋ = s(y − x),  ẏ = rx − y − xz,  ż = xy − bz,   (3.1)
where s, r, b are real parameters and s ≠ 0. We make the change of variables described above to obtain its dominant balance.
PROOF OF THEOREM 1
We first introduce two auxiliary results. Proof (of Lemma 5). Let HF be the Hessian matrix of the polynomial F. To show that ū is a solution of the adjoint equation, we compute its time derivative and verify that it satisfies the adjoint variational equation. This completes the proof of the lemma.
The following lemma will be the central tool in the nonintegrability results; its proof uses Lemma 5. Proof (of Lemma 6). We compute the time derivative of I and show, using Lemma 5, that it is zero. This completes the proof of the lemma. Now we continue with the proof of the theorem, proceeding by contradiction. Consider system (1.1), which admits the particular solution x = αt^p, and assume that it has a weight-homogeneous Darboux polynomial F(x) such that ∇F(α) ≢ 0 and with constant cofactor K = k₀. We can apply Lemma 6: the existence of F(x) implies the existence of a first integral I = e^{−k₀t} ∇F(αt^p)·u ≢ 0 for the variational equation u̇ = DP(αt^p)u.
The general solution of the variational equation is of the form u(t) = Σᵢ₌₁ᵏ cᵢ t^{ρᵢ} β^(i)(log t), where each β^(i) is a polynomial in log t and can be expressed in terms of the generalized eigenvectors of the Kowalevskaya matrix K = DP(α) − diag(p₁, …, pₙ), and the ρᵢ are the Kowalevskaya exponents; for more details see Section 3.8.2 of [3]. This general solution contains n arbitrary parameters. We can therefore evaluate I on this solution. If K is semisimple (i.e., it can be diagonalized), then k = n and the eigenvectors β^(i) form a set of n linearly independent vectors of which at least one, say i = j, is such that ∇F(α)β^(j) ≠ 0, since otherwise I would be zero. Since I is constant in time, we get a contradiction.
If K is not semisimple, then there is a complete set of generalized eigenvectors γ^(1), …, γ^(n) for K, of which at least one satisfies ∇F(α)γ^(i) ≠ 0, and the contradiction follows. This completes the proof of the theorem.
PROOF OF THE REMAINING RESULTS
In this section we provide the proofs of Theorems 2, 3 and 4. To prove Theorem 2, we will use the following theorem, which is Theorem 5.7 in [3]. We continue to decompose any polynomial as in (1.5).
Theorem 7. Assume that the polynomial differential system (1.1) has a Darboux polynomial F with cofactor K and a dominant balance {α, p} with Kovalevskaya exponents ρ = (−1, ρ̄), ρ̄ ∈ C^{n−1}. Then there exists a vector m = (m₂, …, mₙ) of positive integers such that ρ̄ · m = K^(0)(α). Proof (of Theorem 2). The proof of both statements is done by contradiction.
To prove Theorem 3, we will use again Theorem 7 and the following result proved in [3] (Proposition 8): if system (1.1) satisfies the Painlevé property, then all its Kovalevskaya exponents are integers. Now we apply Theorem 7, and from the notation and definitions introduced there we conclude that K(α) = ρ̄ · m ∈ Z, and the proof is completed.
Proof (of Theorem 4). It follows from Theorem 7 that ρ̄ · m = K^(0)(α) ∉ Z (we have used the notation and definitions introduced there). Therefore, there must exist at least one Kovalevskaya exponent which is not an integer number, and so it follows from Proposition 8 that system (1.1) cannot satisfy the Painlevé property.
Example for Theorem 1
Consider the quasi-homogeneous polynomial differential system (6.1) in the variables (x₁, x₂, x₃, x₄), with weight exponent s = (1, 1, 2, 2) and weight degree r = 2; see Section 2.
System (6.1) has the Darboux polynomials F₁ and F₂, which are both weight-homogeneous with weight degree 2. The cofactor K₁ of F₁ is −2(x₁ − ix₂) and the cofactor K₂ of F₂ is 2(x₁ − ix₂). None of these cofactors is constant and ∇Fⱼ(α) ≠ 0 for j = 1, 2, in agreement with Theorem 1. Note that it is not necessary to compute the α corresponding to p = −s because ∇Fⱼ = (∗, ∗, 1, −i).
Example for Theorem 2
Now consider the Lorenz polynomial differential system (3.1) of Section 3, whose dominant balance was recalled there. For a nonnegative integer n, the Lorenz system with s = −n/2 and b = 2s has the Darboux polynomial F = x₁² − 2sx₃ with cofactor K = K^(0)(α) = −2s = n. This Lorenz system satisfies the contrapositive of statement (b) of Theorem 2.
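As a quick consistency check of this example, the Darboux relation ∇F · P = KF for the stated Lorenz family can be verified symbolically; the sketch below uses the notation (x₁, x₂, x₃) and is only an illustration.

```python
import sympy as sp

x1, x2, x3, s, r, n = sp.symbols('x1 x2 x3 s r n')

# Lorenz system (3.1): x1' = s(x2 - x1), x2' = r*x1 - x2 - x1*x3,
# x3' = x1*x2 - b*x3, with the stated choice b = 2s.
b = 2*s
P = sp.Matrix([s*(x2 - x1), r*x1 - x2 - x1*x3, x1*x2 - b*x3])

F = x1**2 - 2*s*x3                       # candidate Darboux polynomial
gradF = sp.Matrix([F.diff(v) for v in (x1, x2, x3)])
dFdt = sp.expand((gradF.T * P)[0])       # dF/dt along the flow

K = sp.cancel(dFdt / F)                  # cofactor: -2s
print(sp.simplify(dFdt - K*F))           # 0, so grad(F) . P = K F holds
print(sp.simplify(K.subs(s, -n/2)))      # n, matching K^(0)(alpha) = n
```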
Example for Theorem 3
Consider the polynomial differential system (6.2). The general solution of this system can be written explicitly in terms of two arbitrary constants c₁ and c₂. Since this general solution is single-valued on its maximum domain of analytic continuation in C, system (6.2) satisfies the Painlevé property.
CONCLUSIONS
Theorem 1 provides two necessary conditions for the existence of a Darboux polynomial for the polynomial differential system (1.1), one of them related to the balances of the system. Theorem 2 provides two necessary conditions for the existence of a Darboux polynomial of system (1.1), but now one of these conditions is related to the balances of the system ẋ = P^(0)(x) instead of the balances of system (1.1).
Theorems 3 and 4 provide necessary conditions for the existence of a Darboux polynomial of system (1.1) using the Painlevé property.
Theorem 7, due to Goriely, provides necessary conditions for the existence of a Darboux polynomial of system (1.1) distinct from the ones given in Theorems 1-4. | 3,739.4 | 2017-09-01T00:00:00.000 | [
"Mathematics"
] |
General mechanisms for stabilizing weakly compressible multiphase models and their applications
The present paper analyzes three representative weakly compressible multiphase models. It is found that these models contain some identical numerical dissipation terms, namely pressure diffusion and bulk viscosity terms. Numerical investigations show that these identical numerical dissipation terms account for general mechanisms for stabilizing computations. The generality of the mechanisms is reflected in that (a) they are likely to be present in many weakly compressible multiphase models and (b) they represent interpretable physical mechanisms. Based on those general mechanisms, many weakly compressible multiphase models can be incorporated into a general theoretical framework, and a general weakly compressible solver for multiphase flows (GWCS-MF) is proposed. It is derived from standard governing equations and can easily be applied to nonuniform meshes. Detailed numerical tests demonstrate that it can achieve good numerical stability and accuracy under challenging conditions (inviscid fluids, large density ratios, high Weber numbers, and high Reynolds numbers) and can simulate complex interface evolution well. These good performances exhibit the advantages of GWCS-MF and further validate those general mechanisms.
Introduction
Incompressible multiphase flow is a common phenomenon in both nature and engineering. Compared with experiments, numerical simulation is a popular method to investigate complex phenomena in this field owing to its low cost, high efficiency, and generality. Numerical algorithms for incompressible multiphase flows can be roughly categorized into two groups. The first group is conventional solvers that directly solve macroscopic governing equations. The second group is mesoscopic lattice Boltzmann models describing discrete distribution functions' evolution.
Conventional solvers for incompressible multiphase flows fall into two categories according to the approach used to update the pressure. One is the exactly incompressible model [1,2] that updates the pressure implicitly through the continuity equation. Without compressibility, it has no acoustic oscillation and, thus, is more likely to achieve good numerical stability. On the other hand, it mostly requires the solution of a Poisson equation for pressure or a pressure-correction equation through complicated iteration steps. Another one is the weakly compressible model [3-5] that updates the pressure through an explicit pressure equation. However, it often requires additional treatments to stabilize computations. By combining them with methods to track the interface, such as volume of fluid [6], the level-set method [2], the front tracking algorithm [3], and the diffuse interface method [4], conventional solvers can simulate nearly incompressible multiphase flows.
Multiphase lattice Boltzmann models are mesoscopic models based on the Boltzmann equation. They have been successfully applied to investigate practical engineering problems [7,8]. These models can be roughly divided into four categories: color-gradient models [9], pseudopotential multiphase models [10,11], free energy models [12], and phase-field-based models [13].
These original models are unstable for multiphase flows with large density ratios (like water-air flow with a density ratio of about 1000) [14]. Therefore, improved models based on different perspectives have been proposed to enhance the numerical stability at large density ratios.
The first approach is the modification of these original models. For color-gradient models, the isotropic color-gradient model [15] and the multiple-relaxation-time color-gradient lattice Boltzmann model [16] were proposed. For pseudopotential models, an improved forcing scheme for the multiple-relaxation-time model [17] was proposed to achieve thermodynamic consistency; it can consequently handle large density ratios. For free energy models, large density ratios beyond 1000 can be simulated by improving the Galilean invariance of higher-order terms [18]. For interface tracking models, a stable discretization scheme based on directional derivatives [19] was proposed. All mentioned measures can be interpreted as modifications of higher-order terms.
The second approach is the simplification of lattice Boltzmann models. Two models, the multiphase lattice Boltzmann flux solver (MLBFS) and the simplified multiphase lattice Boltzmann model (SMLBM) [20], were proposed to improve the numerical stability at large density ratios. MLBFS [21] is a finite volume solver that reconstructs a simplified lattice Boltzmann model at the cell face to calculate pressure and momentum fluxes. The phase interface is captured by directly solving the macroscopic Cahn-Hilliard equation. Consequently, it can be applied to nonuniform meshes and was found to have good numerical stability for multiphase flows with high density ratios. The improved model [22], with a simplified computation of interface fluxes, was proposed to remove the complex calculation of the compensation tensor and maintain numerical stability at a relatively thinner interface. Aiming to unify the different computational frameworks for the velocity and phase fields, an interfacial lattice Boltzmann flux solver (ILBFS) [23] was proposed to solve the Cahn-Hilliard equation. Based on ILBFS, a simplified MLBFS with slight simplification in the interface fluxes [24] and an improved MLBFS (IMLBFS) [25] based on the original phase-field-based multiphase lattice Boltzmann model (MLBM) [26] were proposed.
As for SMLBM, it is a second-order approximation of the phase-field-based MLBM [26]. The non-equilibrium distribution functions are approximated by interpolations of equilibrium distribution functions at different positions and moments. Therefore, its computational process involves only equilibrium distribution functions determined by macroscopic variables, which significantly decreases the memory requirement. SMLBM has also been proven to have good numerical stability at large density ratios, high Weber numbers, and high Reynolds numbers for 2D multiphase flows [20].
Although these MLBFS models and SMLBM have been successfully applied to simulate multiphase flows with large density ratios, the mechanisms behind their good numerical stability at large density ratios have not been well explained. The reason for the good numerical stability of the original MLBFS was thought to be its use of the lattice Boltzmann model [21]. IMLBFS is believed to be more stable than the original MLBFS because it is derived from the multiphase lattice Boltzmann model and thus has a more robust physical basis [25]. The reason for the good numerical stability of SMLBM is recognized as SMLBM inheriting the good stability feature of the reconstruction strategy [20]. These hypotheses appear unclear and empirical, and clear evidence for the good numerical stability of these models has not been well established yet.
It should be noticed that the computational procedures of these MLBFS models and SMLBM involve only macroscopic variables, which indicates that they are intrinsically macroscopic models. Furthermore, the recovered governing equations of these models imply that they are weakly compressible models. From a macroscopic perspective, for both macroscopic weakly compressible models for single-phase [27] and multiphase flows [5], additional treatments [28,29] are needed, in general, to improve numerical stability. Therefore, it is reasonable to believe that the mechanisms for the good numerical stability of the MLBFS models and SMLBM can be established on the macroscopic scale. The present paper aims to reveal the general mechanisms that can incorporate these models into a general theoretical framework and then to construct a general weakly compressible solver for multiphase flows (GWCS-MF).
In the present paper, the macroscopic equations of a phase-field-based MLBM, IMLBFS, and SMLBM are first derived by approximating their actual computational processes. Unlike the continuous macroscopic equations recovered by Chapman-Enskog expansion analysis, the key point of the present study is to recover the time-discretized macroscopic equations with their numerical dissipation terms. Through detailed analyses, it is found that these models share identical numerical dissipation terms, namely the pressure diffusion and bulk viscosity terms. The effect of these numerical dissipation terms on stabilizing the computation is confirmed by numerical investigations, which indicate that these terms indeed explain the general mechanisms for stabilizing computations.
Based on these general mechanisms, GWCS-MF is then derived from the standard governing equations. Detailed numerical investigations confirm its good numerical stability and accuracy for multiphase flows with large density ratios, zero viscosities, high Reynolds numbers, and high Weber numbers. These results further validate the general mechanisms.
The remainder of this paper is organized as follows. Sections 2, 3, and 4 analyze the macroscopic equations of MLBM, IMLBFS, and SMLBM, respectively. Section 5 summarizes the general mechanisms, proposes GWCS-MF, and presents numerical investigations to validate the general mechanisms. In Section 6, seven benchmark tests are simulated to evaluate the numerical stability and accuracy of GWCS-MF and to validate the general mechanisms further. Finally, conclusions are given in Section 7.
Governing equations of the weakly compressible multiphase model
In the phase-field-based weakly compressible multiphase model, weakly compressible continuity and momentum equations govern the pressure p and the velocity u_α, where ρ is the density, c_s is the sound speed, μ is the dynamic viscosity, and F_α is the forcing term including the surface tension and body forces. Popular phase-field equations include the Cahn-Hilliard equation, the Allen-Cahn equation, and their conservative forms [30-32]. In the present paper, the Cahn-Hilliard equation is adopted; it can be written as

∂C/∂t + ∂(C u_α)/∂x_α = ∂/∂x_α ( M ∂μ_C/∂x_α ),

where C is the phase fraction of the heavier fluid, M is the constant mobility, and μ_C is the chemical potential determined by the total free energy of the fluid-fluid or fluid-wall interfaces. The bulk free energy is E_0 = λ C² (C − 1)², λ and κ are two fixed parameters, and φ_S(C) is the wall free energy per unit area. At the equilibrium state the total free energy is minimized, and the chemical potential of the interior fluid is

μ_C = 4λ C (C − 1)(C − 0.5) − κ ∂²C/∂x_α∂x_α.

Once the interface width ξ and the surface tension σ are given, the parameters λ and κ can be determined (λ = 12σ/ξ and κ = 3σξ/2 in the standard C ∈ [0, 1] formulation), and the surface tension force can be evaluated as F_α = μ_C ∂C/∂x_α. For a solid-fluid boundary [33], two conditions are imposed. First, to ensure the mass conservation law, the chemical potential satisfies the zero-flux condition n_α ∂μ_C/∂x_α |_wall = 0, where n_α is the unit outer normal vector. Second, to minimize the total free energy contributed by the specified wall free energy, a wetting boundary condition is imposed on C, in which a parameter ε is related to the equilibrium contact angle θ_eq. In the interfacial area, the density is determined by ρ = C ρ_H + (1 − C) ρ_L and, unless otherwise stated, the kinematic viscosity is interpolated between the two fluids in the same interfacial region. The subscripts H and L denote the parameters of the heavier fluid and the lighter fluid, respectively.
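To make these ingredients concrete, the following minimal sketch evaluates the chemical potential, the surface tension force, and the density interpolation on a uniform periodic grid. The grid size, parameter values, and the standard-formulation relations λ = 12σ/ξ and κ = 3σξ/2 are assumptions for illustration, not the paper's exact discretization.

```python
import numpy as np

# Sketch of the phase-field ingredients: chemical potential
# mu_C = 4*lam*C*(C-1)*(C-0.5) - kappa*Laplacian(C), surface tension
# force F = mu_C * grad(C), and density rho = C*rho_H + (1-C)*rho_L.
# Central periodic differences stand in for the paper's schemes.

def grad(a, dx):
    gx = (np.roll(a, -1, 1) - np.roll(a, 1, 1)) / (2 * dx)
    gy = (np.roll(a, -1, 0) - np.roll(a, 1, 0)) / (2 * dx)
    return gx, gy

def laplacian(a, dx):
    return (np.roll(a, -1, 0) + np.roll(a, 1, 0) +
            np.roll(a, -1, 1) + np.roll(a, 1, 1) - 4 * a) / dx**2

sigma, xi, dx = 0.01, 4.0, 1.0
lam, kappa = 12 * sigma / xi, 1.5 * sigma * xi  # standard-formulation relations
rho_H, rho_L = 1000.0, 1.0

N = 128
y, x = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
r = np.hypot(x - N / 2, y - N / 2)
C = 0.5 * (1 + np.tanh(2 * (30.0 - r) / xi))    # circular droplet profile

mu_C = 4 * lam * C * (C - 1) * (C - 0.5) - kappa * laplacian(C, dx)
Cx, Cy = grad(C, dx)
Fx, Fy = mu_C * Cx, mu_C * Cy                   # surface tension force
rho = C * rho_H + (1 - C) * rho_L               # density interpolation
print(rho.min(), rho.max())                     # spans rho_L .. rho_H
```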
The phase-field-based MLBM
The phase-field-based MLBM has two evolution equations that describe the velocity and phase fields, respectively. The following model [26] is used for the analysis; its evolution equations take the standard collide-and-stream form

f_i(x + e_i δt, t + δt) − f_i(x, t) = −[f_i(x, t) − f_i^eq(x, t)]/τ_f + δt H_i(x, t),
g_i(x + e_i δt, t + δt) − g_i(x, t) = −[g_i(x, t) − g_i^eq(x, t)]/τ_g,

where f_i and g_i are the pressure and phase-fraction distribution functions of the discrete velocity e_i, respectively; f_i^eq and g_i^eq are the corresponding equilibrium distribution functions; τ_f and τ_g are the dimensionless relaxation times for f_i and g_i, respectively; H_i is the source term; x is the location; t is the time; and δt is the time interval. For 2D and 3D situations, the D2Q9 and D3Q19 models are adopted, respectively, each with its set of discrete velocities e_i, weight coefficients w_i, and sound speed c_s. The equilibrium distribution functions f_i^eq and g_i^eq are determined by the macroscopic variables p, u_α, and C, together with an adjustable parameter A in g_i^eq, and they satisfy the usual moment constraints. The source term H_i accounts for the forcing and interface contributions, and the Mach number is defined as Ma = u_α/c_s.
The relaxation parameters τ_f and τ_g are related to the kinematic viscosity ν and the mobility M, respectively. The macroscopic variables p, u_α, and C are recovered from the zeroth- and first-order moments of the distribution functions, with half-time-step (0.5 δt) corrections for the source terms. Using the Chapman-Enskog expansion analysis, the phase-field-based MLBM recovers the governing equations [26], Eqs. (1) to (3), with second-order accuracy.
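To make the lattice ingredients concrete, the sketch below sets up the D2Q9 velocities and weights and a pressure-based second-order equilibrium. The exact equilibrium of model [26] is not legible in the extracted text, so the standard pressure-based form is assumed here for illustration; the moment checks mirror the constraints that the equilibria must satisfy.

```python
import numpy as np

# D2Q9 discrete velocities e_i and weights w_i; lattice sound speed c_s^2 = 1/3.
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
W = np.array([4/9] + [1/9]*4 + [1/36]*4)
CS2 = 1.0 / 3.0

def f_eq(p, rho, u):
    """Assumed pressure-based second-order equilibrium f_i^eq at one node.

    p: scalar pressure, rho: scalar density, u: velocity of shape (2,).
    Returns the 9 equilibrium populations."""
    eu = E @ u                     # e_i . u for all directions
    uu = u @ u
    return W * (p + rho * CS2 * (eu / CS2 + eu**2 / (2 * CS2**2) - uu / (2 * CS2)))

# Moment checks: zeroth moment recovers p; first moment recovers rho*c_s^2*u.
p, rho, u = 0.1, 1.0, np.array([0.02, -0.01])
f = f_eq(p, rho, u)
print(np.sum(f), p)                # zeroth moment = p
print(E.T @ f, rho * CS2 * u)      # first moment = rho * c_s^2 * u
```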
MEs-MLBM
Since MLBM is a discretized algorithm, the time-discretized macroscopic equations, rather than the continuous governing equations, must be investigated to explain the mechanisms behind its good numerical stability. By applying second-order Taylor series expansions to f_i, f_i^eq, and H_i (Eqs. (28) to (30)) and substituting them into the evolution equation Eq. (13), the time-discretized pressure and momentum equations, Eqs. (34) and (35), are recovered; in Eq. (35), the last three lines are small deviation terms, and the viscous terms in the recovered momentum equations contain an additional bulk viscosity contribution. Likewise, applying second-order Taylor expansions to g_i and g_i^eq (Eqs. (38) and (39)), substituting them into the evolution equation Eq. (14), and, with the aid of Eqs. (20), (27), and (42), taking the zeroth-order moment of Eq. (40) recovers the macroscopic phase-field equation Eq. (43). Eqs. (34), (35), and (43) are labeled MEs-MLBM.
On the other hand, the time-discretized governing equations obtained with the explicit first-order scheme contain no such terms. Compared with these discretized governing equations, MEs-MLBM contain a pressure diffusion term in the pressure equation and an additional bulk viscosity term in the momentum equation.
Analysis of the IMLBFS
In this section, IMLBFS is introduced first, and MEs-IMLBFS are then derived.
IMLBFS
Using a Chapman-Enskog expansion analysis, the MLBM described above can be rewritten as a finite volume scheme. By applying approximations to calculate the non-equilibrium distribution functions, IMLBFS [25] was constructed.
The space-discretized equations of IMLBFS update the pressure, momentum, and phase fraction in each control volume from fluxes at its faces: Π and Q_α are the interface fluxes, c_s is the sound speed, ΔV is the control volume, the subscript k denotes the k-th interface of the control volume, and ΔS_k and n_α^k are the area and the outer normal vector of the k-th interface, respectively. The detailed procedures to calculate the interface fluxes by reconstructing a local lattice Boltzmann model are as follows. (a) Reconstruction of the local lattice Boltzmann model: as shown in Fig. 1, a unit lattice of the D2Q9 model is constructed at the finite-volume cell face, with x_S the position of the face center; the discrete velocities of the D2Q9 and D3Q19 models are given in Eq. (15). (b) Calculation of the distribution functions at the face: an arbitrary variable φ is interpolated from the left and right cells, φ_S = 0.5(φ_L + φ_R), where the subscripts S, L, and R denote the cell face, left cell, and right cell, respectively. The partial derivatives at cell centers are calculated by the discrete Gauss theorem, and the macroscopic variables at the cell face in Eq. (51) are obtained by a linearized interpolation. Once the macroscopic variables at the face are known, the equilibrium distribution functions are calculated by Eqs. (16) and (17), respectively.
(c) Calculation of the predicted variables at the face: the predicted variables are obtained from moments of the face distribution functions. Substituting Eq. (60) into Eq. (58) leads to the momentum flux, and substituting Eq. (61) into Eq. (59) leads to the volume-fraction flux. Finally, substituting Eqs. (61) to (66) into Eqs. (47) to (49) yields the recovered macroscopic equations; Eqs. (62) to (65) and Eqs. (68) to (70) are labeled MEs-IMLBFS. To expose the numerical dissipation terms contained in MEs-IMLBFS, Eqs. (62) and (63) are substituted into Eqs. (68) and (69). It can be seen that MEs-IMLBFS also introduce a pressure diffusion term into the pressure equation and an additional bulk viscosity term into the momentum equation.
Analysis of SMLBM
SMLBM is a second-order approximation of MLBM. It has been proven to have good numerical stability for multiphase flows with large density ratios and high Reynolds numbers.
MEs-SMLBM
By using second-order Taylor series expansions for the predictor step and another second-order Taylor expansion for the corrector step, the time-discretized macroscopic equations Eqs. (89) to (94) are recovered; they are labeled MEs-SMLBM. They contain a pressure diffusion term in the pressure equation. By combining Eqs. (90) and (93), all viscous-related terms can be summarized, and it can be seen that MEs-SMLBM also contain an additional bulk viscosity coefficient.
General mechanisms for stabilizing computations
The analyses in Sections 2 to 4 show that two identical dissipation terms, the pressure diffusion and bulk viscosity terms, exist in these weakly compressible models for multiphase flows. It has been proven that the pressure diffusion term is efficient in stabilizing computations of both single-phase [34] and multiphase flows [5]. The viscous terms also contribute to damping pressure oscillations; numerical experiments [35] confirm that sound waves experience the proper dissipation due to the intended bulk and shear viscosities. Therefore, the two dissipation terms are regarded as providing general mechanisms for stabilizing the computations of weakly compressible multiphase models. The generality of the mechanisms is twofold: (a) they are common to weakly compressible models for multiphase flows; (b) they represent physical mechanisms that apply to different discretization schemes, such as finite difference and finite volume schemes.
GWCS-MF
GWCS-MF adopts the computational procedures of IMLBFS, in which the numerical dissipation terms are introduced through a predictor-corrector step. The space-discretized equations of GWCS-MF update the macroscopic variables in each control volume from interface fluxes. The detailed procedures to calculate the interface fluxes by reconstructing a local lattice Boltzmann model are as follows. (a) Reconstruction of the local lattice Boltzmann model: as shown in Fig. 2, a unit lattice is constructed at the cell face, with x_S the position of the face center; the lattice size is set as the minimum value among |x_S − x_L|, |x_S − x_R|, |x_S − x_U|, and |x_S − x_D|. (b) The least-squares finite difference (LSFD) method [36] is used to obtain the predicted variables at the face. Eqs. (103) to (105) can then be solved by different time-discretization schemes; the explicit first-order scheme is adopted in the present paper to clarify the general mechanisms. Substituting Eqs. (106), (107), (109), and (110) into Eqs. (103) and (104) yields the macroscopic equations of GWCS-MF, which introduce a pressure diffusion term and an additional bulk viscosity term implicitly.
Dissipative and non-dissipative models based on the finite difference scheme
The dissipative model is constructed here to show that the generality applies to different discretization schemes. Its governing equations are the weakly compressible equations augmented with a pressure diffusion term and a bulk viscosity term, whose adjustable coefficients are D_p and μ_b, respectively, with values chosen by reference to Eqs. (112) and (113). The non-dissipative model is identical to the dissipative model except that D_p and μ_b are set to zero.
For simplicity, the governing equations are discretized by an explicit first-order scheme, where the subscripts n and n+1 denote variables at time steps n and n+1, respectively. The partial derivatives in Eqs. (117) to (119) are discretized by the LSFD method [36]; more details can be found in Ref. [37]. Note that the weight coefficients in the LSFD method are set proportional to r^(-4) and r^(-6) for 2D and 3D situations, respectively, where r is the distance from the neighboring node to the local node.
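A minimal 1D sketch of this explicit first-order update is given below, with the two general dissipation terms added explicitly. The coefficient values D_p and μ_b, the periodic domain, and the central differences are illustrative assumptions rather than the paper's LSFD discretization.

```python
import numpy as np

# Sketch of the "dissipative model": explicit Euler update of the weakly
# compressible equations with a pressure diffusion term (D_p) and a bulk
# viscosity term (mu_b) added, the two general dissipation mechanisms.

def step(p, u, rho, cs2, mu, D_p, mu_b, dx, dt):
    """Advance pressure p and velocity u by one explicit Euler step (periodic)."""
    def ddx(a):
        return (np.roll(a, -1) - np.roll(a, 1)) / (2 * dx)
    def d2dx2(a):
        return (np.roll(a, -1) - 2 * a + np.roll(a, 1)) / dx**2

    # pressure equation with added pressure diffusion D_p * d2p/dx2
    p_new = p + dt * (-rho * cs2 * ddx(u) + D_p * d2dx2(p))
    # momentum equation with shear viscosity plus added bulk viscosity
    u_new = u + dt * (-u * ddx(u) - ddx(p) / rho
                      + (mu + mu_b) / rho * d2dx2(u))
    return p_new, u_new

# usage: damp an initial pressure pulse on a periodic domain
x = np.linspace(0, 1, 200, endpoint=False)
p = 0.01 * np.exp(-((x - 0.5) / 0.05) ** 2)
u = np.zeros_like(x)
for _ in range(500):
    p, u = step(p, u, rho=1.0, cs2=1/3, mu=1e-3,
                D_p=5e-4, mu_b=1e-3, dx=x[1] - x[0], dt=2e-4)
print(float(np.abs(p).max()))  # the pulse amplitude decays with dissipation on
```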
Numerical stability test
This section investigates the effect of these dissipation terms on stabilizing the computation. The numerical stability of GWCS-MF, the dissipative model, the non-dissipative model, MLBM, SMLBM, MLBFS, and IMLBFS is tested by simulating a steady 2D droplet immersed in gas. Because the spurious velocity in the interfacial area is very small for this case at the steady state, the test reduces the influence of the different solution methods for the phase-field equation on numerical stability.
In the computational domain, a liquid droplet of radius R_0 is placed at the center, with gas surrounding it. Initially, the computational domain has a uniform pressure and zero velocity, and the initial phase field is the equilibrium hyperbolic-tangent profile across the interface. A convergence criterion on the change of the flow field between successive steps determines the steady state; the time steps of the other three models are 1. The Neumann boundary condition of zero gradients is adopted for all variables on all boundaries, and a uniform mesh of size 200×200 is used for all test cases. First, the numerical stability of the four models based on the finite difference scheme (the non-dissipative model, the dissipative model, SMLBM, and MLBM) is investigated; the subscripts H and L represent the parameters of the liquid and gas, respectively, and the test parameters are varied over an extensive range from 1 to 1000.
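For concreteness, a sketch of the droplet initialization is given below, assuming the usual equilibrium hyperbolic-tangent profile; the domain size, droplet radius, and interface width are illustrative values.

```python
import numpy as np

# Initial phase field for the stationary-droplet stability test:
# the equilibrium tanh profile across an interface of width xi.

N, R0, xi = 200, 40.0, 4.0
y, x = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
r = np.hypot(x - N / 2, y - N / 2)              # distance from droplet center
C = 0.5 * (1.0 + np.tanh(2.0 * (R0 - r) / xi))  # C=1 inside liquid, 0 in gas

p = np.zeros((N, N))      # uniform initial pressure
u = np.zeros((N, N, 2))   # zero initial velocity
print(C.min(), C.max())   # smooth field in [0, 1]
```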
Figure 3 shows the convergence behavior of the four models. Without the general dissipation terms, the non-dissipative model diverges in all test cases; on the contrary, with these general dissipation terms, the dissipative model shows a marked improvement in numerical stability. This comparison proves that the general dissipation terms are effective in stabilizing the computation. Like the dissipative model, SMLBM diverges at large kinematic viscosities but converges at small ones, whereas MLBM has good numerical stability at large kinematic viscosities but limited stability at small ones. Second, the numerical stability of GWCS-MF, MLBFS, and IMLBFS is compared. The MLBFS analyzed here combines the improved model [22] and ILBFS [23]. The predictor and corrector time steps are set as δt = 0.5 and Δt = 0.5 for all models, and the other parameters are assigned as above. As shown in Fig. 4, the three solvers achieve good numerical stability at small kinematic viscosities thanks to the general numerical dissipation terms. This means that the good numerical stability of MLBFS and IMLBFS is well explained by the general numerical dissipation terms rather than by the mesoscopic theory of the lattice Boltzmann method. Note that a smaller Δt can be adopted for the three solvers to obtain convergent results at large kinematic viscosities. In summary, the general numerical dissipation terms effectively stabilize the computations and can well explain the good numerical stability of SMLBM, MLBFS, and IMLBFS. As for MLBM, it has other intrinsic mechanisms that achieve good numerical stability at large kinematic viscosities but is unstable at small ones. It has been proven that the numerical stability of MLBM can be improved by using the multiple-relaxation-time model [38], which adjusts the higher-order unphysical moments and the bulk viscosity. Thus, the instability of MLBM at small kinematic viscosities may result from the complex deviation terms shown in MEs-MLBM.
Numerical validations
Seven benchmarks are simulated in this section to validate GWCS-MF.
Laplace's law in inviscid fluids
To further investigate the performance of GWCS-MF at small kinematic viscosities, this section tests Laplace's law in inviscid fluids. The setup is as described in Section 5.3: the liquid and gas are inviscid, the density ratio is 1000, and the other settings are the same as those in Section 5.3. The velocity and pressure contours are shown in Fig. 5. These contours are smooth even at such a large density ratio of 1000, which implies that GWCS-MF effectively suppresses the acoustic oscillations. The pressure differences at different surface tensions are compared to verify the correctness of GWCS-MF. Figure 6 shows that the present results are in good accordance with the analytical solution determined by Laplace's law, Δp = σ/R_0. The good numerical stability and accuracy for inviscid fluids imply the potential of GWCS-MF for simulating high-Reynolds-number multiphase flows, as shown in Section 6.3. By contrast, this problem is a severe challenge for multiphase lattice Boltzmann models, and to the best of our knowledge no successful simulations have been reported.
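A sketch of how the Laplace-law check can be performed is given below. The pressure field here is a synthetic stand-in for a converged simulation result, and the averaging radii are illustrative choices.

```python
import numpy as np

# Laplace-law check: compare the measured pressure jump across a 2D droplet
# with the analytical value dp = sigma / R0.

sigma, R0, N = 0.01, 40.0, 200
y, x = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
r = np.hypot(x - N / 2, y - N / 2)

# stand-in converged pressure field (jump smeared over the interface)
p = (sigma / R0) * 0.5 * (1.0 + np.tanh(2.0 * (R0 - r) / 4.0))

p_in = p[r < 0.5 * R0].mean()    # average pressure well inside the droplet
p_out = p[r > 1.5 * R0].mean()   # average pressure far outside
dp_measured = p_in - p_out
dp_laplace = sigma / R0
print(dp_measured, dp_laplace)   # should agree when Laplace's law holds
```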
Fig. 6. Comparison of pressure differences at different surface tensions.
Two-phase Taylor-Couette flow in an annular area
The two-phase Taylor-Couette flow in an annular area is simulated to show the flexibility of GWCS-MF in handling curved boundaries. The schematic diagram is depicted in Fig. 7. The region R_0 ≤ r ≤ R_1 is filled with the lighter fluid, and the remaining annulus is filled with the heavier fluid. The inner boundary rotates with a fixed angular velocity ω, while the outer boundary is stationary; the nonslip boundary condition is imposed on the inner and outer boundaries. The steady velocity field is determined by the dynamic viscosity ratio, and the corresponding analytical azimuthal velocity is piecewise in the two fluid regions, with the velocity and shear stress continuous at the interface. Different dynamic viscosity ratios are considered. An O-type body-fitted mesh of size 160×80 is adopted for the simulations, and the fixed computational parameters include R_0 = 1 and R_1 = 15.
2D Rayleigh-Taylor instability
The 2D Rayleigh-Taylor instability problem is simulated to validate GWCS-MF for multiphase flows with complex interface evolution. In this problem, a heavier fluid of density ρ_H floats on a lighter fluid of density ρ_L; once a slight pressure or interface-curvature perturbation occurs, the interface becomes unstable and the two fluids overturn. The nonslip boundary condition is imposed on the upper and lower boundaries, and the periodic boundary condition is set on the left and right boundaries. The Reynolds number Re and the Atwood number At = (ρ_H − ρ_L)/(ρ_H + ρ_L) are used to characterize the problem, with d the characteristic length. A uniform mesh of size 200×800 is adopted, the Atwood number is set to 0.5, and Re takes two values, 256 and 2048.
The results at Re=256 are discussed first. Figure 9 shows the transient phase fields at different moments. At the early stage, the central heavy fluid falls to form a spike, and the lighter fluid at the two sides rises to form bubbles. At T = 2, two symmetric roll-ups occur at the spike end. As the evolution continues, the two roll-ups stretch upward, and two secondary roll-ups appear at the tails. The interface configurations at different moments agree well with those in Refs. [13,21,23].
The positions and velocities of the spike tip and bubble front obtained by GWCS-MF are compared with those given by Wang et al. [21] using MLBFS and He et al. [13] using a phase-field-based lattice Boltzmann model to quantify and validate the present results. Note that the time and interface velocity are nondimensionalized by sqrt(d/g) and sqrt(gd), respectively. As shown in Fig. 10, good agreement is observed between the present results and the reference results [13,21].
For Re=2048, Fig. 11 shows the transient phase fields at different moments. At the early stage, phenomena such as the primary and secondary roll-ups are similar to those at Re=256. After T = 2.5, the interface exhibits more complex evolution, and interface breakup occurs. The transient interface configurations also match those given by a phase-field-based lattice Boltzmann method [13]. Comparisons of the positions and velocities of the spike tip and bubble front are shown in Fig. 12; the GWCS-MF results are in good accordance with those given by the phase-field-based lattice Boltzmann method [13].
Fig. 12. Transient positions and velocities of the spike tip and bubble front for the 2D Rayleigh-Taylor instability problem at Re=2048, together with those given by He et al. [13] using a phase-field-based lattice Boltzmann method.
2D bubble rising with a large density ratio
The example of a rising bubble is simulated to further validate GWCS-MF for tracking complex phase-interface evolution at a large density ratio. The schematic diagram of the problem is shown in Fig. 13, where g is the gravitational acceleration, and the computational parameters follow the settings in Ref. [30] for comparison. The computational domain is −D ≤ x ≤ D, −3D ≤ y ≤ 3D. A stationary bubble of diameter D is initially immersed in the heavier fluid with its center at (0, 0). The upper and lower boundaries are set as the nonslip boundary condition with zero velocity, while the left and right boundaries are periodic. The problem is characterized by three dimensionless parameters, including the density ratio ρ_H/ρ_L, and the buoyancy force is evaluated with the local density ρ.
Figures 14 to 16 depict the evolution of the bubble interface, defined as the contour C = 0.5. For a low Eötvös number (Eo=10), which corresponds to a relatively large surface tension, a smooth semicircle-like bubble shape is retained. As Eo increases, the constraint from the surface tension weakens; consequently, at Eo=50 and 125, the bubble deformation becomes significant, and two symmetric tails form later. All bubble configurations at different moments are in good accordance with those given by SMLBM [20].
To quantify the present results, the transient vertical position and velocity of the bubble mass center at Eo=125 are compared with the reference results [25,39]. As shown in Fig. 17, the present curves given by GWCS-MF agree well with the reference data [25,39].
Droplet splashing on a thin liquid film
In weakly compressible models, simulations of multiphase flows with large Weber numbers, large density ratios, and high Reynolds numbers are challenging owing to numerical instability [19]. Therefore, the problem of a droplet splashing on a thin liquid film with a large Weber number of 8000, a large density ratio of 1000, and Reynolds numbers up to 8000 is simulated to evaluate the performance of GWCS-MF in these challenging situations. The schematic diagram of this problem is depicted in Fig. 18. Initially, the liquid drop of diameter D is tangential to the liquid film surface.
It impacts the liquid film with a velocity U. Since the case is symmetric, only half of the domain needs to be simulated, and a uniform mesh of size 1000×500 is adopted for all simulations. Following the settings in Ref. [19], the linear mean is adopted for the kinematic viscosity in the interfacial area; five Reynolds numbers, Re = 20, 100, 500, 2000, and 8000, are chosen, the dynamic viscosity of the lighter fluid is fixed, and the viscosity ratio μ_H/μ_L is 40 at Re = 500. The evolution of the phase field for each case is shown in Figs. 19 to 23, respectively. As the droplet hits the liquid film, it deforms and tends to spread horizontally, while the liquid film hinders this tendency; thus, the drop periphery extends outward and is simultaneously pushed upward by the surrounding liquid film. At the relatively low Reynolds number (Re=20), the large viscous force prevents splashing, and an outward-moving surface wave can be observed in Fig. 19. For larger Reynolds numbers (Re=100, 500, 2000, and 8000), the relatively small viscous forces cannot prevent the liquid from moving upward, and obvious splashing is observed. These observations are consistent with those reported in Ref. [20]. The transient dimensionless impact radius r/(2R) is investigated to quantify the present results. The impact radius r is defined as the distance from the axis of symmetry to the intersection of the unperturbed droplet with the original liquid film surface z = H [40]. Theoretical analysis [40] indicates that for large Reynolds numbers the transient dimensionless impact radius approximately satisfies the power law r/(2R) ≈ A sqrt(Ut/(2R)), where A is a constant of about 1.0 [40]; many studies have confirmed this regularity [19,20]. As shown in Fig. 24, the present numerical results roughly satisfy the prediction of the power law, which verifies the correctness of GWCS-MF for multiphase flows with large Weber numbers, large density ratios, and high Reynolds numbers.
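The following sketch illustrates how the power-law check of Fig. 24 can be quantified. The radius history is synthetic, and fitting the prefactor A is one simple way to measure agreement under these assumptions.

```python
import numpy as np

# Compare a measured spread-radius history against the power law
# r/(2R) ≈ A * sqrt(U t / (2R)) with A ≈ 1.0.

A = 1.0
U, R = 1.0, 0.5                      # impact speed and droplet radius (illustrative)
t = np.linspace(0.01, 1.0, 50)       # time samples
r_theory = 2 * R * A * np.sqrt(U * t / (2 * R))

# stand-in "measured" radii with small deviations, e.g. from a simulation
rng = np.random.default_rng(0)
r_measured = r_theory * (1.0 + 0.03 * rng.standard_normal(t.size))

# fit the prefactor A from the data to quantify agreement
A_fit = np.mean(r_measured / (2 * R) / np.sqrt(U * t / (2 * R)))
print(A_fit)  # close to 1.0 when the power law holds
```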
3D head-on collisions of binary microdroplets
Head-on collisions of binary microdroplets at different parameters are simulated to test GWCS-MF for 3D multiphase flows with large density ratios. The simulations mimic the experiments of tetradecane droplet collisions in a nitrogen environment at 1 atm conducted by Qian and Law [41]. On all boundaries, the Neumann boundary condition of zero gradients is adopted for all variables; initially, the two droplets of diameter D are placed apart along the collision axis, and the problem is characterized by the Weber number We and the Ohnesorge number Oh. Owing to the symmetry of the problem, only one-eighth of the computational domain is simulated, and a uniform mesh of size 320×160×160 is adopted. Two cases are simulated, corresponding to We=32.8, Oh=0.615 and We=61.4, Oh=0.598. The liquid-gas density ratio is 666, and the dynamic viscosity ratio is 119 [42]. The transient phase fields of the two cases are depicted in Figs. 25 and 26. The droplet behavior is similar at the early stages of the two cases: the droplets collide and merge into a larger one, which then stretches in the x direction to form a long liquid cylinder with two rounded ends. The later behavior differs. For the case with We=32.8, which corresponds to a relatively large surface tension, the long droplet retracts inward and finally remains a single droplet; in the case with We=61.4, the long liquid cylinder finally breaks into three droplets. The droplet evolution observed here matches the corresponding experimental results [41] well, and the transient droplet configurations also agree with the numerical results [42] given by an axisymmetric multiphase lattice Boltzmann method.
Micro-droplet impacting a dry surface
A micro-droplet impacting a surface exhibits various behaviors determined by many factors, including the properties of the liquid, the size and speed of the drop, and the droplet-surface interaction characterized by static or dynamic contact angles. The complex outcomes and the abundant experimental and numerical results for this problem make it a good benchmark test. In this section, GWCS-MF is applied to simulate it for further validation.
The present simulation mimics an experimental case reported by Dong et al. [43]. Initially, a droplet of diameter D_0 = 50.5 μm is tangential to the dry surface with an equilibrium contact angle θ_eq = 107° and impacts the surface with a vertical speed of 12.2 m/s; the case corresponds to Re=685 and We=103, where Re and We are defined in Eq. (126). The density and viscosity ratios of the droplet to the ambient gas, ρ_H/ρ_L and μ_H/μ_L, follow the experimental fluids. For a droplet impacting at such a high velocity, gravity can be ignored over the short period considered. In the simulation, the computational domain is −2D ≤ x ≤ 2D, −2D ≤ y ≤ 2D, 0 ≤ z ≤ 2D. Since the problem is symmetric, only a quarter of the domain needs to be simulated, and a uniform mesh of size 160×160×160 is adopted. Figure 27 shows the deformation process of the droplet. As the droplet hits the dry surface, it spreads to form a thin plate. At about t = 10 μs, the droplet deformation reaches its maximum; the droplet is then pulled back by the surface tension and the droplet-surface interaction. As time evolves, the droplet retracts to a cylinder and tends to rebound. Furthermore, the dimensionless impact diameter D* and height H* given by GWCS-MF are compared with the experimental and numerical results. The impact diameter and height are defined as the droplet diameter on the dry surface and the maximum height of the droplet, respectively, and both are nondimensionalized by D_0. Figure 28 shows that the present results match the experimental data [43] well and agree better than other numerical results [22,33].
FIG. 28. Comparison of impact diameter and height for the 3D droplet impacting a dry surface with Re=685, We=103, and θ_eq = 107°.
Conclusion
Many weakly compressible models based on lattice Boltzmann models have been proposed to achieve good numerical stability for multiphase flows with large density ratios. However, their mechanisms for stabilizing the computation have not been well understood. The present paper aims to establish the general mechanisms that incorporate many weakly compressible multiphase models into a unified theoretical framework.
The present paper first derived the macroscopic equations of MLBM, IMLBFS, and SMLBM. Unlike the continuous equations recovered in previous references, the current derivation recovers the time-discretized macroscopic equations with their numerical dissipation terms. It was found that the recovered macroscopic equations contain common numerical dissipation terms, and numerical investigations prove that these terms provide general mechanisms for stabilizing the computations. The mechanisms are general in that they apply to many weakly compressible multiphase models and represent interpretable physical mechanisms that do not rely on specific discretization schemes.
GWCS-MF, based on the finite volume scheme, was then proposed on the basis of the general mechanisms. The stationary droplet immersed in gas was simulated by the dissipative model, the non-dissipative model, GWCS-MF, MLBM, SMLBM, MLBFS, and IMLBFS to compare their numerical stability. The results imply that the general mechanisms can well explain the good numerical stability of SMLBM, MLBFS, and IMLBFS. As for MLBM, it has other intrinsic mechanisms that achieve good numerical stability at large kinematic viscosities; however, it is unstable at small kinematic viscosities, which may result from the complex deviation terms shown in MEs-MLBM.
Seven benchmark cases were simulated to evaluate GWCS-MF in detail. The results prove that it achieves good numerical stability and accuracy for multiphase flows under challenging conditions, such as inviscid fluids, large density ratios, high Weber numbers, and high Reynolds numbers. It is also flexible on nonuniform meshes and handles complex interface evolutions well. These performances demonstrate the advantages of GWCS-MF and further validate the general mechanisms.
Fig. 1. The reconstructed unit lattice at the cell face.
Fig. 2. The reconstructed unit lattice at the interface.
Fig. 3. Stability of the non-dissipative model (a), the dissipative model (b), SMLBM (c), and MLBM (d). Blue circles indicate stable solutions, and red crosses indicate unstable solutions.
Fig. 8. Comparison of the azimuthal velocity profiles at different dynamic viscosity ratios for the two-phase Taylor-Couette flows in an annular area.
Fig. 9. Evolution of the fluid interface for the 2D Rayleigh-Taylor instability problem at Re=256.
Fig. 10. Transient positions and velocities of the spike tip and bubble front for the 2D Rayleigh-Taylor instability problem at Re=256, together with those given by Wang et al. [21] using MLBFS and He et al. [13] using a phase-field-based lattice Boltzmann method.
Fig. 17. Transient vertical position and velocity of the bubble mass center at Eo=125, together with the reference data given by Aland and Voigt [39] and Yang et al. [25].
Fig. 18. Schematic diagram of the droplet splashing on a thin film.
Fig. 19. Evolution of the phase field for droplet splashing on a thin liquid film at Re=20.
Fig. 20. Evolution of the phase field for droplet splashing on a thin liquid film at Re=100.
Fig. 21. Evolution of the phase field for droplet splashing on a thin liquid film at Re=500.
Fig. 22. Evolution of the phase field for droplet splashing on a thin liquid film at Re=2000.
Fig. 23. Evolution of the phase field for droplet splashing on a thin liquid film at Re=8000.
Fig. 24. The transient impact radii at different Reynolds numbers for droplet splashing on a thin liquid film.
Fig. 27. Evolution of interface morphology for the 3D micro-droplet impacting a dry surface with Re=685 and We=103.
"Mathematics"
] |
Statistical Considerations in the Development of Injury Risk Functions
Objective: We address 4 frequently misunderstood and important statistical ideas in the construction of injury risk functions. These include the similarities of survival analysis and logistic regression, the correct scale on which to construct pointwise confidence intervals for injury risk, the ability to discern which form of injury risk function is optimal, and the handling of repeated tests on the same subject. Methods: The statistical models are explored through simulation and examination of the underlying mathematics. Results: We provide recommendations for the statistically valid construction and correct interpretation of single-predictor injury risk functions. Conclusions: This article aims to provide useful and understandable statistical guidance to improve the practice in constructing injury risk functions.
Introduction
Injury risk data are of the form (X_1, Y_1), . . . , (X_n, Y_n), where X_i is a predictor (for example, an experienced force or deflection in a dummy test) and Y_i is a binary outcome such as the occurrence or nonoccurrence of injury in a matched cadaver test. For example,
Y_i = 1 if force X_i resulted in injury, and Y_i = 0 if force X_i did not result in injury.
The goal in the development of an injury risk curve is a model that accurately relates the probability of injury to the force experienced; in this article, we focus on single-predictor models without confounding variables (e.g., age or gender), but many of the ideas we present have extension to multiple variable problems. In the single-predictor literature, Petitjean and Trosseille (2011) provide a wide-ranging survey of available methods and their relative strengths and weaknesses. We focus on logistic regression and survival analysis, the 2 best performing approaches in their simulations.
Further, the International Organization for Standardization (ISO) has developed a stepwise process for the development of injury risk functions (ISO 2014). Kent and Funk (2004) demonstrated that when information about the exact force or deflection experienced at the moment of injury is available, it is important to incorporate this knowledge into the analysis. Our recommendations are broadly complementary to each of these works but offer important clarifications and refinements.
The remainder of the article is divided into four sections, each addressing a misunderstanding commonly seen in the injury risk function literature or in personal communications with others involved in their development; the four sections are outlined below. Where possible, we provide examples to justify and better articulate our recommendations, and each section provides practical recommendations. Our examples are based on a data set provided by J. Crandall (University of Virginia, email communication, April 5, 2013) showing the survival of eggs dropped from various heights onto a padded surface (Table A1, see online supplement). We chose this data set to avoid the appearance of criticizing particular analyses; our wish is only to improve practice.
The next section addresses the relationship between logistic regression and survival analysis, the 2 most common techniques. These approaches are technically very similar, which explains why, as previous research indicates (Petitjean and Trosseille 2011), they produce results of similar quality. We use this section to make the mathematical connections and introduce notation that will be useful in subsequent sections, which focus on direct application.
The following section addresses the construction and interpretation of confidence intervals for injury risk functions. A practitioner must make several choices that on the surface seem immaterial but result in dramatically different interpretation and performance. These choices include whether or not the intervals should be horizontal or vertical, and the scale on which the intervals should be constructed. We provide recommendations on the best approach and justify our recommendations with simulations.
The next section discusses the difficulty in choosing a functional form for the regression based on model fit criteria. We demonstrate that with sample sizes typical in biomechanics and injury risk function development, the Akaike information criterion (AIC) does not reliably choose the optimal model form. As such, we are left to choose the functional form based on how realistic the resulting shape is likely to be and our best physical understanding of the mechanisms causing injury.
The final section focuses on the problem of using repeated measurements on the same test subjects. For example, it is not uncommon to test a subject first in a lower impact test that is not expected to be injurious and then retest the same subject with a more potentially injurious test. These repeated tests are a substantial violation of the assumptions underlying both survival analysis and logistic regression and need to be handled carefully. We explain the concerns associated with repeated testing, their theoretical basis, and some thoughts on how this problem might be handled.
Logistic Regression and Survival Analysis
Injury risk data can be seen as either binary outcome data or survival data. When the binary outcome viewpoint is taken, a force X either produced an injury, so Y = 1, or no injury, making Y = 0. When faced with binary data, most statisticians' instincts are to attempt a logistic regression model.
Logistic regression ignores an important feature of biomechanical data: zero impact corresponds to zero risk of injury. To address this, much of the injury risk literature instead turns to survival analysis, treating all data as either left or right censored. For example, if the data (X, Y) = (15, 0) are observed, then the subject experienced an impact force of 15 and no injury resulted. These data can be treated as right censored because the force required for injury is now known to be greater than 15, but it is not known how much greater the required force would need to be. Conversely, if the data (X, Y) = (15, 1) are observed, then a force of 15 was applied and an injury resulted. It is now known that the force required for injury was at most 15, and the threshold needed for injury could have been less. Such data can be viewed as left censored.
The remainder of this section demonstrates that from a model fitting perspective, the 2 approaches are strongly related; for this reason they tend to produce similar results.
Logistic Regression
In the broader statistical literature, logistic regression is the most common approach for modeling binary outcome data. It assumes a model of the form

P[Y_i = 1 | X_i] = exp(β_0 + β_1 X_i) / [1 + exp(β_0 + β_1 X_i)],   (1)

where we use P[Y_i = 1 | X_i] to denote the probability of injury in an impact of magnitude X_i. The regression coefficients β_0 and β_1 are typically unknown and estimated from the data. The logistic model is flexible enough to provide a reasonable, although likely never perfect, model for P[Y_i = 1 | X_i] in cases where this probability either strictly increases or strictly decreases with an increase in X_i. Furthermore, transformations of X_i can be used to improve model fit, although in practice it is often difficult to know which transformation, if any, would be most appropriate. The coefficients β_0 and β_1 allow the logistic curve to be shifted left/right and to capture different rates of risk increase, analogous to changing the intercept and slope in linear regression. These coefficients are typically estimated by a process known as maximum likelihood. The idea is that the best estimates of these coefficients are the values that make the observed data most probable. A detailed discussion can be found in any text on mathematical statistics; see, for example, Hogg et al. (2004). The result is that the coefficients are chosen to maximize

L(β_0, β_1) = ∏_{i∈I} P[Y_i = 1 | X_i] ∏_{i∈N} (1 − P[Y_i = 1 | X_i]),   (2)

where I is the set of experiments in which injury occurred and N is the set of experiments with no injury. Defining p_i = P[Y_i = 1 | X_i], Eq. (2) can be rewritten as

L(β_0, β_1) = ∏_{i=1}^{n} p_i^{Y_i} (1 − p_i)^{1−Y_i},   (3)

where for each i the binary outcome Y_i acts as a switch to choose one of the two potential terms in the product. The values of β_0 and β_1 that maximize (3) do not have closed-form solutions, but reliable algorithms to estimate them are implemented in all statistical software packages.
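As an illustration of the maximum likelihood fit in Eq. (3), the following sketch minimizes the negative log-likelihood directly. The force/injury data are hypothetical stand-ins, not the egg drop data.

```python
import numpy as np
from scipy.optimize import minimize

# Fit the logistic injury risk model by maximizing Eq. (3).
x = np.array([5., 8., 10., 12., 15., 18., 20., 25., 30., 35.])
y = np.array([0,  0,   0,   1,   0,   1,   1,   1,   1,   1])

def neg_log_lik(beta):
    b0, b1 = beta
    eta = b0 + b1 * x                    # log-odds beta_0 + beta_1 * X
    p = 1.0 / (1.0 + np.exp(-eta))       # P[Y=1|X] from Eq. (1)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))  # -log of Eq. (3)

fit = minimize(neg_log_lik, x0=np.array([0.0, 0.0]))
b0_hat, b1_hat = fit.x
print(b0_hat, b1_hat)   # maximum likelihood estimates of beta_0, beta_1
```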
Survival Analysis
An alternative to logistic regression is to use survival analysis, treating all data as either left or right censored. Impacts resulting in injury are left censored because the force at which injury occurred is now known to be less than or equal to the applied force. Impacts not resulting in injury are right censored because the force required for injury is now known to have been greater than the applied force.
Though survival analysis takes a different view of the data than logistic regression, the end result is again a model relating the experienced force to the probability of injury, similar to (1) but with a different functional form. In fact, the ISO recommends comparing survival models with 3 different functional forms: Weibull, log-normal, and log-logistic. For simplicity and clarity we focus on the Weibull; the others are similar.
For example, if one assumes a Weibull model, the resulting injury risk function is of the form

F_W(x; λ, k) = 1 − exp[−(x/λ)^k],   (4)

where the parameters λ > 0 and k > 0, also referred to as the scale and shape, are unknown and estimated from the data. Model (4) has several attractive features. First, the model always associates zero force with zero risk of injury. Second, because the parameters are constrained to be positive, the fitted curve always increases as the force increases. Finally, different values of k allow the fitted curve to take on different shapes, which might better model reality. In contrast, all potential logistic curves differ only in location and scale but essentially have the same shape.
As with logistic regression, the parameters in Eq. (4) are typically estimated by maximum likelihood. In order to describe the appropriate likelihood function, we let F_W denote the Weibull risk function in (4), and we define f_W = F_W′ to be the probability density function (pdf) associated with F_W. With this notation, the appropriate likelihood function is (see Klein and Moeschberger 2003, p. 75)

L(λ, k) = ∏_{i∈E} f_W(X_i) ∏_{i∈I} F_W(X_i) ∏_{i∈N} [1 − F_W(X_i)] ∏_{i∈V} [F_W(R_i) − F_W(L_i)].   (5)

In (5), I is the set of experiments where injury occurred at an impact less than or equal to X_i, N is the set of experiments where no injury occurred, E is the set of experiments where the exact force required to cause injury is known (as discussed in Kent and Funk 2004), and V is the set of interval-censored observations, where the force required for injury is known to be between a left endpoint, L_i, and a right endpoint, R_i. Note that in the case of a Weibull model F_W(0; λ, k) = 0, so it is equivalent to treat an observation where injury occurred as being either left censored or interval censored with left endpoint 0.
Equation (5) highlights an additional feature of the survival approach: survival analysis is naturally designed to incorporate more detailed information about when the injury occurred, should that information be available. For example, it can correctly handle cases where the exact force required for injury is known (set E) and cases where the force is known to be between two nonzero limits (set V). Nonetheless, such extra information is often unavailable; in these cases, sets E and V are empty, and (5) reduces to

L(λ, k) = ∏_{i∈I} F_W(X_i) ∏_{i∈N} [1 − F_W(X_i)].   (6)
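A matching sketch of the survival fit is given below: the censored Weibull likelihood (6) is maximized numerically, with the parameters optimized on the log scale to enforce λ, k > 0. The data are again hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Maximize the censored Weibull likelihood of Eq. (6): injuries are treated
# as left censored, non-injuries as right censored.
x = np.array([5., 8., 10., 12., 15., 18., 20., 25., 30., 35.])
y = np.array([0,  0,   0,   1,   0,   1,   1,   1,   1,   1])

def neg_log_lik(theta):
    log_lam, log_k = theta               # log scale keeps lam, k positive
    lam, k = np.exp(log_lam), np.exp(log_k)
    F = 1.0 - np.exp(-(x / lam) ** k)    # Weibull risk function, Eq. (4)
    # injuries contribute log F; non-injuries contribute log(1 - F)
    return -np.sum(y * np.log(F) + (1 - y) * np.log(1.0 - F))

fit = minimize(neg_log_lik, x0=np.array([np.log(15.0), np.log(2.0)]))
lam_hat, k_hat = np.exp(fit.x)
print(lam_hat, k_hat)   # estimated scale and shape
```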
Comparison Between Logistic and Survival Approaches
A comparison of Eqs. (3) and (6) shows that when the outcomes are either censored injury/no injury, logistic regression and survival analysis attempt to maximize very similar likelihood functions. The only difference is the shape of the curves used to interpolate risk probabilities between 0 and 1. In fact, one could do survival analysis with a logistic distribution instead of a Weibull distribution and get results identical to those from logistic regression. For these reasons, in most situations it is expected that the 2 techniques will produce similar results.
Which of the 2 models is preferable then comes down to which functional form (e.g., (1), (4), or the corresponding form for log-normal and log-logistic) can take on a shape closest to the unknown true risk function. Later we discuss the limitations of statistical model choice.
Recommendations
Our first recommendation is with regard to the choice between logistic and survival analysis. Both approaches have merit. The survival approach produces a model that is more appealing on physical grounds. Logistic regression has some added flexibility in that the slope and intercept are both unconstrained, which means that the model is likely to produce reliable fits in the range of the data.
Based on these considerations, we recommend the following:
1. If exact uncensored experimental data exist, this information must be accounted for via likelihood function (5), although both traditional survival distributions and the logistic distribution may be considered.
2. Fit both a logistic regression and a survival analysis. If they are qualitatively very different (for example, the logistic curve is fairly flat while the survival curve increases), any further results should be viewed as unreliable.
3. If the logistic and survival curves are similar, then the practitioner is free to use a survival approach. The rationale is that the logistic model's flexibility makes it a good benchmark in the middle of the data, but near x = 0 the survival model will be more accurate. As long as the two models agree in the middle of the data, the survival model is preferred.
4. Whichever model is used should be checked with a goodness-of-fit test, such as the Hosmer-Lemeshow test. This is particularly important with the survival approach because it may be more likely to produce a reasonable-looking survival curve whether or not the curve reflects the data.
Confidence Intervals in Injury Risk Curves
Confidence intervals can be thought of as quantifying the uncertainty in an estimated injury risk function that could be expected if one were able to repeat the entire experiment. More precisely, a confidence interval should contain the unknown "truth" with a prespecified likelihood. For example, one could construct a 95% confidence interval for the probability of injury at a specified impact. In theory, this interval should contain the actual probability of injury 95% of the times the experiment is conducted and the interval constructed. In order to construct meaningful intervals, a decision must first be made about the appropriate direction for the interval. Then an appropriate procedure must be used.
Horizontal vs. Vertical Confidence Intervals
In practice, 2 types of confidence intervals are commonly seen in the injury risk function literature. There are horizontal intervals that fix a probability of injury and ask what the uncertainty is in the estimated impact resulting in that probability of injury. The other type of interval is vertical in that it fixes an impact force and asks how much uncertainty there is in the estimated risk of injury. Both horizontal and vertical intervals have important application for injury risk, but they are not the same and the direction must be chosen based on context. If the goal is to set regulation based on a (for example) 30% chance of injury, a horizontal confidence interval describes the uncertainty in the estimated force resulting in this level of risk. If the goal is to describe the likelihood of injury resulting from a known impact, then a vertical interval would be appropriate.
Whichever confidence interval direction is chosen, many confidence intervals can be combined to produce confidence bands, such as those shown in Figure 1. These bands can only be interpreted pointwise, not as a confidence band for the entire injury risk function.
Vertical Confidence Intervals in Logistic Regression
We focus on vertical confidence intervals in logistic regression. Vertical intervals for survival models are less frequently implemented in software, so this section takes on additional importance when such intervals are required. For notational convenience, let p_x = P[Y = 1 | X] denote the probability of injury associated with an impact X. With this notation, model (1) can be rewritten as

log[p_x / (1 − p_x)] = β_0 + β_1 X.   (7)

The observed data (X_1, Y_1), . . . , (X_n, Y_n) are used to estimate the unknown coefficients β_0 and β_1 by maximum likelihood; we denote these estimates by β̂_0 and β̂_1. Plugging these estimates into Eq. (7) and solving for p_x gives the estimated probability of injury

p̂_x = exp(β̂_0 + β̂_1 X) / [1 + exp(β̂_0 + β̂_1 X)].   (8)

A common and seemingly straightforward approach to constructing a 100 × (1 − α)% confidence interval for p_x in the injury risk literature is an interval of the form

p̂_x ± z_{1−α/2} SE{p̂_x},   (9)

where z_{1−α/2} is the 100 × (1 − α/2)th percentile of the standard normal distribution and SE{p̂_x} is the standard error of p̂_x as produced by most statistical packages.
The assumption underlying interval (9) is that the sampling distribution of p̂_x is normal. For large samples with p_x not near 0 or 1, this assumption is justified by the delta method (e.g., Resnick 1999, p. 261). However, for typical biomechanical sample sizes and many values of p̂_x, the normal assumption is not adequate. This is easily seen because p_x and p̂_x are constrained to be between 0 and 1, but the resulting confidence intervals are not. See Figure 1.
Fortunately, there is a straightforward solution to this difficulty: construct the confidence interval on the log-odds scale and then transform the endpoints to the probability scale. In other words, the construction starts with the confidence interval for β_0 + β_1 X,

(LL, UL) = β̂_0 + β̂_1 X ± z_{1−α/2} SE{β̂_0 + β̂_1 X}.   (10)

Interval (10) is then transformed to a confidence interval for p_x by applying the logistic function to LL and UL, resulting in the interval

( exp(LL)/[1 + exp(LL)], exp(UL)/[1 + exp(UL)] ).   (11)

Interval (11) is symmetric on the log-odds scale but asymmetric and constrained to be between 0 and 1 on the probability scale (see Figure 2), and it comes closer to achieving the desired 95% confidence, as shown in our simulations.
The justification for interval (11) is based on the asymptotic normality of the maximum likelihood estimates β̂_0 and β̂_1. Under fairly general conditions, applicable in this setting, maximum likelihood estimates have an asymptotic normal distribution (e.g., Hogg et al. 2004, p. 325). Unlike p_x, β_0 and β_1 are unconstrained, which tends to make the asymptotic normal approximation for β̂_0 + β̂_1 X more accurate than that for p̂_x. Because the inverse logit function is a monotonic one-to-one transformation, a 95% interval on the log-odds scale transforms into a 95% interval on the probability scale.
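The following sketch computes both interval (9) and interval (11) from a fitted logistic model via the delta method; the data and the evaluation point x0 are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

# Compare the naive probability-scale interval (9) with the log-odds-scale
# interval (11) at a chosen impact x0.
x = np.array([5., 8., 10., 12., 15., 18., 20., 25., 30., 35.])
y = np.array([0,  0,   0,   1,   0,   1,   1,   1,   1,   1])
X = sm.add_constant(x)
fit = sm.Logit(y, X).fit(disp=0)
b, V = fit.params, fit.cov_params()      # MLEs and covariance of (b0, b1)

x0 = 16.0                                # impact at which to form the interval
g = np.array([1.0, x0])                  # gradient of b0 + b1*x0
eta = g @ b                              # estimated log-odds
se_eta = np.sqrt(g @ V @ g)              # SE on the log-odds scale
p_hat = 1 / (1 + np.exp(-eta))
se_p = p_hat * (1 - p_hat) * se_eta      # delta-method SE on probability scale
z = 1.96

naive = (p_hat - z * se_p, p_hat + z * se_p)                 # interval (9)
ll, ul = eta - z * se_eta, eta + z * se_eta
logodds = (1 / (1 + np.exp(-ll)), 1 / (1 + np.exp(-ul)))     # interval (11)
print(naive)     # can fall outside [0, 1]
print(logodds)   # always within (0, 1)
```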
Horizontal Confidence Intervals for Survival Models
Survival models have an alternative representation to (4), which helps inform the construction of confidence intervals. The alternative form is (Klein and Moeschberger 2003, p. 46)

log(Injury Force) = μ + σ W,   (12)

where Injury Force is the exact force required to cause injury, and W is a random variable describing variation across the population. W can be assumed to have different probability distributions, with each distribution corresponding to a different class of survival model. For example, if W has an extreme value distribution, then model (12) produces a Weibull survival model; if W has a normal distribution, the resulting model is log-normal; and if W has a logistic distribution, the resulting model is log-logistic. From the point of view of (12), the most natural confidence intervals are for percentiles of Injury Force. In other words, in (12), a 10% chance of injury corresponds to the 10th percentile of W, which for a given model (e.g., Weibull, with W having an extreme value distribution) is known. The uncertainty in the required injury force then lies in the uncertainty of the maximum likelihood estimates of μ and σ. The result is that the most natural confidence interval is for the force corresponding to a fixed injury probability, which is a horizontal interval when viewed on a graph like Figure 1.
As a result of (12), there are again two plausible forms for the confidence interval. The first form, which we term the data-scale interval, involves first estimating the required force and then constructing a confidence interval on the scale of the original data, resulting in the 100 × (1 − α)% confidence interval for the force that generates a 10% chance of injury:

exp[μ̂ + σ̂ W_0.10] ± z_{1−α/2} SE{exp[μ̂ + σ̂ W_0.10]},   (13)

where W_0.10 is the 10th percentile of the distribution of W (e.g., the 10th percentile of the extreme value distribution). Alternatively, because the maximum likelihood estimates μ̂ and σ̂ have an approximate normal distribution, confidence intervals can also be constructed on the log scale and then transformed to the injury-force scale; the resulting interval will be termed the log-scale interval. The interval for the force required for a 10% chance of injury starts with a 100 × (1 − α)% confidence interval for the log of the injury force,

(LL, UL) = μ̂ + σ̂ W_0.10 ± z_{1−α/2} SE{μ̂ + σ̂ W_0.10},

and the final confidence interval has the form

(exp(LL), exp(UL)).   (14)

Because injury force has much weaker constraints than the probability of injury modeled in logistic regression (i.e., probabilities are constrained to be between 0 and 1, whereas injury forces only need to be positive), the choice between (13) and (14) is less clear-cut. However, there is still some reason to think that (14) is preferable.
The reason we feel interval (14) is preferable is shown in the confidence intervals for the egg drop data in Figure 3. In this example, the intervals given by (13) curve to the left as the probabilities go up. This is counterintuitive because it seems reasonable to believe that the probability of injury should strictly increase as force increases. The interval (14) seems to better capture a physically realistic pattern. In our experience, the backwards curving shape is a frequent problem with interval (13) but less so with interval (14). In our simulations, the 2 intervals have similar coverage properties. Remark: If model (12) is modified to Injury Force = μ + σ W, where W is taken to have a logistic distribution, then the resulting survival model is identical to the logistic regression. As such, interval (13) can be used to produce horizontal confidence intervals for logistic regression; the necessary calculations are produced by many statistical packages. Because this model is directly fit on the scale of the data, there is no equivalent to interval (14).
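A sketch of the two horizontal intervals is given below for the Weibull case, using the log-location-scale form (12) with W extreme-value distributed. The standard errors come from a numerically approximated observed information matrix, and the data are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Horizontal intervals (13) and (14) for the force giving a 10% injury risk.
x = np.array([5., 8., 10., 12., 15., 18., 20., 25., 30., 35.])
y = np.array([0,  0,   0,   1,   0,   1,   1,   1,   1,   1])

def nll(theta):
    mu, log_sig = theta
    z = (np.log(x) - mu) / np.exp(log_sig)
    F = 1.0 - np.exp(-np.exp(z))            # Weibull CDF via log(force)
    return -np.sum(y * np.log(F) + (1 - y) * np.log(1 - F))

fit = minimize(nll, x0=np.array([np.log(15.0), np.log(0.5)]))
mu, sig = fit.x[0], np.exp(fit.x[1])
W10 = np.log(-np.log(1 - 0.10))             # 10th pct of the extreme value dist.
eta = mu + sig * W10                        # log of the force giving 10% risk

# delta-method SE of eta from a finite-difference Hessian of nll
h = 1e-4
H = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        e_i, e_j = np.eye(2)[i] * h, np.eye(2)[j] * h
        H[i, j] = (nll(fit.x + e_i + e_j) - nll(fit.x + e_i - e_j)
                   - nll(fit.x - e_i + e_j) + nll(fit.x - e_i - e_j)) / (4 * h * h)
V = np.linalg.inv(H)                        # covariance of (mu, log sigma)
g = np.array([1.0, sig * W10])              # d eta / d(mu, log sigma)
se_eta = np.sqrt(g @ V @ g)

force = np.exp(eta)
data_scale = (force - 1.96 * force * se_eta, force + 1.96 * force * se_eta)  # (13)
log_scale = (np.exp(eta - 1.96 * se_eta), np.exp(eta + 1.96 * se_eta))       # (14)
print(data_scale, log_scale)
```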
Logistic Regression
In order to demonstrate the statistical improvement achieved by using interval (11), we conducted a small simulation study based on the egg drop data shown in Figures 1 and 2. The simulation works by generating data in a setting where we know the true injury risk function and evaluating each confidence interval to see whether or not it contains the true risk of injury.
In the simulated data, drop heights were chosen by drawing samples of size 50 with replacement from the drop heights shown in Figures 1 and 2. In order to assess a wider range of designs, these drop heights were then randomly perturbed by adding normally distributed mean 0, standard deviation 4 random numbers. Finally, injury/no injury data were simulated so that the probability of injury is given by the fitted risk function shown in Figures 1 and 2. For each simulated data set, a new injury risk function was estimated and the confidence intervals (9) and (11) were calculated at the drop heights that correspond to true injury risks of 10%, 20%, . . . , 90%. Finally, we determined whether or not the confidence intervals contained the true probabilities of injury. An example simulation is shown in Figure 4. The sample size of 50 was chosen to minimize the impact of nonconverging model estimates.
Fig. 4. An example of a simulated data set and confidence intervals. The dashed risk function indicates the probability of injury used in the simulation. The solid risk function indicates the probability of injury as estimated from the data. At each of the 10%, 20%, . . . , 90% probabilities of injury, the two types of confidence intervals are shown; the intervals given by (9) are solid and those given by (11) are dash-dotted.
The simulation experiment was repeated 10,000 times. The empirical proportions of times the confidence intervals contain the true injury risk are shown in Table 1; the target is intervals that contain the truth 95% of the time. The intervals given by (9) show substantial undercoverage for p close to 0 or 1. The intervals given by (11) are in this case mildly conservative but consistently close to the desired 95% coverage.
Survival Model
In this experiment, we compared the accuracies of intervals (13) and (14). In each iteration, the probability of injury was taken to be the fitted risk function shown in Figure 3. The drop heights were generated as in the preceding simulation. For each simulated data set, a new injury risk function was estimated and both types of horizontal confidence intervals were calculated at the drop heights that produce injury risks of 10%, 20%, . . . , 90%. Finally, we checked whether or not the confidence intervals contained the true drop heights.
In terms of coverage, the two techniques performed similarly, as shown in Table 2. Interval (13) performed better in 3 cases, (14) performed better in 5, and there was one tie. However, as seen in Figure 3, the data scale intervals often curve backwards for high probabilities of injury. Though this did not manifest itself in decreased coverage rates in our simulations, it does seem physically unreasonable and potentially unreliable.
Recommendations
We recommend that horizontal or vertical confidence intervals be chosen based on the desired interpretation. Once a direction has been chosen, construct symmetric confidence intervals on the transformed (log-odds or log) scale and then map them back to the original scale, i.e., use intervals (11) and (14). Horizontal confidence intervals for logistic regression should be calculated via (13).

Table 2. Coverage for the 2 types of nominal 95% horizontal confidence intervals over 10,000 simulations. The top row shows the empirical coverage for the confidence intervals given by (13); the bottom row shows the coverage for those given by (14).
The Difficulty of Choosing the Functional Form
The ISO (2014) recommends using the AIC to choose between Weibull, log-normal, and log-logistic survival models. In our simulations, we demonstrate that with typical biomechanical sample sizes, the AIC makes a somewhat arbitrary choice. Before presenting our simulation results, we argue that the choice of survival model is strongly related to the choice of predictor variable, and we discuss some mathematical considerations that we feel, in the absence of additional information or a large sample size, favor the Weibull model over the other 2 survival models.
Biomechanical Considerations
In most biomechanical experiments, there is a choice of injury predictor. For example, when predicting egg breakage, one might consider any of drop height, momentum, or kinetic energy on impact as the x variable. These 3 quantities are strongly related, and to the extent that they are related, switching from one to another is simply a nonlinear rescaling of the predictor axis. From this point of view, the choice between these potential predictors is equivalent to a choice of the functional form for the statistical model. Because statistical methods offer little help with the choice of model, we also cannot expect them to adequately choose between strongly related predictors. We recommend choosing the predictor that best reflects available physical and mechanical understanding.
Aesthetic Considerations
The mathematical considerations favoring the Weibull model over the log-normal and log-logistic models revolve around the hazard function. Let the random variable X denote the exact force needed to cause injury to a randomly selected subject, and let x denote a fixed force. Then the hazard function is defined as h(x) = f(x)/(1 − F(x)). In other words, h(x)Δx is approximately the probability that a subject who is able to experience all forces up to x uninjured would be injured by a force between x and x + Δx, for small Δx. Here F(x) = P(X ≤ x) is the injury risk function and f(x) = F′(x) is the corresponding probability density. In the context of an injury risk function, it may be reasonable to expect h(x) to be increasing. The Weibull distribution with shape parameter k and scale λ has hazard function h_w(x) = k x^(k−1)/λ^k, which is easily seen to be increasing for k > 1, constant for k = 1, and decreasing for k < 1. An injury risk model with nondecreasing hazard is therefore always available within the Weibull family. The log-logistic distribution, in contrast, has hazard function h_l(x) = (β/α)(x/α)^(β−1) / [1 + (x/α)^β], where the values of α and β can be derived from μ and σ in (12). When β ≤ 1, h_l(x) is strictly decreasing. When β > 1, h_l(x) has a single peak at x = α(β − 1)^(1/β), which corresponds to an injury probability of 1 − 1/β. After this point, the hazard rate drops, which may be unnatural in our context. The log-normal hazard function is qualitatively similar to the log-logistic hazard function but does not have a simple closed form.
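The peak location and probability quoted above follow from a short calculation; the derivation below is our own sketch, using the standard log-logistic parameterization.

```latex
h_l(x) = \frac{(\beta/\alpha)\,(x/\alpha)^{\beta-1}}{1+(x/\alpha)^{\beta}},
\qquad
\frac{d}{dx}\,h_l(x)=0
\iff (\beta-1)\bigl[1+(x/\alpha)^{\beta}\bigr]=\beta(x/\alpha)^{\beta}
\iff (x/\alpha)^{\beta}=\beta-1,
```
```latex
\text{so } x^{*}=\alpha(\beta-1)^{1/\beta},
\quad\text{and since } F(x)=\frac{(x/\alpha)^{\beta}}{1+(x/\alpha)^{\beta}},
\quad F(x^{*})=\frac{\beta-1}{\beta}=1-\frac{1}{\beta}.
```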
In practice, the hazard function is a derivative and is likely of secondary importance to the value of the fitted injury risk function itself. Therefore, we do not feel that these aesthetic considerations should be the deciding factor in choosing the functional form. However, in the absence of better understanding of the detailed mechanisms of biomechanical injury, we feel this reasoning lends some favor to the Weibull model over the log-normal and log-logistic models.
Simulations
In order to demonstrate the difficulty in choosing the appropriate functional form of the survival curve, we conducted a small simulation experiment. We start with the egg drop data to ensure a survival curve that represents an actual risk of injury in a realistic way. We then fit four injury risk functions to these data using logistic regression and survival analyses based on the log-logistic, log-normal, and Weibull distributions.
We used the 4 fitted injury risk functions to simulate new data sets of size n = 50 (a large sample by biomechanical standards) in which we know the true functional form. Finally, we use the AIC to choose the best functional form based on the simulated data. We simulated 10,000 data sets from each of the 4 fitted injury risk functions. The frequencies with which the AIC chose the different distributions are shown in Table 3.
The simulation shows that the AIC tended to choose the log-normal and logistic models no matter what the true risk function; this demonstrates that with small sample sizes the AIC cannot reliably select the best model. In order to ensure that this was a sample size problem, we reran the simulation instead using simulated data sets of size n = 5,000. With the larger sample size, the AIC was able to identify the correct functional form the majority of the time, with success rates ranging from 67% for the log-logistic model to 97% for the logistic model.
We should not view these small sample troubles as a shortcoming of the AIC. The fundamental cause is that the 4 families of models are each flexible enough to provide good (and usually similar) fits to the data, and other model fit metrics can be expected to have the same trouble.
Recommendations
1. Start with a logistic regression. Logistic regression provides a best linear fit in the log-odds space. This fit is typically reasonable near the middle of the data in the x direction for the same reason that one-term Taylor approximations often work well over small domains.
2. If extrapolation to smaller risks is desired, fit a Weibull distribution in addition to the logistic regression (see the sketch after this list). If the Weibull fit is similar to the logistic regression over the range of data, then the Weibull can reasonably be thought of as an improvement because it passes through (0, 0) and matches the logistic regression where the majority of data were collected.
3. If the Weibull and logistic fits differ substantially, the logistic regression should be taken as more reliable in the middle of the data, and the Weibull should not be used. The reasoning underlying this point is the same as the reasoning underlying Recommendation 1: logistic regression has enough flexibility to consistently produce a good fit in the middle of the data in the x direction. Because survival models pass through (0, 0), their flexibility is diminished, and they should be viewed as less reliable in the middle of the data, even though they are typically more accurate for very small impacts.
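The following sketch shows one way to carry out Recommendation 2: fit the Weibull injury risk function P(injury | x) = 1 − exp(−(x/λ)^k) to binary injury data by maximum likelihood and compare it with the logistic fit via AIC. The data and starting values are placeholders, not the egg drop data.

```python
import numpy as np
from scipy.optimize import minimize
import statsmodels.api as sm

# Placeholder binary injury data: x = impact level, y = 1 if injured
rng = np.random.default_rng(1)
x = rng.uniform(20, 80, 50)
y = (rng.random(50) < 1 - np.exp(-(x / 55.0) ** 4)).astype(float)

def weibull_nll(theta):
    log_lam, log_k = theta                      # log-parameterization keeps lam, k > 0
    lam, k = np.exp(log_lam), np.exp(log_k)
    p = 1 - np.exp(-(x / lam) ** k)             # Weibull injury risk function
    p = np.clip(p, 1e-12, 1 - 1e-12)            # guard the logarithms
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

res = minimize(weibull_nll, x0=[np.log(50.0), np.log(2.0)], method="Nelder-Mead")
aic_weibull = 2 * 2 + 2 * res.fun               # AIC = 2k_params + 2*NLL, 2 parameters

logit_fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
print(aic_weibull, logit_fit.aic)
```

Comparing the two fitted curves over the observed range of x, rather than the AIC values alone, is what Recommendations 2 and 3 ask for.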
Repeated Measurements on the Same Subject
It is not unusual to see repeated measurements on the same subject used as independent data points in an injury risk function development. For example, many injury risk functions are built on data where a cadaver is impacted once with a low and ultimately noninjurious impact and then impacted again at a higher, and potentially injurious, force. Doing this makes 2 fundamental mistakes:
1. It implicitly assumes that the first impact did not weaken the test subject in any way.
2. It ignores the difference between repeating tests on one subject versus conducting tests across the population.
Point 2 is the more subtle. Any particular subject has its own injury risk curve that is likely very steep. For example, each egg has a drop height at which it starts to crack. Below that drop height it survives, above that height it breaks. So, its injury risk curve is almost vertical at one height. Another egg has its own injury risk curve that is vertical at a different height, accounting for needing a different impact to crack. Population injury risk curves, like the ones we seek to fit, model the proportion of eggs that would have cracked when dropped from a given height. If we take multiple measurements on a single subject, we find out more about that subject's injury risk function, but we do not collect another data point on the population as a whole.
There are numerous statistical techniques for dealing with repeated measurements on the same subject. Logistic regression has been extended to correlated data through random effects models and generalized estimating equations (GEE; Diggle et al. 2002), although only GEE produces a model with the desired population-level interpretation (termed a marginal model in the statistics literature). Survival analysis has been extended to handle correlation through frailty models (Klein and Moeschberger 2003, ch. 13), but these models again measure risk at the individual rather than the population level. Lipsitz and Ibrahim (2000) discussed potential GEE-type extensions to parametric survival models.
In the context of repeated impacts on the same cadaver, we have found useful a third approach that better meshes with the survival interpretation of injury risk data. We illustrate our approach with an example. Suppose an egg survives a drop of 50 cm but is broken by a drop from 100 cm. We treat this as a single interval-censored observation, where the injury occurred at an unknown point between 50 and 100 cm. If the egg had also survived the 100-cm drop, we treat the egg as providing a single data point right censored at 100 cm. Mathematically, estimation is handled by the maximization of (5) with interval-censored terms. Interval censoring correctly handles the concern of testing the same subject at multiple impact levels. Like all other techniques for handling repeated measures, it does not do a good job of accounting for potential damage that may accrue during noninjurious rounds of testing.
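A minimal sketch of this interval-censoring approach under a Weibull model for the injury force is given below; the small data set and the parameterization are illustrative assumptions, not the paper's analysis.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

# Each subject contributes one interval (L, R]; R = inf encodes right censoring.
# e.g., an egg that survived 50 cm but broke at 100 cm -> (50, 100];
# an egg that survived all drops up to 100 cm -> (100, inf).
intervals = [(30.0, 60.0), (50.0, 100.0), (100.0, np.inf)]

def nll(theta):
    k, lam = np.exp(theta)                       # shape and scale, kept positive
    F = lambda t: weibull_min.cdf(t, k, scale=lam)
    ll = 0.0
    for L, R in intervals:
        if np.isinf(R):
            ll += np.log(1 - F(L))               # right-censored: survived L
        else:
            ll += np.log(F(R) - F(L))            # interval-censored: failed in (L, R]
    return -ll

res = minimize(nll, x0=np.log([2.0, 80.0]), method="Nelder-Mead")
k_hat, lam_hat = np.exp(res.x)
print(k_hat, lam_hat)
```

Each subject enters the likelihood exactly once, which is how this approach avoids treating repeated impacts as independent data points.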
In other contexts, we would consider other models. For example, in a lower extremity test where both legs of a cadaver are impacted (either together or separately), the interval censoring approach is no longer reasonable. In this case, we would lean toward either a GEE model or a frailty model, although the latter would require some care in order to achieve a population-level interpretation of injury risk.
Recommendations
When modeling requires the use of repeated measurements on the same subject, a statistical method that correctly accounts for the correlation between these measurements must be used. The methods we describe are commonly used in other applications and widely implemented in statistical software.
"Mathematics"
] |
Central Upwind Scheme for a Compressible Two-Phase Flow Model
In this article, a compressible two-phase reduced five-equation flow model is numerically investigated. The model is non-conservative, and the governing equations consist of two equations describing the conservation of mass, one for the overall momentum, and one for the total energy. The fifth equation is the energy equation for one of the two phases; it includes a source term on the right-hand side which represents the energy exchange between the two fluids in the form of mechanical and thermodynamic work. For the numerical approximation of the model, a high resolution central upwind scheme is implemented. This is a non-oscillatory, upwind-biased finite volume scheme which does not require a Riemann solver at each time step. A few numerical case studies of two-phase flows are presented. For validation and comparison, the same model is also solved by using kinetic flux-vector splitting (KFVS) and staggered central schemes. It was found that the central upwind scheme produces results comparable to the KFVS scheme.
Introduction
Multiphase flows are commonly observed in nature and science, from sand storms and volcanic clouds to blood flow in vessels and the motion of rain droplets. There are also numerous examples where multiphase flow occurs in industrial applications, for example in energy conversion, paper manufacturing, and food manufacturing, as well as in chemical and process engineering. Due to their wide-ranging applications, suitable models are required for the accurate prediction of the physical behavior of such flows. However, the modeling and simulation of such flows is a complex and challenging research area of computational fluid dynamics (CFD).
Multiphase flow problems involve the flow of two or more fluids separated by sharp interfaces. The coupling of the interfaces with the flow model is a challenging part of the simulation of such flows, as a coupling mismatch may introduce large errors in the numerical simulations. It is important to mention that this work is only concerned with the two-phase flow problem.
Several two-phase flow models exist in the literature for describing the behavior of physical mixtures. These models use separate pressures, velocities, and densities for each fluid. Moreover, a convection equation for the interface motion is coupled with the conservation laws of the flow model. In the literature such models are known as seven-equation models. One such model, for solid-gas two-phase flows, was initially introduced by Baer and Nunziato [1] and was further investigated by Abgrall and Saurel [2,3], among others. The seven-equation model is considered the most complete and established two-phase flow model. However, it inherits a number of numerical complexities. To resolve such difficulties, researchers have proposed reduced models containing three to six equations [4][5][6].
Kapila's five-equation model [4], deduced from the Baer and Nunziato seven-equation model [1], is a well-known reduced model and has been successfully implemented to study interfacing compressible fluids as well as barotropic and non-barotropic cavitating flows. The model contains four conservative equations: two for mass conservation, one for the total momentum, and one for the total energy conservation. The fifth equation is a convection equation, with a non-zero right-hand side, for the volume fraction of one of the two phases.
Although this five-equation model is simple, it involves a number of serious difficulties. For example, the model is non-conservative, and hence it is difficult to obtain a numerical solution which converges to the physical solution. In the presence of shocks, the volume fraction may become negative. Another issue is related to the non-monotonic behavior of the mixture sound speed [7].
To overcome the associated difficulties of Kapila's five-equation model, Kreeft and Koren [5] introduced a new formulation of the Kapila model. The new model [5] is also non-conservative, containing two equations for the conservation of mass, one for the mixture momentum, and one for the total energy. The fifth equation is the energy equation for one of the two phases; it includes a source term on the right-hand side which represents the energy exchange between the two fluids in the form of mechanical and thermodynamic work. In the current model, the first four equations are conservative, and the non-conservativity of the model is due to the energy exchange term in the fifth equation. Consequently, the implementation of finite-volume-type schemes for such models is relatively convenient.
Very recently, the diffuse interface method, finite volume WENO schemes, and discontinuous Galerkin methods have been used to solve multiphase flow models [8][9][10]. However, in this work, the central upwind scheme [11] is proposed to solve the same five-equation model [5]. The proposed scheme uses information on the local propagation speeds and estimates the solution in terms of cell averages. Further, the scheme has an upwind nature, because it takes care of flow directions by means of one-sided local speeds. Moreover, this scheme can be extended to incompressible flow problems, e.g., to the incompressible two-phase shallow flow model [12]. The suggested scheme is applied to both the one- and two-dimensional flow models. For validation, the results of the central upwind scheme are compared with those obtained from the KFVS scheme [13][14][15][16] and the non-oscillatory staggered central scheme [17,18]. The numerical results of the schemes are analyzed qualitatively and quantitatively. It was found that the proposed central upwind scheme produces results comparable to the KFVS scheme and more accurate than those of the staggered central scheme.
One-dimensional two-fluid flow model
In this section, the one-dimensional two-fluid flow model [5] is presented. The selected model is a reformulation of the original five-equation model of Kapila et al. [4]. Here, it is assumed that both fluids are mass conservative and have the same velocity and pressure on both sides of the interface. Moreover, heat conduction and viscosity are not considered. In this model, the first four equations describe the conservative quantities: two for mass, one for the overall momentum, and one for the total energy. The fifth equation is the energy equation; it includes a source term on the right-hand side which gives the energy exchange between the two fluids in the form of mechanical and thermodynamic work. The state vector q of primitive variables has the form q = (ρ, u, p, α)^T. Here, the bulk mixture density is denoted by ρ, u denotes the bulk velocity, p denotes the bulk pressure, and α represents the volume fraction of fluid 1. This means that a part α of a small volume dV is filled with fluid 1 and a part (1 − α) with fluid 2.
For bulk quantities, such as the mixture density ρ and the mixture total energy E, we assume that a fraction α of the volume is occupied by fluid 1 and a fraction (1 − α) by fluid 2. Using these conventions, we can define

ρ = αρ₁ + (1 − α)ρ₂,  ρE = αρ₁E₁ + (1 − α)ρ₂E₂,

and the total energies of each fluid as

Eᵢ = eᵢ + u²/2,  i = 1, 2,

where e₁ and e₂ denote the internal energies of fluid 1 and fluid 2, respectively. The internal energies e₁ and e₂ are given in terms of their respective densities and the pressure through the equations of state in Eq (3). In one space dimension, the two-fluid flow model can be written as [5]

w_t + f(w)_x = s,  (4a)

where

w = (ρ, ρu, ρE, ρ₁α, ρ₁E₁α)^T,  (4b)
f(w) = (ρu, ρu² + p, ρuE + pu, ρ₁uα, ρ₁E₁uα + puα)^T,
s(w) = (0, 0, 0, 0, s₅)^T.

Here, w represents the vector of conservative variables, f is the vector of fluxes, and s is a vector of source terms whose only non-zero entry is the last one. The term s₅ represents the total rate of energy exchange per unit volume between fluid 1 and fluid 2 and is equal to the sum of the rates of mechanical (s₅ᴹ) and thermodynamic (s₅ᵀ) work [5], i.e., s₅ = s₅ᴹ + s₅ᵀ; the explicit expressions are given in [5]. The term β = ρ₁α/ρ represents the mass fraction of fluid 1, while τᵢ = 1/(ρᵢcᵢ²), i = 1, 2, denote the isentropic compressibilities of the two fluids. Here, c₁ and c₂ represent the sound speeds of fluid 1 and fluid 2, and the bulk isentropic compressibility is defined analogously for the mixture [5]. Assume that the equations of state in Eq (3) are the stiffened equations of state [19],

pᵢ = (γᵢ − 1)ρᵢeᵢ − γᵢπᵢ,

where γᵢ and πᵢ are the material-specific quantities. Therefore, the sound speeds in each fluid are given as

cᵢ² = γᵢ(p + πᵢ)/ρᵢ.

The expressions for the sound speeds are normally obtained from the second law of thermodynamics. The total energies of fluids 1 and 2 can then be expressed in terms of the pressure and the densities through Eqs (10) and (11). Using Eqs (4), (10) and (11), the primitive variables are obtained in Eqs (12)-(14), where wᵢ, i = 1, ..., 5, represent the components of w, the vector of conserved variables; in Eqs (12)-(14) the primitive variables are expressed in terms of the conserved variables. In Eq (13) the positive sign is chosen for (π₂γ₂ − π₁γ₁) > 0 and the negative sign otherwise, as follows from Eq (9).

One-dimensional central upwind scheme

In this section, the central upwind scheme of Kurganov and Tadmor [11] is derived for the one-dimensional five-equation two-fluid flow model Eq (4). Let N represent the total number of discretization cells and (x_{i−1/2}), i ∈ {1, ..., N + 1}, denote the divisions of the given domain [0, x_max]. A uniform width Δx is considered for each cell, while x_i represents the cell centers and x_{i+1/2} refers to the cell boundaries.
The cell average values of the conservative variables are defined as

w̄ᵢ(t) = (1/Δx) ∫_{Ωᵢ} w(x, t) dx,  Ωᵢ = [x_{i−1/2}, x_{i+1/2}].

Integration of Eq (4) over the interval Ωᵢ gives the semi-discrete scheme

dw̄ᵢ/dt = −(H_{i+1/2}(t) − H_{i−1/2}(t))/Δx + s̄ᵢ(t),  (21)

where H_{i+1/2}(t) is the numerical flux defined by

H_{i+1/2}(t) = [f(w⁺_{i+1/2}) + f(w⁻_{i+1/2})]/2 − a_{i+1/2}(t)[w⁺_{i+1/2} − w⁻_{i+1/2}]/2,

with the interface values obtained from the piecewise-linear reconstruction

w⁻_{i+1/2} = w̄ᵢ + (Δx/2)(w_x)ᵢ,  w⁺_{i+1/2} = w̄_{i+1} − (Δx/2)(w_x)_{i+1}.  (24)

The first four components of the source vector s̄ᵢ in Eq (21) are zero, and the fifth, non-zero component is the cell average of s₅. The numerical derivatives (w_x)ᵢ are approximated through a nonlinear limiter which guarantees the positivity of the reconstruction procedure Eq (24).
Here, MM denotes the min-mod non-linear limiter,

MM{x₁, x₂, ...} = min_j{x_j} if x_j > 0 for all j; max_j{x_j} if x_j < 0 for all j; 0 otherwise.

Moreover, a_{i+1/2}(t) represents the maximal local speed of propagation, which in the generic case could be

a_{i+1/2}(t) = max{ρ(∂f/∂w(w⁻_{i+1/2})), ρ(∂f/∂w(w⁺_{i+1/2}))},

where ρ(·) denotes the spectral radius of the flux Jacobian. To achieve second-order accuracy in time, a second-order TVD RK method is applied to Eq (21). For simplicity, if the right-hand side of Eq (21) is denoted by L(w), then the two-stage TVD RK method to update w is given as

w⁽¹⁾ = wⁿ + Δt L(wⁿ),
wⁿ⁺¹ = [wⁿ + w⁽¹⁾ + Δt L(w⁽¹⁾)]/2,

where wⁿ is the solution at the previous time step and wⁿ⁺¹ is the updated solution at the next time step. Moreover, Δt represents the time step.
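To illustrate the reconstruction-flux-RK2 machinery described above, here is a compact sketch of the same central scheme applied to the scalar Burgers equation. It is an illustrative reduction, not the authors' two-phase solver; the grid, boundary treatment, and initial data are arbitrary choices.

```python
import numpy as np

def minmod(a, b):
    """Min-mod limiter: zero on a sign change, otherwise the smaller slope."""
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def f(w):                       # Burgers flux, standing in for the system flux
    return 0.5 * w * w

def L(w, dx):
    """Semi-discrete central (Kurganov-Tadmor type) right-hand side."""
    wp, wm = np.roll(w, -1), np.roll(w, 1)      # periodic boundaries for simplicity
    wx = minmod(w - wm, wp - w) / dx            # limited slopes
    # Interface values at x_{i+1/2}: from the left (-) and from the right (+)
    w_minus = w + 0.5 * dx * wx
    w_plus = np.roll(w - 0.5 * dx * wx, -1)
    a = np.maximum(np.abs(w_minus), np.abs(w_plus))   # local speed |f'(w)| = |w|
    H = 0.5 * (f(w_minus) + f(w_plus)) - 0.5 * a * (w_plus - w_minus)
    return -(H - np.roll(H, 1)) / dx            # -(H_{i+1/2} - H_{i-1/2}) / dx

# Grid, initial data, and TVD-RK2 time stepping
N = 200
dx = 1.0 / N
x = (np.arange(N) + 0.5) * dx
w = np.sin(2 * np.pi * x)
t, T = 0.0, 0.3
while t < T:
    dt = min(0.4 * dx / max(np.abs(w).max(), 1e-12), T - t)
    w1 = w + dt * L(w, dx)                      # first stage
    w = 0.5 * w + 0.5 * (w1 + dt * L(w1, dx))   # second (TVD RK2) stage
    t += dt
```

For the two-phase system, w becomes the five-component state and the scalar local speed is replaced by the spectral radius of the flux Jacobian, as in the formula above.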
Two-dimensional two-fluid flow model
In two-dimensional space, the two-fluid flow model can be written as [5]

w_t + f(w)_x + g(w)_y = s,  (29a)

where

w = (ρ, ρu, ρv, ρE, ρ₁α, ρ₁E₁α)^T,  (29b)
f(w) = (ρu, ρu² + p, ρuv, ρuE + pu, ρ₁uα, ρ₁E₁uα + puα)^T,  (29c)
g(w) = (ρv, ρuv, ρv² + p, ρvE + pv, ρ₁vα, ρ₁E₁vα + pvα)^T.

Here, w represents the vector of conservative variables, f and g are the vectors of fluxes in the x and y directions, and s is a vector of source terms whose only non-zero entry is the last one. The term s₆ represents the total rate of energy exchange per unit volume between fluid 1 and fluid 2 and is equal to the sum of the rates of mechanical (s₆ᴹ) and thermodynamic (s₆ᵀ) work [5], i.e., s₆ = s₆ᴹ + s₆ᵀ. With velocity components u and v, the total energies of fluids 1 and 2 are given as Eᵢ = eᵢ + (u² + v²)/2. Since the energy equation is direction-independent, the procedure for calculating the primitive variables is the same for the one- and two-dimensional problems. In two-dimensional space, the primitive variables are recovered in the same manner: using Eqs (29), (32) and (33), we obtain them in Eqs (34) and (35), where wᵢ, i = 1, ..., 6, represent the components of w, the vector of conserved variables. In Eq (35) the positive sign is chosen for (π₂γ₂ − π₁γ₁) > 0 and the negative sign otherwise.
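Since the recovery of the primitive variables rests on the stiffened equation of state, a small sketch of the corresponding helper functions may be useful. The material constants below are arbitrary placeholders, not values from the paper.

```python
import numpy as np

# Stiffened-gas material constants gamma_i, pi_i for fluids 1 and 2
# (placeholder values for illustration only).
gamma = np.array([1.4, 4.4])
pi_ = np.array([0.0, 6.0e8])

def internal_energy(p, rho_i, i):
    """e_i from the stiffened EOS p = (gamma - 1) rho e - gamma pi."""
    return (p + gamma[i] * pi_[i]) / ((gamma[i] - 1.0) * rho_i)

def sound_speed(p, rho_i, i):
    """c_i = sqrt(gamma (p + pi) / rho) for the stiffened gas."""
    return np.sqrt(gamma[i] * (p + pi_[i]) / rho_i)

def mixture_density(rho1, rho2, alpha):
    """Bulk density rho = alpha rho1 + (1 - alpha) rho2."""
    return alpha * rho1 + (1.0 - alpha) * rho2
```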
Two-dimensional central upwind scheme
In this section, the central upwind scheme [11] is extended to the two-dimensional five-equation two-phase flow model Eq (29). To implement the scheme, we first need to discretize the computational domain.
Let N_x and N_y be the numbers of cells in the x and y directions, respectively. We consider a Cartesian grid on the rectangular domain [x₀, x_max] × [y₀, y_max], covered by the cells

C_{i,j} = [x_{i−1/2}, x_{i+1/2}] × [y_{j−1/2}, y_{j+1/2}].

Here, the representative coordinates in a cell C_{i,j} are denoted by (x_i, y_j), and Δx and Δy denote the uniform cell widths. The cell average values of the conservative variables at any time t are given as

w̄_{i,j}(t) = (1/(ΔxΔy)) ∫∫_{C_{i,j}} w(x, y, t) dx dy.

The following piecewise-linear interpolant is constructed:

w̃(x, y, t) = Σ_{i,j} [w̄_{i,j} + (w_x)_{i,j}(x − x_i) + (w_y)_{i,j}(y − y_j)] χ_{i,j}(x, y).

Here, χ_{i,j} is the characteristic function corresponding to the cell C_{i,j}, and (w_x)_{i,j} and (w_y)_{i,j} are the approximations of the x- and y-derivatives of w at the cell centers (x_i, y_j). In the two-dimensional case, a generalized MM limiter is used to compute these derivatives, and the intermediate interface values are defined analogously to the one-dimensional case. For details and the complete derivation of the scheme, see [11].
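A minimal sketch of the dimension-by-dimension slope computation with the generalized min-mod limiter (parameter θ ∈ [1, 2]) follows; it is our own illustration on a periodic grid, not the paper's code.

```python
import numpy as np

def minmod3(a, b, c):
    """Generalized min-mod of three arguments (zero on any sign disagreement)."""
    s = (np.sign(a) + np.sign(b) + np.sign(c)) / 3.0
    mag = np.minimum(np.abs(a), np.minimum(np.abs(b), np.abs(c)))
    return np.where(np.abs(s) == 1, s * mag, 0.0)

def slopes_2d(w, dx, dy, theta=1.3):
    """Limited x- and y-slopes of the cell averages w[i, j]."""
    wx = minmod3(theta * (w - np.roll(w, 1, axis=0)) / dx,
                 (np.roll(w, -1, axis=0) - np.roll(w, 1, axis=0)) / (2 * dx),
                 theta * (np.roll(w, -1, axis=0) - w) / dx)
    wy = minmod3(theta * (w - np.roll(w, 1, axis=1)) / dy,
                 (np.roll(w, -1, axis=1) - np.roll(w, 1, axis=1)) / (2 * dy),
                 theta * (np.roll(w, -1, axis=1) - w) / dy)
    return wx, wy
```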
Numerical test problems
This section presents some numerical test problems (both one- and two-dimensional) to check the capability of the central upwind and KFVS schemes for the compressible two-phase reduced five-equation flow model. The results are compared with those obtained from the central scheme [18].
One-dimensional test problems
In this section six one-dimensional test problems are considered to verify the efficiency and accuracy of the proposed schemes.
Sod's problem. Sod's problem [6] is a well-known test problem in single-phase gas dynamics. In this problem, the gases are separated by a very thin membrane placed at x = 0.5 and are initially at rest. The left-side gas has a higher density and pressure than the right-side gas. After the membrane is removed, the gases evolve in time. Here, γ_L = 1.4 and γ_R = 1.2, π_L = 0 = π_R, and CFL = 0.5. This problem is also considered in [14]. It is a hard test problem for a numerical scheme. In the solution we can see a left-moving rarefaction wave, a contact discontinuity, and a right-moving shock wave. The right-moving shock hits the interface at x = 0.5. The shock continues to move towards the right, and a rarefaction wave is created which moves towards the left. The results are simulated on 400 mesh cells, and the final simulation time is taken as t = 0.012. The solutions are presented in Fig 3. All the schemes give comparable results. However, from the zoomed graph it can be noted that the KFVS scheme gives a better resolution of the peaks and discontinuities.
No-reflection problem. For this problem, the ratios of specific heats are γ_L = 1.667 and γ_R = 1.2, with π_L = 0 = π_R and CFL = 0.4. We discretize the computational domain [0, 1] into 500 mesh cells, and the final simulation time is t = 0.02. This is a hard test problem for a numerical scheme due to the large jumps in pressure at the interface. The choice of the pressure and velocity jump over the shock prevents the creation of a reflected wave; therefore, a shock wave moves to the right. The results are depicted in Fig 4. Wiggles can be seen in the velocity and pressure plots of all schemes, representing small waves that are reflected to the left. However, unlike real velocity and pressure oscillations, these wiggles diminish on refined meshes. Similar wiggles are also reported in the results of [5].
Water-air mixture problem. This one-dimensional problem corresponds to the water-air mixture of [5,20], from which the initial data are taken. Although the initial composition of the mixture is constant, it evolves in space and time. It can be observed that the three schemes give comparable results. Moreover, our results are in good agreement with the results in [20].

Water-air mixture problem. Again a one-dimensional water-air mixture problem [5,20] is considered. However, this problem differs from the previous one by allowing changes in the mixture composition. The ratios of specific heats are γ_L = 1.4 and γ_R = 1.6. We have chosen 200 mesh cells, and the final simulation time is t = 0.1. This problem is a contact discontinuity at a water-air density ratio. The numerical results are shown in Fig 7. The same problem was also considered in [5,6]. In this problem, both the pressure and the velocity are uniform; therefore, the interface moves to the right with uniform speed and pressure. The numerical results show that the KFVS and central upwind schemes capture the moving contact discontinuity comparably well.
Two-dimensional test problems
To check the performance of the proposed numerical scheme in two-dimensional space, we considered two test problems. In these problems, the impact of a shock in air on a bubble of a lighter and a heavier gas is studied. Haas and Sturtevant [21] first investigated these problems. Later, Quirk and Karni [22], Marquina and Mulet [23], Kreeft and Koren [5], and Wackers and Koren [6] also discussed these test cases. A schematic computational setup for these two problems is sketched in Fig 8. A shock tube of length 4.5 and width 0.89 is considered. The top and bottom walls of the tube are solid reflecting walls, while both ends of the tube are open. A cylinder with very thin cellular walls, filled with gas, is placed inside the tube. A shock wave is generated at the right end of the shock tube and moves from right to left. After being hit by the shock, the walls of the cylinder rupture and the shock interacts with the gas inside the cylinder. Because the interaction is fast, the two gases do not mix in large amounts, leading to a two-fluid flow problem. As the shock approaches the surface of the bubble, a reflected shock is generated from the surface of the bubble, which moves back to the right in the air. At later times, this interaction becomes more and more complicated. The shock continues to move towards the right in the air after passing through the bubble and produces secondary reflected waves in the bubble when it hits the surface of the bubble. The wave patterns generated by the interaction depend strongly on the density of the gas inside the bubble. However, some of the waves can be observed in almost all cases [5,6]. Here, a light helium gas and a heavy R22 gas are considered inside the cylindrical bubble.
Helium bubble. In this problem, we study the interaction of an Ms = 1.22 planar shock, moving in air, with a cylindrical helium bubble contaminated with 28% air. The positions of the key features occurring during the time evolution are well explained in [5,6,23]; therefore, we omit discussion of these features. The computational domain is discretized into 800 × 200 mesh cells. The contours of density, pressure, and volume fraction are depicted in Figs 9, 10 and 11 at times 0.25, 0.30, 0.35, and 0.40. These results agree closely with the plots given in [5,6,21,22] at times 32 μs, 52 μs, 62 μs, and 82 μs. In Figs 10 and 11 the contours of pressure and volume fraction show a perfect splitting of the pressure waves and the interface. The shocks and the interface remain sharp during the simulation. As observed in [6], the last interface slowly bends inwards in Fig 11. This phenomenon continues at later times until the bubble splits into two vortices. The comparison between the KFVS and central schemes can be clearly observed in Fig 12.

R22 bubble. Here, the same Ms = 1.22 planar shock moving in air hits a cylindrical R22 bubble, which has a higher density and a lower ratio of specific heats than air. This results in an approximately two times lower speed of sound. For more details, the reader is referred to [5,6]. The computational domain is discretized into 800 × 200 mesh cells. Due to the lower speed of sound, the shock in the bubble and the refracted shock lag behind the incoming shock. Moreover, due to the circular shape of the bubble, the refracted, reflected, and incident shock waves are curved. The results for density, pressure, and volume fraction are displayed in Figs 13, 14 and 15 at times 0.35, 0.60, 0.70, 0.84, 1.085, and 1.26. These results show good agreement with the results of [5,6,21,22] at times 55 μs, 115 μs, 135 μs, 187 μs, 247 μs, and 318 μs. The splitting of the pressure and the interface is observed in the flow pattern of the density contours. Moreover, no wiggles are visible in our results, and the pressure is continuous over the interface. Hence, the numerical results of our scheme reflect all the key features explained in [5,6,21].
Conclusions
A central upwind finite volume scheme was extended to solve the compressible two-phase reduced five-equation model in one- and two-dimensional space. The suggested scheme is based on the estimation of cell averages by using information on the local propagation speeds. In two-dimensional space the scheme is implemented in the usual dimensionally split manner. The non-differential part of the source terms is approximated by the cell-averaged values, whereas the differential terms are approximated similarly to the convective fluxes. To preserve the positivity of the scheme, a min-mod non-linear limiter is used. To achieve second-order accuracy in time, a TVD Runge-Kutta method is utilized. For validation, the results of the proposed numerical scheme were compared qualitatively and quantitatively with those of the KFVS and staggered central schemes. Good agreement was observed among the results of all three schemes. It was found that in some test problems the central upwind scheme produced more accurate results, while the KFVS scheme performed well in others; perhaps this is because both schemes are upwind biased. The staggered central scheme was found to be diffusive in all test problems.
"Physics"
] |
Structure and Bonding of Trace Ni Catalyst in Carbon Nanotube Studied by Ni K-Edge XANES
We carry out Ni K-edge X-ray Absorption Near Edge Structure (XANES) analyses to study the local electronic and geometric structure around Ni impurities in carbon nanotubes (CNTs) after HCl treatment by applying full multiple scattering calculations. For the trace Ni species in CNTs after the treatment, we consider possible models consistent with the Ni-C distance and coordination number estimated by previous Extended X-ray Absorption Fine Structure (EXAFS) analyses. The present analyses allow us to distinguish between two defect models for the Ni location: a crack-like defect and a Stone-Wales defect. We also find that the curvature of the CNTs affects the calculated XANES spectra, which can provide useful information about outer or inner adsorption on CNT walls. An ab initio density functional calculation supports the presence of Ni atoms on the outside of the nanotube. [DOI: 10.1380/ejssnt.2005.427]
I. INTRODUCTION
Carbon nanotubes (CNTs) and carbon nanofibers (CNFs) can be applied to electronic devices, hydrogen reservoirs, medical uses, and so on. There are several preparation methods for producing CNTs/CNFs, and metal catalysts such as Ni are widely used. Although most of the Ni in the CNTs/CNFs can be removed by an acid (e.g., HCl) treatment, a small amount of Ni impurities is left in the CNTs/CNFs. Therefore, even a small amount of toxic Ni may be a serious problem for the use of CNTs/CNFs as biomedical materials. Information on the structure and the chemical state of the trace Ni species is important for the medical application of CNTs/CNFs. However, we have had no definite conclusion on the location of the metal atoms due to the lack of direct experimental evidence about the local structure around the Ni impurities.
Recently, Asakura et al. reported the first Extended X-ray Absorption Fine Structure (EXAFS) and X-ray Absorption Near Edge Structure (XANES) spectra of Ni species in CNFs before and after HCl treatment. [1] Before the HCl treatment, the XANES spectrum is quite similar to that of Ni foil, whereas the spectrum after the treatment has two specific peaks in the absorption edge region, suggesting that the Ni impurities (a few hundred ppm) are neither metallic Ni particles nor simple Ni oxides.
On the other hand, there is some theoretical work on the interaction of 3d transition metal atoms and dimers with ideal (defect-free) single-walled CNTs. A density functional calculation shows that outside adsorption sites are most favorable for Ni-doped (4,4) nanotubes, at atop sites with a Ni-C distance of 1.87 Å. [2] A different approach, using the pseudopotential plane wave method, has predicted a rather large binding energy for Ni coating on CNTs. [3] These two studies concern Ni adsorption on ideal (defect-free) CNTs. Andriotis et al. investigated the catalytic action of Ni atoms in the growth of single-walled CNTs using tight-binding molecular dynamics in conjunction with ab initio total energy calculations. [4] Their simulations favor the Ni atom acting initially as a defect stabilizer and subsequently diffusing to defect positions on the exterior ring.
XANES spectra provide longer-range information (typically ≈5 Å) than EXAFS spectra, as well as stereochemical information. [5,6] EXAFS analyses are rather easy compared with XANES analyses. The XANES, however, has an important advantage over the EXAFS: even weak electron scatterers such as carbon still provide useful information in XANES analyses. [6] The main purpose of this paper is to obtain useful information on the local structure and bonding of trace Ni species in carbon nanotubes by full multiple scattering XANES analyses [7][8][9][10][11] and ab initio molecular orbital theory. The molecular orbital theory gives us useful information about the electronic structure and the optimized structure of a Ni atom trapped at a crack-like defect site of the graphene sheet.
II. THEORY
The XANES theory used in this paper is based on the short-range-order full multiple scattering theory proposed by Fujikawa et al. [7] Later, this theory was modified by a partitioning technique in order to reduce the computation time. [8][9][10][11] Here, we summarize the theoretical methods.
The X-ray absorption intensity σ from the core orbital φ_c(r) = R_{lc}(r)Y_{Lc}(r̂), L_c = (l_c, m_c), at site A (the X-ray absorbing atom) is given by Eq. (1) for photoelectron kinetic energy ε_k = k²/2. We assume excitation by a linearly polarized X-ray in the z-direction. [8] In Eq. (1), G(LL′|L″) is Gaunt's integral and ρ_c(l) is the radial dipole integral between the radial part R_{lc}(r) of φ_c(r) and the l-th partial wave R_l(r) of the photoelectron at site A. The phase shift of the l-th partial wave at site A is represented by δ^A_l. We introduce the matrix X, specified by the site indices and angular momenta, defined as

X^{αβ}_{LL′} = t^α_l G_{LL′}(R_α − R_β)(1 − δ_{αβ}),

where t^α_l and G_{LL′} represent the T-matrix at site α and the Green's function in an angular momentum representation. The inverse matrix (1 − X)⁻¹ includes an infinite order of the full multiple scattering inside the cluster we are considering. The phase shift in t^α_l (= −[exp(2iδ^α_l) − 1]/2ik) is one of the most important features and reflects the electronic structure of the surrounding atoms; it is calculated within the Hartree-Fock approximation. The Green's function G_{LL′} reflects the geometrical structure. The clusters used in the present work include all surrounding atoms up to about 7 Å around the X-ray absorbing atom, except for the carbide model (up to about 5 Å).
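The central computational step, inverting 1 − X over all sites and angular-momentum channels, can be sketched in a few lines of linear algebra. The block below is a schematic illustration with placeholder t-matrices and propagator blocks; it does not implement the actual Hartree-Fock phase shifts or the free-electron Green's function used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_sites, lmax = 10, 2
n_ang = (lmax + 1) ** 2            # angular momentum channels (l, m) per site
dim = n_sites * n_ang

# Placeholder diagonal t-matrix elements: t_l = -(exp(2i*delta_l) - 1) / (2ik)
k = 1.5                                              # photoelectron wavenumber (placeholder)
delta = rng.uniform(-0.3, 0.3, (n_sites, n_ang))     # stand-in phase shifts
t = -(np.exp(2j * delta) - 1) / (2j * k)

# Placeholder site-off-diagonal propagator blocks G_{LL'}(R_alpha - R_beta)
G = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
for a in range(n_sites):                             # zero the site-diagonal blocks
    G[a * n_ang:(a + 1) * n_ang, a * n_ang:(a + 1) * n_ang] = 0

# X = t G; full multiple scattering is resummed by (1 - X)^{-1} = 1 + X + X^2 + ...
X = np.diag(t.ravel()) @ G
tau = np.linalg.solve(np.eye(dim) - X, np.eye(dim))
print(tau.shape)
```

In a real calculation the matrix dimension grows with the cluster size (all atoms within about 7 Å here), which is why the partitioning technique mentioned above is used to reduce the computation time.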
A. Multiple Scattering Analyses of Ni K-Edge XANES
At first we study the Ni adsorption structure on CNTs by use of the multiple scattering analyses. So far no experimental XANES for Ni impurities in CNTs has been reported, but spectra for Ni impurities in CNFs have been. [1,12] As shown later, the calculated XANES spectra for CNFs and CNTs are very similar, so that we can safely refer to the XANES for the Ni impurities in CNFs to investigate the local structure around Ni impurities in CNTs. Figure 1 shows the K-edge XANES spectra of Ni species in the CNFs before and after the HCl treatment, together with that of Ni foil. [1] The XANES spectrum after the treatment shows a marked difference from that before the treatment, which is quite similar to that of Ni foil. Having two specific peaks in the absorption edge region, the spectrum after the treatment suggests that the trace Ni species are different from metallic Ni particles or simple Ni oxides. [1] In addition, these characteristic peaks serve as a fingerprint for the XANES spectra calculated using multiple scattering theory. We investigate the following possible models for the structure of Ni in CNF and CNT.
• an edge model
• a substitution model
• a Stone-Wales model
In these calculations, we use a flat graphene sheet for simplicity.
1. an edge model
Sharp has applied organometallic chemistry at the edge of polycyclic aromatic hydrocarbons to the step-growth synthesis of single-walled CNTs. [13] Ni atoms can form covalent bonds with carbon atoms at the edges of graphene sheets because these atoms have dangling bonds. We study the edge model shown in Fig. 2(a), where the Ni-C distance is 1.8 Å with coordination number 1. The bond distance is consistent with the EXAFS result of 1.83 ± 0.05 Å, though the coordination number is much smaller than the observed one, 2.4 ± 0.8. [1] The calculated Ni K-edge XANES spectrum is shown in Fig. 3(a), compared with the experimental spectrum (broken line). The calculated result shows shoulders at 7 and 16 eV that are too small compared with the specific peaks observed after the treatment, and too rapid a decrease above 30 eV: this model fails to explain the observed features. We thus need not consider this model further.
2. defect models
Ni can sit at a defect site in a graphene sheet. Meng et al. applied Hartree-Fock calculations using an approximate exchange potential. [14] Their results show a strong attractive interaction and bonding with CNTs due to the unfilled 3d shell of transition metals. A different theoretical work also supports Ni adsorption at defect sites. [4] We thus investigate two types of defects: crack-like defects and Stone-Wales defects.
First we consider two crack-like defect models, the substitution models shown in Figs. 2(b) and (c). In model (b), one Ni atom forms Ni-C covalent bonds with a Ni-C distance of 1.8 Å and coordination number 3. In model (c), two Ni atoms bind to C atoms at crack-like defect sites of a graphene sheet with the same distance and coordination number 2, and the Ni-Ni distance is 2.5 Å with coordination number 1. Our recent EXAFS analyses show that the coordination number of C around Ni is 2.5. [12] Therefore the first model is not in contradiction with these results, but the latter can be ruled out as shown below. Figures 3(b) and (c) show the calculated XANES spectra for the substitution models shown in Figs. 2(b) and (c), compared with the experimental spectrum after the treatment (broken line). Although model (b) explains the two characteristic peaks in the experimental data well, the dimer model (c) gives too small a peak at ∼7 eV and too rapid a decrease above 30 eV in comparison with the monomer model (b). The peak at 7 eV is located just at the beginning of the edge rise and should have a contribution from an atomic bound state that cannot be fully taken into account by the present method. We thus expect that model (b) is a good candidate for the trace Ni species.
Some CNTs are composed of multiple layers with an interlayer distance of 3.4 Å. [15,16] We add another graphene sheet under the model (b) shown in Fig. 2. This model gives a XANES spectrum quite similar to Fig. 3(b): the next layer has a negligible effect on the calculated spectrum.
Another type of defect, the Stone-Wales defect, has a pair of 5-7 rings, which can be created by rotating a C-C bond in the hexagonal network by 90°. [17] Recent molecular orbital calculations show that the introduction of a Stone-Wales defect would only benefit the adsorption capacity of B, N, F, and Si among 10 foreign atoms (H, B, C, N, O, F, Si, P, Li, and Na). [18] This result suggests that the Stone-Wales defects could be studied as the Ni adsorption site. We calculate Ni K-edge XANES spectra for four Stone-Wales models where one or two Ni atoms are adsorbed on the 5 or 7 rings of the Stone-Wales defect site. These models, however, fail to explain the two peaks at 7 and 16 eV because their calculated spectra are structureless up to about 25 eV. We thus can rule out the Stone-Wales models. [12]
3. influence of detailed CNT structures
CNTs can be classified by the way the graphene sheet is rolled up. In the graphene honeycomb lattice, the unit cell is spanned by the two vectors a₁ and a₂ and contains two carbon atoms at the positions (1/3)(a₁ + a₂) and (2/3)(a₁ + a₂), where the basis vectors of length |a₁| = |a₂| = 2.46 Å form an angle of 60°. A graphene lattice vector c = n₁a₁ + n₂a₂ becomes the circumference of the tube, and it is usually denoted by the pair of integers (n₁, n₂), called the chiral vector. In particular, (n, n) tubes are called armchair tubes and (n, 0) tubes are called zig-zag tubes; tubes of type (n, m) are called chiral tubes. [15] We thus investigate the substitution model adsorbed on the outside of the CNT (see Fig. 5(b)). Figure 4 shows the calculated XANES spectra for three different models depending on three different structures of CNTs with 14 Å diameter: (a) armchair, (b) zig-zag, and (c) chiral models. They are compared with the spectrum in Fig. 3(b) for the flat model shown in Fig. 2(b). Comparing the three spectra, we see that the differences in tube structure have a very small influence on the calculated XANES spectra. The calculated XANES spectrum for the flat model shows only a small difference from those for the tube models. This may be because of the similar local structure around the Ni atom. As the overall XANES is not so sensitive to the curvature, the XANES for CNF and CNT can show similar spectra. Detailed analyses, however, can provide information on outer or inner adsorption, as discussed below. We also check how the curvature of CNTs influences the XANES spectra by using the multiple scattering theory. We thus postulate the CNT models with 14 Å diameter shown in Figs. 5(a) and (b). In model (a), one Ni atom is inside the tube; in model (b), outside. The calculated Ni K-edge XANES spectra are shown in Fig. 5(c). The "inside model" gives weaker peaks at 7 and 16 eV than the "outside model". This may be because the different distances to next-neighbor carbon atoms in the two models affect the XANES spectra. This result is quite interesting because no other experimental tool can provide such detailed information.
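As a quick check of the geometry just described, the diameter of an (n₁, n₂) tube follows directly from the length of the chiral vector; the helper below is our own illustration, not code from the paper.

```python
import numpy as np

A = 2.46  # graphene lattice constant |a1| = |a2| in angstroms

def tube_diameter(n1: int, n2: int) -> float:
    """Diameter of an (n1, n2) nanotube from the chiral vector length.

    With a 60-degree angle between a1 and a2, the circumference is
    |c| = A * sqrt(n1^2 + n1*n2 + n2^2), so the diameter is |c| / pi.
    """
    circumference = A * np.sqrt(n1**2 + n1 * n2 + n2**2)
    return circumference / np.pi

# An armchair (10, 10) tube comes out near 13.6 angstroms, close to the
# roughly 14-angstrom tubes considered in the text.
print(tube_diameter(10, 10))
```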
B. Molecular Orbital Analyses
In order to obtain useful information about the electronic structures, we apply the ab initio molecular orbital theory to these Ni adsorption models.
First, we optimize the structure of the substitution model by using the Gaussian 03 code [19] for a NiC₄₁H₁₆ cluster. In order to calculate the optimized structure, we use a density functional theory (DFT) method employing the B3LYP-type exchange-correlation potential. The calculation is performed with the LanL2MB basis set. The optimized structure of the Ni adsorption is obtained by using the initial flat (CNF) structure shown in Fig. 2(b). The optimized structure is shown in Fig. 6, where the Ni occupies the outside of the curved CNT sheet. This result supports the "outside model" shown in Fig. 5(b). The Ni-C bonds are 1.85 Å, which is close to the Ni-C covalent bond length (1.85 Å) of Ni(CO)₄ obtained by the same calculation method and also close to the observed Ni-C bond length of Ni(CO)₄ (1.82 Å). [20] Therefore, the Ni impurity is strongly bound to the sheet by covalent bonds. This result is consistent with the result of the EXAFS analyses. [1] Second, we study the bonding character of the Ni-C bonds by using natural population analysis. [21] The natural atomic-orbital occupancies are tabulated in Table 4. The bond order between the C atoms bound to the Ni atom and the next-neighbor C atoms is 1.3, a little smaller than the 1.5 in graphene sheets, which is related to the distortion from the flat carbon arrangement in graphene sheets. More detailed analyses will be given in a forthcoming paper.

FIG. 6: The optimized structure of the substitution model, where a Ni atom is in a crack-like defect site of a graphene sheet, calculated by the DFT method. [19] The Ni atom occupies the outside of the CNT with a Ni-C distance of 1.85 Å. In this figure the black ball is the Ni atom and the white (blue) balls are carbon (hydrogen) atoms.
IV. CONCLUSION
In this paper, we calculated XANES spectra to determine the local structure of trace Ni species in CNTs. The substitution model well explains the two specific peaks observed in the absorption edge region after the HCl treatment. All other models fail to predict the observed features in the XANES spectrum after the treatment. More detailed analyses show that the curvature of a graphene sheet affects the XANES spectra, and the XANES analyses can provide useful information on whether Ni adsorbs inside or outside a CNT.
We also study the geometric and electronic structures of the substitution model by using ab initio molecular orbital theory. The optimized structure favors the "outside model" of Ni adsorption on CNTs. The present MO calculation shows that the Ni and C atoms are almost neutral in the substitution model. This work demonstrates the remarkable usefulness of the XAFS (XANES + EXAFS) study combined with multiple scattering calculations for very dilute systems (a few hundred ppm of Ni in graphene sheets). Other approaches would presumably find it difficult to provide such detailed structural information around dilute Ni impurities.
"Physics"
] |
The Multidirectional Effect of Azelastine Hydrochloride on Cervical Cancer Cells
A major cause of cancer cell resistance to chemotherapeutics is the blocking of apoptosis and induction of autophagy in the context of cell adaptation and survival. Therefore, new compounds are being sought, also among drugs that are commonly used in other therapies. Due to the involvement of histamine in the regulation of processes occurring during the development of many types of cancer, antihistamines are now receiving special attention. Our study concerned the identification of new mechanisms of action of azelastine hydrochloride, used in antiallergic treatment. The study was performed on HeLa cells treated with different concentrations of azelastine (15–90 µM). The cell cycle, the level of autophagy (LC3 protein activity) and apoptosis (annexin V assay), the activity of caspase 3/7 and of the anti-apoptotic Bcl-2 family protein, the ROS concentration, the mitochondrial membrane potential (Δψm), and the level of phosphorylated H2A.X in response to DSB were evaluated by cytometric methods. Cellular changes were also demonstrated by transmission electron microscopy and by optical and fluorescence microscopy. Lysosomal enzyme activities (cathepsins D and L) and cell viability (MTT assay) were assessed spectrophotometrically. Results: Azelastine at concentrations of 15–25 µM induced degradation processes, vacuolization, an increase in cathepsin D and L activity, and LC3 protein activation. By increasing ROS, it also caused DNA damage and blocked cells in the S phase of the cell cycle. At concentrations of 45–90 µM, azelastine clearly promoted apoptosis by activation of caspase 3/7 and inactivation of the Bcl-2 protein. Fragmentation of the cell nucleus was confirmed by DAPI staining. Changes were also found in the endoplasmic reticulum and mitochondria, whose damage was confirmed by staining with rhodamine 123 and in the MTT test. Azelastine decreased the mitotic index and induced mitotic catastrophe. The studies demonstrated the multidirectional effects of azelastine on HeLa cells, including anti-proliferative, cytotoxic, autophagic, and apoptotic properties, the last of which was the predominant mechanism of death. The revealed novel properties of azelastine may be practically used in anti-cancer therapy in the future.
Introduction
Drug resistance is a major problem in most advanced cancers [1,2]. The biggest obstacle in cancer chemotherapy, including the treatment of cervical cancer, is resistance to cisplatin, among others, resulting from the induction of autophagy and inhibition of tumor cell apoptosis [3,4]. The process of programmed cell death can also be inhibited during oncogenesis. Cancer cells with multiple genetic and epigenetic alterations avoid apoptosis, which is initially triggered by the transformation process itself and then by the unfavorable tumor environment and the implemented therapy [5,6]. The resulting limitations in cancer therapy contribute to increased mortality. Therefore, a recent trend in worldwide research is the search for alternative treatments, including ones that induce other types of cell death, especially among compounds already used in other therapies [5,[7][8][9]; one example is the antihistamines (AHs). Antihistamines (AHs), due to their proven strong anti-inflammatory and anti-allergenic properties, are widely used worldwide as first-line drugs in the treatment of numerous allergic diseases [10]. Their mechanism of action involves stabilization of the inactive form of the histamine H1 receptor, thereby blocking the action of histamine [11][12][13], which, as a major mediator of the inflammatory response, not only underlies many allergic diseases [14], but is also directly involved in the regulation of biological processes during the development of various types of cancer, including cervical cancer [1,15,16]. Hence, in recent years, attention has been focused on the potential antitumor properties of antihistamines, both among the long-used and the new second-generation representatives. Compounds have been identified that, alone or in combination with other drugs, show significant activity against various types of cancer cells, confirmed both in vitro and in clinical trials. An example is astemizole, a second-generation drug that has been described as an inhibitor of hepatocellular carcinoma proliferation [17] as well as an inducer of apoptotic death in various human melanoma cell lines [18,19]. In the case of terfenadine, the ability to induce apoptosis in prostate cancer cells has been demonstrated [20]. In turn, newer representatives of AHs, more often used in practice, i.e., loratadine and its active metabolite desloratadine, improve survival in breast cancer [21][22][23] and skin melanoma [24]. Additionally, desloratadine has the ability to induce apoptosis of T-cell lymphoma cells [25], and loratadine interferes with the cell cycle progression of human colon cancer cells by increasing their sensitivity to radiation [26], and improves survival in ovarian cancer [7].
Azelastine hydrochloride (a phthalazinone derivative) is commonly used especially in the topical treatment of respiratory diseases, i.e., in allergic rhinitis (also in the course of asthma and COPD), vasomotor rhinitis, and as part of the prophylaxis and therapy of allergic conjunctivitis [27][28][29][30]. Furthermore, recent in vitro studies have demonstrated the ability of this compound to prevent and inhibit SARS-CoV-2 infection in nasal tissue [31]. Azelastine was also included in the list of compounds that exhibit lysosomotropic properties and have the ability to accumulate in the lungs when administered systemically, which creates the potential to achieve an effective drug concentration and was therefore recommended for use in patients with SARS-CoV-2 [32]. It should be emphasized that azelastine is a new representative of the second generation of H1 receptor antagonists, characterized by a different chemical structure than other preparations from this group [33] and high selectivity to the receptor, and thus low risk of side effects and very good tolerance both in adults and children [34][35][36][37][38]. It was also found that this group has an equivalent or faster onset of action compared to the first generation AHs [39]. Numerous scientific studies confirm that the biological properties of H1 receptor antagonists, including azelastine, also result from the possibility of non-receptor activity [13,[40][41][42], which offers a broad perspective for the discovery of new therapeutic properties of these compounds.
In recent years, azelastine has also been tested for anti-inflammatory [43], antibacterial [44], and antiparasitic properties [45]. In turn, little attention has been paid to research into the potential anticancer mechanisms of this compound. So far, the property of azelastine to induce apoptosis in human colorectal adenocarcinoma cells (HT-29 line) has been described, where the tested compound at concentrations of 10 µM-20 µM, independently of the receptor, decreased the expression of Bcl-2 protein and caused significant changes in mitochondria [46]. In another study [47], azelastine at a concentration of 5 µM sensitized KBV20C cells to the effects of vincristine (in combination administration), leading to decreased cell viability, arrest in G2 phase, and increased apoptosis. The results of the cited studies inspired the present study.
Therefore, due to the well-known resistance of HeLa cells to chemotherapy, which manifests itself by induction of autophagy and blockade of apoptosis, we decided to study the changes occurring in these cells under the influence of azelastine hydrochloride in the context of induction of apoptosis as well as other types of cell death as potential anticancer mechanisms of action of this compound.
Azelastine Induces Apoptosis in HeLa Cells
Exposure of cells to azelastine resulted in an increase in the frequencies of cells in both early (Annexin V-PE+/7-AAD−) and late apoptosis (Annexin V-PE+/7-AAD+).
At a concentration of 15 µM, apoptotic cells constituted over 26% (p ≤ 0.0001), and at 25 µM over 34% (p ≤ 0.0001) (Figure 1A,C). Azelastine at a concentration of 45 µM further increased the number of apoptotic cells, to 60.13% (p ≤ 0.0001). The subsequent concentrations (60 µM and 90 µM) significantly increased the number of apoptotic cells to more than 93% and 98%, respectively (p ≤ 0.0001), with a clear predominance of cells with a late apoptotic phenotype. Moreover, microscopic analysis (DAPI staining) showed that azelastine induces nuclear changes typical of apoptosis, i.e., chromatin condensation and nuclear fragmentation, especially at concentrations of 60 µM and 90 µM (Figure 1E). The observed effects were dependent on the concentration of the test compound.
Azelastine Inhibits the Viability of HeLa Cells
The MTT assay showed a highly statistically significant (p ≤ 0.0001) reduction in the ability of the cells to reduce the dye compared with the control, which was taken as 100% (Figure 1D). Already at the lowest concentration of 15 µM, cell viability was 86%, and at the subsequent concentrations of 25 µM and 45 µM it decreased significantly to 68.33% and 51.33%, respectively. The lowest percentages of living cells were obtained at the highest concentrations of the test compound, i.e., 60 µM (8%) and 90 µM (about 4%). Azelastine thus inhibited mitochondrial metabolic activity in a concentration-dependent manner, which was also indicative of mitochondrial membrane damage.
Azelastine Generates ROS, Inducing Changes in Mitochondrial Structure, and Induces Endoplasmic Reticulum Stress
Compared with the control image (Figure 2(A1)), mitochondria with a lucent matrix and an irregular arrangement of mitochondrial cristae were observed already at the lowest azelastine concentration (15 µM) (Figure 2(A2)). At 25 µM, mitochondria showed significant enlargement, a highly lucent matrix, and a reduction of the mitochondrial cristae; swollen channels of the rough endoplasmic reticulum were also visible in their close proximity (Figure 2A(3,3a)). In turn, the cytoplasm of cells exposed to 45 µM azelastine (Figure 2A(4,4a)) was dominated by swollen mitochondria with a strongly lucent matrix, disorganization of the inner mitochondrial membrane, and pronounced damage to the cristae. Mitochondria were also characterized by disruption of the mitochondrial membrane, resulting in leakage of matrix contents into the cytoplasm (Figure 2(A4)). The rough endoplasmic reticulum visible in the microphotographs appeared as dilated channels (Figure 2A(4,4a)). At the subsequent azelastine concentrations of 60 µM and 90 µM (Figure 2A(5-6a)), the mitochondria were characterized by increasing structural disorganization indicating significant damage, and in their vicinity the rough endoplasmic reticulum remained in the form of strongly widened and swollen cisterns. Compared with the control group, in which ROS (+) cells constituted 3.29%, treatment of HeLa cells with azelastine resulted in concentration-dependent intracellular ROS production (Figure 2C,D). The concentrations of 15 µM and 25 µM produced a small but statistically significant increase in ROS (+) cells, to 12.4% and 24.8%, respectively (p ≤ 0.0001). Increasing the azelastine concentration to 45 µM resulted in increased generation of reactive oxygen species; ROS (+) cells accounted for more than 45% (p ≤ 0.0001). Substantial levels of ROS (+) cells were observed following azelastine treatment at concentrations of 60 µM (48.93%) and 90 µM (49.99%) (p ≤ 0.0001). The induction of reactive oxygen species generation correlated with a progressive decrease in mitochondrial membrane potential (Figure 2E,F). The lowest percentage of cells with mitochondrial membrane depolarization was found at a concentration of 15 µM (9.76%) (p ≤ 0.0001). At 25 µM and 45 µM, cells with reduced mitochondrial membrane potential constituted 14.05% and 15.89%, respectively (p ≤ 0.0001). The highest percentage of cells with mitochondrial membrane depolarization (more than 50%; p ≤ 0.0001) was found at concentrations of 60 µM and 90 µM. These results were confirmed by imaging of rhodamine 123-labeled mitochondria, which showed that with increasing azelastine concentration there is a gradual extinction of green fluorescence emission, significant in the range of 45-90 µM (Figure 2B). The azelastine-induced increase in the level of reactive oxygen species contributed to increased oxidative stress, to stress of the rough endoplasmic reticulum, and to the induction of apoptotic changes.
Figure 2. (A) Ultrastructural changes in HeLa cells exposed to azelastine: at 15 µM, mitochondria with an irregular arrangement of cristae (2); at 25 µM, enlarged mitochondria with reduced mitochondrial cristae in close proximity to the dilated channels of the rough endoplasmic reticulum (3,3a); at 45 µM, mitochondria with enhanced damage characteristics, i.e., strongly enlarged with disruption of the mitochondrial membrane (4) and damaged mitochondrial cristae (4a), and altered rough endoplasmic reticulum in the form of dilated channels (4,4a); at 60 µM (5,5a) and 90 µM (6,6a), mitochondria with severe structural disorganization indicating damage, and rough endoplasmic reticulum in their vicinity with strongly enlarged and swollen cisterns. Abbreviations: N, nucleus; M, mitochondria; AG, Golgi apparatus; RER, rough endoplasmic reticulum; AV, autophagic vacuoles; Lp, primary lysosomes; Ls, secondary lysosomes. Images were taken at 11,500× magnification. (B) Gradual, azelastine concentration-dependent loss of green fluorescence from rhodamine 123-labeled mitochondria. (C) Generation of reactive oxygen species and (D) percentage of ROS (+) cells as a result of azelastine treatment. (E) Changes in mitochondrial membrane potential (∆ψm) and (F) percentage of cells with mitochondrial membrane depolarization at different azelastine concentrations. Each sample was analyzed in triplicate. The differences were statistically confirmed at *** p < 0.001.
Azelastine Enhances Vacuolization and Apoptotic Changes in HeLa Cells-Morphological Evaluation
In cells exposed to azelastine for 48 h, a significant concentration-dependent increase in the number of cells with vacuolization changes in the cytoplasm was observed (Figure 4D). Compared with control values (16 cells), the highest numbers of cells with enhanced vacuolization were observed at 15 µM (2119 cells) and 25 µM (2010 cells) (p ≤ 0.0001). Within the vacuoles, strongly eosinophilic material destined for degradation was visible (Figure 4A(2,2a,3)). At higher concentrations of the test compound, by contrast, the vacuolization changes showed a decreasing trend (Figure 4D). A lower but equally highly statistically significant result (1282 cells) was found at a concentration of 45 µM (Figure 4A(4,4a)), while at 60 µM and 90 µM (Figure 4A(5-6a)) the lowest numbers of vacuolized cells were confirmed. It should be noted that at 45 µM (Figure 4A(4,4a)) there were cells that simultaneously showed enhanced vacuolization of the cytoplasm and a pyknotic cell nucleus with partial chromatin condensation, indicating a gradual switch from vacuolization to apoptotic changes. The presence of phagocytosed apoptotic cells was observed within the cytoplasm of living cells (Figure 4A(4a,5a)) and of cells already directed into the apoptosis pathway (Figure 4A(6,6a)), indicating induction of the efferocytosis process.
Azelastine Blocks Cells in S Phase and Reduces Mitotic Index
Cytometric analysis (Figure 4B,C) showed a statistically significant (p ≤ 0.0001) increase in the number of cells arrested in the S phase of the cell cycle, progressing with azelastine concentration. At 15 µM, these cells accounted for 34.23%; slightly higher values were obtained at 25 µM (40.64%) and 45 µM (44.47%). At 60 µM and 90 µM, there was a two-fold increase in the number of cells in this phase compared with the control (28.77%). At the same time, in the concentration range of 25-90 µM, there was a significant reduction in the number of cells in the G0/G1 phase (p ≤ 0.0001).
Comparison of cells incubated with azelastine at all concentrations used (15-90 µM) with cells from the control group (taken as 100%) showed a statistically significant (p ≤ 0.0001) decrease in the mitotic index (Figure 4E). Already at 15 µM azelastine, the dividing capacity of the cells decreased markedly, to 22%, and this was the highest value recorded for any treatment; at the other concentrations of the test compound, i.e., 25 µM, 45 µM, 60 µM, and 90 µM, the mitotic index decreased to 7%, 3%, 2%, and 1%, respectively. These changes demonstrate the antiproliferative properties of azelastine.
Azelastine Induces Mitotic Catastrophe
Morphological analysis showed that azelastine at 15 µM produced changes considered morphological markers of mitotic catastrophe (Figure 5). These included multiple abnormalities occurring during mitotic division, such as the presence of anaphase bridges (Figure 5(A2)), tripolar metaphases (Figure 5(A1)), and pentapolar anaphases (Figure 5(A3)). Azelastine also induced the formation of micronuclei (micronucleation) (Figure 5B), which were present in the highest, and statistically significant, numbers at a concentration of 15 µM (Figure 5A(3-6)). Furthermore, the data indicated clear multinucleation due to the action of the test compound (Figure 5B). The highest counts were recorded at 15 µM, with 372 binucleated cells, 132 multinucleated cells, and 23 giant cells (p ≤ 0.0001). At the next concentration, 25 µM, the counts remained high and statistically significant: 267 binucleated cells, 90 multinucleated cells, and 12 giant cells were found. At 45 µM azelastine, however, the numbers of binucleated, multinucleated, and giant cells decreased significantly to 47, 20, and 9, respectively, while at the high concentrations of 60 µM and 90 µM they were further reduced to levels below control values (Figure 5B).
Of note are the vacuolization (Figure 5C) and apoptotic (Figure 5D) changes observed simultaneously in cells with multinucleation. At low concentrations of azelastine (15 µM and 25 µM), vacuolization changes predominated over apoptotic ones, whereas at 45-90 µM, bi- and multinucleated cells were directed towards the apoptotic pathway.
The results indicate that azelastine induces mitotic catastrophe, which precedes the onset of apoptosis.
Azelastine Enhances Degradation Processes
Analysis of changes at the ultrastructural level revealed numerous autophagic vacuoles in the cytoplasm of cells treated with azelastine at a concentration of 15 µM (Figure 6A(1-1b)); the vacuoles differed in size and content, indicating different stages of degradation. In the studied cells, expanded Golgi apparatuses and dilated channels of the rough endoplasmic reticulum were present; these changes indicated intensification of the synthesis of proteins crucial for the subsequent stages of intracellular digestion. The presence of numerous mitochondria (Figure 6(A1a)) in the examined cells may result from the increased demand for ATP necessary for the macroautophagy process. Also at 25 µM (Figure 6A(2-2b)), numerous and highly enlarged autophagic vacuoles containing material at different stages of degradation were found, along with vacuoles at the formation stage (Figure 6(A2b)). In the lumen of these structures, large fragments of the cytosol with organelles were visible (Figure 6A(2,2b)), and some vacuoles appeared as empty spaces clearly demarcated from the cytoplasm (Figure 6(A2a)). Swollen mitochondria (Figure 6(A2a)), dilated channels of the rough endoplasmic reticulum, and a reduced Golgi apparatus (Figure 6(A2)), whose membranes could be used for vacuole formation, were also observed in the cells. In contrast, in cells exposed to azelastine at 45 µM (Figure 6A(3-3b)), the number of autophagic vacuoles was reduced; however, they had varied shapes and covered a large area of the cytosol. In addition, altered mitochondria and single, slightly dilated channels of the rough endoplasmic reticulum were present within the cytoplasm of these cells. When cells were treated with high concentrations of azelastine, 60 µM and 90 µM (Figure 6A(4-5b)), the presence of secondary lysosomes was clearly marked alongside altered cell nuclei with local chromatin condensation (Figure 6(A4a)) and an expanded nuclear envelope (Figure 6A(4a,5,5a)), often with features of fragmentation (Figure 6(A5b)). There were also single, damaged mitochondria (Figure 6A(5a,5b)) and dilated channels of the rough endoplasmic reticulum (Figure 6A(4-5a)). The demonstrated changes were dependent on the concentration of azelastine and indicated intensification of the degradation processes. The progressive degradation observed at high concentrations may indicate a switch of cellular metabolism, with the possibility of triggering programmed cell death.
Azelastine Activates Cathepsin D and L
As shown in the study, azelastine treatment, compared with the control (taken as 100%), resulted in concentration-dependent changes in cathepsin D and L activity (p ≤ 0.0001) (Figure 6C). The highest increases in enzyme activity, to 179.96% and 177.54%, occurred at the concentrations of 15 µM and 25 µM, respectively. At 45 µM, the enzymes' activity was 173.89%. A further increase in the concentration of the test compound, to 60 µM and 90 µM, reduced cathepsin D and L activities to 144.33% and 120.53%, respectively. The behavior of the lysosomal enzymes reflects the degradation processes activated by azelastine.
Azelastine Induces Autophagy by Increasing LC3 Protein Levels
According to the principle of the assay used, LC3 is a cytoplasmic protein involved in autophagosome formation during autophagy; it is translocated from the cytoplasm to the interior of autophagosomes, and its fluorescence is monitored cytometrically. The studies performed showed that azelastine induced autophagy in a concentration-dependent manner (Figure 6B). The highest fluorescence intensities were observed at concentrations of 15 µM and 25 µM, at 139.3% and 143.36%, respectively, compared with the cells of the control group (gray area) (48.9%). With increasing azelastine concentrations, a gradual reduction of dye emission in the labeled cells was observed, to 95.3% at 45 µM and 75.2% at 60 µM. At the highest concentration used (90 µM), a further reduction of the fluorescence intensity, to 46.9%, was demonstrated.
Discussion
Despite continuous advances in anticancer therapy, low treatment efficacy accompanied by substantial side effects is still a major problem [16]. Therefore, in the search for potential chemotherapeutic agents, particular attention is paid to the safety of a drug and its good tolerability [18]. Such features may be possessed by the well-studied new-generation H1 antihistamines, which have almost completely displaced the old-generation drugs used in anti-allergic treatment [48]. Another important aspect in the search for new oncological treatment options is the complexity of oncological disease. The success of cancer therapy is also influenced by the possibility of modulating molecular and cellular factors found in the tumor and its microenvironment [16]. Thus, the identification of compounds with multidirectional mechanisms of action is crucial for the further development of anticancer therapies [6], and azelastine, used in anti-allergy treatment, may be such a drug.
The results obtained from our study allow us to conclude that the studied compound induced in HeLa cells two important processes for anticancer therapy, namely autophagy and apoptosis ( Figure 4D).
At low concentrations (15 µM and 25 µM), azelastine clearly promoted autophagy while apoptosis remained low. The induction of autophagy is indicated by an increased number of cells with intensified vacuolization of the cytoplasm (Figure 4A(2-3a),D). An important role in this process is played by the maintenance of an acidic pH inside the vacuole, which was documented by the presence of strongly eosinophilic content within the large vacuoles of the studied cells (Figure 4A(2,3)). Adequate pH is necessary for the activity of the lysosomal enzymes required to digest cellular material [49,50]. In our study, we showed that azelastine treatment caused a marked increase in the activity of lysosomal enzymes, i.e., cathepsins D and L (Figure 6C). The revealed concentration-dependent increase in lysosomal hydrolase activity correlated with ultrastructural changes in the studied cells, indicating an increase in degradative processes. The numerous autophagic vacuoles seen in the microphotographs (Figure 6A(1-2b)), which are very large and contain fragments of cytosol with organelles, indicate the presence of macroautophagy. This was also confirmed by examining the autophagy-specific marker, the LC3 protein, for which the highest fluorescence intensities (139.3% and 143.36%) were found at the lowest concentrations of azelastine (15 µM and 25 µM, respectively) (Figure 6B). As shown in Figure 4D, cells with morphological features typical of apoptosis clearly gained the advantage at 45 µM azelastine, and at the higher concentrations of 60 µM and 90 µM they constituted more than 90% of all analyzed cells. The switch from autophagy to apoptosis is documented in Figure 4A(4,4a), where cells with characteristics of both types of cell death are seen. This condition can be associated with progressive degradation of organelles, confirmed by the presence of giant autophagic vacuoles in cells loaded with 45 µM azelastine (Figure 6A(3,3a)), as well as by the increased numbers of primary and secondary lysosomes at the high concentrations (60 µM and 90 µM) of the test compound (Figure 6A(4-5b)). In the studied cells, enlarged mitochondria were visible next to the vacuoles (Figure 6A(5a,5b)), which, according to the literature, could be related to the increasing demand for ATP necessary for enhanced autophagy as well as for triggering programmed cell death [51]. The nuclei of the cells also showed altered morphology, including chromatin condensation and fragmentation (Figure 6A(4a,5b)), which was confirmed by DAPI staining (Figure 1E). The pro-apoptotic effect was additionally confirmed by the cytometric method; azelastine significantly increased the number of apoptotic cells, with a dominance of the late-apoptotic phenotype, in a concentration-dependent manner. These values increased as follows: to 60% at a concentration of 45 µM, 93% at 60 µM, and 98% at 90 µM (Figure 1A,C).
Autophagy and apoptosis are interconnected and can occur in the same cell in response to a given stimulus, simultaneously or separately [52,53]. According to Fimia and Piacentini [54], induction of apoptosis is often associated with increased autophagy. In the presence of apoptotic stimuli, autophagy may be an adaptive response or a distinct type of cell death [55]. The shift in the regulation of both processes demonstrated in our studies was dependent on the concentration of azelastine. The targeting of cells to the apoptotic pathway was likely the result of a failed attempt to restore cellular homeostasis as a consequence of increased cellular stress [56]. During excessive autophagy, mitochondria tend to show accelerated production of reactive oxygen species due to increased oxidative phosphorylation. A slight but statistically significant increase in the ROS level (Figure 2C,D) was noted already at the lowest concentrations of the tested compound (15 µM and 25 µM). The ROS values increased significantly at the concentration of 45 µM, which could have triggered the apoptosis process in the tested cells. Similar results were obtained by the team of Nicolau-Galmés [55] in a study on terfenadine, an old-generation antihistamine, which enhanced autophagy and consequently led to the induction of apoptosis.
The oxidative stress activated by azelastine in HeLa cells correlated with a simultaneous increase in the level of phosphorylated H2A.X (Figure 3B,C). The results obtained in this study indicate the participation of ROS in inducing DNA damage, which could have been a signal to trigger apoptosis. The significantly reduced division capacity of HeLa cells (Figure 4E) and their arrest in the S phase of the cell cycle (Figure 4B,C) may be associated with the DNA damage response. As shown in the literature, cell proliferation may be crucial for tumor development and progression, and histamine may be the main mediator of this process in various types of cancer [16]. On the other hand, DNA damage and inhibition of cell proliferation are among the important mechanisms of action of anticancer drugs [57]. The various properties of azelastine demonstrated in this study could therefore also be exploited in anticancer therapy.
The antiproliferative properties of azelastine are also confirmed by its capacity to induce mitotic catastrophe, documented in Figure 5A, which shows the abnormal course of mitosis. This process was most likely induced by DNA damage and resulted in the observed changes, such as multinucleation and micronucleation (Figure 5A(1-6),B). Of particular note is the presence of giant, multinucleated cells (Figure 5(A4)). At high concentrations of the tested compound, cells with significant nuclear changes eventually underwent apoptosis (Figure 5D), which is considered one of the necessary final steps in the course of mitotic catastrophe. Mitotic catastrophe shows a strong mechanistic relationship with the cellular and molecular changes accompanying carcinogenesis and therefore seems to be a preferentially stimulated process in cancer cells [58][59][60]. Compounds promoting mitotic death, such as azelastine, may thus be a promising therapeutic alternative for apoptosis-resistant cancer cells.
In cells, mitochondria act as "stress sensors" and are the central executors of apoptosis [61] as well as of mitotic catastrophe [59]. As our results showed, the induction of apoptosis by azelastine was also associated with the mitochondrial pathway. At the submicroscopic level, already at low concentrations of azelastine, mitochondria were enlarged with a lucent matrix (Figure 2A(2,3,3a)). Under the influence of high concentrations, however, enhanced changes were demonstrated, with significant mitochondrial damage (Figure 2A(4-6a)) and disorganization of the inner membrane. At the same time, cytometric analysis determined the highest percentage of cells with depolarization of the mitochondrial membrane (over 50%) at the concentrations of 60 µM and 90 µM (Figure 2E,F). The disruption of mitochondrial membrane integrity was confirmed by the concentration-dependent, gradual quenching of green fluorescence emission from labeled mitochondria (Figure 2B), which was also associated with the demonstrated inactivation of the Bcl-2 protein (Figure 3A,D) and activation of the executioner caspases (Figure 1B,C). We also demonstrated, using the MTT test, a cytotoxic effect of azelastine related to the reduction of mitochondrial metabolic activity. Depending on the concentration, this compound reduced the viability of HeLa cells (Figure 1D); at the highest concentration (90 µM), viability was only 4%. According to the studies by Cornet-Masana [62] conducted on leukemia lines, mitochondria in cancer cells are characterized by numerous changes, which, according to Pathania's team [63], makes them more susceptible to therapies aimed at the metabolism of cancer cells.
The analysis performed at the level of submicroscopic changes revealed that the mitochondrial disorganization is accompanied by significant changes in the profile of the rough endoplasmic reticulum. Even at low concentrations of azelastine, significant dilatation of the reticulum channels was evident (Figure 2A(3,3a)). These changes intensified with increasing concentrations of the tested compound, and at high concentrations the channels became markedly swollen (Figure 2A(5-6a)), which can be explained by stress of the reticulum. The revealed changes in endoplasmic reticulum homeostasis may be induced by an increased level of ROS [64,65], which is also supported by the results of our research.
Cao and Kaufman [66] emphasize in their work the importance of the spatial and functional distribution in cells of organelles such as mitochondria and the endoplasmic reticulum. In the analyzed electronograms, too, close proximity of altered mitochondria and expanded channels of the rough endoplasmic reticulum was demonstrated (Figure 2A(5a-6a)), indicating a functional relationship between these organelles that may be relevant to the processes regulating apoptotic cell death. Pro-apoptotic factors derived from mitochondria induce signals from the rough endoplasmic reticulum, which in turn cause changes in the mitochondria. Conversely, reticulum stress can lead to mitochondrial dysfunction and consequent oxidative stress, followed by impaired homeostasis and apoptosis [67][68][69]. Apoptosis involving endoplasmic reticulum stress has attracted much attention in recent years [64]. Mild stress of cancer cells can lead to the activation of adaptive mechanisms; however, the therapeutic benefits of compounds that induce endoplasmic reticulum stress and direct cells onto the apoptosis pathway have been confirmed for certain types of cancer cells [70,71]. In our studies, azelastine induced in HeLa cells oxidative stress, stress of the rough endoplasmic reticulum, and mitochondrial dysfunction, which, by reinforcing one another, disrupted cellular functions and activated proapoptotic signals [65,66,68]. A similar mechanism of action has been reported for terfenadine, an old-generation antihistamine, in the A375, HT144, and Hs294T cell lines [55]. In studies on the action of rupatadine, ebastine, and loratadine on acute myeloblastic leukemia cells, however, the cytotoxicity of these compounds consisted of a bidirectional, mitochondrial-lysosomal action, ROS generation, and reduction of mitochondrial metabolic activity, which led to the activation of caspases 3 and 7 and induction of the apoptosis pathway [62].
Efferocytosis, the phagocytosis of dead cells, was also observed under the action of azelastine (Figure 4A(4a,5,6,6a)). According to the literature, this process can, under certain conditions, be performed by "non-professional phagocytes" [72][73][74]. In the context of carcinogenesis, efferocytosis suppresses the body's natural immune response and thereby facilitates the immune escape of tumor cells while promoting the tumor microenvironment [50]. This process not only affects the proliferation, invasion, metastasis, and angiogenesis of cancer cells, but also regulates adaptive responses and decreases the positive response to radiotherapy and to many commonly used anticancer antibodies and chemotherapeutic agents [75]. The data obtained in our study indicate that azelastine at low concentrations initially induced efferocytosis in the context of an adaptive response of HeLa cells (Figure 4(A4a)), and that cells with phagocytosed apoptotic cells were then directed to the apoptotic pathway (Figure 4A(6,6a)). Such action contrasts with that of traditional therapies, which induce apoptosis of tumor cells and increase subsequent efferocytosis, thereby suppressing the inflammatory response [76]. Thus, the demonstrated property of azelastine indicates an additional possibility for the compound to interfere with tumorigenesis and, at the same time, fits the current view of combining traditional therapies with therapies targeting the efferocytosis process in order to improve their effectiveness [50].

According to literature data, the tested concentrations are in the range used in research on antihistamine drugs conducted on cancer cell lines. Control cells were cultured in complete maintenance medium without the addition of the test compound.
Assessment of Cell Viability-MTT Test
The level of cytotoxicity of azelastine against HeLa cells was determined using the MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) reduction assay. Cells seeded in Falcon 96-well plates (Fisher Scientific, Waltham, MA, USA) were, after azelastine treatment, stained with MTT solution (1 mg/mL) (Sigma Aldrich, St. Louis, MO, USA). After 2 h of incubation of the cells with the dye, dimethyl sulfoxide (DMSO) was applied to solubilize the formed formazan crystals. Optical density was measured at 570 nm using a Synergy 2 multi-detection microplate reader (BioTek, Winooski, VT, USA). Cell viability was calculated relative to the control group using Gen5 software.
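For readers unfamiliar with the underlying arithmetic, the sketch below shows how percent viability is typically derived from MTT optical densities relative to an untreated control. All numbers, the blank-subtraction step, and the function name are illustrative assumptions, not values or software from this study.

```python
# Minimal sketch of the percent-viability calculation used in MTT assays:
# viability (%) = 100 * (OD_treated - OD_blank) / (OD_control - OD_blank).
# All numbers below are illustrative placeholders, not data from this study.

def mtt_viability(od_treated: float, od_control: float, od_blank: float = 0.0) -> float:
    """Return cell viability as a percentage of the untreated control."""
    return 100.0 * (od_treated - od_blank) / (od_control - od_blank)

if __name__ == "__main__":
    od_control = 1.25  # mean OD570 of untreated control wells (hypothetical)
    for conc, od in [(15, 1.08), (25, 0.85), (45, 0.64), (60, 0.10), (90, 0.05)]:
        print(f"{conc:>3} uM azelastine: {mtt_viability(od, od_control):5.1f}% viability")
```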
Visualization of Apoptotic Cells under a Fluorescence Microscope
Morphological evaluation of the nuclei of control and tested cells was performed using 4′,6-diamidino-2-phenylindole (DAPI) staining. Cells cultured in dishes (Falcon, Fisher Scientific, Waltham, MA, USA) were stained with 2.5 µg/mL DAPI solution (Sigma Aldrich, St. Louis, MO, USA) for 15 min and then washed with PBS. The preparations were analyzed using a Nikon Eclipse Ti epi-fluorescence inverted microscope (Nikon Instruments Inc., Melville, NY, USA).
Detection of Apoptosis
Phosphatidylserine externalization in azelastine-exposed cells was assessed using the Annexin V and Dead Cell assay kit (Merck Millipore, Burlington, MA, USA). Control and azelastine-treated cells were detached using 0.25% trypsin-EDTA (Corning, New York, NY, USA), centrifuged, and washed with PBS. The cells were then stained with annexin V (100 µL) for 20 min at room temperature in the dark. The fluorescence intensity was analyzed using a Muse analyzer (Merck Millipore, Burlington, MA, USA).
Activity of Caspase-3/7
The level of caspase-3/7 activation was measured using a caspase-3/7 assay kit (Merck-Millipore, Burlington, MA, USA). After 48 h of incubation with azelastine, cells were harvested by trypsinization and incubated at 37 °C with 5 µL of Caspase-3/7 working solution (as per protocol). Then, 150 µL of Caspase 7-AAD working solution was added to the cells. Detection of caspase-positive cells was performed using a Muse analyzer (Merck-Millipore, Burlington, MA, USA).
Analysis of Ultrastructural Changes
Cells for electron microscopy were fixed in 3% glutaraldehyde (Serva Electrophoresis GmbH, Heidelberg, Germany) followed by 2% OsO4 (SPI, West Chester, PA, USA) in cacodylate buffer. The material was then dehydrated in an ascending series of ethanol solutions (10-99.8%) and embedded in Epon 812 epoxy resin (Serva Electrophoresis GmbH, Heidelberg, Germany), followed by polymerization at 40 °C and 60 °C. The epoxy blocks were cut into ultra-thin sections on a Leica EM UC7 ultramicrotome (Leica Biosystems, Wetzlar, Germany), and the obtained preparations were contrasted with uranyl acetate and lead citrate. Analysis was performed using a Tecnai G2 Spirit transmission electron microscope (FEI Company, Hillsboro, OR, USA) equipped with a Morada camera (Olympus Soft Imaging Solutions, Münster, Germany). The interpretation of the changes in azelastine-exposed cells was based on the image of control cells.
Measurement of the Mitochondrial Membrane Potential (∆ψm)
The decrease in ∆ψm was analyzed using the Muse MitoPotential Assay kit (Merck Millipore). After incubation with azelastine, cells were resuspended in 95 µL of Muse MitoPotential working solution and incubated at 37 °C for 20 min. The cells were then stained with the 7-AAD dead cell marker (5 µL) at room temperature for 5 min, and the cell suspension was analyzed by flow cytometry.
Microscopic Evaluation of Changes in the Potential of Mitochondrial Membrane
After 48 h of incubation with azelastine, cells were fixed in 4% paraformaldehyde and then incubated for 30 min with rhodamine 123 (Sigma Aldrich, St. Louis, MO, USA) at a concentration of 5 µg/mL in ethanol. This fluorochrome binds to metabolically active mitochondria, so the fading of fluorescence is proportional to the decrease in mitochondrial membrane potential. The cells were then washed with PBS and analyzed under a Nikon A1 confocal microscope based on a Nikon Eclipse Ti inverted microscope (Nikon Instruments Inc., Melville, NY, USA) and equipped with Nikon NIS Elements AR software (Nikon Instruments Inc., Melville, NY, USA).
Oxidative Stress Analysis
The Muse Oxidative Stress Assay kit (Merck Millipore, Burlington, MA, USA), based on the intracellular detection of superoxide radicals, was used to investigate the level of reactive oxygen species. According to the manufacturer's instructions, cells were treated with Muse Oxidative Stress Reagent working solution (190 µL) after 48 h of incubation with azelastine. Samples were then incubated at 37 °C for 30 min, and the percentages of gated ROS (−) and ROS (+) cells were analyzed.
Assessment of Bcl-2 Protein Phosphorylation
Changes in Bcl-2 phosphorylation in HeLa cells were assessed using the Muse™ Bcl-2 Activation Dual Detection Assay kit (Merck-Millipore, Guyancourt, France) according to the manufacturer's instructions. Two directly conjugated antibodies were used in the kit, i.e., a phospho-specific anti-phospho-Bcl-2 (Ser70)-Alexa Fluor® 555 antibody and a conjugated anti-Bcl-2-PECy5 antibody to measure total Bcl-2 expression levels. The degree of activation of the Bcl-2 pathway was assessed by measuring Bcl-2 phosphorylation relative to total Bcl-2 expression in the tested cells.
DNA Damage Assessment
To determine whether azelastine causes DNA damage, cells were fixed and permeabilized with Muse Fixation Buffer and Permeabilization Buffer reagents, followed by staining with anti-phospho-Histone H2A.X (Ser139) and anti-phospho-ATM (Ser1981) antibodies according to the instructions for the Muse H2A.X Activation Dual Detection kit (Millipore, Darmstadt, Germany).
Cell Cycle Analysis
Cells were analyzed using the Muse Cell Cycle Assay Kit (Merck Millipore, Burlington, MA, USA). Cells were trypsinized and centrifuged, and the obtained cell pellet was fixed in 70% ice-cold ethanol. Cells were then treated with Muse Cell Cycle Reagent (Merck Millipore, Burlington, MA, USA) for 30 min and then analyzed with a Muse analyzer (Merck Millipore, Burlington, MA, USA).
Visualization of Morphological Changes and Assessment of the Dividing Capacity of HeLa Cells
Cells were cultured on sterile coverslips in Falcon dishes (Fisher Scientific, Waltham, MA, USA) in DMEM medium supplemented with azelastine (test cells) or without the test compound (control cells). Methanol-fixed cells were stained with Harris hematoxylin and eosin, then dehydrated in an ascending series of ethanol solutions and immersed in xylene. Each preparation was analyzed against a control image, taking into account changes mainly concerning the cell nucleus (presence of bi- and multinucleated cells, giant cells, cells with micronuclei, with chromatin condensation, or with a pyknotic nucleus), the cytoplasm (increased or decreased pigmentation, vacuolization changes, presence of apoptotic bodies), and mitotic division (presence of cells in particular phases of division and abnormal mitotic figures). Quantitative and qualitative analysis of morphological changes in the studied cells and photographic documentation were performed using a Nikon Eclipse 80i microscope with Nikon NIS Elements D 3.10 software (Nikon Instruments Inc., Melville, NY, USA). The mitotic index was evaluated by determining the number of cells in each phase of mitotic division, and the result was expressed as a percentage. For each preparation, 3000 cells were analyzed in three independent experiments (9000 cells per concentration), and the final score for a given trait was the mean value.
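The mitotic-index arithmetic described above is straightforward; the sketch below makes it explicit. The counts are illustrative placeholders, not data from this study.

```python
# Minimal sketch of the mitotic-index calculation described above:
# MI (%) = 100 * (cells in any mitotic phase) / (total cells scored),
# averaged over three independent experiments of 3000 cells each.
# All counts below are illustrative placeholders, not data from this study.

def mitotic_index(mitotic_counts, total_per_experiment=3000):
    """Average mitotic index (%) over independent experiments."""
    per_experiment = [100.0 * m / total_per_experiment for m in mitotic_counts]
    return sum(per_experiment) / len(per_experiment)

if __name__ == "__main__":
    # hypothetical numbers of mitotic figures among 3000 cells per replicate
    print(f"control: {mitotic_index([96, 102, 99]):.2f}%")
    print(f"15 uM:   {mitotic_index([21, 23, 22]):.2f}%")
```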
Evaluation of Cathepsin D and L Activity Levels
After 48 h of incubation with azelastine, cells were trypsinized, resuspended in 0.25 M sucrose solution, and homogenized using a Potter S homogenizer (Sartorius, Göttingen, Germany). The homogenate was initially centrifuged at 700× g for 10 min. The extranuclear supernatant was then centrifuged at 20,000× g for 20 min, and the obtained lysosomal pellet was resuspended in Triton X-100 (Sigma-Aldrich, St. Louis, MO, USA). The activities of the degradative enzymes cathepsin D and L were determined in the lysosomal fraction according to the modified Langner's method. According to the procedure, 2% azocasein (Sigma-Aldrich, St. Louis, MO, USA), 0.2 M acetate buffer (pH 5.0), and 10% TCA (+4 °C) were used. After incubation at 37 °C, samples were centrifuged, and enzyme activity was measured colorimetrically at 366 nm using a Spekol 1500 spectrophotometer (Analytik Jena GmbH, Jena, Germany). Simultaneously, the total protein content was determined (at 680 nm) using Lowry's method as modified by Kirschke and Wiederanders. Enzyme activity was expressed as µmol/mg protein/hour.
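As a sketch of how the reported unit (µmol/mg protein/hour) is obtained, the snippet below normalizes the amount of released product by protein content and incubation time. The conversion of the A366 absorbance reading into µmol of product (a calibration step) is assumed here, and all numbers are illustrative placeholders.

```python
# Minimal sketch of a specific-activity calculation in umol / mg protein / h.
# The calibration converting A366 into umol of released product is assumed;
# all numbers are illustrative placeholders, not values from this study.

def specific_activity(product_umol: float, protein_mg: float, time_h: float) -> float:
    """Specific enzyme activity in umol per mg protein per hour."""
    return product_umol / (protein_mg * time_h)

if __name__ == "__main__":
    control = specific_activity(product_umol=0.42, protein_mg=0.80, time_h=1.0)
    treated = specific_activity(product_umol=0.75, protein_mg=0.79, time_h=1.0)
    print(f"treated vs. control: {100.0 * treated / control:.1f}%")  # cf. the ~180% reported
```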
LC3-Antibody Detection
The level of azelastine-induced autophagy was assessed by a cytometric assay using the Autophagy LC3 antibody-based kit (Merck Millipore, Burlington, MA, USA). The kit includes a reagent for selective membrane permeabilization (Autophagy Reagent A) that makes it possible to distinguish between cytosolic and autophagic LC3. This is accomplished by extracting the cytosolic protein while protecting the LC3 that is translocated to, and remains intact in, autophagosomes. Addition of anti-LC3 Alexa Fluor® 555 and Autophagy Reagent B to the cells allows quantification of LC3 by measuring fluorescence using flow cytometry. According to the protocol, cells were seeded in Falcon 96-well plates. Autophagy Reagent A in EBSS medium (Corning, Corning, NY, USA) was then added to the cells, which were incubated for 4 h under a CO2 atmosphere, followed by washing with HBSS (Corning, Corning, NY, USA), trypsinization, and centrifugation. The supernatant was removed, and anti-LC3 Alexa Fluor® 555 and Autophagy Reagent B were added to the cells, which were incubated on ice for 30 min in the dark. The samples were then analyzed by flow cytometry. Cells treated with serum-free medium for 4 h were used as a positive control.
Statistical Analysis
Statistical analysis of the study results was performed using one-way analysis of variance (ANOVA) with multiple post hoc comparisons using Tukey's test. Differences were considered statistically significant at p < 0.05. Statistica 10.0 software (StatSoft, Krakow, Poland) was used for data analysis.
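The same one-way ANOVA plus Tukey HSD workflow can be reproduced with open-source tools; the sketch below uses SciPy and statsmodels as a stand-in for Statistica. The data are randomly generated for illustration only and do not correspond to any measurements reported in this study.

```python
# A minimal sketch of the one-way ANOVA + Tukey HSD workflow described above,
# using SciPy/statsmodels as a stand-in for Statistica; data are illustrative.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
groups = {
    "control": rng.normal(100, 5, 9),  # e.g., % viability, 9 replicates
    "15uM": rng.normal(86, 5, 9),
    "25uM": rng.normal(68, 5, 9),
}

# Omnibus test: is there any difference among group means?
f_stat, p_value = f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.2g}")

# Pairwise post hoc comparisons with Tukey's HSD at alpha = 0.05.
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```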
Conclusions
In our study, we demonstrated potential anticancer properties of azelastine based on its autophagic, proapoptotic, cytotoxic, and antiproliferative activity, which, taking into account the safety of its application and its potent anti-inflammatory properties, can be regarded as the features of a compound fitting the current canon of the fight against cancer. Azelastine may therefore represent an alternative option in oncological treatment, although this requires further research.
Data Availability Statement:
The data that support the findings of this study are available from the corresponding author upon reasonable request.
"Biology",
"Chemistry"
] |
Improving the zT value of thermoelectrics by nanostructuring: tuning the nanoparticle morphology of Sb2Te3 by using ionic liquids
A systematic study on the microwave-assisted thermolysis of the single-source precursor (Et2Sb)2Te (1) in different asymmetric 1-alkyl-3-methylimidazolium- and symmetric 1,3-dialkylimidazolium-based ionic liquids (ILs) reveals the distinctive roles of both the anion and the cation in tuning the morphology and microstructure of the resulting Sb2Te3 nanoparticles, as evidenced by X-ray diffraction (XRD), scanning electron microscopy (SEM), energy-dispersive X-ray analysis (EDX), and X-ray photoelectron spectroscopy (XPS). A comparison of the electrical and thermal conductivities as well as the Seebeck coefficients of the Sb2Te3 nanoparticles obtained from different ILs reveals the strong influence of the specific IL on the thermoelectric properties of the as-prepared nanosized Sb2Te3; C4mimI was identified as the best solvent. This work provides design guidelines for ILs that allow the synthesis of nanostructured thermoelectrics with improved performance.
Introduction
Thermoelectric generators (TEGs) directly convert heat fluxes into usable electrical energy and therefore provide a wear- and noiseless power source. 1 The efficiency of a thermoelectric material is defined by the dimensionless figure of merit zT = (α²σ/κ)T, where α is the Seebeck coefficient, σ the specific electrical conductivity, κ the thermal conductivity as the sum of the electronic (κ_el) and lattice (κ_L) contributions, and T the absolute temperature in Kelvin (see the worked example below). It is assumed that a zT of at least about 1.5 is necessary for most technical applications to become efficient and commercially viable. 2 Unfortunately, the electrical and thermal transport coefficients are interrelated and cannot easily be optimized independently of each other. Metals naturally show high electrical and thermal conductivities, whereas both are small for insulators such as ceramics. The best choices of materials for technical applications in thermoelectric devices are semiconducting materials that contain heavy elements. This inherently minimizes the thermal conductivity due to the low speed of sound in such materials, while a sufficiently high electronic conductivity is still obtained. For technical applications near room temperature, Sb2Te3 and Bi2Te3 as well as their ternary solid solutions (SbxBi1−x)2Te3 are currently the most efficient materials due to their high electrical conductivities and high Seebeck coefficients combined with low thermal conductivities. 3 Sb2Te3 is a tetradymite-type layered material, which has been investigated for decades since it is a narrow-band-gap (E_gap ≈ 0.26 eV) semiconductor with good thermoelectric characteristics near room temperature. 4 More recently, interest in Sb2Te3 has increased due to its capability to serve as a topological insulator. 5 Nanostructuring has been demonstrated theoretically and experimentally to greatly improve the figure of merit by effectively reducing the lattice contribution to the thermal conductivity 6 while the electrical conductivity of the material is mostly unaffected. Different types of scattering centres for the heat-carrying phonons, such as nanoscale precipitates or grain boundaries and other interfaces, have been employed to optimize thermoelectric materials in this way. 7 Even a hierarchical design of the nano- and microstructure was developed to effectively scatter the broad spectrum of phonon wavelengths, which led to record-high zT values. 8 Our general interest in thermoelectric materials prompted us to investigate the synthesis of binary (Sb2Te3, Bi2Te3) and ternary ([SbxBi1−x]2Te3) materials both in solution 9 and via gas-phase-based processes such as atomic layer deposition (ALD) 10 and metal-organic chemical vapour deposition (MOCVD) 11 using single-source and dual-source precursor approaches. The microwave-assisted decomposition of the single-source precursor (Et2Sb)2Te 1 in an ionic liquid (IL) had been shown to produce highly stoichiometric Sb2Te3 nanoparticles, 12a while Bi2Se3, Bi2Te3, and (SbxBi1−x)2Te3 nanoparticles were synthesized by using specific reactive ILs. 12b,c The Sb2Te3 nanoparticles showed exceptionally high figures of merit of up to 1.5 at 300 °C, without the need for alloying or electronic doping. This new synthetic strategy allowed an effective decoupling of the electronic and phononic transport properties. 12a
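As a quick numerical illustration of the zT definition given at the start of this introduction, the sketch below evaluates zT for typical literature-scale transport values of Sb2Te3 near room temperature; the numbers are illustrative magnitudes, not measurements from this work.

```python
# Worked example of the figure of merit zT = (alpha**2 * sigma / kappa) * T
# defined above. Transport values are typical literature magnitudes for
# Sb2Te3 near room temperature, not measurements from this work.

def figure_of_merit(alpha_V_per_K: float, sigma_S_per_m: float,
                    kappa_W_per_mK: float, T_K: float) -> float:
    """Dimensionless thermoelectric figure of merit zT."""
    return alpha_V_per_K**2 * sigma_S_per_m / kappa_W_per_mK * T_K

if __name__ == "__main__":
    # alpha = 150 uV/K, sigma = 2.5e5 S/m, kappa = 1.5 W/(m K), T = 300 K
    print(f"zT = {figure_of_merit(150e-6, 2.5e5, 1.5, 300.0):.2f}")  # ~1.1
```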
In our studies, we observed that the Sb2Te3 particle morphology changed depending on the chemical identity of the ionic liquid, which prompted us to study its influence on the microwave-assisted decomposition of 1 in more detail and to look for correlations with the thermal and electronic transport properties of the obtained material.
We herein report our systematic study on the decomposition of 1 in different ILs, in which both the anion and the cation were systematically varied, using microwave-assisted techniques. In addition, the results of detailed transport measurements on the resulting Sb2Te3 nanoparticles are reported, which allow for a structure-property analysis.
Results and discussion
We have recently developed a synthetic protocol that enabled us to access Sb2Te3 nanomaterials with a record figure of merit by the decomposition of 1 in the ionic liquid C4mimBr (C4mim = 1-butyl-3-methylimidazolium) under microwave (MW) irradiation. 12a As the IL acted in this reaction not only as the solvent but also as the heat-transfer medium, we herein study the specific role of the IL as the nanotemplating agent by investigating a set of ILs based on 1,3-dialkylimidazolium cations. Starting from the most prominent IL cation, 1-butyl-3-methylimidazolium (C4mim+), first the counter anion was varied from Cl−, Br−, and I− to NTf2− (NTf2− = bis(trifluoromethanesulfonyl)amide). Variation of the IL anion not only leads to changes in fundamental physical properties of the IL, such as the melting point or viscosity, but also in its solvation properties, such as polarity. Moreover, the chosen anions range from relatively strongly coordinating (Cl−) to quite weakly coordinating (NTf2−). In the context of nanomaterial synthesis, the capability of the IL ions to interact with the as-formed nuclei and crystal seeds is especially important, as this allows for morphology 13 and even phase control 14 of nanomaterials. The Lewis basicity of these ILs decreases in the order Cl−, Br−, I−, NTf2−. 15 Similarly, variations of the cation influence the overall IL properties.
Generally, an increase of the melting point with increasing chain length of the alkyl group is observed for imidazolium-based ILs. Symmetrically substituted imidazolium ILs typically exhibit higher melting points than asymmetrical ILs. 16 Again, in the context of tuning the nanostructure of a material through the templating effect of the IL, the interaction of the IL cation with the nanomaterial needs to be considered. Imidazolium cations can interact not only electrostatically but, as they bear acidic protons (the 2H proton of the imidazolium ring is especially acidic) and an aromatic π-system, can also undergo secondary bonding interactions such as hydrogen bonding and π-bonding. This has been found to be especially important in the synthesis of nanosized oxide materials. 17 However, the cation size can critically influence these bonding capabilities. 18 For this reason, the alkyl chain of the 1-alkyl-3-methylimidazolium (CnC1im+) cation was varied from three to eight carbon atoms. In addition to the set of 1-alkyl-3-methylimidazolium bromides, the corresponding set of symmetrically substituted cations (CnCnim+) with n = 4, 6, and 8 was explored. Ionic liquids are known to be highly structured solvents, 19 which can critically impact nanoparticle formation. 20 In particular, for imidazolium cations with longer alkyl chains, a highly ordered structure of the IL can be expected; 21 i.e., imidazolium-based ILs with more than eleven carbon atoms in the side chain tend to form thermotropic liquid-crystalline phases. The use of ordered phases as templates in nanoparticle synthesis has already been reported. 22 To obtain Sb2Te3 nanoparticles from the various ionic liquids, in a typical reaction, 1 was added to the respective IL at 90 °C and stirred for 5 min until a homogeneous dispersion or solution was formed, which was then heated in a laboratory microwave oven, first for 30 s at 100 °C, then for 5 s at 150 °C, and finally for 5 min at 170 °C (see the sketch of this heating program below). The resulting colloidal solution was centrifuged (2000 rpm), washed with 10 mL of acetonitrile (7×) to completely remove the by-product SbEt3 (Scheme 1), and dried at ambient temperature under reduced pressure. Black precipitates were obtained, which were characterized by powder X-ray diffraction (PXRD), energy-dispersive X-ray analysis (EDX), scanning electron microscopy (SEM), and X-ray photoelectron spectroscopy (XPS).
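For reproducibility, the stepwise microwave heating program quoted above can be captured as simple data; the sketch below is a documentation aid only, not vendor instrument software.

```python
# The stepwise microwave heating program described above, captured as data.
# This is a documentation/reproducibility sketch, not an instrument API.
HEATING_PROGRAM = [  # (target temperature in deg C, hold time in seconds)
    (100, 30),
    (150, 5),
    (170, 300),
]

total_s = sum(hold for _, hold in HEATING_PROGRAM)
print(f"total hold time: {total_s} s ({total_s / 60:.1f} min)")
```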
General product characterization
PXRD measurements confirmed the formation of phase-pure Sb2Te3 in all ILs (see Fig. 1 for a representative PXRD pattern). All observed diffraction peaks can be indexed to the database pattern of Sb2Te3 (JCPDS file 015874), and the lattice parameters were refined to a = 4.266(9) Å and c = 30.456(6) Å.
A small texture effect was observed, since the intensity of the (1 0 10) reflection was somewhat smaller compared with the reference. Our samples show an intensity ratio of the (0 1 5) (28.3°) : (1 0 10) (38.5°) : (1 1 0) (42.5°) reflections of 1 : 0.26 : 0.35, whereas this ratio in the reference is 1 : 0.35 : 0.33. Size determination of the nanoparticles typically yielded sizes of >300 nm, but these values should be taken with care because of the plate-like structure of the particles (see Fig. 4 and 5).
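The text does not state how the >300 nm values were obtained from the diffraction data; a standard approach is a Scherrer-type estimate from peak broadening, sketched below. The FWHM value is a hypothetical placeholder, and Scherrer estimates become unreliable at sizes of a few hundred nanometres, consistent with the caution expressed above.

```python
# Scherrer estimate D = K * lambda / (beta * cos(theta)) as one standard way
# to extract a crystallite size from XRD peak broadening (Cu K-alpha assumed).
# The FWHM below is an illustrative placeholder, not a fitted value.
import math

def scherrer_size_nm(fwhm_deg: float, two_theta_deg: float,
                     wavelength_nm: float = 0.15406, k: float = 0.9) -> float:
    """Crystallite size from XRD peak broadening (FWHM given in degrees 2-theta)."""
    beta = math.radians(fwhm_deg)              # FWHM converted to radians
    theta = math.radians(two_theta_deg / 2.0)  # Bragg angle
    return k * wavelength_nm / (beta * math.cos(theta))

if __name__ == "__main__":
    # e.g., a 0.03 deg FWHM for the (0 1 5) reflection at 28.3 deg 2-theta
    print(f"D = {scherrer_size_nm(0.03, 28.3):.0f} nm")  # a few hundred nm
```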
EDX analysis confirmed, within standard deviations, the stoichiometric composition of the products. In addition, no signals originating from the IL or from contaminations, i.e., oxidation or hydrolysis products, were detected. These results were confirmed by infrared (IR) spectroscopy, which showed no absorption bands of the respective ILs on the particle surface. In contrast, XPS, which is a much more surface-sensitive analytical method than EDX and IR, showed that the nanoparticles were partially oxidized at the surface. Fig. 2 exemplarily shows the XPS spectra of Sb2Te3 nanoparticles prepared in C4mimI, while Fig. 3 displays the XPS spectra of a sample obtained in C4mimNTf2.
The XPS spectra of the samples prepared in C4mimI (Fig. 2) and C4mimNTf2 (Te and Sb spectra at the top right and bottom right in Fig. 3) clearly showed that both Sb and Te are partially oxidized, as is evident from the metal oxide peaks at 530.1 eV binding energy for Sb 3d5/2 and at 575.9 eV for Te 3d5/2. These findings are in good agreement with the literature values. 12b,c,23,24 However, while only around 3% of the Te is present as an oxide, roughly 40% (prepared in C4mimI) to 60% (prepared in C4mimNTf2) of the Sb is oxidized. The ratio of non-oxidized Sb to non-oxidized Te gives exactly the expected ratio of 2 : 3. This means that there is an excess of Sb at the surface and that this Sb is present as an oxide. Comparable surface oxidation reactions have very recently been observed for binary and ternary bismuth chalcogenide nanoparticles, in which Bi2Te3 and Bi2Te2Se were found to oxidize easily upon exposure to air, while Bi2Se3 was significantly more stable toward oxidation. 12b,c,31 In addition, Sb2Te3 thin films were found to be easily oxidized after exposure to the atmosphere, and a post-deposition treatment was therefore suggested by the authors as an effective method to promote the formation of the Sb-Te bond and prevent oxidation of the thin-film surface. As a consequence, the nanoparticles have to be stored and handled under inert gas conditions to avoid surface oxidation reactions. In addition, N, S, F, C, and O (Fig. 3) are also found on the surface, which can be attributed to residues of the ionic liquid (C4mimNTf2) and of the washing solvent (CH3CN), which can also coordinate to the nanoparticle surface.
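The oxide fractions quoted above follow from the relative areas of the fitted oxide and telluride components of each core-level peak; the sketch below shows this arithmetic with placeholder areas and deliberately ignores relative sensitivity factors, which a real XPS quantification would include.

```python
# Minimal sketch of how the oxide fractions quoted above follow from fitted
# XPS component areas: fraction = A_oxide / (A_oxide + A_compound). Relative
# sensitivity factors are ignored here, and the areas are placeholders, not
# the fitted values from this work.

def oxide_fraction(area_oxide: float, area_compound: float) -> float:
    """Fraction of an element present as oxide, from component peak areas."""
    return area_oxide / (area_oxide + area_compound)

if __name__ == "__main__":
    sb = oxide_fraction(area_oxide=4.0, area_compound=6.0)   # ~40% (C4mimI case)
    te = oxide_fraction(area_oxide=0.3, area_compound=9.7)   # ~3%
    print(f"Sb oxidized: {100 * sb:.0f}%   Te oxidized: {100 * te:.0f}%")
```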
Morphology of Sb2Te3 nanoparticles synthesised in different ILs
Role of the anion (An) of 1-alkyl-3-methylimidazolium-based ionic liquids C4mimAn. The role of the anion (Cl−, Br−, I−, NTf2−) of the 1-butyl-3-methylimidazolium-based ionic liquid in tuning the composition and morphology of the resulting nanoparticles was investigated by SEM. All four samples show the formation of hexagonally shaped Sb2Te3 nanoplates with diameters ranging between 300 and 2000 nm and thicknesses varying between 65 and 120 nm (Fig. 4). These platelets form larger agglomerates. Both the dimensions of the nanoplates and the type of agglomeration are strongly influenced by the IL anion. The thickness of the individual hexagonal platelets increased on changing the IL anion from Cl− through Br− and I− to NTf2−. The association of these platelets also changed, from individual sandrose-type spherical aggregates, over more strongly aggregated spheres of platelets, to less spherical, less extended aggregates. This observation could be correlated with the coordination ability of the IL anion.
Chloride is a strongly Lewis basic, coordinating anion, whereas the NTf2− anion has a weak coordination ability. Thus, ionic liquids with rather strongly coordinating anions force the formation of thinner platelets, as the vertical particle growth is hindered through the interaction of the IL anion with the particle surface. An IL with a less coordinating anion not only hinders the particle growth less, resulting in the formation of thicker platelets, but also stabilizes the particles less against further agglomeration; in consequence, larger agglomerates are found in C4mimNTf2.

Fig. 1 Representative powder X-ray diffraction pattern of Sb2Te3 nanoparticles (with Cu Kα radiation) including Rietveld refinements.
To investigate the influence of the IL cation on the morphology of the Sb2Te3 nanoparticles, a set of 1-alkyl-3-methylimidazolium bromides was synthesized and explored as the reaction medium in the microwave-assisted synthesis of Sb2Te3 nanoparticles.
The chain length of the 1-alkyl-3-methylimidazolium cation was systematically varied from three to eight carbon atoms. Bromide was chosen as the anion in these experiments in order to be comparable with the results of previous studies. 12a In C3mimBr, exclusively isolated spherical aggregates of small platelets with diameters of 2-5 μm were formed. On increasing the side-chain length of the alkyl group of the 1-alkyl-3-methylimidazolium cation, the size and number of these aggregates shrink. At the same time, individual larger hexagonal plates of Sb2Te3 are formed, which have a smaller tendency to aggregate. When C8mimBr is used in the synthesis, almost exclusively hexagonal plates are observed (Fig. 5).
It is obvious that the IL cation has a strong influence on the nanostructure of the obtained material, and two factors appear to be important: the solubility of the precursor in the IL and the structural order of the IL. The solubility of the precursor increases with increasing alkyl-chain length of the cation, which can be correlated with the decreasing polarity of the IL. Whilst in ILs with short alkyl chains, such as C3mimBr and C4mimBr, only dispersions of (Et2Sb)2Te in the IL were obtained, full solubility of the precursor was observed for C8mimBr. As a consequence, the tendency for the formation of inhomogeneously distributed micro-drops of 1 in the IL increases with decreasing alkyl-chain length of the IL, which obviously facilitates the formation of ball-like agglomerates upon thermolysis. In contrast, thermolysis of a homogeneous solution of 1 in an IL containing long alkyl chains leads to a steady growth of the Sb2Te3 nanoparticles, which consequently form large sheets. In addition, it is known for 1-alkyl-3-methylimidazolium bromides that an increasing alkyl-chain length of the cation leads to an increasing structural order, which may lead to the formation of lamellar, smectic liquid-crystalline structures that could act as a template. 21b,c Therefore, a set of symmetrically substituted 1,3-di-n-alkylimidazolium bromides was tested as the reaction medium.
Influence of the alkyl chain length of symmetrical 1,3-dialkylimidazolium bromide ionic liquids CnCnimBr. The synthesis of Sb2Te3 from 1 in CnCnimBr with n = 4, 6, and 8 yielded a phase-pure material in all cases. However, while carrying out the synthesis in C4C4imBr, only a dispersion of the single-source precursor was obtained, whilst in C6C6imBr and C8C8imBr homogeneous solutions were formed (Fig. 6). The nanostructures of the materials obtained from the different ILs show distinct differences. The trend in the change of the morphology, however, is similar to the observations made for the asymmetrical imidazolium bromides.
The nanoparticles synthesized in C 4 C 4 imBr (Fig. 7A) consist of strongly agglomerated Sb 2 Te 3 nanoplates. Predominantly sandrose-like structures with sizes between 1 and 4 µm are formed by the aggregation of individual Sb 2 Te 3 particles, whose diameters range from 300 to 1200 nm. The diameter of the individual Sb 2 Te 3 platelets was found to increase with increasing alkyl-chain length of the IL cation. Individual nanoplates with diameters between 300 and 1500 nm were found in C 6 C 6 imBr (Fig. 7B), while those obtained from C 8 C 8 imBr (Fig. 7C) range from 300 to 2500 nm. In addition, the SEM images of the resulting nanoparticles clearly prove a decreasing agglomeration tendency of the hexagonal Sb 2 Te 3 nanoplates with increasing chain length, and hence increasing steric demand and coordination strength of the IL, as was observed for the Sb 2 Te 3 nanoparticles obtained from the unsymmetrical ILs (see Fig. 5). While compact ball-like agglomerates were formed with C 4 C 4 imBr, the nanoparticles obtained in C 6 C 6 imBr show loosely agglomerated card structures, and the nanoparticles synthesized in C 8 C 8 imBr consist of single Sb 2 Te 3 sheets and, to some extent, slightly crooked card structures (Fig. 7). With increasing alkyl chain length of the cation, the tendency to form sandrose-like structures decreases. Instead, 3D agglomeration increases until finally, in C 8 C 8 imBr, predominantly large extended plates are formed. This confirms that an interplay of the precursor solubility, the microstructure of the IL, and its coordination ability strongly influences the formation of the microstructure.
Whenever the single source precursor 1 has poor solubility in the IL, sandrose-like aggregates are formed. This potentially occurs due to the formation of micro-droplets, which can act as individual micro-reaction compartments. In contrast, thermolysis of homogeneously dissolved solutions of 1 in ILs of higher hydrophobicity, which increases with increasing alkyl chain length, leads to a steady growth of the Sb 2 Te 3 nanoparticles. Finally, the microstructure of the IL can help to guide the particle growth. C 8 C 8 imBr prefers the formation of a lamellar structure and thus favours the sheet-like growth of Sb 2 Te 3 nanoplates.
Thermoelectric transport properties. To investigate how the nanostructure of the obtained Sb 2 Te 3 material is correlated to the thermoelectric transport properties, these samples were investigated in detail. For the characterization of the thermoelectric transport properties, the Sb 2 Te 3 nanoparticles were cold pressed to macroscopic pellets and subsequently annealed at 300°C. After the determination of the thermoelectric transport properties, we re-investigated the material composition by EDX and XRD. According to these results we can exclude any change of the material composition as well as the formation of any additional crystalline phase during processing. Fig. 8 exemplarily shows two powder X-ray diffractograms of a Sb 2 Te 3 sample before and after processing.
Variation of the different alkyl-chain lengths of symmetric imidazolium-based ILs C n C n imBr. Since the influence of the alkyl chain length was observed for both the unsymmetrically and symmetrically substituted imidazolium derivatives, detailed transport characterization was performed with the nanoparticles obtained from the symmetrically substituted ILs. Fig. 9 shows the cross-section SEM images of the three pellets as obtained from samples synthesized in C 4 C 4 imBr (C 4 ) (Fig. 9A), C 6 C 6 imBr (C 6 ) (Fig. 9B) and C 8 C 8 imBr (C 8 ) (Fig. 9C), respectively. Distinct differences between the characteristic microstructures of the three samples after the cold-pressing compaction can be seen, which can be directly correlated to the morphology of the Sb 2 Te 3 nanoparticles from the IL.
In C 4 C 4 imBr the formation of sandroses (Fig. 7A) prevailed, and this microstructure is maintained in the cold-pressed samples, where individual spheres can be made out (Fig. 9A). In C 6 C 6 imBr random three-dimensional aggregations of these particles occurred (Fig. 7B), and this also shows in the compacted sample (Fig. 9B). In C 8 C 8 imBr the formation of large, extended nanosheets took place (Fig. 7C), and the SEM image of the pellet cross section still shows individual sheets that are stacked in parallel (Fig. 9C). The microstructure evoked by the individual particle morphology and aggregation directly impacts the densities of the compacted samples. The densities of the samples are 5.3 g cm −3 (82%) for C 4 C 4 imBr, 5.7 g cm −3 (86%) for C 6 C 6 imBr, and 4.9 g cm −3 (75%) for C 8 C 8 imBr, respectively. Fig. 10 shows the thermoelectric transport properties of the three samples between room temperature and 573 K. Table 1 summarizes the thermoelectric transport data of these three pellets at room temperature. The Seebeck coefficients range from 140 µV K −1 to 180 µV K −1 . The decomposition of 1 was shown to produce Sb 2 Te 3 nanoparticles with a highly stoichiometric composition and a low anti-site defect concentration, resulting in high values of the Seebeck coefficient, as observed in our previous study. 12a This is observed here, too.
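As a quick plausibility check, the relative densities quoted above follow directly from the measured pellet densities once a theoretical density is assumed; the minimal sketch below uses ~6.5 g cm −3 for Sb 2 Te 3 (an assumed literature value, not taken from this paper) and reproduces the reported percentages to within roughly 2%.

```python
# Relative density of the cold-pressed pellets: measured density divided by
# an assumed theoretical (single-crystal) density of Sb2Te3.
THEORETICAL_DENSITY = 6.5  # g cm^-3, assumed literature value

pellets = {"C4C4imBr": 5.3, "C6C6imBr": 5.7, "C8C8imBr": 4.9}  # g cm^-3

for name, rho in pellets.items():
    rel = rho / THEORETICAL_DENSITY
    print(f"{name}: {rho} g cm^-3 -> {rel:.0%} relative density")
```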
From the electrical conductivity and the Hall carrier concentration, we obtained the Hall mobility of the charge carriers, µ H , which was corrected for the electrically active volume of the material (Fig. 11).
For this, the value was normalized to the relative density of the samples. 25 With this correction for the density, Hall mobilities µ H of 64 cm 2 V −1 s −1 (C 4 C 4 imBr), 41 cm 2 V −1 s −1 (C 6 C 6 imBr) and 39 cm 2 V −1 s −1 (C 8 C 8 imBr) were found. There is no evident trend of the Hall mobility or the electrical conductivity with respect to the varying densities of the three samples; instead, the mobility decreases with increasing chain length. Due to the nanostructure of the samples, the thermal conductivity could be reduced from the single-crystal values of 5.6 W m −1 K −1 parallel (∥) and 1.6 W m −1 K −1 perpendicular (⊥) to the c-direction 26 to the range of 0.49-0.72 W m −1 K −1 , comparable with values previously reported by Mehta et al. for Sb 2 Te 3 nanoparticles. 27 At 490 K the thermal conductivity exhibits a minimum in all samples; to separate the electronic contribution, the Wiedemann-Franz relation 28 was used, considering a temperature-independent Lorenz number L. Fig. 12 clearly shows that the lattice thermal conductivity still increases above this point. This is most likely caused by the bipolar effect known to appear in this temperature range for semiconductors with a small band gap (Sb 2 Te 3 : band gap E g = 0.28 eV 29 ): at a certain temperature electron-hole pairs are generated and an additional contribution to the thermal conductivity κ arises from the bipolar thermal conductivity κ b . While the thermal conductivity data point towards a contribution of the bipolar effect, in principle this effect should also influence the other transport coefficients, i.e. decrease the Seebeck coefficient and increase the electrical conductivity due to minority carriers, which is not seen here. The most promising combination of transport properties is found for the samples synthesized in C 4 C 4 imBr, which exhibited the highest charge carrier concentration, the highest charge carrier mobility and the lowest lattice thermal conductivity. The figure of merit zT reaches a maximum value of 0.72 at 550 K for the C 4 C 4 imBr sample (Fig. 10).
Fig. 9 Cross-section SEM images of the three cold-pressed Sb 2 Te 3 bulk samples synthesized in the ionic liquids C 4 C 4 imBr (C 4 ) (A), C 6 C 6 imBr (C 6 ) (B) and C 8 C 8 imBr (C 8 ) (C); insets: SEM micrographs of the respective non-pressed samples.
Fig. 10 Thermoelectric transport properties of three Sb 2 Te 3 bulk samples synthesized in C 4 C 4 imBr, C 6 C 6 imBr and C 8 C 8 imBr.
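For orientation, the figure of merit and the lattice contribution to the thermal conductivity discussed above follow from the standard relations zT = α 2 σT/κ and κ L = κ − LσT (Wiedemann-Franz). The sketch below is illustrative only: the Lorenz number L 0 and the numerical inputs are assumptions of the right order of magnitude, not the measured data of this work.

```python
# Figure of merit zT = alpha^2 * sigma * T / kappa, and separation of the
# lattice thermal conductivity via the Wiedemann-Franz relation
# kappa_e = L * sigma * T with a temperature-independent Lorenz number.
L0 = 2.44e-8  # W Ohm K^-2, degenerate (Sommerfeld) value, assumed

def figure_of_merit(alpha, sigma, kappa, T):
    """alpha in V/K, sigma in S/m, kappa in W/(m K), T in K."""
    return alpha**2 * sigma * T / kappa

def lattice_thermal_conductivity(kappa, sigma, T, L=L0):
    """kappa_lattice = kappa_total - kappa_electronic."""
    return kappa - L * sigma * T

# Illustrative values of the order reported in this work:
alpha = 170e-6      # Seebeck coefficient in V/K
sigma = 264 * 100   # 264 S/cm converted to S/m
kappa = 0.6         # W/(m K)
T = 550.0           # K
print(f"zT ~ {figure_of_merit(alpha, sigma, kappa, T):.2f}")
print(f"kappa_L ~ {lattice_thermal_conductivity(kappa, sigma, T):.2f} W/(m K)")
```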
From this we conclude that the formation of individual sandrose nanostructures of Sb 2 Te 3 , which can be maintained in the compacted samples, gives the best combination of properties leading to high zT values. Thus, short-chain IL cations are beneficial for this. To check this hypothesis, the thermoelectric transport properties of samples obtained from ILs with short-chain imidazolium cations (C 4 mim) in combination with various anions that gave sandrose-like nanostructures were investigated.
Role of the anion of 1-butyl-3-methyl-imidazolium based ILs in the thermoelectric properties. In order to investigate the role of the anion in the thermoelectric properties of the resulting Sb 2 Te 3 nanoparticles in more detail, four Sb 2 Te 3 samples were prepared under analogous reaction conditions in C 4 mimCl (A), C 4 mimBr (B), C 4 mimI (C) and C 4 mimNTf 2 (D), respectively, and then compacted to Sb 2 Te 3 pellets using the same protocol. Cross-section images (Fig. 13) of the resulting cold-pressed pellets clearly demonstrate that the agglomerate structure observed in the SEM images of the nanoparticles is preserved within the microstructure of the pellets (Fig. 13A-D). Sb 2 Te 3 synthesized in C 4 mimNTf 2 shows only a few agglomerates in the microstructure (Fig. 13D), whilst in the sample obtained from C 4 mimCl the rose-like structures are still preserved. Table 2 summarizes the thermoelectric transport properties of the four samples synthesized in C 4 mimCl, C 4 mimBr, C 4 mimI and C 4 mimNTf 2 at 300 K. The densities of the compressed pellets are 5.1 g cm −3 (79%, C 4 mimCl), 5.2 g cm −3 (80%, C 4 mimBr), 5.5 g cm −3 (85%, C 4 mimI) and 5.3 g cm −3 (82%, C 4 mimNTf 2 ), respectively.
In Fig. 14 the thermoelectric transport properties are presented. The Seebeck coefficient of all samples ranges from 130 to 170 µV K −1 in the temperature range between room temperature and 573 K, which is comparable to the values of the samples discussed before. The highest electrical conductivity of 870 S cm −1 at room temperature was found for the sample synthesized in C 4 mimI, whereas those prepared in C 4 mimNTf 2 (397 S cm −1 ), C 4 mimCl (293 S cm −1 ) and C 4 mimBr (264 S cm −1 ) showed significantly lower values. The thermal conductivity is 1.1 W m −1 K −1 for the sample obtained from C 4 mimNTf 2 , 0.89 W m −1 K −1 for that from C 4 mimI, 0.72 W m −1 K −1 for that from C 4 mimBr and 0.56 W m −1 K −1 for that from C 4 mimCl. The electrical and thermal conductivities show a dependence on the density of the samples. The highest values of σ and κ are measured for the samples with densities of 85% (C 4 mimI) and 82% (C 4 mimNTf 2 ) and are smaller for the Sb 2 Te 3 pellets with 80% (C 4 mimBr) and 79% (C 4 mimCl). The highest zT value of 0.93 at 260°C is reached for C 4 mimI; for the other samples the zT values lie between 0.35 (C 4 mimBr) and 0.44 (C 4 mimNTf 2 , C 4 mimCl).
By correlating the thermal transport data with the particle morphologies it is evident that the concept of controlling the thermal conductivity through phonon scattering at phase boundaries by a nanotemplating effect of the IL has been successful: in the case where small, individual nano-sandroses could be obtained by using an ionic liquid of high polarity with a strongly coordinating anion, the compacted material exhibited the lowest thermal conductivity. The Hall mobility shows a clear trend for the samples synthesized in the ionic liquids C 4 mimCl, C 4 mimBr, and C 4 mimI, with µ H increasing from 24 cm 2 V −1 s −1 to 112 cm 2 V −1 s −1 . This correlates with the trend found in the morphology of the respective nanoparticles, which show an increasing thickness of the nanoparticle platelets with increasing atomic number of the halide anion (compare Fig. 4). It is assumed that the nanoparticle platelets orient, at least partly, during the compaction process perpendicular to the pressing direction. All transport properties are characterized in the pressing direction of the pellets. Therefore, with increasing thickness of the platelets, there are fewer scattering events for both electrons and phonons, and consequently the highest values of the Hall mobility and also the thermal conductivity are reached. However, looking at the ionic radii of the IL anions used for the synthesis, it becomes clear that the ionic radius of an I − ion (220 pm) is very similar to that of a Te 2− ion (221 pm). 30 Thus, it appears possible that small amounts of I − replace Te 2− in the structure of Sb 2 Te 3 , which could also influence the electronic transport properties. More theoretical and experimental evidence will be needed to further substantiate this hypothesis.
Comparison of zT values. In the following paragraph we compare our results with the state of the art reported in the literature. Table 3 shows the zT values of nanostructured Sb 2 Te 3 samples for different synthesis routes.
Within this comparison, Snyder and Toberer 31 report zT data by Marlow Industries that reach zT ≅ 0.8 at 400 K for Sb 2 Te 3 -based alloys (not further specified). By co-doping Sb 2 Te 3 with sulphur, Mehta et al. demonstrated zT ≅ 0.92 at 400 K. 27 Phase-pure Sb 2 Te 3 , without any alloying or co-doping, was investigated by Heimann et al. in an earlier work of this group. 12a Here, the microwave-assisted decomposition of the SSP (Et 2 Sb) 2 Te in ionic liquids enhanced the zT value (Fig. 15).
Conclusions
The morphology of Sb 2 Te 3 nanoparticles synthesized in 1-alkyl-3-methylimidazolium- and 1,3-dialkylimidazolium-based ILs strongly depends on the chain length of the alkyl group of the IL cation (Fig. 16) and the Lewis basicity of the IL anion (Fig. 17). An increasing chain length resulted in better solubility of the single source precursor (Et 2 Sb) 2 Te, which promoted the formation of less aggregated nanoparticles. In addition, the role of the anion is mainly attributed to its basicity and its capability to bind to the growing nanoparticle surface. Stronger bases were found to block the surface more effectively, resulting in the formation of thin Sb 2 Te 3 nanoplates, while the formation of thicker nanoparticles was observed with decreasing basicity. As a consequence, the thermoelectric properties of the resulting Sb 2 Te 3 nanoplates differed strongly. Identification of the distinctive roles of the IL anion and cation may help to further improve the figure of merit of these types of materials in the near future.
Materials and methods
Microwave synthesis of Sb 2 Te 3 nanoparticles. 1.18 g (2.42 mmol) of 1 was added to 13.7 mmol of the respective ionic liquid. The room-temperature-solid ILs C 4 mimCl, C x mimBr (x = 3-5) and C x C x imBr (x = 4, 6, 8) were melted by heating to 90°C before adding 1. The reaction mixture was stirred for 5 min and heated in a laboratory microwave oven (Discover, CEM) for 30 s at 100°C, subsequently for 5 s at 150°C and finally for 5 min at 170°C. The heating was performed with a maximum power of 100 W until the desired temperature was reached, which was then kept with a power of 5-12 W. The reaction container was cooled with compressed air at a pressure of 100 kPa. The resulting colloidal solutions were centrifuged (2000 rpm), washed with 10 mL of acetonitrile (7×) and dried at ambient temperature under dynamic vacuum.
Material characterization
Electron microscopy. The particle size and morphology as well as the elemental composition of the nanoparticles and of cross-section samples of the Sb 2 Te 3 pellets, which were prepared using a Jeol cross-section polisher (IB-09010CP), were analysed by scanning electron microscopy (SEM) using a Jeol JSM 6510 microscope equipped with a Bruker Quantax 400 unit (EDX).
Powder X-ray analysis. PXRD patterns were collected on powder-filled Lindemann capillaries on a Huber 670 powder diffractometer with Mo Kα radiation (λ = 0.71073 Å, 40 kV and 40 mA) or a Bruker D8 Advance powder diffractometer with Cu Kα radiation (λ = 1.5418 Å, 40 kV and 40 mA) using a Si single crystal as a sample holder to minimize scattering. For better homogenization, the dried powder samples were re-dispersed in ethanol on the Si surface and investigated in the range from 10 to 90° 2θ with a step size of 0.01° 2θ (counting time 0.6 s). Rietveld refinements were performed with the program package TOPAS 4.2 (Bruker) to determine the lattice parameters and the average crystallite size using the Scherrer equation. 26 The structure model of Sb 2 Te 3 (#192780) from the ICSD database was used. For each Rietveld refinement, the instrumental correction as determined with a standard powder sample of LaB 6 from NIST (National Institute of Standards and Technology) as the standard reference material (SRM 660b; a(LaB 6 ) = 4.15689 Å) was taken into account.
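The Scherrer equation used in the refinement relates the crystallite size to the peak broadening as τ = Kλ/(β cos θ). The following minimal sketch illustrates this relation for Cu Kα radiation; the shape factor K = 0.9 and the example peak parameters are illustrative assumptions, since the actual refinement is performed internally by TOPAS.

```python
import math

# Scherrer estimate of the average crystallite size from peak broadening,
# tau = K * lambda / (beta * cos(theta)), here for Cu K-alpha radiation.
def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15418, K=0.9):
    beta = math.radians(fwhm_deg)            # peak breadth (FWHM) in rad
    theta = math.radians(two_theta_deg / 2)  # Bragg angle
    return K * wavelength_nm / (beta * math.cos(theta))  # size in nm

# e.g. a reflection at 2theta = 28.2 deg with 0.25 deg FWHM (made-up values):
print(f"crystallite size ~ {scherrer_size(0.25, 28.2):.0f} nm")
```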
Photoelectron spectroscopy. The XPS measurements were performed on a VersaProbe II by Ulvac-Phi. Monochromatic Al Kα light with hν = 1486.6 eV was used and the electron emission angle was 45°. All spectra were referenced to the position of the main carbon peak at 284.8 eV binding energy. The Sb 3d signal was fitted by first fitting the 3d 3/2 peaks and constraining the position and intensity of the 3d 5/2 components to these values. This was done in order to estimate the O 1s signal, which overlaps with the Sb 3d 5/2 peaks. The samples were transported under an inert gas atmosphere to the XPS machine and were exposed to air for roughly 3 minutes prior to their insertion into the vacuum.
Thermoelectric properties. The nanocrystalline Sb 2 Te 3 powder was compressed to a pellet with a diameter of 5 mm by applying a pressure of 815 MPa for 30 min. A stainless-steel pressing tool (Atlas Power 25T, SPECAC) was used. The density of the pellets was determined from the mass-to-volume ratio. Annealing was performed at 573 K in vacuum (10 −5 mbar) with a ramp of 5 K min −1 and a dwell time of one hour. All thermoelectric coefficients were measured in the z-direction, corresponding to the pressing direction of the pellet, in a temperature range from room temperature to 573 K. The Seebeck coefficient α and the electrical conductivity σ were measured using a commercial ZEM-3 device provided by Ulvac Technologies, Inc. The thermal conductivity κ was calculated as κ = D T ρc p , with D T the thermal diffusivity, ρ the density and c p the heat capacity. The thermal diffusivity was measured with an LFA 457 Microflash from NETZSCH-Gerätebau GmbH. For the calculation, a literature value of the heat capacity was used. 40 Hall measurements were done at room temperature in the van der Pauw geometry with a Physical Property Measurement System (PPMS, DynaCool series) provided by Quantum Design, Inc. From the measured Hall coefficient R H , the Hall carrier concentration n H was estimated, which is assumed to be isotropic and temperature independent. The Hall mobility was derived from the relation σ = eµ H n H .
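The derived quantities described in this paragraph can be summarized in a few lines of code. The sketch below implements κ = D T ρc p , n H = 1/(eR H ) (single-carrier approximation), and µ H = σ/(en H ); all numerical inputs are illustrative placeholders, not measured values from this work.

```python
# Quantities derived from the raw measurements as described above.
E = 1.602e-19  # elementary charge in C

def thermal_conductivity(D_T, rho, c_p):
    """D_T in m^2/s, rho in kg/m^3, c_p in J/(kg K) -> kappa in W/(m K)."""
    return D_T * rho * c_p

def hall_carrier_concentration(R_H):
    """R_H in m^3/C -> n_H in m^-3 (single-carrier approximation)."""
    return 1.0 / (E * abs(R_H))

def hall_mobility(sigma, n_H):
    """sigma in S/m -> mu_H in m^2/(V s), from sigma = e * mu_H * n_H."""
    return sigma / (E * n_H)

kappa = thermal_conductivity(D_T=4.0e-7, rho=5300.0, c_p=200.0)
n_H = hall_carrier_concentration(R_H=3.0e-7)
mu_H = hall_mobility(sigma=2.6e4, n_H=n_H)
print(f"kappa ~ {kappa:.2f} W/(m K), n_H ~ {n_H:.2e} m^-3, "
      f"mu_H ~ {mu_H * 1e4:.0f} cm^2/(V s)")
```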
"Materials Science"
] |
DANIO RERIO (ACTINOPTERYGII: CYPRINIFORMES: CYPRINIDAE): A NEW RECORD FROM ANDAMAN ISLANDS, INDIA
Danio rerio (Hamilton, 1822) is reported herewith for the first time from the Andaman group of islands, which is a new addition to the freshwater fish fauna and also a significant insular record outside its known distribution range. A morphological description of D. rerio collected from North and Middle Andaman Islands and of topotypes from mainland India is provided, along with a molecular genetic comparison.
The Andaman and Nicobar Islands in the Bay of Bengal, stretching between 6°45′-13°45′N and 92°10′-94°15′E, consist of 572 islands. These islands are characterized by a rich diversity of flora and fauna with a high level of endemism (Rao et al. 2013). There have been only a few records of freshwater fishes in the faunistic studies concerning these islands (Day 1870, 1876, 1878, Annandale and Hora 1925, Mukerji 1935, Herre 1939, 1940, 1941, Koumans 1940, Starmühlner 1976, Rao et al. 2000, Palavai and Davidar 2009, Devi 2010, Rajan and Sreeraj 2013, Rajan et al. 2013, Rajan and Sreeraj 2014a, 2014b, 2014c, Arun Kumar et al. 2016). An attempt to document the freshwater fishes was initiated by CIARI (Central Island Agricultural Research Institute), Port Blair in 2015, which resulted in recording the occurrence of Danio rerio (Hamilton, 1822) from Middle and North Andaman. This paper describes this finding as a new record for the Andaman Islands.
Four specimens of Danio rerio (three from the Kalpong River and one from a marsh near Kalipur beach, Diglipur, North Andaman) and six specimens from Rangat, Middle Andaman were collected (Fig. 1). Specimens were preserved in 90% ethanol. All were compared with the topotypes collected from Kolkata, India by Soutrik Ghosh. Counts and measurements are based on Hubbs and Lagler (1964) and expressed as percentages of standard length (SL) and head length (HL). Numbers in parentheses after a count denote the frequency of that count. DNA was extracted from one D. rerio specimen from North Andaman and two specimens from Middle Andaman using the standard protocol of Bruce et al. (1993). The partial mitochondrial 16S ribosomal RNA gene (16S) (Palumbi et al. 2002) was sequenced on an ABI 3500 DNA analyser. The edited and trimmed sequences of D. rerio were submitted to the NCBI database (KY945239, KY945240, and KY242364). The homology of the generated sequences was analysed using the Basic Local Alignment Search Tool (BLAST) program of the National Centre for Biotechnology Information. Additional sequences of Danio spp. (AY788011, AY054970, AB741876, KT835295, KT624625-26, AY707452, and AY707455) were downloaded from the NCBI database to infer a phylogenetic tree. All the specimens examined were deposited in the museum of the National Bureau of Fish Genetic Resources (NBFGR), Cochin (CH), Kerala, India and in the freshwater fish collection of the Central Island Agricultural Research Institute, Port Blair (CIARI/FF).
North Andaman, India (13°14.40′N, 92°58.39′E), 1 specimen, 29.96 mm SL, Kalipur Marsh, Diglipur, North Andaman, Praveenraj and Raymond Jani Angel, 26 Oct 2015. NBFGR-CH-1180, 1 specimen, 29.96 mm SL, Kalipur Marsh, near Kalipur beach, Diglipur, North Andaman, India (13°13.52′N, 93°02.67′E), Praveenraj and Raymond Jani Angel, 26 Oct 2015. CIARI/FF-01, 6 specimens, 23.7-35.5 mm SL, Rangat, Middle Andaman, India (12°43.22′N, 92°53.11′E), Sailesh Kumar, 19 Apr 2016. Comparative material. Danio rerio (Fig. 2C): NBFGR-CH-1179, 6 specimens, 18.7-21.1 mm SL, Alipurduar, Kolkata, India, Soutrik Ghosh, 15 Nov 2016. Description. Body slender, laterally compressed, ventral portion more arched than dorsal. Head small and oval, with small mouth obliquely directed upward; lower jaw longer than upper jaw. Eyes moderate with circular pupil. Two pairs of barbels present, rostral pair longer than eye diameter and maxillary pair extending to pectoral-fin base. Dorsal fin oval; anal fin marginally truncate; caudal fin forked, its anterior and posterior edges oval; pectoral fin sharp at anterior region; pelvic fin small; lateral line incomplete. Coloration: body pale olivaceous with four metallic-blue lines running from head to caudal-fin base and three golden lines running parallel between the blue lines. Caudal and anal fins barred with four blue stripes interrupted by parallel yellow stripes. Dorsal fin yellowish, bordered with a faint blue to white stripe. Eyes silvery with melanophores scattered anteriorly. Comparative morphometric and meristic data of D. rerio specimens from the Middle and North Andaman and from mainland India (topotypes) are provided in Table 1. Molecular characterization. All the Danio rerio sequences generated under the accession numbers KY945239, KY945240 (Middle Andaman) and KY242364 (North Andaman) matched with 99%-100% identity the existing 16S rRNA sequences of D. rerio (GenBank: KT624624, KT624625, KT624626). A Maximum Likelihood analysis based on the Kimura 2-parameter model (Kimura 1980) was conducted using MEGA (Molecular Evolutionary Genetics Analysis) version 7 (Kumar et al. 2016) to provide a phylogenetic tree representing the pattern of divergences (Fig. 3). Distribution. Danio rerio has a wide distribution covering northern Myanmar, Nepal, Bangladesh, the Ganges and Brahmaputra river basins in north-eastern India, and southern India (Barman 1991, Talwar and Jhingran 1991, Spence et al. 2006). Remarks. Danio rerio was described as "Cyprinus rerio" from the Kosi River, a tributary of the Ganges River in Bengal, India, and was placed under the genus "Danio" as a division of "Cyprinus" (Hamilton 1822). After the proposal of the genus "Brachydanio" by Weber and de Beaufort (1916), the smaller species with seven branched dorsal rays and an incomplete or absent lateral line were placed under Brachydanio (including D. rerio) and the larger-bodied forms under Danio (see Chu 1981). Chu (1981) synonymised Brachydanio with Danio because of overlapping characters. Subsequently, Barman (1991) lumped Danio, Brachydanio and Devario in a single genus, "Danio". After a thorough phylogenetic and morphological analysis by Fang (2003), the genera Danio and Devario were found to be distinct, the genus Brachydanio was synonymised with Danio, and the name D. rerio came into usage.
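For readers who want to reproduce the distance calculation underlying the tree, a minimal sketch of the Kimura 2-parameter distance (the substitution model used for the ML analysis in MEGA 7) is given below; the two short sequences are made-up examples, not the deposited 16S sequences.

```python
import math

# Kimura 2-parameter distance between two aligned sequences:
# d = -1/2 * ln[(1 - 2P - Q) * sqrt(1 - 2Q)], where P and Q are the
# observed transition and transversion fractions, respectively.
PURINES = {"A", "G"}

def k2p_distance(seq1, seq2):
    pairs = [(a, b) for a, b in zip(seq1.upper(), seq2.upper())
             if a in "ACGT" and b in "ACGT"]
    n = len(pairs)
    transitions = sum(1 for a, b in pairs
                      if a != b and ((a in PURINES) == (b in PURINES)))
    transversions = sum(1 for a, b in pairs
                        if a != b and ((a in PURINES) != (b in PURINES)))
    P, Q = transitions / n, transversions / n
    return -0.5 * math.log((1 - 2 * P - Q) * math.sqrt(1 - 2 * Q))

# Two illustrative (made-up) fragments:
print(f"K2P distance: {k2p_distance('ACGTTGCAAGGCTA', 'ACGCTGCAAGACTA'):.3f}")
```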
The occurrence of D. rerio in the Andaman Islands is an interesting geographic record from the insular freshwaters. Recently, the distributional record of D. rerio from peninsular India was reported (Whiteley et al. 2011), and the natural history of various populations from peninsular India and the northern, eastern and north-eastern parts of India was studied (Arunachalam et al. 2013).
Hence, the distributional records are from the Cauvery River basin, the Krishna River basin and the Sharavati River basin in peninsular India, tributaries of the Ganges River basin, the Mahanadi River basin, and tributaries of the Brahmaputra River basin. In the present study, no significant variations in the meristic and morphometric characters were observed between D. rerio specimens from the Andaman Islands, the topotypes, and the data from Day (1876, 1878) and Barman (1991). Overall, the morphological and molecular data confirmed the identity of the taxon. However, the new distributional record of D. rerio from the Andaman Islands is surprisingly far from its supposed native range, which suggests an accidental introduction through freshwater aquaculture. Alien fish introductions to the Andaman and Nicobar Islands are believed to have taken place during British rule (Rajan and Sreeraj 2014c). The history of carp introduction in the Andaman Islands is known from the records of Annandale and Hora (1925) and Mukerji (1935).
Five species of air-breathing fishes are well established in the native waters; three species of Gangetic carps, one species of barb, and four species of exotic carps are being cultured in ponds (Sen 1975, Mohanraj et al. 1999, Palavai and Davidar 2009, Rajan and Sreeraj 2013, Rajan and Sreeraj 2014c). However, there are no reports of introduced carps from native waters. The North and Middle Andaman islands have a large population of local Bengali settlers involved in the aquaculture of Gangetic and exotic Chinese carps. To a large extent, carp spawn is bought from mainland Kolkata, and there is an ample chance of weed-fish introduction as a contaminant coming along with it. Hence, the presently reported findings of D. rerio from the Andaman Islands may be the result of a deliberate release or an escape from fish farms. Danio rerio is generally omnivorous, and its natural diet consists of zooplankton, insects, phytoplankton, filamentous algae, invertebrate eggs, arachnids and detritus (McClure et al. 2006, Spence et al. 2007). The establishment of a D. rerio population in the natural waters of Middle and North Andaman may pose a serious ecological threat in the form of competition for food and space with the native fishes. Comprehensive studies might impart more knowledge on their behaviour and feeding ecology in the insular freshwaters of the Andaman Islands.
Table 1
Morphometric and meristic characters of Danio rerio from Andaman Islands and topotypes. n = number of specimens, SD = standard deviation, SL = standard length, HL = head length; numbers in parentheses after a count denote the frequency of that count.
"Biology"
] |
Differential Effects of Oleic and Palmitic Acids on Lipid Droplet-Mitochondria Interaction in the Hepatic Cell Line HepG2
Fatty acid overload, either of the saturated palmitic acid (PA) or the unsaturated oleic acid (OA), causes triglyceride accumulation in specialized organelles termed lipid droplets (LDs). However, only PA overload leads to liver damage mediated by mitochondrial dysfunction. Whether these divergent outcomes stem from differential effects of PA and OA on the joint dynamics of LDs and mitochondria remains to be uncovered. Here, we contrast how both fatty acids impact the morphology of and interaction between both organelles, as well as mitochondrial bioenergetics, in HepG2 cells. Using confocal microscopy, we showed that short-term (2-24 h) OA overload promotes more and bigger LD accumulation than PA. Oxygen polarography indicated that both treatments stimulated mitochondrial respiration; however, OA favored an overall build-up of the mitochondrial potential, and PA evoked mitochondrial fragmentation, concomitant with an ATP-oriented metabolism. Even though PA induced a lesser increase in LD-mitochondria proximity than OA, the association of those LDs with highly active mitochondria suggests that they interact mainly to fuel fatty acid oxidation and ATP synthesis (that is, metabolically "active" LDs). On the contrary, OA overload seemingly stimulated LD-mitochondria interaction mainly for LD growth (thus metabolically "passive" LDs). In sum, these differences point out that OA readily accumulates in LDs, likely reducing their toxicity, while PA preferably stimulates mitochondrial oxidative metabolism, which may contribute to liver damage progression.
INTRODUCTION
Non-alcoholic fatty liver disease (NAFLD) is the most common liver disease, affecting 15-30% of the world population. It is directly associated with obesity, insulin resistance, and cardiovascular diseases (1) and is characterized by excessive fat accumulation in the liver (>5% in hepatocytes) (2). Therefore, its development is tightly related to fatty acid (FA) intake. In this regard, evidence supports the idea that saturated FA predispose to hepatic lipid accumulation (termed steatosis), while unsaturated FA could be protective (3).
The most abundant saturated and monounsaturated FA in the Western diet are palmitic (PA, C16:0) and oleic acid (OA, C18:1 n-9) (4). During intestinal absorption, they are esterified into triglycerides (TG) and then delivered to the liver, which subsequently distributes them to other organs. However, excessive levels of either PA or OA lead to steatosis, but with distinct cellular outcomes (5,6). On the one hand, PA treatment causes liver lipotoxicity via oxidative stress, resulting in endoplasmic reticulum (ER) and mitochondrial dysfunction, and ultimately cell demise (5-7). In contrast, primary cultures of mouse hepatocytes treated with OA display neither increased generation of oxygen radicals nor signs of mitochondrial dysfunction or apoptosis. Moreover, in the hepatocyte-derived cell line HepG2, OA even prevented PA-induced liver lipotoxicity (6,7).
Acting as protection against lipotoxicity, lipid droplets (LD) serve as TG deposits, thereby preventing fat accumulation in other cell compartments (8). Structurally, they comprise a nucleus of neutral lipids, mostly TG, surrounded by a monolayer of phospholipids and specific coating proteins, such as those belonging to the perilipin (PLIN) family. LDs are highly dynamic organelles, varying in number and size according to storage requirements. Conversely, LDs also hydrolyze TG, thus releasing free fatty acids, which serve as energy sources through their degradation (9). Fatty acid degradation takes place in mitochondria, which produce ATP through oxygen-driven oxidation (10). Like LDs, mitochondria vary in number and size to cope with varying nutritional scenarios, such as fasting and physical activity (11). In this sense, studies suggest that smaller mitochondria are more oxidative and thus synthesize ATP more efficiently (8).
Apart from individual dynamics, mounting evidence shows that mitochondria and LDs physically interact, especially in tissues with a high capacity for fatty acid oxidation and storage, such as the liver, heart, brown adipose tissue, and skeletal muscle (8,12). For instance, PLIN5 mediates LD-mitochondria interaction in the mouse liver cell line AML12 (13) and in cardiac tissue of mice, resulting in LD expansion and a decrease in fatty acid oxidation (14). Likewise, in adipose tissue, PLIN2 binds the mitochondrial protein MIGA2, thus bridging mitochondria and LDs (15). Furthermore, PLIN5 and PLIN2 form a complex, which favors mitochondrial recruitment to the surface of LDs in cardiomyocytes (14). In adipose tissue, PLIN1 reportedly promotes LD interaction with mitochondria by binding to the proteins MFN2 and OPA1 at the mitochondrial surface (16,17). Interestingly, Benador et al. attained similar results regarding LD-mitochondria association in brown adipose tissue (8). In hepatocytes, the physiological role of the LD-mitochondria interaction is yet to be unveiled. Also, despite being the most abundant dietary fatty acids, little is known about the differences between the effects of PA and OA on LD morphology and contact with mitochondria in hepatocytes (Figure 1A). Therefore, the objective of this work was to contrast how PA and OA impact LD-mitochondria dynamics and mitochondrial bioenergetics, and how these processes are associated with the development of hepatic steatosis.
ATP Measurements
Intracellular ATP content was determined using the luciferin/luciferase-based ATP detection kit CellTiter-Glo Luminescent Cell Viability Assay (Promega) following the manufacturer's instructions, as described in (19). Briefly, HepG2 cells were cultured in 96-well plates and washed 3 times with PBS before incubation with the reagent. Sample luminescence was quantified in a Synergy 2 microplate reader (BioTek Instruments). Data were normalized as fold change over control. Treatment with oligomycin (5 µg/mL) for 3 h was used as a negative control.
Oxygen Consumption
Cells were seeded in 60 mm Petri dishes at 80% confluence and treated according to the experiment. After the different treatments, measurements were performed as previously described (20-22). In brief, cells were washed with PBS, trypsinized for 3 min, and centrifuged at 200 × g for 5 min. Then, the cells were resuspended in PBS and placed in the chamber of a Clark electrode (Oxygraph Plus, Hansatech). Basal respiration, proton leak or ATP-unlinked respiration (oligomycin, 400 µM), and uncoupled respiration (FCCP, 20 µM) were measured sequentially for 3 min. Data obtained were standardized to basal control respiration.
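A minimal sketch of how the sequential Clark-electrode readings translate into the usual derived respiration parameters is given below. The decomposition (ATP-linked = basal − oligomycin, spare capacity = FCCP − basal) is the standard convention; the parameter names and numbers are ours, not the authors'.

```python
# Derived respiration parameters from oxygen-consumption readings,
# standardized to the basal respiration of the control condition.
def respiration_summary(basal, oligomycin, fccp, control_basal):
    atp_linked = basal - oligomycin   # respiration driving ATP synthesis
    proton_leak = oligomycin          # ATP-synthesis-independent respiration
    spare_capacity = fccp - basal     # reserve up to the uncoupled maximum
    return {name: value / control_basal for name, value in [
        ("basal", basal), ("ATP-linked", atp_linked),
        ("proton leak", proton_leak), ("spare capacity", spare_capacity)]}

# Illustrative OCR values (arbitrary units):
print(respiration_summary(basal=120.0, oligomycin=35.0, fccp=210.0,
                          control_basal=100.0))
```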
Microscopy
Cells were plated on 12 mm Petri dishes and treated according to the experiment. To label LDs and mitochondria, cells were stained with BODIPY 493/503 (2 µM; D3922, Invitrogen) and MitoTracker Orange (400 nM; M7510, Invitrogen) for 25 min at 37 °C. Then, the cells were fixed with 4% paraformaldehyde and 0.01% Hoechst in PBS and mounted in Dako Fluorescence Mounting Medium (S3023, Dako-Agilent). Fixed cells were imaged using a Nikon C2 Plus-SiR confocal microscope. 6-20 cells were registered in each of 3 independent experiments.
Image Analysis
Acquired images were deconvoluted, background-subtracted, thresholded, and analyzed with ImageJ software (NIH). The number and individual volume of LDs and mitochondria were quantified using the 3D Object Counter plugin, as previously described (20-23). LD-mitochondria colocalization was determined within one focal plane using the JACoP plugin (20,22,23). The mitochondrial potential was defined in relation to MitoTracker Orange fluorescence intensity and was analyzed within a single plane at the cell equator with the Analyze Particles function (20,22). The mitochondrial potential of LD-bound and non-LD-bound mitochondria was determined by constructing a compartment consisting of 10 z-planes at the sites where mitochondria colocalize with LDs. Then, the intersected or excluded mitochondrial fluorescence was quantified for LD-bound and non-LD-bound mitochondria, respectively. Image intersections were obtained using the Image Calculator command of ImageJ ("AND" operator) (22).
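The thresholded Manders' coefficients computed by JACoP can be expressed compactly as follows; this is a schematic re-implementation with arbitrary thresholds and random placeholder images, not the plugin's actual code.

```python
import numpy as np

# Thresholded Manders' coefficients between the LD (green, BODIPY) and
# mitochondria (red, MitoTracker) channels within one focal plane.
def manders_coefficients(green, red, thr_g, thr_r):
    g, r = np.asarray(green, float), np.asarray(red, float)
    g_mask, r_mask = g > thr_g, r > thr_r
    coloc = g_mask & r_mask
    m1 = g[coloc].sum() / g[g_mask].sum()  # fraction of LD signal over mitochondria
    m2 = r[coloc].sum() / r[r_mask].sum()  # fraction of mito signal over LDs
    return m1, m2

rng = np.random.default_rng(0)
green, red = rng.random((64, 64)), rng.random((64, 64))  # placeholder images
print(manders_coefficients(green, red, thr_g=0.5, thr_r=0.5))
```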
Statistical Analysis
All statistical analyses were performed using GraphPad Prism software, version 6 (San Diego, CA, USA). Data are expressed as mean ± SEM of at least three independent experiments. Data were analyzed by one-way ANOVA, and, when appropriate, comparisons between groups were performed using Tukey's or Dunnett's post-hoc tests. A two-tailed Pearson's coefficient was used for correlation analysis. Differences were considered significant at P < 0.05 (that is, a confidence level of 95%, α = 0.05).
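A minimal sketch of this statistical pipeline (one-way ANOVA, Tukey's HSD post-hoc test, and a two-tailed Pearson correlation) using SciPy rather than GraphPad Prism is shown below; the three groups are random placeholder data, not the experimental measurements.

```python
import numpy as np
from scipy import stats

# Placeholder data for three treatment groups (e.g., BSA, OA, PA):
rng = np.random.default_rng(1)
bsa = rng.normal(1.0, 0.3, size=12)
oa = rng.normal(1.8, 0.3, size=12)
pa = rng.normal(1.3, 0.3, size=12)

# One-way ANOVA, followed by Tukey's HSD when the omnibus test is significant.
f_stat, p_anova = stats.f_oneway(bsa, oa, pa)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3g}")
if p_anova < 0.05:
    print(stats.tukey_hsd(bsa, oa, pa))  # pairwise group comparisons

# Two-tailed Pearson correlation (e.g., LD size vs. mitochondrial signal).
r, p_corr = stats.pearsonr(oa, pa)
print(f"Pearson r = {r:.2f}, p = {p_corr:.3g}")
```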
Oleic and Palmitic Acids Have Differential Effects on LD and Mitochondrial Dynamics
To evaluate whether OA or PA affects the morphology of LDs, HepG2 hepatocytes were treated with OA or PA for 0-24 h. Whereas OA (Figure 1B) stimulated the appearance of big and bright droplets in hepatocytes, mainly in the central region of the HepG2 cells, PA triggered the emergence of evenly distributed smaller droplets. To quantify this effect, LD morphology was also assessed according to the number and volume of individual isolated elements through 3D reconstitution of confocal stacks (20-23) and the total fluorescence of BODIPY. Compared to BSA, OA significantly increased the number of LDs per cell at all the times analyzed, reaching a ∼30-fold increase as early as 2 h after treatment (Figure 1C). Subsequently, OA also led to a gradual increase in the volume of the LDs, reaching ∼4- and ∼6-fold increases, compared to controls, after 18 and 24 h, respectively (Figure 1D). In terms of total BODIPY fluorescence, only OA triggered a significant increase in fluorescence after 6 h of treatment; moreover, this increase was maintained at the later times of 18 and 24 h (Figure 1E). These results agree with a rapid nucleation of new LDs, which later grow over time. On the other hand, treatment with PA led to a slower increase in the number of LDs per cell, which was significant (∼20-fold) at 6 and 18 h of treatment (Figure 1C). In contrast to OA, PA did not change the volume of the LDs, suggesting slow nucleation of new LDs without a growing phase (Figure 1D).
We next evaluated mitochondrial morphology in HepG2 hepatocytes using the mitochondria-specific MitoTracker Orange probe (400 nM, 25 min) and 3D-reconstruction imaging (Figures 2A-C), as we have previously reported (21,23). In contrast to the steep changes in LD morphology, OA only caused a slight but significant increase in the number of mitochondria per cell, concomitant with a decrease in their volume after 24 h, indicative of the induction of mitochondrial fission (Figures 2B,C). Meanwhile, PA treatment triggered a faster and more intense process of mitochondrial fragmentation, noticeable after 2 h, reaching a ∼2-fold increase in the number of mitochondria per cell and a significant ∼2-fold reduction in mitochondrial volume (Figures 2B,C). The magnitude of these differences reveals that the total size of the mitochondrial network is maintained, suggesting mainly changes in the mitochondrial fusion/fission equilibrium. Our results with PA are consistent with its reported effects on mitochondrial morphology in other cell types (24-26).
To evaluate whether the changes in organelle dynamics affect their physical coupling, we then assessed LD-mitochondria proximity, evaluated as Manders' colocalization coefficients between LDs and mitochondria (20,22,27) (Figures 2D,E). Both fatty acids increased LD-mitochondria proximity starting at 2 h. However, OA treatment induced a significantly higher increase at 2 h (>2-fold), which later decreased to values similar to PA (∼1.5-fold). Taken together, these results suggest that OA treatment triggers the early appearance of more and bigger LDs than PA, which are in closer contact with mitochondria in HepG2 hepatocytes.
Oleic and Palmitic Acids Evoke Different Profiles of LD-Mitochondria Interaction Proteins
PLINs are the main structural proteins of LDs and determine their structure, metabolism, and interaction with other organelles (28,29). While PLIN2 is ubiquitously expressed, PLIN5 is especially enriched in tissues with high levels of mitochondrial oxidation, such as the liver (29). PLIN5 reportedly regulates mitochondrial recruitment to LDs, thereby regulating mitochondrial metabolism (8,29,30). On the other hand, MFN2 not only promotes mitochondrial fusion and boosts mitochondrial metabolism (31,32), but also binds to PLIN1 to facilitate the LD-mitochondria interaction in adipose tissue (17,33). Due to their importance, we measured their relative abundance in our experimental model through Western blot analysis (Figure 3A). As shown in Figures 3A,B, PA triggered a significant acute increase in PLIN2 at 6 h of treatment, without altering MFN2 or PLIN5, compared to control. On the other hand, OA increased both PLIN5 and MFN2 (Figures 3A,C,D) at later times (18-24 h). These results suggest that OA promotes PLIN5-MFN2-mediated LD-mitochondria interaction, while PA favors the presence of PLIN2 at the surface of LDs.
Oleic and Palmitic Acids Differentially Modulate Mitochondrial Oxidative Function
Mitochondria act as oxidative centers for different metabolites, such as fatty acids, which ultimately fuel a chain of redox reactions driven by oxygen-mediated oxidation. The protein complexes that catalyze this redox process (termed the electron transport chain, ETC) pump protons across the inner mitochondrial membrane. This creates an electrochemical gradient, which entails the generation of a mitochondrial transmembrane potential (Δψm). The resulting proton-motive force drives ATP production by transporting the protons back across the mitochondrial inner membrane through the ATP synthase enzyme (23,34). Of note, this mechanism implies that ATP production dissipates the Δψm (Figure 4A). Thus, to characterize the differential effects of OA and PA treatments on mitochondrial metabolism, we analyzed three defining parameters of mitochondrial bioenergetics (34): Δψm, O 2 consumption, and ATP levels.
We first evaluated the Δψm by using the mitochondria-specific MitoTracker Orange potentiometric probe and confocal microscopy. Because PA treatment triggers mitochondrial fragmentation and a concomitant decrease in mitochondrial volume, to quantify the Δψm we evaluated both the mean mitochondrial fluorescence and the total fluorescence per cell of MitoTracker Orange. The first fluorescence parameter addresses the bioenergetic state of individual mitochondria, and the second parameter evaluates the metabolic state of the cell as a whole. Figures 4B,C show that both OA and PA treatments increased the mean and total fluorescence levels of MitoTracker Orange, starting at 2 h, indicating a boost in the Δψm. In the case of OA, the Δψm peaked at 6 h and remained high for all the analyzed times. On the other hand, PA treatment led to a subsequent decrease in the Δψm to baseline levels after 6 and 18 h, followed by a second rise in the Δψm, noticeable at 24 h, thus suggesting a two-phase response for this fatty acid.
Next, we assessed the O 2 consumption rate (OCR) as a measure of mitochondrial oxidative activity. Treatments with OA or PA increased basal OCR but with different increment profiles (Figure 4D). In the case of OA, the increment was slower, becoming significant at 6 h. PA triggered a faster increase in the OCR, as early as 2 h, which continued at 6 h (Figure 4D). We also measured OCR in the presence of oligomycin, an inhibitor of ATP synthesis, which reveals the amount of mitochondrial respiration not associated with ATP production (Figure 4D). Neither OA nor PA changed the OCR in the presence of oligomycin, thereby suggesting that neither treatment affects the mitochondrial coupling between respiration and ATP production. In other words, it appears that mitochondria remain similarly "efficient". Additionally, we also measured OCR in the presence of FCCP, which dissipates the proton gradient at the inner mitochondrial membrane (i.e., the Δψm). Given that mitochondrial respiration is a process that works against the proton gradient, the elimination of the Δψm allows respiration to reach its maximum levels. As with oligomycin, neither OA nor PA changed the magnitude of FCCP-induced OCR (Figure 4D), implying that mitochondria maintain a similar "maximal performance" across treatments. Taken together, these results suggest that the changes in basal OCR would not be due to differences in mitochondrial functional capacity (for example, increases in mitochondrial functional units or damage of existing ones), but rather to regulatory changes, such as differential substrate availability or upstream signaling cascades.
Similar to OCR, PA treatment triggered a significant increase in ATP production as soon as 2 h, which remained high at least until 6 h. However, OA treatment led to a slower ATP increase, which reached significance at 6 h (Figure 4E). As an operational indicator of the mitochondrial functional profiles, we used the relationship between ATP production and the Δψm (Figure 4F). At 2 h, the ATP/Δψm ratio remained relatively constant for both conditions. Nonetheless, at 6 h there was a divergence: the mitochondrial network from PA-treated cells appeared more ATP-production-oriented, while the network from OA-treated cells seemed to prefer Δψm build-up. Altogether, these results suggest that both treatments stimulate mitochondrial bioenergetics, but with different functional profiles.
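The operational indicator used here reduces to a simple ratio of fold changes. The sketch below illustrates it with made-up numbers that follow the qualitative pattern described above; they are not the measured values.

```python
# ATP/Δψm ratio from fold changes (each relative to the BSA control);
# ratios above 1 indicate an ATP-oriented network, below 1 a Δψm-oriented one.
def atp_psi_ratio(atp_fold, psi_fold):
    return atp_fold / psi_fold

for label, atp, psi in [("OA, 6 h", 1.3, 1.6), ("PA, 6 h", 1.6, 1.1)]:
    ratio = atp_psi_ratio(atp, psi)
    tendency = "ATP-oriented" if ratio > 1 else "Δψm-oriented"
    print(f"{label}: ATP/Δψm = {ratio:.2f} ({tendency})")
```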
Oleic and Palmitic Acids Stimulate Divergent Specialization of Mitochondrial Populations
In sum, both PA and OA induce distinct changes in mitochondrial morphology, LD-mitochondria interaction and its bridging proteins, and mitochondrial functionality. However, our previous studies have shown that the mitochondrial network and its interactions are highly heterogeneous (22,23). Thus, the observed changes probably involve a subpopulation of mitochondria as a sort of "specialization mechanism". Moreover, mounting evidence underscores an association between mitochondrial size and function, although in a cell type-dependent fashion (35). In brown adipocytes, which avidly oxidize FA, Benador et al. showed that LD-associated mitochondria are larger in size and mainly dedicated to ATP production and LD expansion, while smaller mitochondria associate mainly with FA oxidation (8). To address whether OA or PA replicate this behavior in our hepatocyte-derived cell line, we first compared the Δψm level of the LD-bound mitochondrial network with that of the rest of the mitochondrial network (hereafter termed "bulk mitochondria"). In Figures 5A,B, we show that although both OA and PA increase the Δψm of bulk mitochondria starting at 2 h, only OA has a steady effect. In contrast, PA again displays a biphasic response, which faded at 6 h, reappeared at 18 h, and finally disappeared after 24 h. In terms of LD-bound mitochondria, OA caused a minor accretion in their Δψm, which became significant after 24 h of treatment (Figure 5C). On the other hand, PA augmented the Δψm after 6 h, which faded at 18 h but reappeared at 24 h. Curiously, the Δψm increments in bulk mitochondria interleaved with those of the LD-bound mitochondria. Taken together, these results suggest that OA induces steep nucleation and gradual expansion of LDs, which strongly interact with mitochondria exhibiting a relatively "weak" Δψm. Contrarily, PA triggers the biogenesis of multiple but small LDs, which show a moderately increased interaction with mitochondria harboring a relatively "strong" Δψm, associated with higher OCR and ATP production.
Finally, to assess the association between mitochondrial bioenergetics and LD dynamics, we correlated the size of each individual LD with the Δψm of the associated mitochondria. We chose the 2 h time point because this is our earliest determination of LD nucleation, where the differences between OA and PA are most marked. In agreement with Figure 1, the histogram of Figure 5D shows that OA favors the emergence of larger LDs (cross-sectional area > 0.2 µm 2 ), while PA leads to the accumulation of smaller ones (< 0.2 µm 2 ). Then, we analyzed each LD from 18-20 cells in both conditions and correlated the size of each LD with the associated total MitoTracker Orange fluorescence (Figure 5E). That is, we evaluated how active the mitochondria associated with each LD were (in terms of the Δψm). We found that the different fatty acids promoted the appearance of distinct populations of LDs with respect to their size and the activity of the attached mitochondria. In either case, the "bulk LDs" consisted of units of smaller size (< 1 µm 2 ) associated with mitochondria with lower Δψm. As initially noted, OA led to the emergence of larger LDs, characterized by their colocalization with mitochondria with lower Δψm. On the contrary, PA treatment did not promote the enlargement of LDs but triggered the appearance of a subpopulation of small-sized LDs associated with mitochondria with higher Δψm. Altogether, these observations indicate that the unsaturated fatty acid OA promotes lipid accumulation in larger LDs proximal to mitochondria that build up a lower Δψm, and that the saturated fatty acid PA favors smaller LDs in close apposition to mitochondria with a higher Δψm.
DISCUSSION
Within the cell, FA have structural and functional roles and serve as an energy reservoir stored in the form of TG. The latter is essential to ensure a continuous FA supply, independent of external nutrient availability (36). However, high-fat diets associate with obesity and other related diseases (37), with several studies showing that the monounsaturated FA OA is less toxic than the saturated FA PA. Even more, OA can prevent PA-induced toxicity in hepatocytes (5-7), although the differential impact of each FA on LD-mitochondria dynamics and mitochondrial bioenergetics had not been explored. Here, we report that PA and OA elicited differential cellular responses in the hepatocyte cell line HepG2. While both fatty acids led to massive TG accumulation, OA promoted the formation of more and bigger LDs, which were in closer contact with mitochondria compared to PA. Instead, the latter triggered a fast and intense process of mitochondrial fragmentation associated with increased OCR and ATP levels. Interestingly, the large LDs promoted by OA treatment were proximal to mitochondria with a baseline Δψm ("passive LDs"). In contrast, PA promoted the formation of small-sized LDs proximal to mitochondria with an enhanced Δψm ("active LDs") (Figure 6). This feature may help explain the toxic effects of saturated FA, like PA, on hepatocytes.
The effect of OA promoting bigger and more abundant LDs compared to PA (Figure 1) (5) has also been observed in other cell types, such as pancreatic β-cells (38), chondrocytes (39), and H9C2 cardiomyoblasts (40). This greater capacity of OA to be esterified to TG and accumulated into LDs likely explains its reduced lipotoxicity compared to PA. Furthermore, OA reportedly ameliorates PA-induced toxicity in hepatocytes (5,7,41), although it is unclear whether OA promotion of PA redirection to TG is the primary protective mechanism (6). For instance, OA also induces the redistribution of ceramide synthases to LDs. These enzymes catalyze the storage of ceramide as acylceramide, thereby decreasing the toxic effect of ceramide accumulation (42).
LDs participate in controlling energy metabolism and thus communicate with other organelles, relying on regions of close contact, including mitochondria, ER, and peroxisomes (9). The LD-mitochondria association has been reported particularly in tissues with a high TG oxidation and storage capacity, such as liver, heart, brown adipose tissue, and skeletal muscle (8,13,30). Notably, starvation-induced LDs in cardiomyocytes require a fused mitochondrial network to ensure mitochondrial oxidation of FA (43), suggesting that LD and mitochondrial dynamics and function are highly interrelated. Our study, like others, shows that PA triggers rapid mitochondrial fragmentation. On the other hand, OA treatment induced a slower increase in mitochondrial fragmentation, which was not as marked as the one induced by PA (Figure 2). These results support the idea that saturated and unsaturated FA differentially affect the mitochondrial fusion/fission equilibrium.
LD-mitochondria physical contacts are observable from yeast to mammalian cells (9), and their extent varies according to lipolytic stimuli (17,44), exercise (45), or starvation (43,46). Under starvation conditions, LD-mitochondria contacts increase, which supports FA transfer to mitochondria for β-oxidation (43). Similar results were observed in brown adipocytes, where cold exposure increased the LD-mitochondria interaction, supporting thermogenesis (14,40). More recently, Benador et al. reported in brown adipocytes that mitochondria in close contact with LDs maintain their oxidative capacity but have low levels of FA oxidation, thereby supporting LD growth by providing ATP for TG synthesis (8). Hence, two models of LD-mitochondria coupling have so far been described: an interaction that favors LD consumption and energy production, and an interaction that prompts LD expansion (30,47). In the hepatocyte in vitro model presented here, we found that OA promotes a steeper increase in LD-mitochondria contacts than PA (Figure 2), suggesting that in HepG2 cells, OA feeding evokes the type of LD-mitochondria interaction that mediates LD expansion.
Remarkably, only OA treatment increased PLIN5 and MFN2 protein levels, which putatively support the observed increment in LD-mitochondria contacts (Figure 3). Reportedly, PLIN5 is present in multiple cells and tissues, including hepatic and heart muscle cells (48). In accordance with our results, hepatocytes from liver-specific PLIN5 KO mice display fewer LD-mitochondria contacts, reduced hepatic TG synthesis and FA oxidation, and are more susceptible to developing hepatic insulin resistance (49). On the contrary, mice overexpressing liver-specific PLIN5 fed a high-fat diet exhibit severe steatosis without worsening glucose homeostasis. Surprisingly, these animals have lower fasting insulin levels, suggesting preservation of insulin sensitivity (50). Similar to PLIN5, MFN2 is needed for LD-mitochondria interaction, as shown in brown adipose tissue, where it supports FA-fueled thermogenesis (51). Accordingly, MFN2 ablation protects against high-fat diet-induced insulin resistance in brown adipose tissue (17). On the other hand, in our study model, PA treatment only raised PLIN2, but not PLIN5 or MFN2, protein levels. This observation agrees with another study showing that PLIN2 promotes liver steatosis in mice (52). However, how PLIN2 participates in LD-mitochondria coupling remains to be unveiled. Altogether, our observations indicate that OA and PA treatments induce different sets of LD-mitochondria tethers, which underlie the distinctive functional coupling between both organelles.
Additionally, our study showed that PA treatment triggers a faster and more pronounced mitochondrial fragmentation than OA, concomitant with markedly higher OCR, Δψm and ATP levels (Figures 2, 4). These results support the idea that saturated and unsaturated fatty acids differentially regulate the mitochondrial fusion/fission equilibrium and bioenergetics, generating two mitochondrial networks with distinct metabolic profiles. This agrees with the study of Benador et al. in brown adipocytes, in which smaller mitochondria contributed more actively to fatty acid oxidation, while larger mitochondria mainly participated in LD growth by providing ATP for the synthesis of TG (8). These divergent mitochondrial behaviors explain not only the differences in LD morphology after PA and OA treatments but also different metabolic parameters (Figure 4F): PA oxidation greatly fuels OCR and ATP production (i.e., an ATP-oriented mitochondrial network), while OA accumulation induces a limited increase in OCR and ATP, leading to Δψm accretion (i.e., a Δψm-oriented mitochondrial network). These dissimilar metabolic fates partially explain the higher toxicity of saturated FA in hepatocytes compared to unsaturated FA.
FIGURE 6 | Oleic and palmitic acids favor different lipid droplet and mitochondria sub-populations with distinct morphological and bioenergetic profiles in HepG2 cells. Both oleic acid (OA) and palmitic acid (PA) induce the formation of lipid droplets (LDs) in HepG2 cells. OA induces a steep nucleation process, followed by a steady growth of LDs. PA stimulates a moderate increase in the number of LDs, which maintain a small size. Moreover, while OA does not affect mitochondrial morphology, PA elicits fragmentation of the mitochondrial network. Both treatments boost mitochondrial respiration (OCR), but with different outcomes. OA mainly heightens the overall transmembrane potential of the mitochondrial network (Δψm-oriented network). On the other hand, PA treatment favors ATP generation (ATP-oriented network). LD-mitochondria physical proximity increases upon treatment with either fatty acid. OA evokes higher levels of LD-mitochondria interaction compared to PA. However, LDs from OA-treated cells are rather "passive", as their larger size associates with lower-Δψm mitochondria, compared to PA. LDs from PA-treated cells are apparently more "active", as they are smaller and interact with higher-Δψm mitochondria, which is concomitant with increased ATP levels.
Consistent with our aforementioned interpretation, the analysis of LD-mitochondria proximity showed that OA-induced LDs are more extensively proximal to mitochondria than PA-induced ones (Figure 5). However, these mitochondria maintain a baseline metabolic activity, as indicated by MitoTracker Orange staining. Thus, we hypothesized that OA-induced LDs do not fuel oxidative metabolism but instead consume ATP, acting as "passive" LDs in terms of mitochondrial bioenergetics. Interestingly, LD-bound mitochondria under OA treatment displayed a lower ψm compared to the rest of the mitochondrial network. This can be explained by the fact that the ATP consumption required for LD growth contributes to the dissipation of the ψm. On the other hand, PA promoted the accumulation of small-sized LDs proximal to mitochondria with higher ψm (Figure 5). Accordingly, we speculate that these LDs fuel mitochondrial bioenergetics, acting as metabolically "active" LDs that do not particularly consume ATP, thus leading to ψm accumulation in the nearby mitochondria.
Reportedly, mitochondria associate with LDs in tissues exhibiting a high capacity for TG storage and oxidation, such as the liver, heart, brown adipose tissue, and skeletal muscle (8,13,30). Notably, contrasting evidence has shown that mitochondrial oxidation of LD-derived fatty acids requires mitochondrial fusion in MEF cells (43) or fission in brown adipocytes (8), suggesting that LD metabolism and mitochondrial dynamics are interrelated and seemingly dependent on the cell type. Moreover, our study showed that the fatty acid type also determines the LD-mitochondrial dynamics relationship.
CONCLUSION
In sum, our data uncover two patterns of LD-mitochondria interaction in response to treatment with two different fatty acids. On the one hand, OA led to triglyceride accumulation, concomitant with increased LD-mitochondria proximity. Under this condition, the overall mitochondrial network underwent only a slight metabolic boost, which mainly played a ψm-oriented role instead of contributing to ATP production. Meanwhile, OA-induced LDs appeared rather "passive", precluding fatty acids from mitochondrial oxidation and thereby thwarting the bioenergetic boost of nearby mitochondria. On the other hand, PA induced a slight increase in LD accumulation. The newly formed LDs were seemingly "active" and associated with mitochondria with higher OCR and ψm compared to the rest of the network (Figure 6). Thus, we hypothesize that PA treatment renders mitochondria more ATP-oriented than OA treatment does, owing to higher substrate availability. Our results underscore the importance of dietary FA composition in the development of NAFLD, where saturated FA promote hepatic steatosis with mitochondrial dysfunction, which in turn can promote NAFLD progression.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
AUTHOR CONTRIBUTIONS
RT and VP conceived and designed the study. AE, FD-C, and JB performed the experiments. AE, RB-S, and VP analyzed the data. AE, VP, and RT interpreted the data. AE, RB-S, VP, and RT drafted the manuscript reviewed by all authors. All authors contributed to the article and approved the submitted version. | 6,832 | 2021-11-12T00:00:00.000 | [
"Biology"
] |
Security-Aware Data Offloading and Resource Allocation For MEC Systems: A Deep Reinforcement Learning
—The Internet of Things (IoT) is permeating our daily lives, where it provides data collection tools and important measurements to inform our decisions. IoT devices also continually generate massive amounts of data and exchange essential messages over networks for further analysis. The promise of low communication latency, enhanced security, and efficient bandwidth utilization is driving the shift from Mobile Cloud Computing (MCC) towards Mobile Edge Computing (MEC). In this study, we propose an advanced deep reinforcement learning model for resource allocation and security-aware data offloading that considers the computation and radio resources of industrial IoT devices to guarantee that resources shared between multiple users are utilized efficiently. This model is formulated as an optimization problem with the goal of decreasing energy consumption and computation delay. This type of problem is NP-hard due to the curse-of-dimensionality challenge; thus, a deep learning optimization approach is presented to find an optimal solution. Additionally, an AES-based cryptographic approach is implemented as a security layer to satisfy data security requirements. Experimental evaluation results show that the proposed model can reduce offloading overhead by up to 13.2% and 64.7% in comparison with full offloading and local execution, respectively, while scaling well to large numbers of devices.
large volumes of data [1]. Such applications include efficient manufacturing inspection, virtual/augmented reality, image recognition, the Internet of Vehicles (IoV), and e-Health [2]-[4]. To alleviate the resource constraints of mobile IoT devices and meet communication/processing delay requirements, complex computations can be offloaded to more resourceful devices [5].
Cloud computing was first exploited as a resource-rich service for mobile devices via the Mobile Cloud Computing (MCC) paradigm. MCC provides flexible processing, storage, and service capabilities while reducing battery consumption. High latency is considered one of the key challenges facing MCC, especially in real-time and delay-sensitive applications. Additionally, security poses a critical challenge for MCC, where application data and services may be vulnerable to many types of attacks during the various stages of data transmission and processing [6].
Mobile Edge Computing (MEC) was recently introduced as a viable and promising solution to address MCC's challenges. In MEC, the computation capabilities of the cloud are pushed to the edge of the radio access network, in close proximity to mobile devices, resulting in a cost-efficient and low-latency architecture [7], [8]. Application domains such as predictive maintenance of industrial machines benefit from MEC's ability to provide fast and highly localised feedback to modify a live representation of the world [9].
Numerous approaches and models for computation offloading in MEC have emerged in the literature with the goals of decreasing energy consumption, reducing computation latency, and/or allocating radio resources efficiently [10]-[14]. Obtaining an optimal offloading solution in a complex and dynamic multi-user wireless MEC system is a challenging task. Additionally, the security threats encountered during data transmission have not been addressed in most offloading approaches in the literature [15]. Moreover, the lack of adequate data protection controls can quickly overshadow the advantages of the MEC paradigm. Motivated by these considerations, we present a deep reinforcement learning model to handle performance optimization in multi-user, multi-task MEC systems, as well as a security layer for protecting data during transmission to the edge server. The main contributions of our paper are summarized as follows:
• Formulating a combined model of computation offloading, security, and resource allocation as an optimization problem with the goal of decreasing the total time and energy overhead of mobile devices.
• Transforming the formulated problem into an equivalent reinforcement learning form, in which all possible solutions are modeled as state spaces and the movements between different states as actions. A Deep-Q-Network-based algorithm is then proposed for solving this problem and obtaining a near-optimal solution in an efficient way.
The remainder of this study is organized as follows. Related work on offloading strategies is introduced in Section II. In Section III, our system model is presented and the formulation of our optimization problem is defined. The proposed Deep-Q-Network-based algorithm is then presented in Section IV. Section V presents the experimental evaluation and discussion. Finally, this study is concluded in Section VI, where future work directions are also presented.
II. RELATED WORK
Numerous optimization models and approaches for computation offloading in MEC environments have been proposed in the literature. Some of these models handle only multi-user, single-task MEC systems, e.g., [16], whereas others deal with multi-user, multi-task environments, e.g., [17]. In addition, conventional offloading methods such as Lyapunov and convex optimization techniques [18] have been used to solve these models, whereas new algorithms based on artificial intelligence and deep learning have recently emerged [11], [19]-[21]. This section provides a brief overview of the common offloading optimization models.
A. Conventional Optimization Methods
Minimizing the total energy consumption under a latency constraint for a multi-user, single-task MEC environment is the objective of [22]. The authors formulated an optimization problem to jointly optimize the computation and communication resources and the offloading decisions. Further, an efficient algorithm based on a separable semi-definite relaxation approach was developed to obtain a near-optimal solution to this problem. However, this work neglects the deadline requirement for the computation tasks. Tuysuz et al. [23] proposed a novel approach for addressing video streaming mobility based on quality of experience (QoE), which can be deployed at the MEC servers. More precisely, this method first generates a session on the basis of the QoE level and collects a set of information from the user. Afterward, three core manipulations are performed to maintain the QoE level for each mobile device and to balance the load between mobile users based on user locations and their mobility via handover operations.
Nur et al. [24] applied the caching concept to computation offloading for a multi-user system, in which the application code and the related data for completed tasks are cached at the edge server for the next execution. To reduce energy and delay costs, [24] considers a priority for each computation task, calculated from task popularity, deadline, data size, and required computing resources. Nevertheless, a common drawback of [24] is the absence of security mechanisms to protect application data from attacks during transmission.
Dai et al. addressed computation offloading for a multi-user, multi-task environment in [25] and [26]. Specifically, in [25], a new two-tier offloading framework is proposed for a heterogeneous network. An optimization problem is formulated with the aim of decreasing the overall energy consumption of mobile devices and MEC servers, in which computation offloading, user association, transmission power allocation, and computation resource allocation are considered. Furthermore, an algorithm is developed to find the optimal offloading decision. In [26], the authors jointly considered resource allocation and offloading along with the mobility factors of vehicular edge computing systems. The load among vehicular edge computing servers is balanced by selecting the optimal offloading decision for the computation tasks, with maximizing the system utility as the main goal. However, the main drawback of [25] and [26] is that the security and privacy of data during the offloading process are not considered.
The authors of [27] and [28] presented solutions to effectively secure application data in MEC systems during computation offloading. Meng et al. [27] presented a secure and efficient offloading framework for MCC, in which regular renewal of the server key and random padding are jointly combined to protect against timing attacks. In addition, a hybrid queuing model based on a Markov chain is utilized to optimize security and performance. Elgendy et al. [28] introduced a new security layer based on the AES cryptographic algorithm combined with a genetic algorithm to protect application data during transmission. However, the management of offloading and processing in [27] is achieved via a cloud data center, which results in increased delay, while [28] only addressed a multi-user, single-task environment and used a computationally prohibitive method for solving the associated offloading problem, especially for large-scale environments.
B. Deep Learning Methods
Deep learning algorithms are widely used in offloading for multi-user environments [11]. For example, an offloading scheme based on deep reinforcement learning for IoT devices was proposed in [29] with the goal of minimizing the total system overhead. Specifically, the battery level, the predicted amount of consumed energy, and the channel capacity are used in selecting the optimal edge server for offloading the computation tasks. A Deep-Q-Network learning-based algorithm is then proposed to decrease the dimensionality of the state space and to accelerate the learning speed. However, in [29], the application data is not protected from cyber-attacks during the transmission process.
A stochastic computation offloading policy for a multi-user, multi-server environment was proposed in [30]. In this work, task arrivals, computation resources, and the time-varying communication quality between mobile users and the edge server are jointly considered. The authors formulated the problem as a Markov decision process whose aim is to increase the long-term utility performance of the entire system. Two efficient algorithms based on double Deep Q-Networks are then proposed to address the curse of dimensionality. In [31], Dai et al. proposed a novel artificial-intelligence-empowered vehicular network architecture for IoV, which can intelligently orchestrate edge computing as well as caching resources. In addition, they jointly formulate edge computing and caching as a Markov decision process problem and design a Deep Deterministic Policy Gradient (DDPG) algorithm to allocate the computation resources in an efficient manner. However, in [31], popular contents are shared between the vehicles at the edge cache, where they are vulnerable to different types of attacks.
More recently, Huang et al. [32] proposed a framework based on deep reinforcement learning for online computation offloading, where resource allocation and the offloading decision are jointly formulated as a non-convex problem. The aim is to increase the computation rate in wireless networks. A deep reinforcement learning-based online algorithm is then developed for solving this problem by decomposing it into two sub-problems, namely offloading decision and resource allocation. In addition, an order-preserving quantization method and an adaptive procedure are designed for rapid algorithm convergence. Meanwhile, a multi-user, multi-task offloading model for IoT was proposed in [33], in which service latency, energy consumption, and task success rate are jointly formulated to enhance QoE-oriented computation offloading. However, the common drawback of [32], [33] is the absence of security mechanisms to protect application data from attacks during transmission.
It is evident from the literature review that computation offloading has been investigated for multi-user environments, with both conventional methods and deep learning used to solve the resulting problems. However, handling the security issue in a MEC system, especially in a multi-user, multi-task environment, has not been addressed. In this class of systems, most mobile applications provide multimedia services and generate substantial data, which may be offloaded via the mobile networks. This motivates our study of jointly considering the resource allocation challenge and offloading for a multi-user, multi-task environment. In addition, we attempt to address the data security requirement during transmission to protect against various types of attacks.
III. SYSTEM MODEL
We study a multi-user MEC system with a single wireless base station and N mobile devices, represented by a set N = {1, 2, . . . , N}, as shown in Fig. 1. In addition, an edge server is associated with the wireless base station to provide computational and storage services. Furthermore, each mobile device has a set M = {1, 2, . . . , M} of different types of computation tasks that need to be accomplished locally or transmitted through a wireless channel and executed remotely. In our study, a quasi-static approach is assumed, in which the number of users does not change during an offloading period but may vary over different periods [28].
The next subsections present the communication, computation, and security models, followed by the formulation of our optimization problem.
A. Communication Model
The assumed environment has a set N = {1, 2, . . . , N} of users connected to a single wireless base station via a wireless channel. Each mobile device has a set M = {1, 2, . . . , M} of computationally intensive tasks that need to be completed either locally or remotely. Our aim is to reduce the system overhead in terms of communication/processing time and energy consumption.
We denote by a_{i,j} ∈ {0, 1} the offloading decision for computation task j of user i. Specifically, a_{i,j} = 0 indicates that mobile device i executes its computation task j locally, while a_{i,j} = 1 indicates that device i transmits its computation task j and executes it remotely. We thus define A = {a_{1,1}, a_{1,2}, . . . , a_{N,M}} as the offloading decision profile for all users.
Subsequently, in the offloading case, the uplink data rate for user i takes the standard Shannon form

r_i = B log_2(1 + p_i g_i / θ_0),

where B and p_i denote the bandwidth and the transmission power of user i, and g_i and θ_0 denote the channel gain and the noise power density. Consequently, the simultaneous offloading of mobile devices is limited by a bandwidth constraint ensuring that the users' aggregate allocated bandwidth does not exceed B. In this study, an Orthogonal Frequency Division Multiple Access (OFDMA) method is considered for handling the transmission of multiple users in the same cell, where the intra-cellular uplink transmission interference is significantly reduced [28]. Furthermore, the overhead of transmitting the result back is neglected, because the output (result) of a computation task is small in comparison with the input data size [34].
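As a quick sanity check of this model, a small helper can compute the achievable uplink rate; the channel gain value used in the example is an assumed placeholder, not a parameter from the paper:

import math

def uplink_rate(bandwidth_hz, tx_power_w, channel_gain, noise_density_w):
    """Shannon uplink rate r_i = B * log2(1 + p_i * g_i / theta_0)."""
    snr = tx_power_w * channel_gain / noise_density_w
    return bandwidth_hz * math.log2(1 + snr)

# Example with the paper's settings: B = 20 MHz, p_i = 100 mW, noise = -100 dBm.
noise_w = 10 ** ((-100 - 30) / 10)            # dBm -> watts
print(uplink_rate(20e6, 0.1, 1e-6, noise_w))  # channel gain 1e-6 is an assumed value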
B. Computation Model
This section presents the computation model for our system, which is composed of N mobile devices, each with M computation-intensive tasks to be completed. We use a tuple {I_{i,j}, C_{i,j}, τ_{i,j}} to represent a computation task, where I_{i,j}, C_{i,j}, and τ_{i,j} denote the input data size of the task (code and parameters), the CPU cycles needed to accomplish the task, and the maximum tolerable delay for completing task j of user i, respectively. The values of I_{i,j} and C_{i,j} depend on the nature of the application and are obtained using a program profiler [35].
In the following subsections, the computation overhead of the local and edge server computing approaches is introduced with respect to both execution time and energy consumption.
1) Local Execution Approach: In the local execution approach, user i executes its task j locally on its own computation resources. The time and energy consumed by processing task j of user i locally can be calculated as

T^l_{i,j} = C_{i,j} / f^l_i,   E^l_{i,j} = ξ_i C_{i,j},

where f^l_i and ξ_i denote the computational capability (CPU cycles/second) and the energy consumed per CPU cycle of user i.
2) Edge Server Execution Approach: In the edge server execution approach, task j of user i is transmitted and processed remotely. The time and energy consumed by offloading and executing task j of user i remotely, i.e., for task transmission and execution, can be calculated as

T^e_{i,j} = I_{i,j} / r_i + C_{i,j} / f^e_i,   E^e_{i,j} = p_i I_{i,j} / r_i,

where f^e_i denotes the computational capability of the edge server (CPU cycles/second) allocated to user i. This study assumes that the edge server's computational resources are shared equally among all users.
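The two cost models can be compared directly in code. The sketch below assumes the canonical MEC cost structure described above; the function names are our own, and idle energy at the device during remote execution is omitted for simplicity:

def local_cost(cycles, f_local, xi):
    """Local execution: time = C / f_l, energy = xi * C."""
    return cycles / f_local, xi * cycles

def edge_cost(data_bits, cycles, rate, f_edge, tx_power):
    """Edge execution: upload time plus remote compute time;
    the device pays only the transmission energy."""
    t_up = data_bits / rate
    t_exec = cycles / f_edge
    return t_up + t_exec, tx_power * t_up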
C. Security Model
During the offloading of computation tasks and their related data to an edge server, the offloaded data may be vulnerable to different types of attacks. In order to eliminate these data security risks, a new layer is introduced to fulfil the data security requirements. AES is used to encrypt/decrypt application data during transmission, owing to its efficient security and performance [36].
First, each user receives the offloading decision from the edge server, which determines whether the mobile user will offload its computation task or not. In the offloading case, the user is issued a secret key and encrypts the data to be transmitted using 128-bit AES before sending the encrypted data to the edge server. Afterwards, the edge server uses the same key to decrypt the received data and then executes the computation task on this data. Finally, the edge server sends the result back to the user.
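A minimal sketch of this encrypt-at-device/decrypt-at-edge exchange using the PyCryptodome library is shown below. The paper specifies 128-bit AES but not a block-cipher mode; EAX is chosen here purely for illustration:

# pip install pycryptodome
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key = get_random_bytes(16)                      # 128-bit shared secret key

def encrypt(plaintext: bytes, key: bytes):
    cipher = AES.new(key, AES.MODE_EAX)         # EAX mode is an assumption for this sketch
    ciphertext, tag = cipher.encrypt_and_digest(plaintext)
    return cipher.nonce, ciphertext, tag

def decrypt(nonce, ciphertext, tag, key: bytes):
    cipher = AES.new(key, AES.MODE_EAX, nonce=nonce)
    return cipher.decrypt_and_verify(ciphertext, tag)  # raises ValueError if tampered

nonce, ct, tag = encrypt(b"task input data", key)
assert decrypt(nonce, ct, tag, key) == b"task input data"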
We denote by β_i ∈ {0, 1} the security decision for user i. Specifically, β_i = 0 indicates that the data of user i's computation task is offloaded without encryption, whereas β_i = 1 indicates that it is encrypted using our security layer before being transmitted to the edge. We therefore define β = {β_1, β_2, . . . , β_N} as the security profile. The extra overhead of applying this layer is determined by η_{i,j} and δ_{i,j}, the CPU cycles needed for encrypting the data at user i and decrypting it at the edge server, respectively [37], [38]. Combining the security, computation, and communication models, T^r_{i,j} and E^r_{i,j} denote the total time and energy for processing task j of user i in our model with security considered. Finally, the total time and energy overhead is obtained as the weighted sum w^t_i T^r_{i,j} + w^e_i E^r_{i,j}, where w^t_i, w^e_i ∈ [0, 1] are the time and energy consumption weights for user i.
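Putting the pieces together, a hedged sketch of the security-aware cost evaluation might look as follows; the per-bit cycle counts standing in for η and δ are illustrative placeholders, not values from the paper:

def total_overhead(t_total, e_total, w_time=0.5, w_energy=0.5):
    """Weighted overhead w_t * T + w_e * E, with weights in [0, 1]."""
    return w_time * t_total + w_energy * e_total

def secure_edge_time(data_bits, cycles, rate, f_local, f_edge,
                     enc_cycles_per_bit, dec_cycles_per_bit):
    """Offloading time including AES encryption at the device (eta) and
    decryption at the edge (delta); the per-bit cycle model is illustrative."""
    t_enc = enc_cycles_per_bit * data_bits / f_local
    t_up = data_bits / rate
    t_dec = dec_cycles_per_bit * data_bits / f_edge
    t_exec = cycles / f_edge
    return t_enc + t_up + t_dec + t_exec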
D. Problem Formulation
In this section, an optimization model for a multi-user, multi-task environment is formulated with the goal of decreasing the total system overhead for all users with respect to communication/processing time and energy. The formulation, given in Eq. (14), minimizes the sum of the weighted time and energy overheads of all users' tasks subject to the following constraints. The first two constraints are the energy and time limits for each computation task j. Constraints C3 and C4 are the uplink data rate capacity and the CPU computation capacity of an edge server node, where F is the total CPU resource at each edge server. Finally, constraint C5 ensures that the offloading decision variable is binary.
Eq. (14) would be a linear problem whose optimal solution is given by obtaining the values of the offloading decision vector a. However, as a is a binary variable, the feasible set and the objective are non-convex, which makes solving this problem difficult, especially for a large number of users. This is due to the curse of dimensionality, whereby the problem size increases rapidly as the number of users increases [39]. Therefore, a deep reinforcement learning-based algorithm is proposed to obtain near-optimal values for a.
IV. PROBLEM SOLUTION
A. Reinforcement Learning
Reinforcement learning is a variant of machine learning that allows a system to learn how to behave within an unknown dynamic environment and to make decisions in an optimal way without being explicitly programmed or requiring human intervention. Fig. 2 shows a general illustration of a reinforcement learning scenario, in which the agent, environment, state, action, and reward are the main components. At time step t, the agent receives an observation of state s_t and chooses an action a_t, which moves the agent from state s_t to a new state s_{t+1} on the basis of the policy π = P(a_t | s_t). The agent then obtains a reward r_t and transitions to state s_{t+1} on the basis of the reward function and the state transition probability, defined as R(s, a) and P(s_{t+1} | s_t, a_t), respectively [40]. These steps are repeated until the agent reaches the terminal state, the main goal being to maximize the expected cumulative reward, defined as R_t = Σ_{k=0}^{∞} γ^k r_{t+k} with a discount factor γ ∈ [0, 1]. The Q-learning algorithm is one of the most popular reinforcement learning algorithms; its learning method is based on recording Q-values in the form of a Q-table. This table declares the state-action pairs, in which the row headers represent the system states S, the column headers represent the system actions A, and each cell holds the quality value Q(s, a) of taking that action from that state, given the long-term accumulated reward. Q(s, a) is updated as

Q(s, a) ← (1 − α) Q(s, a) + α [ r(s, a) + γ max_{a'} Q(s', a') ],
where Q(s, a) and Q(s', a') denote the current and the new Q-values for the respective state-action pairs, r(s, a) denotes the reward obtained when selecting action a in state s, and max_{a'} Q(s', a') denotes the maximum expected future reward given the new state s' and all possible actions in that state. Finally, α and γ denote the learning rate and the discount factor, respectively. In this study, the computation offloading decision a_{i,j} is used to represent the state s = {a_{i,j}}, while the corresponding movements among different states represent the action space A; this is discussed in more detail in the following subsection.
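For a toy state space small enough for a table, the update rule above is a one-liner; the state/action sizes below are illustrative, while α and γ match the values later used in Section V:

import numpy as np

n_states, n_actions = 64, 6          # toy sizes, not the paper's
alpha, gamma = 0.01, 0.99            # learning rate and discount factor from Section V
Q = np.zeros((n_states, n_actions))

def q_update(s, a, r, s_next):
    """One tabular Q-learning step: Q <- (1-alpha)Q + alpha(r + gamma max_a' Q(s', a'))."""
    Q[s, a] = (1 - alpha) * Q[s, a] + alpha * (r + gamma * Q[s_next].max())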
Regarding our optimization problem in Eq. (14), the Q-learning algorithm is not effective for obtaining the optimal solution, as the complexity of the problem increases rapidly with the number of users and their computation tasks; this leads to an increase in the number of state-action pairs. Moreover, it becomes difficult to store and compute the corresponding Q-values in the Q-table, and solving the problem becomes computationally prohibitive as the number of state-action pairs grows exponentially [39]. Therefore, a Deep Q-Network (DQN) is adopted to handle this Q-learning limitation by estimating the Q-value function instead of storing the Q-table, as we show in the next subsection.
B. Deep Q-Network
DQN is an effective reinforcement learning algorithm in which a neural network with parameters ω is used to approximate the Q-value function and to generate the action values, as shown in Fig. 3. For DQN, the state is given as the input to the neural network, and the Q-values of all actions are generated as the output. In addition, an ε-greedy strategy is used to select the action: a random action is selected with probability ε ∈ (0, 1), i.e., exploration, and a = arg max_{a(t)} Q(s(t), a(t); ω) with probability 1 − ε, i.e., exploitation.
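A minimal ε-greedy selector over a vector of Q-values could read:

import numpy as np

_rng = np.random.default_rng(0)

def epsilon_greedy(q_values, epsilon=0.1):
    """Explore with probability epsilon; otherwise exploit arg max Q(s, a; w)."""
    if _rng.random() < epsilon:
        return int(_rng.integers(len(q_values)))   # exploration: random action index
    return int(np.argmax(q_values))                # exploitation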
In this study, an efficient DQN algorithm is proposed for solving our optimization problem, presented in Eq. (14), and obtaining a near-optimal offloading decision. The optimization problem first needs to be transformed into an equivalent reinforcement learning form, in which all possible solutions are modeled as state spaces and the movements between different states as actions. The reward values can then be calculated based on the objective function. Consequently, the state space, actions, and reward for the problem are defined as follows (a sketch of this logic is given after this list):
• State: The state space S is represented by the computation offloading decision X = {a_{1,1}, a_{1,2}, . . . , a_{N,M}}, which is a 1 × NM vector. Therefore, at an arbitrary step t, the system state is s(t) = {a_{1,1}(t), . . . , a_{N,M}(t)}.
• Action: The action space A is represented by the movement between two different states. In this study, a system action is defined as an index selection within the state vector, by which the agent can move from the current state to a specific neighboring state. Specifically, a variable v is defined to denote the selected index, with v = 1, 2, . . . , NM, and the action a(t) = {a_v(t)} is a 1 × NM vector.
• Reward: At each step t, the agent gets a reward R(s, a) on the basis of state s after executing action a, which is a scalar feedback signal indicating how well the agent is doing. Since the system state s(t) represents the computation offloading decision, the objective function of our problem, Z(t), can be evaluated at the state s(t), where {a_{i,j}(t)} is given by the state s(t) according to the definition in Eq. (16). Based on the values of Z_{s(t)}(t) and Z_{s(t+1)}(t + 1), the reward of the state-action pair (s(t), a(t)) is 1 if the objective decreases and −1 if it increases (cf. Algorithm 1).
In this study, a pre-classification step is applied to the state space, in which the computation tasks that do not satisfy the completion time deadline constraint T^l_{i,j} ≤ τ_{i,j} must be forced to execute locally on the mobile device, i.e., a_{i,j} = 0.
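The sketch below makes the action and reward logic concrete; the zero-reward branch for an unchanged objective is our assumption, since the corresponding algorithm steps are elided in the listing:

import numpy as np

def reward(z_curr, z_next):
    """+1 if the objective Z decreased after the action, -1 if it increased,
    0 otherwise (the tie case is an assumption)."""
    if z_next < z_curr:
        return 1
    if z_next > z_curr:
        return -1
    return 0

def apply_action(state, v):
    """Flip the offloading bit at index v of the 1 x NM state vector."""
    nxt = np.array(state).copy()
    nxt[v] = 1 - nxt[v]
    return nxt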
As shown in Fig. 3 and Algorithm 1, the DQN can be used to solve our optimization problem in Eq. (14). First, given the state, action, and reward definitions, the evaluation and target Q-networks are initialized with random weights ω and ω', respectively, and the replay memory Y is initialized with capacity L. Then, for each episode k, an initial state s_init is chosen. Afterward, for each time step t and based on the ε-greedy strategy, the evaluation network generates a random action a(t) with probability ε ∈ (0, 1), and a = arg max_{a(t)} Q_pre(s(t), a(t); ω) with probability 1 − ε. Then, on the basis of Eq. (18), the reward r(t) as well as the next state s(t + 1) are obtained, and the transition (s(t), a(t), r(t), s(t + 1)) is stored in the experience replay Y. To update the evaluation network, a random minibatch of transitions (s(k), a(k), r(k), s(k + 1)) is sampled from the experience replay Y, and the predicted and labeled Q-values, Q_pre and Q_lab, are calculated as Q(s(t), a(t); ω) and r(t) + γ max_{a'} Q_tar(s(t + 1), a'; ω'), respectively, using the evaluation and target networks shown in Procedure 1. This study adopts a neural network loss function that measures the discrepancy between the predicted and labeled Q-values, and the gradient descent algorithm [41] is used to minimize this value. Finally, the parameters ω' of the target network are updated every C steps.
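The following is a minimal TensorFlow sketch of the evaluation/target-network update in the spirit of Algorithm 1; the layer sizes are assumptions, while the learning rate, discount factor, batch size, and memory capacity follow Section V:

import random
from collections import deque

import numpy as np
import tensorflow as tf

def build_qnet(state_dim, n_actions):
    # Maps a state vector to one Q-value per action.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(state_dim,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_actions),
    ])

state_dim = n_actions = 15            # NM = 5 users x 3 tasks, as in Section V
gamma = 0.99
q_eval = build_qnet(state_dim, n_actions)
q_target = build_qnet(state_dim, n_actions)
q_target.set_weights(q_eval.get_weights())        # omega' = omega
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
memory = deque(maxlen=512)                        # replay memory Y with capacity L

def train_step(batch_size=32):
    s, a, r, s2 = map(np.array, zip(*random.sample(memory, batch_size)))
    s, s2 = s.astype("float32"), s2.astype("float32")
    # Labeled Q-value from the target network (Procedure 1).
    q_lab = (r + gamma * q_target(s2).numpy().max(axis=1)).astype("float32")
    with tf.GradientTape() as tape:
        q_pre = tf.reduce_sum(q_eval(s) * tf.one_hot(a, n_actions), axis=1)
        loss = tf.reduce_mean(tf.square(q_lab - q_pre))   # loss between Q_lab and Q_pre
    grads = tape.gradient(loss, q_eval.trainable_variables)
    optimizer.apply_gradients(zip(grads, q_eval.trainable_variables))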
V. EXPERIMENTAL EVALUATION AND ANALYSIS
This section first introduces the experimental setup. Afterward, an extensive discussion of the simulation results is presented to critically assess the performance of our proposed model.
A. Experiment Setup
Our simulation is undertaken on a personal computer with an Intel® Core(TM) i7-4770 CPU at 3.4 GHz and 16 GB of RAM. The software environment is TensorFlow and NumPy with Python 3.6 pre-installed on Windows 10 Professional 64-bit [42]. A multi-user, multi-task environment with five users is considered. The system bandwidth, noise, and transmission power are set to 20 MHz, −100 dBm, and 100 mW, respectively. Each mobile user runs a face recognition application as an example, consisting of three independent computation tasks, namely face detection, pre-processing and feature extraction, and classification. The data size is uniformly distributed in (0, 10) MB, while the CPU requirement is set to 1000 cycles/bit. Each user's computational capability is assigned randomly from the set {0.5, 0.6, . . . , 1.0} GHz, while the edge server's CPU computational capability is set to 100 GHz. The energy consumption per CPU cycle for each mobile device is uniformly distributed within (0, 20 × 10^{−11}) J/cycle [34]. For the DQN algorithm, the number of episodes, the mini-batch size, and the replay memory are set to 20000, 32, and 512, respectively, while the discount factor, learning rate, and ε-greedy values are set to 0.99, 0.01, and 0.1, respectively.
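For reproducibility, the reported settings can be collected into a single configuration object; the entries below transcribe the values stated in the text:

# Simulation constants as reported in Section V-A.
PARAMS = {
    "num_users": 5,
    "tasks_per_user": 3,            # face detection, pre-processing/features, classification
    "bandwidth_hz": 20e6,           # 20 MHz
    "tx_power_w": 0.1,              # 100 mW
    "noise_dbm": -100,
    "cpu_cycles_per_bit": 1000,
    "user_cpu_hz_range": (0.5e9, 1.0e9),  # sampled from {0.5, 0.6, ..., 1.0} GHz
    "edge_cpu_hz": 100e9,
    "episodes": 20000,
    "batch_size": 32,
    "replay_memory": 512,
    "gamma": 0.99,
    "learning_rate": 0.01,
    "epsilon": 0.1,
}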
Finally, to verify the performance of our algorithm, five different policies are introduced:
• Unsecure DQN: our model applied without the security layer.
• Secure DQN: our model applied with the security layer added.
• Local Execution: all computation tasks are processed locally.
• Full Offloading: all computation tasks are processed remotely.
• Random Offloading: a random set of computation tasks is processed remotely, while the remaining tasks are executed locally.
1) Convergence Performance:
This subsection studies the convergence performance of the proposed algorithm, in which different values of each parameter are tested and the most suitable value is selected for the subsequent simulations. Fig. 4 demonstrates the convergence of the total cost over different values of the learning rate, which controls the updating speed of ω. The figure shows that with a value of 0.01, convergence is faster than with 0.001, and this speed increases as the learning rate increases. However, with a large learning rate, i.e., 0.1, the process does not converge well and falls into a locally optimal solution. It is therefore important to choose a learning rate appropriate for the specific situation; accordingly, we set the learning rate to 0.01, which is the most appropriate value. Fig. 5 depicts the effect of different memory sizes on the convergence performance. The figure shows that with a smaller memory size, convergence is faster, but a locally optimal solution is obtained instead of a global one. Therefore, in the following simulations, the size of the replay memory is set to 1024, which is the most appropriate value. Fig. 6 demonstrates the convergence of the proposed algorithm over different values of the batch size, which determines the number of experience samples extracted from the memory at each training interval. Based on these results, the batch size is set to 32 in the subsequent simulations.
2) System Performance: This subsection presents and discusses the simulation results of our proposed model, obtained with the DQN-based offloading procedure summarized below.

Algorithm 1 DQN-based Computation Offloading Algorithm
1: Initialize the evaluation and target Q-network parameters with random weights ω and ω', respectively (ω' = ω)
2: Initialize replay memory Y with capacity L
3: for each episode k = 1, 2, . . . , K do
4: Choose an initial state s_init
5: for each step t do
6: Generate a random number ϕ ∈ [0, 1]
7: if ϕ < ε then
8: Randomly select an action a(t)
Execute the action a(t) and calculate Z_{s(t)}(t) according to Eq. (18)
13: if Z_{s(t)}(t) > Z_{s(t+1)}(t + 1) then
14: Set r(t) = 1
15: else if Z_{s(t)}(t) < Z_{s(t+1)}(t + 1) then
Save transition (s(t), a(t), r(t), s(t + 1)) in Y
21: Execute Procedure 1 for updating the evaluation network
22: Reset ω' = ω after each C steps

Procedure 1 Updating the Evaluation Network
Calculate the label Q-value Q_lab: Q_lab = r(k) + γ max_{a(k+1)} Q_tar(s(k + 1), a(k + 1); ω')
6: end if
7: Optimize the parameters ω using the gradient descent algorithm, which minimizes the loss between the predicted and label Q-values

First, the overhead of processing the computation tasks under the five defined policies for different numbers of users is shown in Fig. 7. The figure demonstrates that with 3 users, our proposed DQN algorithm's overhead, with and without the security addition, is equal to that of the full offloading policy and less than that of the other two policies. In addition, as the number of users increases, our model with and without the security addition achieves a lower overhead relative to the full offloading policy. This is because the communication channels are shared and become overloaded, thereby increasing the communication time as the number of users grows. Moreover, our model can optimally select which computation tasks should be offloaded and which should not, while minimizing the total cost of users.

Similarly, Fig. 8 illustrates the total cost of executing the computation tasks under the five different policies versus different data sizes for each task. As seen in this figure, the total cost of all five policies increases with the input data size of each task. Additionally, our DQN algorithm, with and without the security layer, outperforms the other policies. Moreover, the full offloading policy curve rises much more rapidly than the other four policies as the input data size increases. This is because, as the size of the transmitted data increases, the communication time also increases, which leads to a significant increase in the total cost of the entire system.
Finally, Fig. 9 shows the total overhead of processing the computation tasks for different MEC server capacities. It is seen in this figure that the local execution policy is not
VI. CONCLUSION
Our study proposed a resource allocation and security-aware data offloading model for a multi-user, multi-task environment. A new efficient security layer based on the AES algorithm is introduced to protect the communicated data against attacks. In addition, a combined model of security, resource allocation, and computation offloading is formulated as an optimization problem with the goal of reducing the total time and energy overhead of mobile users. Furthermore, to obtain the optimal solution in practice, an equivalent reinforcement learning form is given, in which the state space is defined by all available solutions and the movements between different states define the actions. An efficient algorithm based on DQN is then proposed for solving this problem and finding a near-optimal solution. Simulation results demonstrate that the proposed model can achieve overhead reductions of up to 13.2% and 64.7% in comparison with the full offloading and local execution approaches, respectively. Additionally, our DQN-based approach was shown to scale well for large-scale networks. For future work, a new compression layer will be added to our model; this addition will reduce the size of the transmitted data in order to shorten the transmission time and enhance the overall system performance. Additionally, mobile users' mobility will be managed in an efficient manner, in which each user can move dynamically among different edge servers within an offloading period.
"Computer Science"
] |
Emergent four-body parameter in universal two-species bosonic systems
The description of unitary few-boson systems is conceptually simple: only one parameter -- the three-body binding energy -- is required to predict the binding energies of clusters with an arbitrary number of bosons. Whether this correlation between the three- and many-boson systems still holds for two species of bosons for which only the inter-species interaction is resonant depends on how many particles of each species are in the system. For few-body clusters with species $A$ and $B$ and a resonant $AB$ interaction, it is known that the emergent $AAB$ and $ABB$ three-body scales are correlated to the ground-state binding energies of the $AAAB$ and $ABBB$ systems, respectively. We find that this link between three and four bodies is broken for the $AABB$ tetramer whose binding energy is neither constrained by the $AAB$ nor by the $ABB$ trimer. From this de-correlation, we predict the existence of a scale unique to the $AABB$ tetramer. In our explanation of this phenomenon, we understand the $AABB$ and $AAAB$/$ABBB$ tetramers as representatives of two different universal classes of $N$-body systems with distinct renormalization-group and discrete-scaling properties.
Introduction
When a two-boson system is resonant, i.e., its scattering length is considerably larger than the interaction range, its behavior becomes independent of the interaction details; in this sense, it is universal. Numerous instances of such systems are found in atomic, nuclear, and particle physics [6]. Resonant three-boson systems are even more intriguing, as they display a characteristic geometric bound-state spectrum. This sequence of states was originally predicted by Efimov in the seventies [7] and observed experimentally decades later with 133Cs trimers [14]. The geometric factor between successive excited states in this spectrum is universal and equal to (22.7)^2, while the ground-state binding energy B_3 remains sensitive to short-distance details of the interaction [3,4] and becomes a parameter of the theory. If B_3 is known, the binding energies of larger clusters can be predicted. For the four-, five-, and six-boson systems at unitarity, the relations B_4 ≈ 4.7 B_3, B_5 ≈ 10.1 B_3, and B_6 ≈ 16.3 B_3, respectively, were found numerically [19,20,21,22].
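These universal relations make numerical estimates immediate once B_3 is fixed. A short script, with B_3 = 0.01 as used later in this work, illustrates the quoted cluster ratios and the geometric tower of three-boson excited states:

RATIOS = {3: 1.0, 4: 4.7, 5: 10.1, 6: 16.3}   # B_N / B_3 at unitarity, from refs [19-22]

def cluster_energy(b3, n):
    """Ground-state energy of the N-boson cluster from the three-body parameter B_3."""
    return RATIOS[n] * b3

def efimov_tower(b3, n_levels, scale=22.7):
    """Successive three-boson excited states, each (22.7)^2 shallower than the last."""
    return [b3 / scale ** (2 * k) for k in range(n_levels)]

print(cluster_energy(0.01, 4))   # -> 0.047
print(efimov_tower(0.01, 3))     # -> [0.01, ~1.94e-05, ~3.77e-08]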
The occurrence of the Efimov spectrum is not exclusive to systems composed of identical bosons. Three-body systems with two species A and B, i.e., AAB and ABB (also referred to as Tango configurations [28]), in which the two identical particles are bosons, also have a geometric spectrum, even if only the AB interaction is resonant. This was experimentally observed for the first time with 87Rb and 41K trimers [1]. If the masses of species A and B are equal, the geometric factor is (1986.1)^2 (see e.g. [15]). If the odd particle is lighter (heavier) than the other two, this factor is reduced (increased). For 6Li and 133Cs, for example, the theoretical prediction of a factor of (4.9)^2 for the LiCsCs trimer spectrum has been confirmed experimentally [18].
At this point, it is natural to wonder what happens with larger clusters composed of two species A and B. From the experience with single-species bosonic systems, the naive expectation is that the ground-state energy of the two-species few-body clusters is proportional to that of the trimer. And indeed, this is the case for AAAB clusters [27,5] (or ABBB clusters, depending on the statistics of species A and B).
In this manuscript, we investigate the AABB system and find that its ground-state binding energy cannot be predicted solely from the AAB or ABB three-body parameters and a unitary AB system. This behavior was already foreshadowed by the independent AABB and AAB/ABB limit cycles in the Born-Oppenheimer m(A) ≫ m(B) limit [17], and our study establishes this decorrelation between the three- and four-body systems for equal masses. Furthermore, by treating the ABBB and AABB systems as representatives of generic classes of N-particle systems in which only a certain subset of pairs interacts resonantly through a contact S-wave potential (all the other two-body pairs are non-interacting), we gain a more general understanding of the correlations amongst multi-species few-body clusters. Our description has the advantage of arranging the different two-species systems in two well-defined categories (which we call circle and dandelion), independently of their physical realizations. Based on numerical results, we conjecture that there is only one finite scale associated with each of these categories.
Theoretical framework:
We first give a definition of the categories and later relate them to the specific two-species bosonic systems. We introduce the two classes (fig. 1) for distinguishable particles (the nodes) in which only a specific subset of interactions is resonant (the dashed lines). In the first class, the N-circle, each of the N nodes/particles interacts resonantly with exactly two neighbors. In the second class, the N-dandelion, a central particle interacts with all remaining (N − 1) particles, which do not interact amongst each other.
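The two graph classes are fully specified by their lists of resonant pairs, which a few lines of code can enumerate (0-based particle indices are a convention chosen here):

def circle_pairs(n):
    """Resonant pairs of the N-circle: each particle interacts with its two ring neighbors."""
    return [(i, (i + 1) % n) for i in range(n)]

def dandelion_pairs(n):
    """Resonant pairs of the N-dandelion: particle 0 is the hub; the rest do not interact."""
    return [(0, i) for i in range(1, n)]

print(circle_pairs(4))     # [(0, 1), (1, 2), (2, 3), (3, 0)]  -> the AABB tetramer
print(dandelion_pairs(4))  # [(0, 1), (0, 2), (0, 3)]          -> the AAAB/ABBB tetramer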
The quantum-mechanical nonrelativistic dynamics of these graphs obey the N-body Schrödinger equation with an equal-mass kinetic term H_0 acting on a state with energy E_N (for bound-state solutions, −B_N := E_N < 0). The shape of the graph is encoded in the potential V, which specifies the subset of all N(N − 1)/2 interaction pairs (i, j) that are resonant. In particular, the potentials defining the N-circle and N-dandelion sum the pair interaction over the neighboring pairs on the ring, V_circle = Σ_{i=1}^{N} v(r_{i,i+1}) with r_{N,N+1} ≡ r_{N,1}, and over the pairs involving the central particle, V_dandelion = Σ_{i=2}^{N} v(r_{1,i}), respectively. For the resonant interaction between equal-mass particles, we use a contact potential regularized with a Gaussian cut-off function of range R_c, which is the leading order of an effective field theory for systems with large scattering length [12]. In order to prevent the three-body collapse (i.e., B_3 → ∞ when R_c → 0 [24]), we introduce a zero-range three-body potential [3], adopting the regulator prescription from the pair interaction. Numerically, we realize the zero-range limit with R_c ∈ [0.01, 1] for dandelions and R_c ∈ [0.1, 10] for circles. At the lower end of these ranges, the binding energies show relatively little dependence on R_c. To approximate a zero-range interaction, we choose the minimum value of the cut-off to be much smaller than any other length scale in the systems we study (yet it cannot be reduced much further without impairing the numerical stability of the calculations). The two-body coupling strength C(R_c) was calibrated via the Numerov algorithm to approximate a unitary scattering length, |1/a_0| < 10^{−5}. In addition to ħ = c = 1, we set m = 1. We chose a single three-body scale for the 3-circle and the 3-dandelion, namely the ground-state binding energy B_3 = 0.01 (with 1/a_0 ≪ √(B_3 m)). Accordingly, we renormalized the strength of D(R_c) in both systems to reproduce this value of B_3. This results in two sets of three-body couplings D(R_c) representing a repulsive potential, where we note that setting the 3-circle energy to 0.01 demands a stronger repulsion than fixing the 3-dandelion's ground state to 0.01.
The two interaction classes can be realized as two-species bosonic clusters in which only the inter-species interactions are resonant. Stated explicitly, the ABB and ABBB ground states are 3- and 4-dandelions, respectively, while the AABB cluster is a 4-circle (see fig. 1 and fig. 2). The 3-circle is equivalent to the standard three-boson system at unitarity, but it is not a subsystem of the two-species clusters considered here. We argue that this is why AABB exhibits its own characteristic four-body scale.
This equivalence is explained by noting that the ground state and the potential have the same symmetries, i.e., cyclic permutations for the circle and permutations of the (N − 1) non-interacting particles for the N-dandelion. This implies that these particles effectively behave as indistinguishable, though in principle not necessarily as bosons. Yet, the contact interactions (5) and (6) enforce bosonic behavior for the N = 3, 4 circles and the N = 3, 4, 5 dandelions. In the R_c → 0 limit, these interactions are only non-vanishing in relative S-wave configurations between interacting pairs, which requires them to behave as bosons. If the non-interacting pairs are antisymmetric, this forces the interacting pairs to be antisymmetric too, resulting in a vanishing contribution from the contact-range interaction. As a consequence, provided that the interaction is of contact-range type, all pairs behave symmetrically under pair permutations in the ground state. This is why our calculations employ states of distinguishable particles: this choice allows us to avoid the explicit (and unnecessary) symmetrization of the numerical wave function, thus reducing the number of components to be computed, which in turn makes the computation of larger cut-off-radii ensembles feasible. However, we were careful to verify the equivalence numerically for selected three- and four-body cases. All three- and four-body results obtained for this work employ the stochastic variational method [23] and were benchmarked for a sample set of three- and four-body predictions with the refined resonating-group method [11]. Within this numerical framework, we calculate ground-state energies of three-, four-, and five-body representatives of the circle and dandelion. The convergence of these energies to a finite value indicates the renormalizability of the theory for the respective systems. The collapse of the ground states, i.e., B_N → ∞ for R_c → 0, is, in turn, the signature of an emergent new scale.
Results:
We begin our analysis by recovering the expected collapse associated with the original Efimov spectrum. Specifically, we demonstrate that in the absence of a three-body repulsion all considered systems collapse as B_N ∝ 1/R_c^2 (see fig. 3). Then, we analyze the renormalizability of the N = 4, 5 dandelions and circles with a contact two-body resonant interaction and a contact three-body repulsion that stabilizes the respective trimers. In the large cut-off limit, we find the energies of these systems stable against variations of the cut-off, at approximately 0.06 for the five-body dandelion and circle. The cut-off dependence of these ratios is shown in fig. 4 and fig. 5. The 4- and 5-dandelion binding energies are large when compared to that of the 3-dandelion, but the reason why this is the case remains elusive.
Figure 3: Cut-off dependence of the ground-state binding energies B_N for the trimers, tetramers, and pentamers with an interaction given by resonant two-body potentials which realize the circle and dandelion systems. Results are obtained without a three-body counterterm. By plotting B_N R_c^2, the divergent behavior of the binding energy is highlighted.
particle on the circle. Whether the binding energy of the circle converges to zero as the number of particles goes to infinity remains an open question. Nevertheless, the circles are particle-stable, because no (N + 1)-body circle can decay into an N-body circle. Although not obvious, we find the closest decay threshold to be the complete disintegration of the system.

Figure 4: Convergent behavior of the 3-, 4-, and 5-circle ground-state binding energies with an appropriately renormalized three-body repulsion (cf. fig. 3 for unrenormalized results).

Next, we assess whether the three-body force that renormalizes the 3-dandelion suffices to renormalize the 4- and 5-circles too. The binding energy of the dandelion-regularized circle shows no sign of convergence to a particular value; however, its increase with the cut-off is less steep relative to the case without three-body repulsion (see fig. 6). The collapse was tested up to a cut-off of 100 (in units of mass), and although we cannot numerically exclude a stabilization at some even larger cut-off, we deem such behavior unlikely for two reasons: first, the extremely large binding of these systems
for the considered range of cut-offs suggests that any hypothetical finite limit has a highly unnatural energy which, a priori, is not justified by any physical reason. Second, the fact that the circle is correctly renormalized via the repulsion set by the 3-circle means that it cannot be simultaneously renormalized by the repulsion that stabilizes the 3-dandelion. The discrete scaling factors of the 3-dandelion and 3-circle are de facto different, being respectively 1986.1 and 22.7 for equal-mass systems [15]. These results indicate that, in the unitary limit, the dandelion and circle categories represent two different types of universal behavior. It is helpful to consider the consequences for two-species three-boson systems (either AAB or ABB) to which one additional boson is added. For an inter-species two-body potential at resonance, the AAAB/ABBB mixtures, the dandelions, exhibit bound states which are tied to those of the AAB/ABB three-boson spectrum. The AABB system, in contrast, is not renormalized by the same three-body repulsion. It obtains its renormalization condition from the 3-circle, which cannot be built from A/B mixtures. This three-body potential cannot be included in the theory a posteriori, since it would spoil the renormalization of the ABB system. Therefore, the renormalization of the AABB tetramer requires the introduction of its own four-body force at leading order (in contrast with the four-boson system, in which it is subleading [9,2,8]). It is noteworthy that the 5-circle cannot be represented by A/B mixtures, and we thus abstain from a more detailed analysis of this system. However, it belongs to the same universality class as the 4-circle and is also renormalized by the 3-circle three-body force (see fig. 4). From this, we would expect the 5-circle to be renormalized by the four-body force that renormalizes the 4-circle in its physical representation (i.e., the AABB tetramer), though for the moment this is merely a conjecture.

Figure 5: Convergent behavior of the 3-, 4-, and 5-dandelion ground-state binding energies with an appropriately renormalized three-body repulsion (cf. fig. 3 for unrenormalized results).
The experimental verification of our theoretical results is complicated by the fact that most physical AABB systems have mass imbalances. The heteronuclear 85Rb-87Rb system (both bosons), for which a Feshbach resonance has been observed [16], might be an exception. It is relatively close to the equal-mass limit and might enable (albeit experimentally challenging) a measurement of B_4^{AAAB}/B_3^{AAB} ratios. To make a comparison, we would need a second system with similar features, but no such system is known experimentally; hence the identification of an independent four-body scale remains elusive. Yet, for multi-species systems with similar mass imbalances, such as 41K-87Rb [1,26], 87Rb-133Cs [13], and 23Na-39K [10] (for which the mass imbalances are 0.47, 0.65, and 0.59, respectively), the comparison of their B_4^{AABB}/B_3^{AAB} ratios might very well be meaningful. If these ratios are wildly different, it could represent an experimental verification of the existence of a four-body parameter in the AABB system. Another way to realize the equal-mass dandelion and circle systems is provided by Feshbach resonances between atomic hyperfine levels. Our results could then be confirmed if, besides 87Rb [25], a second atomic species could be tuned to such a resonance, and if the trimer and tetramer energies within the respective condensates were experimentally accessible.
"Physics"
] |
An RGB colour image steganography scheme using overlapping block-based pixel-value differencing
This paper presents a steganographic scheme based on an RGB colour cover image. The secret message bits are embedded into each colour pixel sequentially by the pixel-value differencing (PVD) technique. PVD basically works on two consecutive non-overlapping components; as a result, the straightforward conventional PVD technique is not applicable for embedding the secret message bits into a colour pixel, since a colour pixel consists of three colour components, i.e., red, green, and blue. Hence, in the proposed scheme, the three colour components are initially represented as two overlapping blocks: the combination of the red and green components, and the combination of the green and blue components. Later, the PVD technique is employed on each block independently to embed the secret data. The two overlapping blocks are readjusted to obtain the modified three colour components. The notion of overlapping blocks improves the embedding capacity of the cover image. The scheme has been tested on a set of colour images, and satisfactory results have been achieved in terms of embedding capacity while upholding an acceptable visual quality of the stego-image.
Introduction
In the digital world, one of the major and essential issues is to protect the secrecy of confidential data during their transmission over a public channel. In general, confidential digital data are pre-processed before their transmission over a public channel. This pre-processing operation changes the content of the information into another form, and only an authorized person is capable of appropriately executing the reverse operation. The authors of [14] have suggested a combination of LSB and PVD methods where three consecutive pixels are considered in hiding the secret message. Their scheme improves the embedding capacity while retaining an acceptable visual quality of the stego-image. Several other PVD variants [3,15-20] are found in the literature for enhancing the PVD technique. Lee et al. [3] introduced a tri-way PVD approach to improve the hiding capacity and to survive several steganalyses. Tseng & Leng [15] modified the traditional PVD-based quantization range table and introduced a new technique known as perfect square number (PSN); the secret message bits are concealed using the PSN and their proposed quantization range table. Liao et al. [16] proposed a steganographic scheme based on four-pixel differencing and modified LSB substitution; edge-region pixels can tolerate considerably larger changes without perceptual distortion than those in smooth regions. Swain [17] proposed another combined LSB- and PVD-based improved image steganographic scheme, where the secret message bits are hidden in 2 × 2 non-overlapping pixel blocks of a cover image. Recently, another block-based PVD steganographic scheme was presented in [18], which considers 3 × 3 non-overlapping image blocks. A seven-directional PVD scheme [19] with improved payload capacity is also found in the literature. Conventional PVD suffers from a falling-off boundary problem in some blocks; hence, after the readjustment process, the distortions of those blocks are high compared with the other blocks, and as a consequence it sometimes yields a low-quality stego-image. Some authors have addressed this problem, and their solutions are effective but carry intensive computational overhead. Zhao et al. [20] proposed PVD with a modulus function for improving the image quality while preserving the same embedding capacity as conventional PVD. Another work is found in [21], where the authors overcome the falling-off boundary problem by adopting an adaptive PVD approach.
Several researchers have employed either LSB substitution or a PVD-based steganographic approach to devise efficient colour image steganographic schemes. In [22], the authors enhanced the security of a colour steganographic scheme by not concealing the secret message bits in sequential order in each colour pixel. The embedding process is driven by a secret pseudorandom value which adaptively decides the payload capacity and the sequence in which secret message bits are embedded into each colour plane. This indirect approach definitely enhances the security level. Another LSB substitution-based colour image steganography is found in [23], where the secret message bits are hidden with reference to an indicator colour plane instead of being embedded directly in order. Another secret key-based colour image steganography is suggested by Parvez & Gutub [24], where the secret message bits are spread out over each colour plane based on a predefined secret key. A modified PVD-based steganography is proposed by Nagaraj et al. [25]; in their scheme, a modulus 3 function is used with PVD to embed secret message bits into colour pixels. Later, Prema & Manimegalai [26] proposed a colour image steganography using modified PVD. In their scheme, an RGB colour image is decomposed into non-overlapping blocks of two consecutive pixels. Three different pairs, namely (R,G), (G,B) and (B,R), are formed from two consecutive colour pixels, and the secret message is embedded based on the differences of the colour component pairs. They improved the hiding capacity while maintaining acceptable visual quality of the stego-image. Yang & Wang [27] devised a block-based smart pixel adjustment process where a block of two colour pixels is considered during the secret message-embedding process; however, the hiding capacity of their scheme is not high. Adaptive PVD-based colour image steganography is suggested in [28], where the secret message is concealed at the block level of each colour plane and the vertical and horizontal edges in each block are exploited during message embedding. The above colour image steganographic schemes basically work on a colour plane instead of on colour pixels. Hence, in this paper, we propose an RGB colour image steganography where the secret message is concealed in each colour pixel independently. The proposed scheme chooses one colour pixel at a time and embeds the secret message into it by employing the modified PVD appropriately. In the proposed scheme, the colour pixel is grouped into two pairs, namely (R,G) and (G,B), to form two overlapping blocks. PVD is applied to each pair to embed the secret message bits. Afterwards, the proposed readjustment process is carried out on each pair to obtain the final modified stego colour components, i.e. the R, G and B components. The readjustment process ensures that, in the decoding process, PVD remains applicable to extract the secret message bits from the stego colour pixel. The proposed scheme improves the embedding capacity owing to the overlapping-block concept.
The rest of the paper is organized as follows. Section 2 presents the basic idea of the PVD method. The details of the proposed scheme are described in §3. The experimental results are presented in §4. Finally, §5 concludes the paper.
Basics of pixel-value differencing
The PVD method [13] uses grey-level images as the cover image, and variable-length secret message bit sequences are embedded into the cover image. Fewer secret message bits are embedded into smooth regions than into edge regions. Initially, the cover image is partitioned into non-overlapping blocks of size 1 × 2 in raster-scan order. The two consecutive pixels in the i-th block are denoted P_i and P_{i+1}, respectively, and the difference value between them is calculated as d_i = |P_i − P_{i+1}|. After the embedding process, the pixels P_i and P_{i+1} of the i-th block are replaced by the stego pixels P'_i and P'_{i+1}. On the receiver side, the difference of the i-th block, d'_i = |P'_i − P'_{i+1}|, is computed; this difference is used to look up the number of concealed bits in the i-th block in the quantization range table of figure 1. The secret bitstream is obtained by converting the decimal value of (d'_i − lower_i) into binary form. An example of the PVD process is as follows. Suppose the 4-bit binary secret message is 1011_2, whose corresponding decimal value is 11_10. The modified difference d'_i = lower_i + 11 and the required adjustment m = |d'_i − d_i| are calculated, and finally, as per equation (2.1), the stego pixels are computed by distributing m over the two pixels of the block.
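To make the embedding and extraction steps concrete, the following minimal Python sketch applies PVD to a single pixel pair. The quantization table is an assumption: it uses the widely cited Wu-Tsai ranges [0,7], [8,15], [16,31], [32,63], [64,127], [128,255], which may differ from the table in figure 1, and the falling-off-boundary check (stego values leaving [0, 255]) is omitted.

```python
# Minimal PVD sketch on one pixel pair; RANGES assumes the Wu-Tsai table.
RANGES = [(0, 7), (8, 15), (16, 31), (32, 63), (64, 127), (128, 255)]

def embed_pair(p1, p2, bits):
    d = abs(p1 - p2)
    lower, upper = next(r for r in RANGES if r[0] <= d <= r[1])
    n = (upper - lower + 1).bit_length() - 1      # capacity of this pair
    m = int(bits[:n].ljust(n, '0'), 2)            # decimal value to hide
    delta = (lower + m) - d                       # change of the difference
    # spread the change over both pixels (ceil/floor split)
    if p1 >= p2:
        q1, q2 = p1 + (delta + 1) // 2, p2 - delta // 2
    else:
        q1, q2 = p1 - delta // 2, p2 + (delta + 1) // 2
    return q1, q2, n

def extract_pair(q1, q2):
    d = abs(q1 - q2)
    lower, upper = next(r for r in RANGES if r[0] <= d <= r[1])
    n = (upper - lower + 1).bit_length() - 1
    return format(d - lower, '0{}b'.format(n))

q1, q2, n = embed_pair(120, 100, '1011')
assert extract_pair(q1, q2)[:4] == '1011'
```

With p1 = 120 and p2 = 100, d = 20 falls in [16, 31], so 4 bits are embedded and the receiver recovers 1011 from the new difference 27.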
Proposed scheme
The proposed colour image steganographic scheme is presented in this section. Initially, each colour pixel is decomposed into its corresponding colour components, i.e. R, G and B. We then form two pairs, (R,G) and (G,B). Other ordered pairs are also acceptable, but in this work we have implemented our scheme using the pairs (R,G) and (G,B), which form two consecutive overlapping blocks as shown in figure 3. In our scheme, a variable number of secret message bits is embedded based on the difference of each pair using PVD. After embedding the secret message bits into each pair, the intermediate colour components are further readjusted to attain the final stego colour components. In a natural colour image dominated by a particular colour component, the data-hiding process may change that component of a pixel so much that the distortion becomes perceptible. In this paper, we avoid this circumstance by adopting a suitable threshold value: the data-hiding capacity in each colour pixel is restricted by the threshold so that the stego-image retains high visual quality. Figure 4 shows the overall embedding process, and the decoding process is shown in figure 5. The algorithm steps of the proposed embedding and extraction procedures are presented as follows, with an illustrative sketch given below:
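The formal step lists are not reproduced in this extract. As a hedged illustration only, the sketch below shows how the overlapping-block idea could be realized for one RGB pixel, reusing the embed_pair helper from the PVD sketch in the previous section; the readjustment and threshold rules are only gestured at in the comments, and the paper's actual procedure should be consulted for the exact rules.

```python
# Illustrative only: embed a bitstream into one RGB pixel via the two
# overlapping blocks (R,G) and (G,B), reusing embed_pair defined above.
def embed_pixel(rgb, bitstream):
    r, g, b = rgb
    r1, g1, n1 = embed_pair(r, g, bitstream)         # block 1: (R, G)
    g2, b1, n2 = embed_pair(g1, b, bitstream[n1:])   # block 2: (G, B)
    # A full implementation would now readjust (r1, g2) so that
    # |r1 - g2| stays in the same quantization range as |r1 - g1|
    # (keeping block 1 decodable), and would reject pixels whose
    # distortion exceeds the threshold mentioned in the text.
    return (r1, g2, b1), n1 + n2

stego, n_bits = embed_pixel((120, 100, 60), '101101110001')
```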
Experiment results
In this section, experimental results are presented to demonstrate the performance of the proposed scheme. The scheme has been tested on a set of standard colour images, but in this paper we present the results for six colour images, selected for their diverse image features, to estimate the performance in terms of visual quality and embedding capability of the stego-images. The original images are shown in figure 7. The secret message consists of randomly generated bits, and the visual differences introduced by embedding are insignificant, as shown in figures 21-26. The stego-image quality is further estimated in terms of the peak signal-to-noise ratio (PSNR) and the embedding capacity/payload. The PSNR values indicate that the distortion appearing after embedding the secret message into the cover image is reasonably small and imperceptible to human visual perception. The proposed scheme is also compared with some other steganographic schemes in terms of embedding capacity and PSNR, and the comparative results are reported as well.
Conclusion
Most colour image steganography operates on individual colour components instead of considering all colour components together. In this paper, by contrast, the proposed method conceals the secret message bits directly into each pixel sequentially, applying conventional PVD to overlapping blocks of colour components. The proposed readjustment process of the colour components ensures the feasibility of a conventional PVD-based decoding procedure. The experimental results reveal that the proposed scheme has a larger hiding capacity with acceptable imperceptibility of the stego-image. In addition, the proposed scheme is simple and easy to implement on RGB colour images.
Data accessibility. Our data have been deposited at Dryad (http://dx.doi.org/10.5061/dryad.21tm5) [29]. Authors' contributions. Both authors contributed to the design and implementation of the research, and to the writing of the manuscript. | 2,555 | 2017-04-01T00:00:00.000 | [
"Mathematics"
] |
Enhanced Oxidation of Nickel at Room Temperature by Low-energy Oxygen Implantation
The formation of oxide films on pure Ni surfaces by low-energy oxygen ion-beam bombardment at room temperature was studied by X-ray photoelectron spectroscopy. Ion-induced oxidation is more efficient in creating thin NiO films on Ni surfaces than oxidation in an oxygen atmosphere. The oxide thickness of bombarded samples is related to the penetration depth of oxygen ions in Ni and scales with the dose of implanted oxygen, Φ, as Φ^(1/6). This type of oxide growth is predicted theoretically for diffusion of Ni cations via doubly charged cation vacancies, whose creation and mobility are greatly enhanced by ion irradiation.
INTRODUCTION
Most metal surfaces exposed to an oxidizing environment will gradually corrode. However, many metals, such as chromium, cobalt or nickel, form an inert oxide layer on the surface that isolates the surface from the surrounding environment and provides effective protection from further deterioration. [4][5][6] Nickel is one of the model metals for oxidation studies, as it forms only one oxide, NiO, over a wide range of temperatures, T, and oxygen partial pressures, pO2. [7] Microscopically, the oxidation process in an oxygen atmosphere starts with the chemisorption of oxygen at the very top of the Ni surface, followed by the nucleation of NiO islands that eventually coalesce, even at room temperature, RT, and form a thin NiO film of several monolayers. [8] This thin NiO film passivates the surface, and further oxide growth requires higher temperatures. As NiO is a metal-deficient (Ni1-δO) p-type semiconductor, the subsequent oxidation of Ni at higher temperatures is expected to proceed by the outward migration of Ni cations and electrons, with the growth of a single-phase oxide at the oxide/gas interface. [1] Indeed, mass transport by diffusion of Ni cations, with a possible contribution from an inward diffusion of oxygen anions, has been identified in a number of oxidation studies of Ni, including high-temperature experiments (500 to 1400 °C) [7,9-12] and oxidation during potentiostatic anodic polarization of pure Ni. [13] It is very important to gain information on the initial oxidation stage of Ni, as it represents the very first step of the Ni reaction with the oxidizing environment. However, at high temperatures this initial reaction is very fast, and it is difficult to determine all aspects of the initial oxidation stages, including the oxidation mechanism. On the other hand, the rapid passivation of the Ni surface at RT prevents any further adsorption of oxygen and growth of thicker Ni oxides. In order to study the low-temperature oxidation of Ni, the oxidation chemistry of Ni should be enhanced, not by increasing temperature, but by implementing some other extreme conditions. Indeed, the irradiation of Ni by beams of electrons or Ar+ ions during oxygen exposure significantly enhances the oxidation rate and the formation of Ni oxides. [8,14] In addition, it has been shown that oxygen implantation represents an attractive and feasible alternative for the oxidation of Ni, even at RT, [15] that could be more efficient in creating thin NiO films on Ni surfaces than some other methods, such as electrochemical methods. [16-20] In the present study, we explore further the radiation-enhanced oxidation of Ni at RT by low-energy oxygen ion-beam bombardment in order to examine the oxidation kinetics during the initial stages of the oxidation process. X-ray photoelectron spectroscopy (XPS) characterization of oxidized surfaces was used in order to compare the initial stages of oxidation of pure Ni metal in an oxygen atmosphere at RT with the oxidation of the same material by oxygen ion-implantation at RT.
EXPERIMENTAL
The oxidation studies were performed on a 0.5 mm-thick nickel foil (Alfa Aesar, 99.994 wt.% Ni). Before any oxidation step, the foil was abraded with SiC papers (800-1200 grit), cleaned with ethanol and redistilled water, and then slightly etched within the analysis ultra-high vacuum (UHV) chamber by cycles of 2 keV Ar+ bombardment at RT (these samples are referred to as cleaned samples). The cleaned samples were oxidised in situ in an oxygen atmosphere, under pure oxygen-gas (purity 99.999%) pressures around 2 × 10^-4 Pa. The oxidation dose is expressed in units of Langmuir, connected to the gas pressure (in Pa) and the exposure time (in seconds) as 1 L = 1.33 × 10^-4 Pa s. The ion-beam oxidation was also carried out in situ with a broad beam of 1 or 2 keV O2+ ions with a typical current density of 2 μA/cm^2. The implanted dose Φ (in O atoms/cm^2) for the conditions used in the present study is related to the bombardment time (t, in seconds) as Φ = 1.25 × 10^13 × t. For the bombardment times used in this study (from 2 seconds to 4 hours), the corresponding implanted dose of oxygen covers almost four orders of magnitude, from 2.5 × 10^13 O atoms/cm^2 to 1.75 × 10^17 O atoms/cm^2.
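The dose-time relation above is linear; as a quick illustrative check (ours, not the authors' code), the following snippet reproduces the quoted dose range:

```python
# Dose-time relation quoted above: Phi = 1.25e13 * t (O atoms / cm^2).
for t in (2, 300, 3600, 4 * 3600):            # bombardment times in seconds
    print("t = %6d s  ->  Phi = %.2e O atoms/cm^2" % (t, 1.25e13 * t))
# output spans 2.5e13 (2 s) to 1.8e17 (4 h), i.e. almost four decades
```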
All samples were characterized by XPS in a SPECS XPS spectrometer equipped with a Phoibos MCD 100 electron analyser and a monochromatized source of Al Kα X-rays of 1486.74 eV. The typical pressure in the UHV chamber during analysis was in the 10^-7 Pa range. For the electron pass energy of 10 eV used for the hemispherical electron energy analyser in the present study, the overall energy resolution was around 0.8 eV. All spectra were calibrated by the position of the C 1s peak, placed at a binding energy of 284.5 eV. The photoemission spectra were simulated with several sets of mixed Gaussian-Lorentzian functions with Shirley background subtraction.
RESULTS AND DISCUSSION
The development of Ni-O bonds during different oxidation stages on Ni surfaces can be followed from the chemical shifts in photoemission spectra around the Ni 2p or O 1s core levels. While the focus of the XPS measurements in the present study was on the Ni 2p photoemission, which shows a more distinctive structure and larger chemical shifts than the O 1s emission, [21] the O 1s photoemission was also monitored in order to follow the build-up of oxygen concentration in the near-surface region during each oxidation step. The Ni 2p emission is characterized by two asymmetric peaks at binding energies, BE, of 852.5 eV and 869.9 eV, characteristic of the spin-orbit splitting of the Ni 2p energy level of metallic Ni, Ni(0), into 2p3/2 and 2p1/2 levels, respectively (not shown). [22] In the present work we have considered only the more intense Ni 2p3/2 component, as the large Ni 2p spin-orbit splitting of around 17 eV prevents any mixing of contributions from the 2p3/2 and 2p1/2 states. [21] Several characteristic Ni 2p3/2 spectra obtained from cleaned and oxidized Ni surfaces are shown in Figure 1. The photoemission from a cleaned sample is characterized by the metallic Ni(0) peak (peak 1 at a BE of 852.5 eV) and a less intense satellite peak (peak 1' at a BE around 858.5 eV; the fitting of the Ni 2p3/2 line from the cleaned sample, shown in Figure 2, reveals a second characteristic satellite at a BE around 856.0 eV, as reported previously in the literature for pure Ni surfaces). [21] After oxidation in an oxygen atmosphere at RT, several new peaks emerge in the spectra (peaks 2-4 in Figure 1), already after the lowest dose of 10 L of oxygen used in the present study. The intensities of these new peaks increase only slightly for higher oxygen doses, and saturate after around 100 L of oxygen. Obviously, the metallic Ni surface is quite resistant to oxidation in an oxygen atmosphere at RT. The photoemission results from Figure 1 indicate the fast formation of only several monolayers of Ni oxide on the surface of Ni metal, which reduces the reactivity of the surface and prevents any further adsorption or uptake of oxygen at higher oxygen doses. The further growth of oxide films on Ni requires oxidation temperatures well above RT.
The ion-induced oxidation of Ni induces several dramatic changes in the shape of the Ni 2p photoemission peaks, as shown, as an example, in Figure 2 for bombardment of Ni surfaces with 2 keV O2+ ions for 300 and 3600 s (corresponding to Φ of 3.75 × 10^15 and 4.5 × 10^16 O atoms/cm^2, respectively). The ion-bombarded spectra are fitted with several mixed Gaussian-Lorentzian functions 2-6 (in addition to the metallic Ni(0) peak 1 and the two satellites, 1' and 1'', from the cleaned sample): peak 2 at a BE of 853.8 eV, peak 3 at 855.5 eV, and satellite peaks 4, 5 and 6 at 860.7, 863.9 and 866.2 eV, respectively. Indeed, the new peaks 2-6 are known from the literature as the characteristic photoemission features of NiO surfaces. [23] A strong metallic Ni(0) peak 1, still present in the spectra after oxidation with the highest oxygen dose used in the present study, indicates the formation of a very thin oxide film on the surface. In such a case, the XPS signal includes a contribution from the underlying metallic nickel.
We supplement the Ni 2p core-level measurements from Figures 1 and 2 with XPS measurements around the O 1s level (Figure 3) and valence band photoemission measurements (Figure 4, also obtained with Al Kα X-rays). Several characteristic spectra are shown in Figures 3 and 4, respectively. All O 1s spectra in Figure 3 exhibit an intensive peak at 529.5 eV, characteristic of the emission from O^2- states in transition metal oxides such as NiO. [24] The smaller peak around 531.3 eV has been assigned previously to defective oxide (oxygen atoms attached to lattice vacancies) and/or CO or OH impurities adsorbed on the surface. [21] However, in the case of Ni oxides, this peak has also been assigned to emission from the Ni2O3 phase that can be present at low concentration within the NiO films. [25] A possible contribution from a Ni2O3 phase in the Ni 2p3/2 spectrum is difficult to distinguish within a signal dominated by the NiO phase, as the BE of Ni-O bonds in Ni2O3 overlaps with the multiplet splitting structure of NiO (peak 3 from Figure 2). [15,26] Although in the present paper we only discuss the formation of the NiO phase, the presence of a small concentration of Ni2O3 phase should also be anticipated within the total concentration of Ni oxide. [15] In the case of oxidation in an oxygen atmosphere (Figure 3a), the O 1s emission increases after exposing the surface to 10 L of oxygen, but remains almost constant for higher oxygen doses, revealing the saturation of oxygen uptake after the lowest oxygen dose of 10 L. On the other hand, the concentration of oxygen increases continuously with the bombardment time during the ion-implantation experiments (Figure 3b), indicating the continuous build-up of oxygen concentration below the surface. The valence band spectra in Figure 4 exhibit the same trend. The valence band of samples oxidised in an oxygen atmosphere (Figure 4a) saturates immediately after the lowest oxygen dose of 10 L. It is characterized by the broadening of peak A, i.e. the development of an additional peak, C, corresponding to Ni 3d states in NiO. [27,28] In contrast to oxidation in an oxygen atmosphere, the valence band of ion-beam bombarded samples (Figure 4b) exhibits a more complex structure, characteristic of NiO: a well-pronounced new peak, C, about 2.2 eV below the Fermi level, corresponding to Ni 3d states, and a strong peak D, about 9.5 eV below the Fermi level, representing Ni 3d valence band satellites. [27,28] In general, at low temperatures, the oxidation of Ni in an oxygen atmosphere involves the tunnelling of electrons from Ni atoms to the adsorbed O atoms at the surface, which produces an electric field across the oxide. This electric field causes the outward migration of metal cations to the surface, or the inward migration of oxygen anions to the oxide/metal interface, resulting in a thickening of the oxide film with exposure time that is best described by a logarithmic growth rate.
[29,30] In this process, oxidation is terminated when the electric field is no longer strong enough to support the ion migration. However, our oxidation experiments in an oxygen atmosphere, carried out at RT, are not consistent with logarithmic kinetics. The oxidation process saturates very quickly and prevents any further oxide growth. This is consistent with the first few steps of the oxidation process, described by the dissociation of oxygen molecules from the gas phase and the adsorption of separate oxygen atoms on the Ni surface. As the adsorbed oxygen atoms exchange places with the underlying metal atoms and become incorporated below the surface, a thin oxide layer or oxidized islands form on the Ni surface. [8] On the other hand, the oxidation of Ni in an oxygen atmosphere at elevated temperatures is driven by Ni^2+ diffusion through the oxide film. [11,12] When a Ni vacancy is created, two neighbouring Ni^2+ ions, in order to balance charges, each lose an electron, forming two Ni^3+ ions (i.e. two electron holes). [1] The incorporation of oxygen in Ni is described by the reaction [1]

½ O2 → Oo^x + V''Ni + 2h•,

where, in the Kröger-Vink notation, Oo^x represents an O ion on a regular lattice site, V''Ni (V'Ni) is a doubly (singly) ionized Ni vacancy and h• is a positively charged electron hole. It has been shown (both theoretically and experimentally) that the concentration of cation vacancies scales with the oxygen partial pressure, pO2, as (pO2)^n, where the exponent n equals 1/2, 1/4 or 1/6 for neutral, singly and doubly charged Ni vacancies, respectively. [1,31,32] Consequently, any quantity, x, directly related to the diffusion of charged particles by cation vacancies within the oxide film, such as the oxide thickness or the change in weight as a result of oxidation, should be related to the oxidation time, t, as x = K t^n, i.e. log(x) = log(K) + n log(t), where K is the rate constant. [1] Therefore, if a plot of the experimental data on a log(x) vs. log(t) scale produces a straight line, the slope of this line is determined by the exponent n.
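As a hedged numerical illustration (not from the paper), the exponent n can be recovered from thickness-time data by a straight-line fit on the log-log scale described above; here synthetic data with n = 1/6 and an arbitrary rate constant are used:

```python
import numpy as np

# Illustrative fit: recover the exponent n from data obeying x = K * t**n.
t = np.logspace(0, 4, 20)                  # oxidation times (s)
x = 0.8 * t ** (1 / 6)                     # synthetic oxide thickness (nm)
n_fit, logK = np.polyfit(np.log10(t), np.log10(x), 1)
print("fitted slope n = %.3f" % n_fit)     # ~0.167 = 1/6
```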
The change in thickness, d, of the NiO film with the bombardment time (i.e. the oxygen dose Φ) can be determined from the numerical fits of the Ni 2p photoemission peaks shown in Figure 2. The intensity ratio, INi / INi∞, of the metallic Ni 2p3/2 peaks (such as peaks 1 in Figure 2), taken from the Ni foil with an oxide film (INi) and from a cleaned Ni surface (INi∞), is given by the relation [33]

INi / INi∞ = exp(-d / (λ cos θ)),   (2)

where I is the area under the metal peak (peak 1 in Figure 2), λ is the inelastic mean free path of the electrons (λ = 1.1 nm for the 630 eV electrons from the present XPS measurements [34]) and θ is the emission angle with respect to the surface normal (equal to 0° in the XPS instrument used in the present study). The results are plotted on a log(d) vs. log(t) scale in Figure 5 for the two oxygen-ion impact energies of 1 and 2 keV. The saturation oxide thickness obtained by this method is about 2.4 nm for 1 keV O2+ ions, while it approaches 3.3 nm for 2 keV oxygen ions. For comparison, the saturation oxide thickness obtained from relation (2) for a sample oxidised in an oxygen atmosphere at RT to a dose of 5000 L (the largest oxygen dose used in our experiments) is about 0.9 nm. On the other hand, the oxide thickness on ion-bombarded samples can be estimated from SRIM simulations. [35] For 1 keV (or 2 keV) O2+ bombardment, the ion range (Rp) of oxygen in Ni or NiO changes only slightly, from 1.1 to 1.2 nm (or 1.6 to 1.9 nm), while the range straggling (ΔRp) changes from 0.9 to 1.3 nm (or 1.3 to 1.6 nm). Therefore, a total oxide thickness (Rp + ΔRp) of 2.0 to 2.5 nm (or 2.9 to 3.5 nm) is expected for 1 keV (or 2 keV) O2+ bombardment, in very good agreement with the saturation oxide thickness obtained from the XPS measurements plotted in Figure 5.
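For concreteness, a small sketch of extracting d from relation (2); the intensity ratio below is an assumed, illustrative value, not a measured one:

```python
import numpy as np

# Thickness from relation (2): d = -lambda * cos(theta) * ln(I_Ni / I_Ni_inf),
# with lambda = 1.1 nm and theta = 0 as stated above.
lam, theta = 1.1, 0.0                          # nm, rad
ratio = 0.05                                   # I_Ni / I_Ni_inf (illustrative)
d = -lam * np.cos(theta) * np.log(ratio)
print("oxide thickness d = %.2f nm" % d)       # ~3.3 nm for this ratio
```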
Several different slopes, also plotted in Figure 5, are related to the mass transport during oxidation processes driven by the diffusion of cation vacancies of different charge states, as predicted by the theory of the parabolic oxidation growth rate. [1] For both ion-bombardment energies (1 and 2 keV) the oxide thickness closely follows the parabolic growth rate with an exponent n of about 1/6. This result strongly points to a dominant contribution of doubly charged Ni vacancies, V''Ni, to the mass transport during the ion-induced oxidation of pure Ni. It is worth mentioning here that the oxidation of Ni in an oxygen atmosphere at elevated temperatures exhibits several different growth rates of NiO, depending on the oxidation temperature. For example, the mass transport estimated from the diffusion coefficients of Ni in NiO as a function of pO2 indicates a (pO2)^(1/3.5) dependence at 1400 °C, a (pO2)^(1/4) dependence at 1300 °C and finally a (pO2)^(1/6) dependence below 1000 °C. [10] This result confirms the important role of V_Ni in mass transport during the thermal oxidation of pure Ni, with a dominant contribution of doubly charged vacancies at 1000 °C and singly charged vacancies at 1300 °C, while neutral vacancies may exist at higher temperatures. In contrast to thermal processes, ion-induced oxidation eliminates the need for elevated temperatures and bypasses several thermally activated processes, such as absorption, dissociation, place-exchange, diffusion or bond breaking, while at the same time enhancing the production of point defects, such as vacancies and interstitials. [36] The point defects may considerably enhance the mobility and reactivity of metal cations and oxygen anions in the material and therefore influence the oxidation process. On the other hand, the sputtering process present during ion bombardment can limit the final oxide thickness. However, the sputtering rate in our experiments is quite low, at least two orders of magnitude lower than in some other techniques based on the sputtering process, such as secondary ion mass spectrometry (SIMS). [37] From our SIMS measurements on Ni, [15] we estimate the sputtering rate in the present oxidation experiments to be of the order of 10^-3 Å/s. More importantly, the oxygen ions are implanted deeper below the surface, with an increase of Rp directly related to the erosion rate of the surface by sputtering. For a steady-state condition between the sputter erosion of the surface and the implantation depth of the O atoms, the thickness of the final oxide layer does not depend on the sputtering rate.
CONCLUSION
In summary, we present the XPS analysis of the oxidation process on pure Ni surfaces by low-energy oxygen bombardment at RT. The oxidation starts below the surface, at depths within the Rp + ΔRp range. Oxygen in excess of the amount consumed in the formation of a shallow buried nickel oxide rapidly diffuses toward the buried oxide interface, where it oxidizes the available nickel. At the same time, the outward diffusion of Ni ions through cation vacancies created within the buried NiO (the formation of vacancies is further enhanced by ion bombardment) places more Ni ions close to the surface, where they react with the impinging oxygen ions and provide the oxide growth at the oxide/gas interface. This oxidation step is characterized by a parabolic growth rate, which scales with the amount of implanted oxygen as Φ^(1/6), in contrast to the oxidation processes carried out in an oxygen atmosphere, but in full agreement with the theoretical prediction for oxidation driven by the diffusion of cations through doubly charged cation vacancies. The oxide thickness is related to the implantation kinetics of the oxygen ions and is limited by the simultaneous sputtering of the Ni surface by the energetic ion bombardment.
Figure 1 .
Figure 1.Ni 2p3/2 core-level photoemission spectra from a cleaned nickel surface and surfaces oxidized at RT in oxygen atmosphere to 10, 100 and 5000 L, respectively.
Figure 2 .
Figure 2. Fitting of the Ni 2p3/2 photoemission peaks obtained from a cleaned surface and from surfaces ion-beam bombarded at RT with 2 keV O2+ for 300 and 3600 s. Closed circles represent experimental data and solid lines a mixture of Gaussians and Lorentzians.
Figure 3 .
Figure 3. XPS spectra around O 1s core-level measured a) on a cleaned Ni sample and samples oxidized at RT in oxygen atmosphere to different oxygen doses or b) on samples bombarded at RT with 2 keV O2 + ions for different times, as indicated in the figure.
Figure 4 .
Figure 4. Valence band photoemission spectra (obtained with Al Kα X-rays) from a cleaned Ni surface and surfaces oxidized at RT in oxygen atmosphere to 10, 100 and 5000 L or ion-beam bombarded at RT with 2 keV O2 + ions for 10 and 14400 s.
Figure 5 .
Figure 5. Thickness of oxide films obtained on pure Ni samples by 1 or 2 keV O2 + ion-beam bombardment at RT as a function of bombardment time, plotted on a log-log scale.Symbols represent experimental results.Several different slopes, characteristic for different parabolic growth rates, are also indicated in the figure. | 4,823 | 2017-07-03T00:00:00.000 | [
"Materials Science"
] |
The performance of X̄ control charts for large non-normally distributed datasets
Because of digitalization, many organizations possess large datasets. Furthermore, measurement data are often not normally distributed. However, when samples are sufficiently large, the central limit theorem may be used for the sample means. In this article, we evaluate the use of the central limit theorem for various distributions and sample sizes, as well as its effects on the performance of a Shewhart control chart for these large non-normally distributed datasets. To this end, we use the sample means as individual observations and a Shewhart control chart for individual observations to monitor processes. We study the unconditional performance, expressed as the expectation of the in-control average run length (ARL), as well as the conditional performance, expressed as the probability that the control chart based on estimated parameters will have a lower in-control ARL than a specified desired in-control ARL. We use recently developed factors to correct the control limits to obtain a specified conditional or unconditional in-control performance. The results in this paper indicate that the X̄ control chart should be applied with caution, even with large sample sizes.
INTRODUCTION
Shewhart control charts are commonly used to monitor process data. Typically, the performance of such control charts is heavily dependent on the assumption of normally distributed data. In practice, this assumption is often violated. For example, Alwan 1 analyzed 235 real datasets and concluded that most of these datasets do not meet the assumptions underlying the traditional control charts.
Since recent advances have led to an increase in the amount of available information, one way to work around the violation of the normality assumption is to gather larger datasets and use subgroup averages instead of individual observations. Because averages are normally distributed under certain conditions, according to the central limit theorem (CLT), this should largely resolve the issue of non-normally distributed data (cf Billingsley 2).
While the approach of using averages instead of individual observations is suitable for many statistical techniques, the major difference with many other statistical techniques is that in statistical process monitoring (SPM) we are interested in the long tail behavior of the distribution. This means that, even when the statistic is almost normally distributed, small deviations in the long tails can lead to a bad control chart performance in terms of the false alarm rate and the average run length (ARL). In this paper, we therefore investigate the performance of Shewhart-type X̄ control charts for large non-normally distributed datasets using the convolutions of the distributions. To the best of our knowledge, the performance of Shewhart X̄ control charts in this setting has not been investigated thus far.
The paper is structured as follows. In the next section, we briefly describe the model and control charts considered in this paper. Subsequently, in Section 3, the CLT is summarized followed by the convolutions of various probability distributions. In Section 4, we investigate the differences between the normal and non-normal convolutions. Next, Section 5 describes the performance of the Shewhart control chart based on large non-normally distributed datasets. Finally, Section 6 provides some concluding remarks.
THE CLASSICAL SHEWHART CONTROL CHART
Because of the increase in data supply and storage, organizations nowadays often possess large datasets. As the CLT states that, under certain conditions, the sample means are normally distributed when the samples are sufficiently large, we can treat the sample means as individual observations and use a Shewhart control chart for individual observations under normal theory. To construct such a chart, m samples of size n are collected when the process is assumed to be in control. On the basis of these data, the process mean μ is estimated by the overall mean of the sample means

X̿ = (1/m) Σ_{i=1}^m X̄_i, with X̄_i = (1/n) Σ_{j=1}^n X_ij,   (1)

where X_ij is the j-th observation in the i-th subgroup (i = 1, 2, …, m and j = 1, 2, …, n), and the standard deviation of the sample means (σ/√n) is estimated from the sample standard deviation of the X̄_i,

S = ( (1/(m-1)) Σ_{i=1}^m (X̄_i − X̿)² )^{1/2},   (2)

with S/c_4(m) an unbiased estimator of the standard deviation of the sample means, where c_4(m) is the usual unbiasing constant. The choice of this estimator is based on Cryer and Ryan. 3 We have also evaluated the alternative, more traditional estimator based on moving ranges (which was also used by Roes et al 4). However, the use of this estimator did not improve the performance of the Shewhart X̄ control chart, which confirms the result of Cryer and Ryan. 3 The control limits based on estimated parameters are given by

ÛCL = X̿ + k S/c_4(m), L̂CL = X̿ − k S/c_4(m),   (3)

with ÛCL and L̂CL the respective upper and lower control limits and k the factor used to achieve the desired in-control performance. When the process parameters are known, k is commonly set equal to 3, which yields a false alarm rate of 0.0027 or, equivalently, an ARL of 370.4. However, when the process parameters are unknown, other values can be chosen to match a certain desired performance. Obtaining a desired control chart performance in expectation represents the unconditional performance of the control chart. Recently, factors k_u have been derived to ensure that the in-control ARL in expectation (EARL) is equal to a specified value (EARL_0) (see Goedhart et al 5).
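A minimal sketch of this construction, assuming normal Phase I data and using k = 3 in place of the corrected factors k_u or k_c (which come from the cited papers and are not reproduced here):

```python
import numpy as np
from scipy.special import gammaln

def c4(m):
    # unbiasing constant: c4(m) = sqrt(2/(m-1)) * Gamma(m/2) / Gamma((m-1)/2)
    return np.exp(gammaln(m / 2) - gammaln((m - 1) / 2)) * np.sqrt(2 / (m - 1))

rng = np.random.default_rng(1)
m, n, k = 50, 100, 3.0
data = rng.normal(size=(m, n))          # Phase I: m samples of size n
xbar = data.mean(axis=1)                # sample means, treated as individuals
center = xbar.mean()                    # estimate of mu, eq. (1)
s = xbar.std(ddof=1) / c4(m)            # unbiased estimate of sigma/sqrt(n), eq. (2)
UCL, LCL = center + k * s, center - k * s   # eq. (3)
```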
Another recent development is to evaluate a control chart design on the variation of the in-control ARLs of the individually estimated charts, the so-called conditional control charts. Saleh et al 6 investigated the conditional performance of traditional control charts based on estimated parameters. They show that, for control limits estimated with k = 3, the probability of ending up with an estimated chart that has an in-control conditional ARL (CARL) lower than 370.4 is considerable. Goedhart et al 7 developed new correction factors k_c for control charts in order to ensure that the probability (P_E) that a design delivers an estimated control chart with an in-control CARL lower than a specified value (CARL_0) is at most a specified probability (p).
In this article, we study both the unconditional and the conditional performance of the control chart constructed with (3), including the newly developed factors, for cases where the data are non-normally distributed and for various sample sizes (n = 5, 30, 50, 100, 250, 1000). With this model, we can investigate whether the CLT works well and whether the newly developed correction factors are applicable to large non-normal datasets as well. We consider the normal distribution, the standard uniform distribution, heavy-tailed symmetrical distributions (Student's t_4 and t_10 and the logistic distribution), and skewed distributions (the lognormal, Gamma(5, 1), Gamma(5/2, 2) ∼ χ²_5 and χ²_20 distributions). The distribution of the sample means for any one of these non-normal distributions can be found using the convolution of that non-normal distribution, i.e.

P(X̄ ≤ x) = P(C_n ≤ nx),

where C_n is the convolution of n i.i.d. random variables with distribution F. In the next section we derive the distribution of C_n for the considered non-normal distributions.
THE DISTRIBUTION OF THE SAMPLE MEAN
Let X_1, X_2, …, X_n be n i.i.d. observations drawn from F, with E[X_i] = μ and Var[X_i] = σ² < ∞. Then, as n tends to infinity, the random variables √n(X̄ − μ) converge in distribution to a normal N(0, σ²) random variable (cf Billingsley 2). Hence, the asymptotic distribution of the sample means is normal under the above restrictions. The exact distribution for finite values of n can be obtained by evaluating the convolution. To assess the performance of the Shewhart control chart for sample means of non-normally distributed samples, we need the distributional properties of the convolution of these samples: C_n = Σ_{i=1}^n X_i. The convolutions allow an investigation of the distribution of the sample means of non-normal distributions and a comparison with the asymptotic normal distribution according to the CLT.
The convolutions are given below; further details on the derivations and approximations are given in the appendix.
The convolutions
3.1.1 The normal distribution
The convolution of i.i.d. normal random variables is again normal, with mean nμ and variance nσ²: C_n ∼ N(nμ, nσ²).
The uniform distribution
The convolution of n i.i.d. standard uniform random variables has an Irwin-Hall (IH) distribution, which has a piecewise polynomial probability density function with parameter n (see Hall 8):

f_{C_n}(x) = (1/(n-1)!) Σ_{k=0}^{⌊x⌋} (-1)^k binom(n, k) (x - k)^{n-1}, for 0 ≤ x ≤ n.
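A direct transcription of this density into code, as a short illustrative sketch:

```python
import math

def irwin_hall_pdf(x, n):
    # piecewise-polynomial density of the sum of n i.i.d. U(0, 1) variables
    if not 0 <= x <= n:
        return 0.0
    total = sum((-1) ** k * math.comb(n, k) * (x - k) ** (n - 1)
                for k in range(int(math.floor(x)) + 1))
    return total / math.factorial(n - 1)

print(irwin_hall_pdf(1.0, 2))   # 1.0: the n = 2 (triangular) density peaks at 1
```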
The Student's t_ν distribution with ν degrees of freedom
For ν = 1, t_1 is equal to a standard Cauchy distribution, and its convolution C_n has a Cauchy distribution as well (see Blyth 9): C_n ∼ Cauchy(μ_0, γ_n), where μ_0 and γ_n denote the location and scale parameters of the Cauchy distribution, respectively (for standard Cauchy summands, μ_0 = 0 and γ_n = n). Note that the conditions needed to apply the CLT do not hold in this case, as the Cauchy distribution has no finite mean and variance. For ν > 1, we use an approximation based on the numerical inversion of the characteristic function.
The logistic distribution
The standardized version of the sum of i.i.d. logistically distributed random variables with μ = 0 and s = 1 can be approximated by a Student's t distributed random variable with ν = 5n + 4 degrees of freedom (George and Mudholkar 10).
The lognormal distribution
The distribution of the convolution C_n of the lognormal distribution can be approximated using 2 methods: the Fenton-Wilkinson approximation by Fenton 11 or the Pearson IV approximation by Nie and Chen. 12 The Pearson IV approximation turns out to be more accurate than the Fenton-Wilkinson approximation, as it matches 2 more moments (see Section 3.2). In the sequel, we will use the Pearson IV approximation, with location parameter λ, scale parameter a > 0, and shape parameters m > 1/2 and ν ≠ 0.
The gamma Γ(α, β) distribution with parameters α and β
If X_i ∼ Γ(α, β), with parameters α and β, then its convolution is gamma distributed with parameters nα and β: C_n ∼ Γ(nα, β).
The chi-squared χ²_ν distribution with ν degrees of freedom
The convolution of the sum of n i.i.d. chi-squared random variables with ν degrees of freedom is again a chi-squared distribution, now with nν degrees of freedom: C_n ∼ χ²_{nν}.
Accuracy of the approximated distributions
As reported in the previous section, the convolutions of the Student's t with ν > 1, logistic and lognormal distributions have to be approximated. In the graphs in the left column of Figure 1, the approximated densities of the convolutions of the t_10, t_4, logistic and lognormal distributions are plotted and compared with the empirical distribution based on 6 million samples. The graphs in the middle and right columns of Figure 1 zoom in on the 0.135th and 99.865th percentiles of the distributions. The graphs show that the approximated t_10, t_4, and logistic convolutions are accurate. For the lognormal approximations, we find that the Pearson IV approximation is closer to the empirical distribution than the Fenton-Wilkinson approximation. Thus, we will use the Pearson IV approximation in the sequel.
EVALUATION OF THE CENTRAL LIMIT THEOREM
To investigate the differences between the actual distribution of the sample mean and the appropriate normal distribution, we have plotted both distributions and their tail behaviors. In Figures 2 to 4, we have used n = 5, 30, 250 and α = 0.0027 to investigate the tail behaviors. The graphs on the left give the densities, while the graphs in the middle and on the right zoom in on the 0.135th and 99.865th percentiles of the distributions. The graphs show that, for a sample size of n = 30 or larger, the convolutions of the uniform, t_10 and logistic distributions do not deviate much from the normal distribution. The distribution of the t_4 convolution, however, clearly has wider tails than the normal distribution.
The overall distribution of the gamma convolution is quite close to normal, with Gamma(5/2, 2) ∼ χ²_5 closer to normal than Gamma(5, 1). When we zoom in on the tail behavior, the gamma distributions show skewed tail behavior, with narrower tails on the left and wider tails on the right than the normal distribution.
The χ²_20 convolution deviates a little from the normal distribution, but less so than the χ²_5 convolution. The lognormal convolution shows the largest difference from the normal distribution: the distribution of the lognormal convolution is still strongly skewed for large values of n (n = 250).
Note that when we consider a relatively small sample size (n = 5), there are large differences for all distributions. This indicates that the normal approximation is not good enough for small sample sizes.
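These tail discrepancies are easy to reproduce numerically; the following hedged sketch (ours, not the authors' code) compares the empirical 99.865th percentile of lognormal sample means with its CLT-based normal approximation:

```python
import numpy as np

# Tail check for LN(0, 1) sample means with n = 30: empirical 99.865th
# percentile vs. the normal approximation mu + 3 * sigma / sqrt(n).
rng = np.random.default_rng(0)
n, reps = 30, 200_000
means = rng.lognormal(0.0, 1.0, size=(reps, n)).mean(axis=1)
mu = np.exp(0.5)                       # E[X] for LN(0, 1)
var = (np.e - 1) * np.e                # Var[X] for LN(0, 1)
q_emp = np.quantile(means, 0.99865)
q_clt = mu + 3 * np.sqrt(var / n)
print(q_emp, q_clt)                    # the empirical right tail is clearly wider
```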
Simulation procedure
To evaluate the control chart performance, we conduct 10 000 simulation runs for each parameter combination. For each simulation run:
1. A dataset consisting of m samples of size n is generated. On the basis of these data, μ is estimated by X̿ and σ/√n is estimated by S/c_4(m), using (1) and (2). Next, ÛCL and L̂CL are determined using (3). Factor k_u is based on Goedhart et al 5 and factor k_c on Goedhart et al. 7
2. For each dataset, the conditional false alarm rate (CFAR) is calculated as CFAR = 1 − P(L̂CL < X̄ < ÛCL) = 1 − P(n L̂CL < C_n < n ÛCL), using the convolutions of Section 3.1. The CARL is given by 1/CFAR.
When we perform the above procedure, we end up with 10 000 CARLs of individually estimated control charts. When k_u is used, the EARL is estimated by averaging the 10 000 CARLs of the simulated control charts. When k_c is used, the exceedance probability (P_E) is obtained by determining the percentage of CARLs lower than a specified value (CARL_0). Both the unconditional and conditional results were verified using the empirical distribution of the non-normal distributions.
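A self-contained sketch of one such run for Gamma(5, 1) data, where the exact convolution C_n ∼ Γ(nα, β) gives the CFAR directly; k = 3 stands in for the corrected factors:

```python
import numpy as np
from scipy.stats import gamma
from scipy.special import gammaln

c4 = lambda m: np.exp(gammaln(m / 2) - gammaln((m - 1) / 2)) * np.sqrt(2 / (m - 1))

alpha, beta, n, m, k = 5.0, 1.0, 100, 50, 3.0
rng = np.random.default_rng(2)
xbar = rng.gamma(alpha, beta, size=(m, n)).mean(axis=1)   # Phase I sample means
s = xbar.std(ddof=1) / c4(m)
UCL, LCL = xbar.mean() + k * s, xbar.mean() - k * s
conv = gamma(n * alpha, scale=beta)                       # distribution of C_n
CFAR = 1 - (conv.cdf(n * UCL) - conv.cdf(n * LCL))
print("CARL =", 1 / CFAR)
```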
We expect that the higher EARL_0 or CARL_0, the larger the sample size should be to ensure that the performance of the control charts is as desired. This is because the higher these values are, the more our interest moves towards the long tail of the distribution of the sample means, where minor deviations from the normal approximation have more impact on the performance. For this reason, we consider various values for EARL_0 and CARL_0, namely 1000, 370.4, and 100.
Finally, as we expect the correction factors to be more accurate when the sample size (n) is larger, we consider a broad range of values, namely n = 5, 30, 50, 100, 250, 1000. For the number of samples m, we take values m = 30, 50, 100, 200.
Unconditional performance
In this section, we present the simulation results of the control charts based on (3) and k_u as defined in Goedhart et al. 5 Tables 1 to 3 present the results for an EARL_0 equal to 1000, 370.4, and 100, respectively. Each table presents the EARL and the 5th, 50th and 95th percentiles of the CARL distribution.
Each table shows that the larger the sample size (n), the closer the EARL is to its desired value EARL_0, and so the more applicable the correction factor. Increasing the number of samples (m) also reduces the deviation in performance with respect to the case of normally distributed data, but the impact of m is less strong than the impact of n, as was to be expected. Also, the value of EARL_0 is of influence: the higher EARL_0, the larger the sample size should be to obtain a performance that resembles the performance under normality. This can be explained by the fact that the relative difference between the distributions of the means based on the non-normal and normal distributions is largest in the tails of the distributions. To give an example, for the case EARL_0 = 1000, the t_10 and logistic distributions require a sample size of 100 or larger in order to obtain a reasonable in-control performance with the given correction factors, while for the case EARL_0 = 100, a sample size of 30 is sufficient to obtain the desired EARL values.
As discussed in Section 4, the uniform distribution is the only distribution whose convolution has thinner tails than the normal distribution on both sides. This produces extremely large EARL values for small n. Furthermore, as the uniform distribution is bounded on an interval, conditional control limits have been generated that produce a CFAR of zero for small values of n, giving an infinite CARL. Tables 1 to 3 show the number of infinite values found for the uniform distribution within the second set of parentheses. In Section 4, we already indicated large differences between the normal distribution and the distributions of the lognormal and t_4 convolutions, and small deviations for the uniform, t_10, logistic, Gamma(5, 1), Gamma(5/2, 2) ∼ χ²_5, and χ²_20 convolutions. The EARL results confirm these hypotheses, as for all values of n and m the lognormal EARL values are consistently far below the desired EARL_0, reflecting the strong skewness observed in the analysis of the convolutions.
Conditional performance
In this section, we present the results of the control charts based on (3) with k_c, such that the probability of having an in-control CARL lower than a specified value (CARL_0) is equal to p (cf Goedhart et al 7). We set p = 10%. Tables 4 to 6 present the realized exceedance probabilities P_E for a specified CARL_0 of 1000, 370.4, and 100, respectively. Each table presents the results for various sample sizes (n = 5, 30, 50, 100, 250, 1000), various numbers of samples (m = 30, 50, 100, 200), and various distributions (normal, uniform, t_10, t_4, logistic with μ = 0 and s = 1, lognormal with μ = 0 and σ = 1, Gamma(5, 1), Gamma(5/2, 2) ∼ χ²_5 and χ²_20). As in the unconditional case, the tables show that the larger the sample size (n), the closer P_E is to its desired value p (10%), and so the better the applicability of the control charts. Also, the value of CARL_0 has an impact: the lower the CARL_0, the closer the control chart performance is to the desired performance. This can be explained by the increase in relative difference further in the tails of the distributions.
The normal approximation is worst in the case of the lognormal distribution, where we see that the deviation of P_E with respect to p = 10% is the largest. A very large sample size (n) is needed to guarantee a desired conditional performance. In the case of CARL_0 = 100, a sample size of 1000 gives reasonable P_E values, also for the lognormal distribution, while for CARL_0 = 1000 and 370.4 even a sample size of 1000 is not large enough to ensure the right exceedance probabilities.
Interestingly, increasing m actually increases P_E for the non-normal distributions in most situations. For example, the t_4 distribution for CARL_0 = 370.4 and n = 50 has a P_E of 17.2% for m = 30. With m increased to 200, 40.3% of the t_4 CARLs fall below the desired CARL_0 = 370.4. This can be explained by a decrease in parameter estimation variation and thus a decrease in the constant k_c, causing tighter control limits.
SUMMARY AND CONCLUDING REMARKS
In this paper, we have studied the applicability of the CLT to large non-normal datasets. According to the CLT, sufficiently large samples should lead to normally distributed sample averages. However, since SPM is concerned with the far tail of the distribution, it was unclear whether the convergence to normality would be sufficient.
In this research, we have thus investigated whether the charting constants that are designed for normally distributed data can also be applied to large non-normal datasets. In particular, we have applied the Shewhart control chart for individual observations to monitor the sample means of non-normally distributed datasets.
The study demonstrates that the appropriateness of the control charting constants, also for non-normally distributed data, depends on various factors. These factors include the sample size (n), the number of samples (m), the specified desired performance of the control chart, and the degree of deviation from normality. When the deviation from normality is moderate (as is the case for the uniform, t_10, logistic, Gamma(5, 1), Gamma(5/2, 2) ∼ χ²_5, and χ²_20 distributions), a sample size of 100 is large enough to ensure appropriate use of the correction factors.
However, when the deviation from normality is substantial due to heavy tails (t_4) or substantial skewness (lognormal), the correction factors are not applicable even when the sample size is as large as n = 1000.
A.1 The normal distribution
The convolution of i.i.d. normal random variables can be found using the moment generating function approach. The moment generating function of a convolution of normally distributed variables X ∼ N(μ, σ²) is

M_{C_n}(t) = (M_X(t))^n = exp(nμt + ½ nσ²t²),

which is just the moment generating function of a normal distribution with mean nμ and variance nσ², and hence C_n ∼ N(nμ, nσ²).
A.2 The uniform distribution
As shown by Hall, 8 the convolution of i.i.d. standard uniform random variables has a piecewise polynomial probability density function of degree n − 1 which we denote as the IH(n) distribution.
A.3 The Student's t_ν distribution with ν degrees of freedom
There is no closed form for the convolution of Student's t distributed random variables X ∼ t_ν for ν > 1 (see Walker and Saw 13), but approximations do exist. We use an approximation based on the numerical inversion of the characteristic function given by Witkovsky. 14 The characteristic function of the sum of Student's t distributed random variables, C_n, equals φ_{C_n}(t) = φ_X^n(t), where the characteristic function of a single Student's t distributed random variable equals

φ_X(t) = K_{ν/2}(√ν |t|) (√ν |t|)^{ν/2} / (Γ(ν/2) 2^{ν/2−1}),

in which K_λ{z} denotes the modified Bessel function of the second kind. The distribution function F_{C_n}(x) = Pr{C_n ≤ x} of C_n is found using the inversion formula of Gil-Pelaez 15:

F_{C_n}(x) = 1/2 − (1/π) ∫_0^∞ Im[e^{−itx} φ_{C_n}(t)] / t dt.
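A hedged numerical sketch of this inversion; the truncation of the integral at t = 200 and the lower cutoff are our assumptions for the sketch, not Witkovsky's settings:

```python
import numpy as np
from scipy.special import kv, gamma
from scipy.integrate import quad

def cf_t(t, v):
    # characteristic function of a Student's t_v variable (t != 0)
    a = np.sqrt(v) * abs(t)
    return kv(v / 2, a) * a ** (v / 2) / (gamma(v / 2) * 2 ** (v / 2 - 1))

def cdf_sum_t(x, v, n):
    # Gil-Pelaez inversion with phi_{C_n}(t) = (phi_X(t))**n
    f = lambda t: np.imag(np.exp(-1j * t * x) * cf_t(t, v) ** n) / t
    val, _ = quad(f, 1e-12, 200.0, limit=500)   # truncated integral
    return 0.5 - val / np.pi

print(cdf_sum_t(0.0, 4, 5))   # ~0.5 by symmetry of the t distribution
```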
A.4 The logistic distribution
Now assume a logistic distribution for the random variable: X ∼ logistic(μ = 0, s = 1). The standardized version of the sum of the X_i can be written as

Z_n = (C_n − nμ) / (sπ √(n/3)),

whose distribution can be approximated by that of a suitably scaled Student's t distributed random variable with ν = 5n + 4 degrees of freedom. For more details on this approximation see George and Mudholkar. 10
A.5 The lognormal distribution
The moment generating function of the lognormal distribution is undefined and its characteristic function has no closed form. The distribution of the convolution C_n can therefore be approximated by 2 methods. In the first place, the Fenton-Wilkinson approximation will be used, as it is said to perform well in the tails of a lognormal distribution (see Mehta et al 16). Secondly, an approximation based on the type IV Pearson distribution will be used.
A.6 The Fenton-Wilkinson approximation
Consider the sum of lognormal (LN) random variables X_i, where each X_i ∼ LN(μ, σ²) with expectation E(X_i) = exp(μ + 0.5σ²) and variance Var(X_i) = (exp(σ²) − 1) exp(2μ + σ²). The expectation and variance of C_n are E(C_n) = nE(X_i) and Var(C_n) = nVar(X_i). The Fenton-Wilkinson approximation is a lognormal PDF with parameters μ_{C_n} and σ²_{C_n} such that exp(μ_{C_n} + 0.5σ²_{C_n}) = nE(X_i) and (exp(σ²_{C_n}) − 1) exp(2μ_{C_n} + σ²_{C_n}) = nVar(X_i). Solving for μ_{C_n} and σ²_{C_n} results in a lognormal distribution for the sum: C_n ∼ LN(μ_{C_n}, σ²_{C_n}).
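Solving the two moment-matching equations gives closed-form parameters; a short sketch (ours):

```python
import numpy as np

# Fenton-Wilkinson parameters for C_n, the sum of n i.i.d. LN(mu, sigma^2):
# sigma_Cn^2 = ln(1 + Var(Cn)/E(Cn)^2) and mu_Cn = ln E(Cn) - sigma_Cn^2 / 2.
def fenton_wilkinson(mu, sigma, n):
    EX = np.exp(mu + 0.5 * sigma ** 2)
    VX = (np.exp(sigma ** 2) - 1) * np.exp(2 * mu + sigma ** 2)
    EC, VC = n * EX, n * VX
    s2 = np.log(1 + VC / EC ** 2)
    return np.log(EC) - s2 / 2, s2             # (mu_Cn, sigma_Cn^2)

print(fenton_wilkinson(0.0, 1.0, 30))
```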
A.7 The type IV Pearson approximation
The type IV Pearson approximation was developed by Nie and Chen 12 and equates the first 4 central moments (μ_1, μ_2, μ_3, μ_4) of the sum of lognormal distributions to the 4 parameters of the Pearson IV distribution. Denote the sum of lognormal random variables by C_n, where each X_i ∼ LN(μ, σ²).
Whereas the Fenton-Wilkinson approximation only uses the first 2 moments as parameters of a lognormal distribution to represent the sum of lognormal random variables C_n, the Pearson IV method uses 4 moments to approximate the distribution of C_n.
"Business",
"Mathematics"
] |
The physics of badminton
The conical shape of a shuttlecock allows it to flip on impact. As a light and extended particle, it flies with a pure drag trajectory. We first study the flip phenomenon and the dynamics of the flight, and then discuss the implications for the game. Lastly, a possible classification of different shots is proposed.
Introduction
History
The first games important to the creation of badminton were practised in Asia 2500 yr BC [1]. Soldiers played tijian-zi, which consisted of exchanging with their feet a shuttle generally made of a heavy leather ball planted with feathers ( figure 1(a)). This game is now called chien-tsu and is practised with modern shuttles as shown in figure 1(b). Rackets were introduced for the first time in Japan with hagoita ( figure 1(c)). During this period, shuttles were composed of the fruits of the Savonnier tree, which look like beans and were again furnished with feathers. Contemporary badminton is a racket sport originating from the Indian game tomfool, modified by British colonials, and played with a feathered shuttlecock and a racket made with strings, as attested by the painting of Jean-Siméon Chardin, reproduced in figure 1(d).
The modern game
Badminton is played either by two opposing players (singles) or two opposing pairs (doubles). Each player (or team) stands on opposite halves of a rectangular court which is 13.4 m long, 5.2 m wide and divided by a 1.55 m-high net (figure 2(a)). Players score points by striking a shuttlecock with their rackets (a typical racket is shown in figure 2(b)) so that it passes over the net and lands in the opponent's half-court. Each side may strike the shuttlecock only once before it passes over the net. A rally ends once the shuttlecock has hit the floor or a player commits a fault. The shuttlecock is a feathered (or, in uncompetitive games, plastic) projectile. It is made of 16 goose feathers planted into a cork (figure 2(c)). This object has a mass M = 5 g, a length L = 10 cm and a diameter D = 6 cm. Shuttlecocks have a top speed of up to 137 m s^-1 [2]. Since the projectile flight is affected by the wind, competitive badminton is played indoors. Since 2008, all the finals of the Olympic Games and the World Championships have been contested by Lin Dan (China) and Lee Chong Wei (Malaysia). Looking at those finals, one observes that a typical game lasts about one hour (20 min per set), and each rally lasts on average about 10 s with typically 10 exchanges. Badminton strategy consists of performing the appropriate shuttlecock trajectory, which passes over the net, falls within the limits of the court and minimizes the time for the opponent to react.
The state of the art
The trajectories of shuttlecocks have been extensively studied with experimental, theoretical and numerical approaches. Cooke recorded the trajectories of different shuttlecocks in the court and compared them to numerical simulations [3]. The aerodynamics of several shuttlecocks was studied in a wind tunnel by Cooke and Firoz [4,5]. They measured the air drag F_D = ½ ρ S C_D U² exerted by air on a shuttlecock (where ρ is the air density, S = π(D/2)² the shuttlecock cross-section and U its velocity) and showed that the drag coefficient C_D is approximately constant for Reynolds numbers (Re = DU/ν, with ν the air kinematic viscosity) between 1.0 × 10^4 and 2.0 × 10^5. For commercial shuttlecocks, C_D varies between 0.6 and 0.7 depending on the design of the skirt. Wind tunnel measurements also reveal that there is no lift force on a shuttlecock when its axis of symmetry is aligned along the velocity direction. A synthesis of data collected in the court and in wind tunnels has been done by Chan [6]. Shuttlecock trajectories have been calculated by Chen [7], and Cohen et al [8] proposed an analytical approximation for the range of projectiles subjected to weight and drag at high Reynolds number. Nevertheless, the peculiarities of shuttlecocks, such as their conical shape and flipping properties, have rarely been discussed [9]. For instance, the observation of impacts with a racket (figure 3) reveals a dynamics specific to badminton: shuttlecocks fly with the nose ahead so that they can be hit on the cork by each player, which requires the shuttlecock to flip after each racket impact. The questions we address in this work are: what makes the shuttlecock flight unique, and how does it influence the badminton game? In the first section, we study the 'versatile' behavior of a shuttlecock. The characteristic times associated with the motion are measured, and we develop an aerodynamical model to predict them. The second part concerns trajectories at the scale of the court, that is, for clear strokes. In this section, we study how the flight of a shuttlecock depends on its characteristics (mass, composition and geometry) and on the fluid parameters (density, temperature and humidity). Finally, we discuss in the third section how the shuttlecock flight influences the badminton game in terms of techniques, strategies and rules.
After an impact, the shuttlecock performs a complete turn. In figure 3(a), the flip lasts four time intervals, which corresponds to 15 ms. The oscillating time of the shuttlecock direction is estimated as 80 ms. After 130 ms, the shuttlecock axis of symmetry is aligned along the velocity direction. When the hit intensity decreases, the dynamics of the shuttlecock slows down. Figure 3(b) shows the same shuttlecock leaving the racket at a velocity two times smaller than the previous one. The flipping time increases to 35 ms, the oscillating time lasts about 120 ms and the stabilizing time is estimated as 180 ms.
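As a quick order-of-magnitude check (our numbers, assuming an air kinematic viscosity ν ≈ 1.5 × 10^-5 m²/s), the quoted Reynolds-number range corresponds to shuttlecock speeds spanning slow and fast strokes:

```python
# Reynolds-number check with D = 6 cm and assumed nu = 1.5e-5 m^2/s.
D, nu = 0.06, 1.5e-5
for U in (2.5, 50.0):                       # shuttlecock speeds in m/s
    print("U = %4.1f m/s  ->  Re = %.1e" % (U, D * U / nu))
# gives Re ~ 1e4 and ~2e5, the window where C_D is roughly constant
```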
Such movies allow us to measure the angle $\varphi$ between the shuttlecock axis and the velocity direction, as defined in figure 3. A typical example of the time evolution of $\varphi$ is plotted in figure 4. Such graphs highlight the three characteristic times introduced earlier. The first one is the flipping time $\tau_f$ needed for $\varphi$ to vary from 180° to 0°. The second one, denoted $\tau_o$, is the pseudo-period of the oscillations. The third one is the stabilizing time $\tau_s$, which corresponds to the damping of the oscillations (red dashed lines in figure 4). The purpose of this section is to understand this complex dynamics.
Flip model
In order to understand the shuttlecock behavior after impact, it is necessary to evaluate the forces applied to it, namely its weight and the aerodynamic pressure forces. The latter reduce to drag, applied at the pressure center, where the aerodynamic torque vanishes [10]. Its location depends on the pressure profile around the projectile: if this profile were uniform, the pressure center would coincide with the centroid of the object. Since the mass distribution along the axis of a shuttlecock is non-homogeneous, the center of gravity is closer to the cork and differs from the center of pressure. Using numerical simulations, Cooke estimated that the distance $l$ between the center of mass and the center of pressure is about 3.0 cm [3]. The sketch in figure 5(a) highlights the effect of the drag $F_D$ on an inclined shuttlecock: the aerodynamic drag applies a torque that opposes the inclination relative to the projectile velocity $U$ and stabilizes the cork ahead (corresponding to $\varphi = 0$). Since the versatile behavior of a shuttlecock arises from the non-coincidence of its centers of mass and pressure, we model the object with two spheres: the first one stands for the skirt, of mass $M_B$ and large cross-section $S$, positioned at B, and the second one represents the cork, of mass $M_C$ and smaller cross-section $s$, placed at C (figure 5(b)). The shuttlecock characteristics are thus condensed into a small heavy cork and a large light skirt. A torque balance around the center of mass G provides, in the realistic limit $S M_C \gg s M_B$, the following equation:

$\ddot{\varphi} + \frac{\dot{\varphi}}{\tau_s} + \omega^2 \sin\varphi = 0, \quad (1)$

where $C_D$ is the drag coefficient of a sphere and $l_{GC}$ the distance between the points G and C ($l_{GC} = M_B\, l / M$). The calculation leading to equation (1) is detailed in appendix A. This second-order differential equation for $\varphi$ is that of a damped oscillator. The square of the pulsation, $\omega^2 = \rho S C_D U^2 / (2 M l_{GC})$, corresponds to the stabilizing torque generated by the aerodynamic drag (figure 5(b)). The damping term, $\dot{\varphi}/\tau_s$, results from the drag associated with the orthoradial movement of the shuttlecock as $\varphi$ varies. The different characteristic times arising from equation (1) can finally be compared to the data.
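A minimal numerical sketch of this damped-pendulum dynamics may help fix ideas. The parameter values below ($M$, $l_{GC}$, $U$, $S$, $C_D$) are illustrative assumptions, not fitted to the experiments; they are merely chosen to give a pulsation and a damping consistent with the orders of magnitude quoted in this section.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of equation (1): damped-pendulum model for the angle phi between
# the shuttlecock axis and its velocity. U is held constant, as in the model.
rho, S, C_D = 1.2, 2.8e-3, 0.65     # air density, cross-section, drag coeff.
M, l_GC, U = 5.0e-3, 2.5e-2, 20.0   # mass (kg), lever arm (m), speed (m/s)

omega2 = rho * S * C_D * U**2 / (2 * M * l_GC)   # stabilizing pulsation^2
tau_s = 2 * M / (rho * S * C_D * U)              # damping time, cf. eq. (5)

def rhs(t, y):
    phi, phidot = y
    return [phidot, -phidot / tau_s - omega2 * np.sin(phi)]

# Impact initial conditions: nose backwards (phi = pi) with initial spin.
sol = solve_ivp(rhs, (0.0, 1.0), [np.pi, -60.0], max_step=1e-3)
phi_deg = np.degrees(sol.y[0])
print(f"tau_o ~ {2*np.pi/np.sqrt(omega2)*1e3:.0f} ms, tau_s ~ {tau_s*1e3:.0f} ms")
print(f"phi(t = {sol.t[-1]:.1f} s) = {phi_deg[-1]:.1f} deg")
```

With these assumed values the model gives a pseudo-period of order 100 ms and a damping time of order 200 ms, comparable to the times read off figure 3.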
Flipping time
Experiments show that the flipping time is smaller than the stabilizing time. This remark leads one to neglect the damping term $\dot{\varphi}/\tau_s$ in equation (1). In this limit, the equation of motion can be integrated once with the initial conditions $\varphi(t=0) = \pi$ and $\dot{\varphi}(t=0) = \dot{\varphi}_0$, which yields:

$\dot{\varphi}^2 = \dot{\varphi}_0^2 + 2\omega^2 (1 + \cos\varphi). \quad (2)$

A second integration provides the theoretical flipping time

$\tau_f^{th} = \int_0^{\pi} \frac{\mathrm{d}\varphi}{\sqrt{\dot{\varphi}_0^2 + 2\omega^2 (1 + \cos\varphi)}}, \quad (3)$

once the initial angular velocity $\dot{\varphi}_0$ and the shuttlecock speed $U$ are measured. Figure 6(a) compares the experimental flipping time $\tau_f^{exp}$ with the theoretical one $\tau_f^{th}$ predicted by solving equation (2). All the data (blue dots) are distributed around a line of slope 1.1. Figure 6(b) highlights the dependency between the initial angular velocity $\dot{\varphi}_0$ and the initial velocity $U$: for a standard impact, the two initial conditions given to a shuttlecock are thus not independent.
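Assuming the same illustrative parameters as in the previous sketch, the flipping time of relation (3) can be evaluated by numerical quadrature:

```python
import numpy as np
from scipy.integrate import quad

# Sketch: flipping time from the first integral (2), damping neglected.
# Parameters are illustrative assumptions, as in the previous sketch.
rho, S, C_D = 1.2, 2.8e-3, 0.65
M, l_GC, U = 5.0e-3, 2.5e-2, 20.0
omega2 = rho * S * C_D * U**2 / (2 * M * l_GC)

def tau_f(phidot0):
    # tau_f = integral over phi in [0, pi] of d(phi)/phidot(phi)
    integrand = lambda phi: 1.0 / np.sqrt(phidot0**2 + 2*omega2*(1 + np.cos(phi)))
    val, _ = quad(integrand, 0.0, np.pi)
    return val

print(f"tau_f = {1e3 * tau_f(60.0):.0f} ms")   # for phidot_0 = 60 rad/s
```

With these assumptions the quadrature returns a few tens of milliseconds, of the order of the 15–35 ms flips observed in figure 3.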
Oscillating time
The oscillating time of a shuttlecock can also be predicted by equation (1). By considering typical values of the characteristics of a shuttlecock, we estimate the quantity $\omega\tau_s$ to be large compared to unity. This leads one to consider weakly damped oscillations, for which the oscillating time can be expressed as $\tau_o = 2\pi/\omega$. This approximation provides:

$\tau_o = 2\pi \sqrt{\frac{2 M l_{GC}}{\rho S C_D}}\; \frac{1}{U}. \quad (4)$
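As an order-of-magnitude check with illustrative values ($M \approx 5$ g, $l_{GC} \approx 2.5$ cm, $\rho S C_D \approx 2.2 \times 10^{-3}$ kg m$^{-1}$ and $U \approx 20$ m s$^{-1}$, all assumed rather than fitted), relation (4) gives $\tau_o \approx 0.1$ s, consistent with the 80–120 ms pseudo-periods reported above.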
Stabilizing time
The stabilizing time is experimentally determined to be about one hundred milliseconds (figure 3) and it appears in equation (1) through the damping term $\dot{\varphi}/\tau_s$. According to the previous model, it can be expressed as:

$\tau_s = \frac{2M}{\rho S C_D U}. \quad (5)$

We can look at the evolution of the stabilizing time with the shuttlecock velocity for different impacts. Figure 7(b) shows the experimental stabilizing time as a function of the one predicted by equation (5). All the data (blue dots) collapse on a line of slope 0.4. The fact that the slope is lower than unity may come from the approximation of a drag coefficient independent of the shuttlecock orientation. Actually, the shuttlecock drag coefficient increases with the orientation angle $\varphi$, as shown in figure 8(b). This phenomenon, which is not taken into account in the model, tends to make the actual stabilization faster than calculated.
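With the same assumed values as above, relation (5) gives $\tau_s \approx 0.2$ s at $U = 20$ m s$^{-1}$; multiplied by the experimental slope of 0.4, this corresponds to roughly 90 ms, of the order of the stabilizing times measured in figure 3.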
On the shape of a shuttlecock
We now discuss how the shuttlecock geometry influences its flipping dynamics, which might explain why a shuttlecock opening angle $\Lambda$ close to 45° was selected (figure 2(c)). In order to answer this question, shuttlecock prototypes have been constructed. They are made of a dense iron ball and a light plastic skirt (figure 9(a)). The characteristics of these prototypes (length $L$, diameter $D$, mass $M$ and opening angle $\Lambda$) can be easily varied. For each one, the flipping dynamics was captured and analyzed in a free-fall experiment where the shuttlecock was released upside down without initial velocity. These experiments were conducted in a water tank in order to reduce the length scales. The Reynolds number corresponding to the flow is also reduced, but it stays in the high-Reynolds regime ($Re > 10^3$) where fluid effects are described by the same laws. Figure 9(b) shows a chronophotograph of a prototype flipping during its fall in water. We performed experiments with given mass $M$ and diameter $D$, but different opening angles $\Lambda$ between 10° and 160°. The evolution of $\tau_f^{exp}$ and $\tau_s^{exp}$ with $\Lambda$ is reported in figure 9(c). The graph shows the existence of an optimal opening angle for which the flipping and stabilizing times are minimal.
The dependency of $\tau_f$ and $\tau_s$ on $\Lambda$ can be understood qualitatively. For small opening angles, the shuttlecock is elongated and the skirt has a high moment of inertia: the object is difficult to set in motion and the characteristic times are long. In the opposite case (large $\Lambda$), the shuttlecock is short and $l$ is small, as is the stabilizing torque resulting from the drag force. This situation also corresponds to large values of the flipping and stabilizing times. Between these two regimes, there is a range of opening angles for which the flipping motion is faster. Real shuttlecocks seem to belong to this family of intermediate opening angles that flip rapidly.
The basic model developed previously for shuttlecocks can be applied to any object which has distinct centers of mass and pressure.

At the scale of the court, one observes the superimposition of numerical and experimental trajectories. This agreement validates the assumption of a constant $C_D$ (and $S$) along the shuttlecock trajectory, as also observed by Phomsoupha et al [11]. The equation of motion for shuttlecocks has an analytical solution [8]. This solution leads to an approximate expression for the range $x_0$ of the projectile, defined as the position on the horizontal axis where the particle returns to its initial height (figure 10):

$x_0 = \frac{U_\infty^2}{2g} \cos\theta_0 \,\ln\!\left(1 + 4\,\frac{U_0^2}{U_\infty^2}\sin\theta_0\right), \quad (6)$

where $\theta_0$ is the launch angle and $U_\infty = \sqrt{2Mg/\rho S C_D}$ the terminal velocity of the projectile. For $U_0 \gg U_\infty$, we observe a logarithmic dependency of $x_0$ on the initial velocity: the range virtually saturates at a distance scaling as the aerodynamic length $\mathcal{L} = U_\infty^2/g = 2M/\rho S C_D$. In badminton, the initial launching velocity $U_0$ is often much larger than $U_\infty$, and players can feel the saturation of the range with initial velocity. In this regime, $x_0$ highly depends on the shuttlecock and air properties through the aerodynamic length $\mathcal{L}$. In the following, the influence of the shuttlecock and fluid characteristics on trajectories is studied.
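The comparison between an integrated trajectory and formula (6) can be reproduced with a short script. This is a sketch under assumed values ($\mathcal{L} = 4$ m, $U_0 = 50$ m s$^{-1}$, $\theta_0 = 30°$), not the exact parameters behind figure 10.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: point particle under gravity and quadratic drag, where the drag
# deceleration is |a| = v^2 / L_aero with L_aero = 2M/(rho*S*C_D).
g = 9.81
L_aero = 4.0                       # aerodynamic length (m), assumed
U_inf = np.sqrt(L_aero * g)        # terminal velocity (m/s)

def rhs(t, y):
    x, z, vx, vz = y
    v = np.hypot(vx, vz)
    return [vx, vz, -(v / L_aero) * vx, -g - (v / L_aero) * vz]

U0, theta = 50.0, np.radians(30.0)
y0 = [0.0, 0.0, U0 * np.cos(theta), U0 * np.sin(theta)]
hit = lambda t, y: y[1]            # event: return to the initial height z = 0
hit.terminal, hit.direction = True, -1
sol = solve_ivp(rhs, (0.0, 10.0), y0, events=hit, max_step=1e-3)

x_num = sol.y_events[0][0][0]
x_th = 0.5 * L_aero * np.cos(theta) * np.log(1 + 4 * (U0/U_inf)**2 * np.sin(theta))
print(f"numerical range {x_num:.2f} m vs formula (6) {x_th:.2f} m")
```

Both estimates agree to within the accuracy of the logarithmic approximation, which is what figure 10 illustrates with measured trajectories.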
Difference between plastic and feather shuttlecocks
Shuttlecocks are usually classified in two categories, namely plastic and feathered. In order to understand the difference between both types, we observed their trajectories. Figure 11 reports two shuttlecock trajectories with the same initial conditions but with a different kind of projectile.
With the same initial angle and velocity, the range is larger for plastic than for feathered shuttlecocks. This increase is about one meter, which represents 10% of the total range. This phenomenon is observed for a large variety of plastic and feathered shuttlecocks [3]: the two kinds of projectile can be distinguished by their aerodynamic lengths. The parameters influencing $\mathcal{L}$ were determined, such as the drag coefficients measured in a wind tunnel, and the results are plotted in figure 12.
Since $C_D$ is independent of the Reynolds number, we consider its mean value: $C_D^f = 0.65 \pm 0.05$ for the feathered projectile, whereas $C_D^p = 0.68 \pm 0.05$ for the plastic one. The exposed section is equal to 28 cm$^2$ for both shuttlecocks. The mass is $M_f = 5.0$ g for the feathered shuttlecock and $M_p = 5.3$ g for the plastic one. Combining all these data, we estimate the aerodynamic length for each kind of shuttlecock: $\mathcal{L}_f = 4.04$ m and $\mathcal{L}_p = 4.48$ m. We solve numerically the equation of motion including these values and plot the resulting trajectories in figure 11 with solid lines. The numerical trajectories correspond to the experimental ones and predict the range for both kinds of shuttlecock. The trajectories mainly differ because of the difference in aerodynamic length between the two projectiles, a difference itself due to the larger mass of a plastic shuttlecock compared to a feathered one. In practice, it is not easy to reduce the mass of a plastic projectile while keeping its robustness and price unchanged, which explains why the two masses differ.
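As an illustration of the effect, evaluating formula (6) with these aerodynamic lengths at an assumed launch of $U_0 = 50$ m s$^{-1}$ and $\theta_0 = 30°$ gives $x_0 \approx 8.5$ m for the feathered shuttlecock and $x_0 \approx 9.2$ m for the plastic one, a difference of about 0.7 m, consistent with the roughly one-meter, 10% gap quoted above.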
Experienced players prefer to play with a fragile feathered shuttlecock rather than with a cheaper and more resistant plastic one. This can be understood by the fact that feathered projectiles may be given faster initial velocities without exiting the court, owing to their smaller aerodynamic length. Using feathered shuttlecocks, a player can hit a smash at a higher speed, which leaves less time for the opponent to react.
According to experienced players, the trajectories of feathered shuttlecocks are more 'triangular', as indeed seen in figure 11. Players' feeling about the triangular nature of the trajectory may come from the curvature at the top of the trajectory, which is inversely proportional to the aerodynamic length $\mathcal{L}$ and independent of the initial velocity $U_0$. As a consequence, feathered trajectories are indeed more curved at the top than plastic ones. As the shuttlecock geometry is critical for the badminton game, we imagined a way to approach a purely triangular trajectory: the skirt rigidity of a plastic shuttlecock is reduced by cutting it longitudinally (first image in figure 13). Figure 13 shows that an increasing air flow reduces the cross-section $S$ of the projectile by a factor of 2 as the flow velocity increases from 0 to 50 m s$^{-1}$. Figure 14 compares the trajectory of a cut shuttlecock with the one observed for a standard plastic projectile. For similar initial conditions, the skirt deformability indeed induces a modified trajectory which is more triangular than the normal one. The fact that the shuttlecock with a cut skirt has a lower range means that the increase of its cross-section at low speed dominates over its reduction at high velocity.
Shuttlecock rotation
Shuttlecocks are not exactly symmetric with respect to their axis because the feathers are superimposed one over another. This asymmetry also exists in plastic models, and it implies that a shuttlecock rotates around its axis when placed in an air flow [12]. This section quantitatively describes this effect, shown in figure 15(a). Considering that the behavior of a feather is similar to that of a thin plate in an air flow, the fluid force is perpendicular to the feather and directed opposite to its velocity. The forces exerted on each feather (represented in figure 15(a) with blue arrows) create a torque which sets the projectile into rotation, such that the feathers slice through the air.
Shuttlecocks rotate at the velocity for which this torque is balanced by air resistance. The rotational velocity $\Omega$ is measured as a function of the projectile speed $U$, as shown in figure 15(b). The graph reveals a linear correlation between $R\Omega$ and $U$, and differences between the plastic and feathered rotational velocities: whereas the slope of the linear trend is equal to 0.02 for the plastic shuttlecock, the one for the feathered projectile is twice as large.
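As an illustration, taking $R\Omega/U \approx 0.04$ for a feathered shuttlecock with $R \approx 3$ cm, a stroke at $U = 50$ m s$^{-1}$ corresponds to $\Omega \approx 0.04 \times 50 / 0.03 \approx 67$ rad s$^{-1}$, that is, roughly ten turns per second; a plastic shuttlecock would spin about half as fast.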
The link between the rotational and linear velocities of a shuttlecock can be understood by balancing the propulsive and friction torques applied on a feather, of surface area $S_p$, opening angle $\Lambda$ and tilt angle $\beta$ resulting from the superimposition of the feathers. Experimental observations show that $R\Omega \ll U$, which reduces this balance to a linear relation between $R\Omega$ and $U$ whose prefactor depends only on the feather geometry. This approach predicts the observed linear dependency of the rotational velocity on $U$. In addition, the factor between these two quantities is estimated as 0.06 for $\Lambda \approx 45°$ and $\beta \approx 4°$, so the model roughly captures the origin of the rotation of a shuttlecock around its axis, and its amplitude. The effect of rotation on the flight can also be discussed. In a wind tunnel, we measured the drag coefficients of projectiles either prevented from rotating or free to rotate; the results are gathered in figure 16. Whatever the rotation, the drag coefficient is found to remain between 0.65 and 0.75. Considering the uncertainty of our experiments, we conclude that the rotation of a shuttlecock has no strong effect on the drag coefficient.
One may wonder whether rotation induces gyroscopic stabilization. Such a phenomenon happens if the angular momentum of the shuttlecock is high compared to the aerodynamic torque applied to it. It eventually leads to a non-zero value of the angle φ between the axis of the shuttlecock and its velocity direction along the trajectory. Figure 17 reports the time evolution of this angle along a high clear trajectory. Apart from the first flipping phase, the shuttlecock is never tilted compared to its velocity direction.
The fact that axial rotation does not lead to gyroscopic stabilization can be understood. On the one hand, the angular momentum of the object is $J\Omega$, where $J$ is the moment of inertia of the shuttlecock relative to its axis of rotation and $\Omega$ the angular velocity. On the other hand, the aerodynamic torque scales as $\rho R^2 U^2 l$. Using a pendular system, Cooke measured the moments of inertia of different shuttlecocks and concluded that $J$ is about $1.2 \times 10^{-6}$ kg m$^2$ [3]. Considering typical values ($l \simeq 3$ cm, $R \simeq 3$ cm and $\rho = 1.2$ kg m$^{-3}$), we deduce the following criterion for gyroscopic stabilization: $R\Omega/U \gg 0.1$. According to figure 15(b), this condition is not achieved when the rotation is imposed by the air flow, and the rotation of a shuttlecock along its axis does not stabilize it in a direction different from the velocity one. However, when the axis is not aligned with the air flow, the aerodynamic torque on the rotating object induces a precessing motion of period $\tau_p = 4\pi J\Omega / \rho S C_D U^2 l$. The typical distance over which a shuttlecock precesses is $U\tau_p$. Considering typical values of the ratio $R\Omega/U$ (extracted from figure 15(b)) and the characteristics of a shuttlecock, we estimate that $U\tau_p$ is about 2 m for a plastic projectile and 4 m for a feathered one. This difference leads to a smoother early path in the second case, and this phenomenon may also contribute to the players' preference for feathered shuttlecocks.
Obayashi et al also investigated the effect of shuttlecock rotation on its skirt deflection [13]. They showed that the skirt enlargement due to centrifugal forces is compensated by the effect of the aerodynamic drag: thanks to rotation, shuttlecocks keep a constant cross-section.
3. Influence of the shuttlecock flight on the game

3.1. Flipping strokes

We discussed in section 1 how shuttlecocks flip after being impacted by a racket. Among the strokes used in badminton, we aim to determine which ones are influenced by this versatile motion. The flipping dynamics of a shuttlecock is sensitive to the players only if the stabilizing time $\tau_s$ compares to the total flying time $\tau_0$. We plot in figure 18 the ratio $\tau_s/\tau_0$, where $\tau_s$ is deduced from relation (5), as a function of the horizontal traveled distance $x_0$ normalized by the court length $L_{field}$.
The graph reveals that there is only a small domain of the court ($x_0 \lesssim 0.25\, L_{field} \approx 3$ m) where players can receive a shuttlecock not yet aligned with its velocity direction. This situation only happens in the case of net drops. When a good player performs a net drop, his purpose is to delay the flip of the shuttlecock and let the skirt fly ahead; then, the opponent cannot hit the cork of the shuttlecock and send it back properly. In practice, players perform tricks called 'spin in' and 'spin out', which consist of gently hitting the shuttlecock while simultaneously gripping the cork, so as to maximize the initial spin $\dot{\varphi}_0$ positively or negatively. Relations (4) and (5) imply that a small initial velocity $U_0$, as employed in net drops, increases the shuttlecock oscillating and stabilizing times.
The criterion for having a shuttlecock turning several times on itself before stabilizing can be discussed quantitatively. This situation happens if the initial rotational kinetic energy is larger than the depth of the potential energy well imposed by the drag exerted on the skirt, which scales as $\rho S C_D U^2 l_{GC}$. One deduces that a shuttlecock does several turns before stabilizing if the initial angular velocity is of the order of $2\omega$ or larger. For typical shuttlecocks, this relation becomes $\dot{\varphi}_0 \gtrsim 2.2\, U_0/L$. In the case of standard impacts, we saw in figure 6(b) that the initial spin remains below this threshold, which explains why shuttlecocks generally perform less than a complete turn after an impact with a racket. Only the 'spin in' or 'spin out' techniques allow one to exceed this criterion and make the projectile turn several times before stabilizing with the nose ahead.
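For instance, with $U_0 \approx 5$ m s$^{-1}$ as in a net drop and $L = 10$ cm, this criterion requires $\dot{\varphi}_0 \gtrsim 2.2 \times 5/0.1 = 110$ rad s$^{-1}$, that is, about 18 turns per second imparted at impact, which standard strokes do not provide.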
Apart from net drops, all other strokes have a stabilizing time shorter than the flying one. Thus the shuttlecock is always aligned with the velocity direction, corresponding to the trajectories studied in section 2.
Clear strokes
For clear strokes, section 2.1 shows that the range 'saturates' with the initial velocity at a maximal value which depends on the aerodynamic length $\mathcal{L}$. For the maximal initial speed ever recorded ($U_{max} = 137$ m s$^{-1}$), the maximum shuttlecock range $x_{max}$ is 13.8 m [14]. This distance is comparable to the court length ($L_{field} = 13.4$ m), which implies that the projectile rarely leaves the field and may explain why the mean number of shots per rally (13.5) is so large in top-level badminton competitions [15]. For comparison, this number falls to 3.5 in top-level tennis competitions, consistently with the fact that the maximum range of a tennis ball ($x_{max} = 66.9$ m) is much larger than the court length ($L_{field} = 24$ m).
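The saturation can be made explicit by evaluating formula (6) over a range of launch speeds. The values below ($\mathcal{L} = 4$ m, $\theta_0 = 35°$) are illustrative assumptions, so the asymptotic range differs slightly from the 13.8 m quoted above.

```python
import numpy as np

# Sketch: range saturation from formula (6) under assumed parameters.
g, L_aero, theta = 9.81, 4.0, np.radians(35.0)
U_inf = np.sqrt(L_aero * g)        # terminal velocity (m/s)

for U0 in (20, 40, 80, 137):
    x0 = 0.5 * L_aero * np.cos(theta) * np.log(1 + 4*(U0/U_inf)**2*np.sin(theta))
    print(f"U0 = {U0:3d} m/s -> x0 = {x0:5.2f} m")
```

Doubling the launch speed from 40 to 80 m s$^{-1}$ only adds about 2 m of range, and even the record speed barely gains 2 m more: this is the saturation that players feel.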
A possible classification
Depending on the players' and shuttlecock's positions, several kinds of stroke are used, as sketched in figure 19(a) [2]. Each stroke is characterized by a horizontal traveled distance $x_0$ and a flying time $\tau_0$. We propose classifying badminton strokes in the diagram drawn in figure 19(b). On the x-axis, one finds the flying time $\tau_0$ divided by the reaction time $\tau_r$ of a player ($\tau_r$ is about 1 s for trained players). The y-axis shows the ratio between the horizontal traveled distance $x_0$ and the court length $L_{field}$. This diagram reveals that smashes, drives and net shots correspond to short flying-time strokes, as opposed to clears, drops and lifts. The only stroke whose range is short compared to the court size is the net shot. A red color is used for killing shots of proportion larger than 10% (see table 1): all the short-time shots fall into this efficient category.
Badminton strategy consists of moving the opponent away from the court center using clear, drop or lift strokes before finishing the point with a rapid shot such as a smash or a net shot. This strategy impacts the stroke frequencies reported in table 1, which differentiates killing shots from the other ones. For clears, drops and lifts, the frequency of non-winning shots is much larger than the frequency of killing shots, which emphasizes that these are defensive or preparatory shots; this contrasts with drives, smashes and net shots, which largely dominate the statistics of killing shots. Thus, ending a rally in badminton is mainly a question of flight duration.
Upwards and downwards strokes
Another way to classify the different strokes consists in noting their direction: the up-going family is composed of the clear and the lift, while the down-going family includes the smash, the drop and the kill (an offensive shot hit from the net area, not reported in figure 19). The probability of each family can be estimated from geometrical considerations: due to the presence of the net, down-going strokes must be hit from high enough, as represented by the striped area in figure 20.
Considering that a badminton player can reach a maximum height $h_{max}$ given by his/her own height (1.78 m for Lin Dan and 1.74 m for Lee Chong Wei), plus his/her jumping height (0.7 m for Lin Dan), plus the racket length (0.65 m), that is, about 3.1 m in total, we estimate the total cross-sectional area $\Sigma$ reachable by a player. The ratio between the striped area and $\Sigma$ then provides the frequency of down-going strokes.

Table 1. Frequency of the strokes sketched in figure 19, for all playing shots (second column) and for killing shots (third column); columns: Strokes, Playing shots, Killing shots. Data are extracted from [16]. Strokes which are killing shots with a frequency larger than 10% are highlighted in bold, as also stressed in red in figure 19(b).
One guesses that a change in the net height would modify this frequency and impact the characteristics of the game, such as the mean number of exchanges per rally and the mean number of points per unit time.
Conclusion
The dynamics of a shuttlecock and its influence on the badminton game have been examined. The versatile behavior of a shuttlecock after impact arises from its non-homogeneous mass distribution along its axis. The cork being denser than the skirt, a shuttlecock has distinct centers of mass and pressure, and thus undergoes a stabilizing aerodynamic torque that sets its nose ahead. The geometry of commercial shuttlecocks is empirically chosen to minimize the flipping and stabilizing times. In practice, badminton players try to delay stabilization with net drops, in order to prevent the opponent from hitting the projectile correctly.
For the other strokes, the stabilizing time is much shorter than the total flying time. In this limit, a shuttlecock is aligned with its velocity. Because this light particle experiences a large drag, its trajectory is nearly triangular [8] and highly depends on the projectile properties. This explains why players carefully choose shuttlecocks as a function of their skills and of the atmospheric conditions (see appendix B). Experienced players prefer shuttlecocks submitted to a slightly larger drag, such as feathered ones, in order to hit them violently without sending them out of the court. The difference in rotating speed between the two kinds of shuttlecock (plastic and feathered) also plays a role in this choice, since the faster rotation of a feathered projectile limits its precession.
Beyond this study, many questions concerning the physics of badminton remain to be solved. For example, the impact dynamics of a shuttlecock with a racket is not considered in this paper. One may wonder if there is an optimal rigidity for the shaft and the strings to enhance the launching speed of a shuttlecock. Finally, the laws established for shuttlecock flights could be discussed with other projectiles having a non-homogeneous mass along their axis, such as air missiles [17] or dandelion achenes [18].
Appendix B
Badminton players always test shuttlecocks before competitions: they hit the projectile with maximum strength from one end of the court, and only the projectiles reaching the corridor on the opposite side are selected for the game. This test selects shuttlecocks appropriate to the current atmospheric conditions, and it shows that air temperature and humidity influence the trajectory. The temperature modifies the shuttlecock aerodynamic length via the air density $\rho$, as reported in table B1: as the air gets hotter, the aerodynamic length increases. This implies an increase of the range of the projectile by about 10% between 10 and 40 °C, that is, over the typical range of temperatures at which badminton is practised.

Table B1. Air density $\rho$ as a function of the temperature $T$. For each condition, the shuttlecock aerodynamic length is estimated with $M = 5.0$ g, $S = 28$ cm$^2$ and $C_D = 0.6$. The maximal range $x_{max}$ is calculated for the maximal velocity recorded on a badminton court, $U_0 = 117$ m s$^{-1}$, and the corresponding optimal initial angle $\theta^\star$.
The effect of air humidity is less obvious to understand: at first glance, the parameters entering the aerodynamic length do not depend on hygrometry. Indeed, such effects only occur with feathered shuttlecocks. Goose feathers possess structures at different scales (figure B1), and the micro-scale structures are good precursors for the small water droplets resulting from vapor condensation. This phenomenon explains why the weight of a feathered shuttlecock depends on air humidity, as does its aerodynamic length. We conducted an experimental study of the shuttlecock mass as a function of the humidity conditions at $T = 20$ °C. The corresponding results are gathered in table B2. These data reveal an increase of the weight of the projectile with air humidity of up to 10%, which increases the maximal range by up to 5%.
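The density effect behind table B1 can be sketched with the ideal-gas law. The pressure, the specific gas constant and the expression $\mathcal{L} = 2M/\rho S C_D$ are assumptions consistent with section 2, not the exact values used to compute the table.

```python
import numpy as np

# Sketch: air density from the ideal-gas law, rho = P/(R_s*T), and its
# effect on the aerodynamic length L_aero = 2M/(rho*S*C_D).
P, R_s = 101325.0, 287.0           # pressure (Pa) and specific gas constant
M, S, C_D = 5.0e-3, 28e-4, 0.6     # shuttlecock mass, section and C_D (table B1)

for T_celsius in (10, 25, 40):
    rho = P / (R_s * (T_celsius + 273.15))
    L_aero = 2 * M / (rho * S * C_D)
    print(f"T = {T_celsius:2d} C   rho = {rho:.3f} kg/m3   L_aero = {L_aero:.2f} m")
```

Between 10 and 40 °C the aerodynamic length grows by roughly 10% under these assumptions, consistent with the range increase quoted above.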
This study shows that the shuttlecock aerodynamic length, and hence its range, increase with air temperature. Players usually counterbalance this effect by using lighter shuttlecocks when the air is hotter. Alternatively, they do not hesitate to fold the extremities of the feathers toward the interior or the exterior, in order to modify the shuttlecock cross-section and adapt the aerodynamic length to the present atmospheric conditions. Players also avoid aerodynamic length variations during a game by exposing the shuttlecocks to the ambient humidity several hours before the game starts.
For clear strokes, the trajectory ends with a nearly vertical fall, which makes the badminton game highly sensitive to wind. During the vertical fall, wind blowing horizontally at a velocity $U_w$ deviates the impacting point of the shuttlecock by a quantity of order $U_w U_\infty / g$.
"Physics"
] |