Can We Use the QA4ECV Black-sky Fraction of Absorbed Photosynthetically Active Radiation (FAPAR) using AVHRR Surface Reflectance to Assess Terrestrial Global Change?
: NOAA platforms provide the longest period of terrestrial observation, extending back to the 1980s. Progress in calibration, atmospheric correction, and physically based land retrieval offers the opportunity to reprocess these data to extend terrestrial product time series. Within the Quality Assurance for Essential Climate Variables (QA4ECV) project, the black-sky Joint Research Centre (JRC) fraction of absorbed photosynthetically active radiation (FAPAR) algorithm was developed for the AVHRR sensors on-board NOAA-07 to -16 using the Land Surface Reflectance Climate Data Record. The retrieval algorithm is based on radiative transfer theory, and uncertainties are included in the products. We proposed a temporal and spatial composite providing both 10-day and monthly products at 0.05° × 0.05°. Quality control and validation were achieved through benchmarking against third-party products, including Sea-Viewing Wide Field-of-View Sensor (SeaWiFS) datasets produced with the same retrieval algorithm. Past ground-based measurements, providing a proxy of FAPAR, showed good agreement in seasonality and values over short homogeneous canopies and mixed vegetation. The average difference between the SeaWiFS and QA4ECV monthly products over 2002-2005 is about 0.075, with a standard deviation of 0.091. We proposed a monthly linear bias correction that reduced these statistics to about 0.001 and 0.025, respectively. The complete harmonized long-term time series was then used to address its fitness for the purpose of analysing global terrestrial change.
Introduction
The majority of solar radiation available to the Earth system is absorbed at or near the oceanic and continental surface. This energy is ultimately released to the atmosphere through fluxes of infrared radiation, as well as sensible and latent heat. The phytosphere, which itself accounts for most of the biomass, affects these exchanges with the atmosphere through a contact surface (leaves) estimated to be larger than the surface of the entire planet [1]. The state and the evolution of terrestrial vegetation are characterized by a large number of physical, biochemical, and physiological variables. Few of these are directly observable from space, but they jointly determine the fraction of absorbed photosynthetically active radiation (FAPAR), which acts as an integrated indicator of the status and health of plant canopies and can be retrieved by space remote sensing techniques [2][3][4]. FAPAR plays a critical role in the global carbon cycle and in the determination of the primary productivity of the biosphere [5][6][7].
The properties of terrestrial surfaces thus concern a large number of users in applications such as agriculture, forestry, and environmental monitoring [8][9][10]. Since plant canopies significantly affect the spectral and directional reflectance of solar radiation, the analysis of these reflectances leads to a better understanding of the fundamental processes controlling the biosphere. This supports policies of sustainable resource exploitation and the control of the effectiveness of any adopted rules and regulations. FAPAR is recognized as one of the terrestrial essential climate variables (ECVs) by the Global Climate Observing System (GCOS) [11,12].
Three FAPAR products from the Advanced Very-High-Resolution Radiometer (AVHRR) are currently available: the Global Inventory Modeling and Mapping Studies (GIMMS3) products [13], NOAA's National Centers for Environmental Information (NCEI) AVHRR FAPAR [14], and the Global LAnd Surface Satellite (GLASS) products [15]. Xiao et al. (2018) [16] compared these three products and found that they are spatially consistent, with strong discrepancies over tropical forest regions and at latitudes from 55°N to 65°N.
In the context of the FP7 European project Quality Assurance for Essential Climate Variables (QA4ECV), the Joint Research Centre (JRC) FAPAR algorithm was designed for the AVHRR Land Surface Reflectance Climate Data Record v5.0 [17]. Further to the previously mentioned products, we generated three temporal scales (daily, 10-day, and monthly) at 0.05° × 0.05° with associated grid cell uncertainties. In addition, the 10-day and monthly data were regridded at 0.5° × 0.5°. Uncertainties were included in each processing step, from error propagation to the temporal and spatial composites. These various spatial and temporal datasets enabled validation of the native products against ground-based measurements, and we then explored how these datasets can be used for global-scale analysis. A generic system for the implementation and evaluation of quality assurance (QA) was applied to provide traceability information [18]. In summary, the JRC-FAPAR retrieval method was used to assess the presence on the ground of live green vegetation. The main procedure provided an estimate of the "green" FAPAR in the plant canopy. It is important to note that the retrieved value corresponds to the black-sky FAPAR at the time of data acquisition. Past JRC-FAPAR algorithms have been optimized for various optical instruments and operationally implemented with data from the Sea-Viewing Wide Field-of-View Sensor (SeaWiFS) [19], the MEdium Resolution Imaging Spectrometer (MERIS) [20,21], and the Copernicus Sentinel-3 Ocean and Land Colour Instrument (OLCI) [22]. Validation exercises at medium spatial resolution have been performed for SeaWiFS [23,24], MERIS [24,25], and OLCI [26,27].
Recent studies of global terrestrial change, such as assessments of global "greenness", exploited various Earth observation (EO) products [28]. At the same time, discussions on the fitness of several products for trend analysis showed that the results changed as a function of the products [29]. This means that, before any global analysis can be undertaken, both a validation process and a verification process are mandatory to assess whether a space product can be used to assess terrestrial global change. In addition, it is important to highlight that the uncertainty associated with the product should be available at each grid cell. This information also helps to ingest these EO products into climate or land assimilation schemes.
In this paper, we therefore proposed to evaluate the QA4ECV products against in situ and third-party data before using them to assess terrestrial global change. We first presented the seasonal behaviour and magnitude of FAPAR against the in situ data, together with the NCEI FAPAR products [14,30] and the JRC Two-Stream Inversion Package (TIP) [31]. All the results were discussed taking into account each respective FAPAR definition, using outcomes from [32]. Secondly, we performed a more generic quality control over the QA4ECV virtual validation land sites defined in [33] from 1982 to 2006, using monthly products at 0.05° × 0.05°. The evaluation was then completed through direct comparisons against JRC-FAPAR daily global products from SeaWiFS at 0.05° × 0.05° for the years 1999 and 2003, corresponding to the AVHRR2 and AVHRR3 instruments, respectively. A further comparison of the monthly products at 0.5° × 0.5° over 1998-2005 then allowed us to propose and apply a monthly per-grid linear correction over the entire time series to harmonize all products from 1982 to 2006. We finally analyzed the QA4ECV long time series and uncertainties in order to illustrate and discuss the global change of terrestrial surfaces. The paper ends with a conclusion section.
Data Used
The AVHRR surface reflectance products [17,34] were used as inputs to derive the black-sky FAPAR. This dataset was produced using state-of-the-art algorithms for geolocation, calibration, cloud screening, and atmospheric and surface directional effect correction, in order to achieve a data record as consistent as possible. These data were converted to bidirectional reflectance factors (BRFs), as done by [14], but using the native view zenith angles instead of the nadir view.
NCEI FAPAR daily products were used during comparison exercises against ground-based measurements. These were produced from an artificial neural network (ANN) calibrated using the Moderate Resolution Imaging Spectroradiometer (MODIS) FAPAR as training datasets [14]. This ANN was optimized over six land cover classes, but no retrieval was available over a bare or very sparsely vegetated area. The NCEI FAPAR refers to direct illumination, i.e., the black-sky FAPAR at local noon, which has the same leaf scattering albedo definition as that of the MODIS [14]. In addition, the JRC-TIP FAPAR was processed using the MODIS Collection 6 surface albedo and retrieved under the "green" foliage assumption. This permitted us to keep the assumption on the leaf scattering albedo used in the QA4ECV products. The JRC-TIP FAPAR was defined as the white-sky FAPAR, i.e., diffuse component. Because of these different definitions, we expected some bias as shown in [32].
The JRC products derived from SeaWiFS were then used for a comparison of the daily (0.05° × 0.05°) and monthly (0.5° × 0.5°) products at a global scale. These products were processed using the same retrieval algorithm, with top-of-atmosphere (TOA) reflectances used as inputs [19].
Significant efforts were devoted in the past to the validation of surface products such as FAPAR [23,[35][36][37]]. However, ground-based FAPAR estimates suitable for validation can only be acquired in the field with significant difficulty. They are often a proxy of FAPAR, derived either from leaf area index (LAI) measurements or from the fraction of intercepted PAR (FIPAR). The impacts of different types of internal variability of the extinction coefficient (the rate at which radiation is absorbed by a medium), together with the resolution of the sampled domain, were first analysed for clouds [38]. That study established the conditions under which three-dimensional (3D) effects can be anticipated to play a major role in the radiation transfer regime. Gobron et al. (2006) extrapolated these results to terrestrial land covers by associating the main radiative transfer regimes with the statistical properties of the leaf extinction coefficient within the spatial domain of investigation [23]. In summary, the "fast" variability regime is associated with a statistically homogeneous canopy, in which the leaf density follows a Poisson distribution. The "slow" variability regime is associated with canopies where the leaf angle distribution (LAD) is close enough to locally homogeneous that average local-scale flux values are representative. The last case, the "resonant" regime, arises where the spatial complexity is such that a typical photon beam samples various types of structures between entering and escaping the canopy. Table 1 summarizes the ground-based approaches used to assess in situ FAPAR values over the different sites, as done in [23], and Table 2 recapitulates the field sites together with their transfer regimes and land cover types.

Table 1. Ground-based measurement types.
| Site(s) | Ground-based approach |
|---|---|
| SN-Dhr, SN-Tes | Based on the Beer-Lambert-Bouguer (BBL) law with measurements of leaf angle distribution (LAD) functions; FAPAR(µ0) (a) derived from the balance between the vertical fluxes; leaf area index (LAI) derived from PCA-LICOR (b) |
| US-Seg | Based on the BBL law with an extinction coefficient equal to 0.5 (c); LAI derived from specific leaf area data and harvested above-ground biomass; advanced procedure to account for spatio-temporal changes of a local LAI |
| US-Bo1 | Based on the BBL law with an extinction coefficient equal to 0.5 (c); LAI derived from specific leaf area data and harvested above-ground biomass; advanced procedure to account for spatio-temporal changes of a local LAI |
| US-Ha1 | Based on the BBL law with an extinction coefficient equal to 0.58 (c); LAI derived from optical PCA-LICOR data; advanced procedure to account for spatio-temporal changes of a local LAI |
| BE-Bra | Based on full one-dimensional (1D) radiation transfer models; LAI derived from optical PCA-LICOR data; time-dependent linear mixing procedure weighted by species composition |
| US-Kon | Based on the BBL law with an extinction coefficient equal to 0.5 (c); LAI derived from optical PCA-LICOR data; advanced procedure to account for spatio-temporal changes of a local LAI |
| US-Me5 | Based on the BBL law with an extinction coefficient equal to 0.5 (c); LAI derived from optical PCA-LICOR data; advanced procedure to account for spatio-temporal changes of a local LAI |
| ZM-Mkt | Based on the fraction of intercepted PAR estimated from Tracing Radiation and Architecture of Canopies (TRAC) data, slightly contaminated by woody canopy elements |

(a) µ0 is the cosine of the sun zenith angle; (b) plant canopy analyser (PCA); (c) extinction coefficient taken as a constant, i.e., independent of the sun zenith angle.

Table 2. Anticipated radiation regimes of field sites.
"Fast variability"
Short and homogeneous over a 1-2 km distance
"Slow variability"
Mixed vegetation with different land cover types
Methods
BRFs representing AVHRR/NOAA-like surface data were simulated using the "semi-discrete" radiative transfer model [43], which represents the spectral and directional reflectances of horizontally homogeneous plant canopies and computes the corresponding black-sky FAPAR values. The sampling of the vegetation parameters, such as LAI, canopy height, leaf radius, soil albedo, and angular values, was chosen to cover a wide range of environmental and observation conditions. These simulations constituted the basic information used to optimize the algorithm for each AVHRR sensor on-board the NOAA-07, -09, -11, and -14 platforms, taking into account their respective spectral responses. Once these simulated datasets were created, the design of the algorithm consisted of defining the mathematical combination of two spectral bands that best accounts for the variations of the variable of interest (here, the "green" black-sky FAPAR) on the basis of the simulated measurements, while minimizing the effect of perturbing factors such as angular effects. The coefficients for each NOAA platform are detailed in [44].
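To make the optimization step concrete, the sketch below fits a generic two-band rational polynomial index to simulated FAPAR values by least squares. The functional form, the coefficient initialization, and the synthetic training data are all hypothetical illustrations; the actual JRC-FAPAR formulation and its coefficients are those detailed in [44].

```python
# A minimal sketch of the band-combination optimization described above.
# The polynomial form and all coefficients are hypothetical placeholders,
# not the operational JRC-FAPAR formulation.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Synthetic stand-in for the simulated training set: red/NIR BRFs and the
# "true" black-sky FAPAR that a radiative transfer model would provide.
brf_red = rng.uniform(0.02, 0.15, 500)
brf_nir = rng.uniform(0.15, 0.45, 500)
fapar_true = np.clip(1.2 * (brf_nir - brf_red) / (brf_nir + brf_red), 0, 1)

def index(params, red, nir):
    """Generic rational polynomial of two bands (hypothetical form)."""
    a0, a1, a2, b0, b1, b2 = params
    return (a0 + a1 * nir + a2 * red) / (b0 + b1 * nir + b2 * red)

def residuals(params):
    return index(params, brf_red, brf_nir) - fapar_true

fit = least_squares(residuals, x0=[0.0, 1.0, -1.0, 0.1, 1.0, 1.0])
print("optimized coefficients:", fit.x)
```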
The associated daily uncertainty, σ, was expressed as one standard deviation using the error propagation theory and derivatives. In order to compute daily σ, we set the surface reflectance uncertainties at 10%. In the temporal (spatial) composite method, additional uncertainties corresponded to the standard deviation of the FAPAR after removal of outliers. In the regridded products, the FAPAR at 0.5 • × 0.5 • corresponded to the weighted average of individual grid cell values, and the uncertainties were computed using the quadratic mean.
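As an illustration of the compositing rules just described, the following minimal sketch computes a composite value and its uncertainty for one grid cell; the outlier screen and the inverse-variance weighting are assumptions, since the operational choices are not spelled out here.

```python
# A sketch of the composite and regridding uncertainty rules described above
# (variable names and the weighting scheme are illustrative assumptions).
import numpy as np

# Daily FAPAR values falling in one coarse grid cell during a month,
# with their per-retrieval uncertainties (one standard deviation).
fapar = np.array([0.41, 0.44, 0.39, 0.43, 0.80])   # 0.80 acts as an outlier
sigma = np.array([0.04, 0.05, 0.04, 0.05, 0.06])

# Temporal composite: remove outliers (here a simple 2-sigma screen around
# the median), then use the remaining standard deviation as the extra term.
keep = np.abs(fapar - np.median(fapar)) < 2 * np.std(fapar)
temporal_sigma = np.std(fapar[keep])

# Regridded value: inverse-variance weighted mean of the retained values;
# regridded uncertainty: quadratic mean of the individual uncertainties.
w = 1.0 / sigma[keep] ** 2
fapar_cell = np.sum(w * fapar[keep]) / np.sum(w)
sigma_cell = np.sqrt(np.mean(sigma[keep] ** 2))
print(fapar_cell, sigma_cell, temporal_sigma)
```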
When performing the benchmark comparison of the daily products at the original resolution, we first computed the accuracy (A), which statistically represents the mean bias between the QA4ECV values and the SeaWiFS products, as proposed in [45]. We also provided the precision, P, which indicates the repeatability and was computed as the standard deviation of the estimates around the reference values corrected for the mean bias (accuracy). Finally, U is the actual statistical deviation of the estimates from the reference, including the mean bias [46]. The results were discussed in comparison to the QA4ECV FAPAR uncertainties, σ, as well as the spatial standard deviation, Sdev.
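The three benchmark statistics can be computed as below; this sketch follows the definitions given above (A as the mean bias, P as the bias-corrected spread, U as the root mean square deviation), so that U² = A² + P².

```python
# A sketch of the benchmark statistics: accuracy A (mean bias), precision P
# (spread around the reference after bias removal), and uncertainty U (RMSD).
import numpy as np

def apu(estimate, reference):
    d = estimate - reference
    a = np.mean(d)                # accuracy: mean bias
    p = np.std(d, ddof=0)         # precision: bias-corrected spread
    u = np.sqrt(np.mean(d ** 2))  # uncertainty: RMSD, so U^2 = A^2 + P^2
    return a, p, u

qa4ecv = np.array([0.30, 0.42, 0.55, 0.61])   # illustrative values
seawifs = np.array([0.28, 0.45, 0.50, 0.58])
print(apu(qa4ecv, seawifs))
```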
Next, a monthly linear correction was proposed to harmonize the products from the different NOAA platforms, as these suffered from calibration issues, as shown in Section 4.5. This was done using the products over 2002-2005 by optimizing linear relations of the form of Equation (1):

$$S(x, y, m) = a(m)\, Q(x, y, m) + b(m) \qquad (1)$$

where Q and S are the QA4ECV and SeaWiFS monthly products, respectively; m represents the month; (x, y) are the coordinates of the grid cell at 0.5° × 0.5°; and the coefficients a(m) and b(m) are optimized for each month using data from 2002 to 2005.
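A minimal sketch of how the coefficients a(m) and b(m) of Equation (1) could be fitted at each grid cell over the 2002-2005 overlap and then applied to the full record is shown below; the array layout, the per-cell least-squares fit, and the clipping to [0, 1] are our assumptions rather than the operational implementation.

```python
import numpy as np

def fit_monthly_correction(q, s):
    """q, s: arrays of shape (nyear, 12, nlat, nlon) over the 2002-2005 overlap.
    Returns per-month, per-cell coefficients a, b with s ~ a * q + b."""
    qm, sm = q.mean(axis=0), s.mean(axis=0)
    cov = ((q - qm) * (s - sm)).mean(axis=0)
    var = ((q - qm) ** 2).mean(axis=0)
    a = cov / np.where(var > 0, var, np.nan)   # guard against constant cells
    b = sm - a * qm
    return a, b                                # each of shape (12, nlat, nlon)

def apply_correction(q_full, a, b):
    """Harmonize the full QA4ECV record, shape (nyear, 12, nlat, nlon)."""
    return np.clip(a[None] * q_full + b[None], 0.0, 1.0)
```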
Validation Using Ground-Based Measurements

Figure 1 shows the FAPAR time series over sites associated with radiation transfer in "regime 1", corresponding to the "fast" variability category. The QA4ECV product flag is indicated in red (blue) when a vegetation (cloud) condition was detected, whereas the grey shading indicates the temporal deviation. Ground-based estimates are plotted in black. The FAPAR baseline values over these sites are very low, and the signatures of the different vegetation phenological cycles (for both the growing and senescence periods) were remarkably well identified by both the space and ground-based estimates. Moreover, the amplitudes, both the maxima and the minima, are in very good agreement amongst all products, although the space retrievals tended to slightly underestimate the ground-based values over SN-Dhr during the peak growing season (Figure 1a). Indeed, at this site, the landscape exhibits significant spatial heterogeneity not sampled by the ground-based measurements. NCEI values are slightly higher than those of the QA4ECV and JRC-TIP products, especially over the desert grassland (panel d). This may be due to differences in the leaf colour assumption: [31] illustrated that the green-leaf assumption results in lower FAPAR values, with differences ranging from 0.05 to 0.20 as a function of vegetation density. Indeed, the green hypothesis implies that a limited amount of scattering material, i.e., relatively low LAI values, is needed to match the observed data in the near-infrared band. At the same time, since the interception process mainly controls the fraction absorbed in the visible band, relatively low LAI values in turn impose a lower fraction of the radiation absorbed in the canopy layer. The JRC-TIP and QA4ECV FAPAR products agree well with each other within their respective uncertainties.

The results over vegetation conditions belonging to the "slow variability" category, i.e., radiation transfer in "regime 2", are displayed in Figure 2. In the case of the BE-Bra site (51.309°N, 4.52°E) (panel a), the amplitudes during the start and the end of the growing season estimated from remote sensing and ground-based measurements are in very good agreement, except for the QA4ECV results, where considerable cloud contamination is present (blue dots). The estimated ground-based FAPAR values over the agricultural field site US-Bo1 (40.006°N, 88.29°W) follow a well-defined pattern that was correctly captured by the QA4ECV products and the JRC-TIP (panel b). NCEI daily values are higher than the other measurements from January up to June and after September. The third comparison under "regime 2" canopy conditions was conducted at the Harvard site (US-Ha1), a mixture of conifer and hardwood forests (panel c). The JRC-TIP and QA4ECV datasets compare very well with each other, including during the growing period.

During the summer season, when the vegetation is very dense, all space products systematically showed lower values than the ground-based estimates. The largest differences occurred during the senescent period, when a time delay of about one month was observed between the FAPAR signatures given by the space and ground-based datasets. The JRC-TIP and QA4ECV estimates were well correlated along the cycle, although the QA4ECV products had a slightly lower bias at the end of the period. At these two sites, the bias during the senescence period is a consequence of the ground-based estimates assessing total, rather than green (as assumed in the retrieval algorithm), FAPAR values.
The comparison of ground-based and space-retrieved FAPAR over the US-Me5 site (44.437°N, 121.56°W), associated with "regime 3", is shown in Figure 3a. The two main findings are as follows: (1) neither source of information indicates a strong seasonal cycle, as could be expected over a pine conifer forest; and (2) the discrepancy in the FAPAR amplitudes between the space and ground-based datasets is extremely high (a factor of nearly 2). Both the JRC-TIP and QA4ECV products show the same amplitude of values, whereas NCEI does not provide any values. This is indeed a typical class of vegetated canopies deviating significantly from 1D statistically homogeneous situations. In this instance, the classical BBL law of exponential attenuation applies only if the 3D radiative effects are adequately parameterized, which is not the case for the ground-based measurements.
The additional ground-based dataset associated with "regime 3" is over ZM-Mkt (15.438°S, 23.253°E) and was derived from a collection and analysis of the canopy gap fraction using the Tracing Radiation and Architecture of Canopies (TRAC) instrument in a mixed shrubland/woodland environment [20]. This instrument measures the canopy gap-size distribution and the canopy gap fraction from radiation transmittance. These data provide a proxy for FAPAR, as they are used to derive the FIPAR. Figure 3b shows the time series of the FAPAR space products in 2001, together with the three transects of measurements (FIPAR spatial averages with associated standard deviations) collected by the TRAC instrument. The agreement between the NCEI values and the ground-based estimations is good, despite the spatial-scale differences. The QA4ECV-FAPAR products show a systematic low bias of about 0.2 on average during the two dry seasons, although the correlation between the two estimates remains high. During the second wet season, we obtained the opposite result: the NCEI products overestimated the in situ values, whereas the QA4ECV results were closer. It should be recalled that the contamination of the FIPAR measurements by the woody elements of the canopy biases them high relative to the absorption by green leaves only. This effect is stronger during the dry seasons, when the relative contribution of the leaf-only absorption process decreases, especially with such a sparse vegetation cover.
Quality Control of Monthly Time Series over 1982-2006
We present here the time series of the AVHRR-FAPAR monthly products at 0.05° × 0.05° over the QA4ECV validation sites for the different types of land cover listed in Table 3. The time series for the birch stand and pine stand forest sites are plotted in Figure 4. Red (pink) dots correspond to the QA4ECV best-representative FAPAR values for vegetation (soil) pixels. Blue symbols indicate an NCEI cloud flag, meaning that no clear-sky days were found during the compositing period. Shaded bars indicate the uncertainties of the best representative day, whereas error bars represent the temporal standard deviations within a month. The intra-annual seasonality from 1982 to 2006 is in general well represented, except for a few months for which outliers were detected. The level of FAPAR over Ofenpass is very low compared to that of Jarvselja-2, even though both sites are covered by pine stand forests. The theoretical range of FAPAR expected over pine-stand summer (winter) virtual scenes across Ofenpass varies from 0.3 (0.2) to 0.6 (0.3), depending on the sun zenith angle (not shown here). Over Jarvselja-1, slightly higher values were obtained. During Northern Hemisphere winter seasons, brighter surfaces were detected over this site, resulting in null FAPAR values. Over Jarvselja-2, the QA4ECV monthly products still contain a lot of data contaminated by clouds, especially during winter.

The results for the two tropical forest sites are plotted in Figure 5. The FAPAR values over the Lope forest are lower than those for Nghotto, with respective maxima of 0.4 and 0.6. However, it was shown that when the JRC-AVHRR retrieval algorithm was applied to simulated surface reflectances, the outputs were much higher than with real data, which suggests that the atmospheric correction may suffer from cloud contamination at the 0.05° × 0.05° scale, as is often the case over these tropical regions [23].
FAPAR time series over three different types of crops are displayed in Figure 6. Panel a shows the time series over the Zerbolo site, covered by a short-rotation poplar forest. Panels b and c correspond to wheat (Thiverval) and citrus orchard (Wellington) crops, respectively. Over this latter site, a few outliers appeared, one in 1988 and some in 1994, when the input data suffered from three months of artefacts in the Southern Hemisphere. The products represent the expected crop seasonality very well each year, with a high level of about 0.7 during summer over Zerbolo.

FAPAR time series over the shrub and savanna sites, Skukuza and Janina, are displayed in Figure 7. Over both sites, a few outliers appear for three months in 1994 due to corrupted input data, for which only one-day results are available (indicated by the absence of error bars). The overall seasonality of both vegetation types is well represented over the entire period.

Comparisons against SeaWiFS Products

This section presents the comparison between the QA4ECV and SeaWiFS daily products at 0.05° × 0.05° for 1999 and 2003. The SeaWiFS products were derived using the same FAPAR retrieval method, except that the inputs are top-of-atmosphere measurements [19,23]. In order to minimise the impact of remaining cloud effects in the aggregated SeaWiFS products, the data were filtered by keeping only grid cells containing less than 50% cloudy pixels. Daily comparison statistics at a global scale are reported in Figure 8. A and U values are plotted as red and pink lines, respectively. In addition, the SeaWiFS spatial standard deviation, Sdev, is displayed in green, whereas the QA4ECV uncertainty, σ, is in blue. For both 1999 (panel a) and 2003 (panel b), A (U) values are lower than 0.05 (0.10) until July but increase afterwards. U values are smaller than, or of the same order as, the actual FAPAR uncertainties. (A, P, U) are higher in 1999 than in 2003, which may be due to calibration differences between the instruments on the NOAA-14 (AVHRR2) and NOAA-16 (AVHRR3) platforms.

Figure 9b presents the density scatter plots of the two sensors' products, and Figure 9c shows the histogram of differences using the 8-year data. We found that the mean difference δ is 0.0755, with a standard deviation of 0.091. We therefore propose a linear bias correction for each month at grid level, as explained in Section 3. The comparisons following this correction are displayed in Figure 10, where the mean difference δ drops to 0.0011 and the standard deviation reduces to 0.0249. The longitudinal plot (panel a) shows that the absolute agreement is on average better than 0.10.
Global Change Studies: Impact of Calibration
This subsection discusses the level of confidence when using long-term time series for global change studies, as done in the state of the climate reports [47,48]. The quality of the AVHRR data was first evaluated by examining the stability of the rectified channels in bands 1 and 2 over the entire period at the CEOS Libya-4 bright calibration site (28.55°N, 23.29°E).
One can easily see some artefacts during the period, especially in the near-infrared (band 2) (Figure 11b). These occurred at the end of 1984, 1987, 1988, 1993, and 2000; afterwards, the values are more stable. The reasons for this instability are principally the difficulty of calibrating old sensors and sensor drift. Previous AVHRR land products suffer from the same defects, as shown in [13]. The stability of climate data records can also be checked with the time series of the uncertainties. Figure 12 displays the three uncertainties provided in the QA4ECV monthly products at 0.5° × 0.5°, averaged at a global scale and over the two hemispheres, respectively. We observe that the uncertainty values, σ, are lower than 0.1 (panel a). Both the spatial and temporal standard deviation plots also reveal the instability found in the previous analysis. These results indicate that products can be filtered by applying a threshold on the total uncertainty. We also need to take into account the actual number of grid cells constituting the global mean for each period, which can increase or decrease this global average.

In Figure 13, we plot the global and Northern/Southern Hemisphere FAPAR anomalies from 1982 to 2006 (using the linear-bias-corrected QA4ECV data). It should be noted that, for the spatial average, we removed grid cells for which either one of the uncertainties or the total uncertainty was above 0.20. We also removed monthly values where the number of pixels used for the spatial average was less than 50% of the climatological number of pixels. The resulting anomalies can be related to large-scale climate events and volcanic eruptions [49,50]. The extreme negative anomaly occurred at the end of 1988, after the 1987 El Niño event, but this result should be interpreted with caution, as it also coincides with the end of life of the NOAA-09 instrument. The Pinatubo eruption played a strong role with respect to land precipitation and the associated drought conditions in 1992, which are evidenced by a negative FAPAR anomaly. More recently, after the strong El Niño event in 1997, the negative FAPAR anomaly extended until the start of 2001, as already shown in [51]. Afterwards, the anomalies became positive, and a small positive trend is detectable.
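For illustration, the following sketch computes screened global anomalies along the lines described above; the thresholds (0.20 on the uncertainty, 50% of the climatological pixel count) follow the text, while the array layout and NaN handling are assumptions.

```python
# A sketch of the anomaly computation with the screening described above.
import numpy as np

def global_anomaly(fapar, sigma, min_frac=0.5):
    """fapar, sigma: (nyear, 12, ncell) monthly products and uncertainties."""
    valid = np.isfinite(fapar) & (sigma <= 0.20)      # uncertainty screen
    f = np.where(valid, fapar, np.nan)

    counts = valid.sum(axis=2)                        # cells per (year, month)
    clim_counts = counts.mean(axis=0)                 # climatological count
    monthly_mean = np.nanmean(f, axis=2)              # spatial mean per month
    monthly_mean[counts < min_frac * clim_counts] = np.nan  # coverage screen

    climatology = np.nanmean(monthly_mean, axis=0)    # per calendar month
    return monthly_mean - climatology                 # (nyear, 12) anomalies
```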
Conclusions
This paper presented the performance assessment of the QA4ECV black-sky FAPAR long-term records at different spatial and temporal scales, i.e., at 0.05° × 0.05° and 0.5° × 0.5° for daily, 10-day, and monthly periods. Validation was done through comparisons against time series of ground-based estimates, together with the NCEI and JRC-TIP products, using a categorization of the ground-based FAPAR datasets according to their most probable radiative transfer regimes. Despite the spatial-scale differences between the ground-based measurements and the space products, we found relatively good agreement for sites in "regime 1" and "regime 2". These past ground-based measurements are, however, only a proxy of the FAPAR space products, as they represent interception rather than the actual absorption by green leaves. Progress toward better reference validation measurements is ongoing within the European Space Agency (ESA) Fiducial Reference Measurements for Vegetation project (https://frm4veg.org/), which will provide traceable in situ measurements. In addition, the Ground-Based Observations for Validation (GBOV) component of the Copernicus Global Land Products (https://land.copernicus.eu/global/gbov) provides multiyear ground-based observations from existing global networks. However, these projects cannot provide data prior to 2016. Over the QA4ECV virtual validation sites, monthly products at 0.05° × 0.05° were used from 1982 to 2006 to check inter-annual variations and stability and to identify outliers. These results could be useful for further AVHRR calibration improvements and artefact detection.
We also compared the QA4ECV global daily products against SeaWiFS for 1999 and 2003 at 0.05° × 0.05°. Both the bias and the root mean square deviation (RMSD) were reported together with the QA4ECV FAPAR uncertainties and the spatial standard deviation of SeaWiFS. We found larger differences in 1999 than in 2003, because the AVHRR-2 and AVHRR-3 instruments have different calibration characteristics. However, these statistical values were within the product uncertainties. Monthly products at 0.5° × 0.5° from SeaWiFS and QA4ECV were also benchmarked, and we demonstrated that a monthly linear correction can be used to correct the entire QA4ECV time series from 1982 onwards and improve its stability. Giering et al. (2019) recently proposed a framework to establish fundamental satellite data series for climate applications [52]. Here, we applied a simple approach that can provide a solution for merging different sensor products.
We analysed the products over terrestrial surfaces at a global scale, as was done in [48,53]. We identified global changes of terrestrial surfaces that should nevertheless be interpreted with care because of the instability of the AVHRR data. This dataset could easily be extended with products from MERIS, the "green" JRC-TIP, and Sentinel-3 OLCI, as they represent the same variable. More work is required to explore these data, as well as geostationary ones, to increase the number of available climate data records.
Author Contributions: N.G. conceived, designed, and implemented the JRC-AVHRR algorithm and performed the performance studies. M.M. and M.R. contributed to the data processing. N.G. wrote the paper. E.V. contributed to the availability of an improved surface reflectance within the QA4ECV project.
"Environmental Science",
"Mathematics"
] |
Automated Machine Learning Driven Stacked Ensemble Modeling for Forest Aboveground Biomass Prediction Using Multitemporal Sentinel-2 Data
Modeling and large-scale mapping of forest aboveground biomass (AGB) is a complicated, challenging, and expensive task. There are considerable variations in forest characteristics that create functional disparities between models and require comprehensive evaluation. Moreover, the human bias involved in the process of modeling and evaluation affects the generalization of models at larger scales. In this article, we present an automated machine learning framework for the modeling, evaluation, and stacking of multiple base models for AGB prediction. We incorporate a hyperparameter optimization procedure for the automatic extraction of targeted features from multitemporal Sentinel-2 data that minimizes human bias in the proposed modeling pipeline. We integrate two independent frameworks for automatic feature extraction and automatic model ensembling and evaluation. The results suggest that the extracted target-oriented features draw heavily on the red-edge and short-wave infrared spectrum. The feature importance scale indicates a dominant role of summer-based features as compared to other seasons. The automated ensembling and evaluation framework produced a stacked ensemble of base models that outperformed the individual base models in accurately predicting forest AGB. The stacked ensemble model delivered the best scores of R²cv = 0.71 and RMSE = 74.44 Mg ha⁻¹, whereas the base models delivered R²cv and RMSE values ranging between 0.38-0.66 and 81.27-109.44 Mg ha⁻¹, respectively. The model evaluation metrics indicated that the stacked ensemble model was more resistant to outliers and achieved better generalization. Thus, the proposed study demonstrates an effective automated modeling pipeline for predicting AGB that minimizes human bias and is deployable over large and diverse forest areas.
I. INTRODUCTION
The selection of suitable machine learning (ML) algorithms to solve the forest aboveground biomass (AGB) prediction problem requires domain expertise for improving and regulating model performance [1], [2], [3], [4], [5]. Different ML-based models have been developed in the literature for the prediction of forest AGB. The most fundamental modeling approach to AGB prediction is generalized linear regression, which relates model parameters to the response variable via a link function [73], [77], [78]. Furthermore, kernel-based learners, including methods such as support vector machines [79], [80], [81] and Gaussian process regression [82], [83], have proven to be effective for the development of AGB estimation models. Additionally, tree-based models such as Random Forest (bagging), XGBoost, and CatBoost (boosting) were also found to be very efficient for the prediction of forest AGB [58], [84]. The latest developments in AGB estimation include artificial neural network based learners that use the backpropagation algorithm for training the network to reliably predict AGB. These approaches include basic multilayer perceptron models [85], sparse autoencoders [86], and deep neural networks with a generative learning strategy [87], [88]. However, these various ML models are trained and optimized with different strategies, which makes it difficult to determine whether a given technique is genuinely better or simply better tuned.

Automated machine learning (AutoML) approaches can support a wide range of ML algorithms and automate algorithm selection, feature generation, hyperparameter tuning, iterative modeling, and model assessment to develop an optimized ML pipeline. The task of creating an optimized ML pipeline for AGB prediction is complex and time-consuming due to the diversity in forest types and species. Traditional approaches to the development of ML pipelines use a trial-and-error mechanism for the stacking and selection of models for AGB prediction [6], [7], [8]. In practice, the human element cannot be completely eliminated from the process due to the need for model checks and model explanation w.r.t. the target parameter, i.e., AGB. However, human intervention can be reduced and replaced with efficient search and hyperparameter importance learning algorithms that minimize the number of user-regulated parameters and achieve faster and more efficient modeling with reduced human bias [9], [10], [11], [12]. In this regard, the concept of AutoML can be very instrumental in gaining better ML performance with the available computational budget and reduced human assistance to model forest AGB. AutoML automates knowledge-intensive tasks such as configuring learning tools for feature engineering, neural architecture search (NAS), and algorithm selection using an optimization-evaluation mechanism [13]. In a general architecture, the AutoML controller consists of an evaluator and an optimizer. The evaluator measures the performance of learning tools and provides feedback to the optimizer in order to update configurations for better performance. The optimizer generates configurations based on a search space that is determined by a process of targeted learning [14].
For example, if the learning process is "feature engineering," the learning tools to be configured are the "classifiers" and the search space would consist of "feature sets, feature enhancing methods (dimension reduction, feature generation, feature encoding) and related hyperparameters." There are wide-ranging applications of AutoML, such as medical image recognition [15], [16], object detection [17], [18], [19], super resolution [20], [21], [22], language modeling [23], text classification [24], semantic segmentation [25], etc. AutoML has delivered quality performance for various tasks but has scarcely been applied to the development of an ML pipeline for AGB prediction. An ML pipeline is a directed graph of learning elements that can be automated by using a search of estimators/predictors, a search of learning algorithms, and a search of ensemble models [26], [27], [28]. In the literature, various algorithms have been developed for the automation of these learning elements of an ML pipeline [29]. Evolutionary algorithms such as particle swarm model selection (PSMS) use particle swarm optimization to automate the full model selection problem [30]. Such evolutionary algorithms inspired ensemble PSMS [31], a precursor to the development of the latest AutoML systems [28]. Recent AutoML systems are capable of building regression/classification pipelines, full model selection, multiobjective optimization, and the best architecture/hyperparameter search for deep learning models [32]. The Auto-WEKA system [33] uses the sequential model-based algorithm configuration method [34], a robust stochastic optimization framework under noisy function evaluations, to minimize the cross-validation misclassification. AutoSklearn [35], a system based on sequential model-based optimization (SMBO) with a distinctive search-process initialization that exploits a meta-learner, delivered the best performance in many AutoML challenges. The tree-based pipeline optimization tool [37] is an open-source genetic AutoML system that optimizes a series of ML models for high-accuracy and compact pipelines for supervised classification. The recent surge in the application of deep neural networks led to the development of AutoML systems like Auto-Net [38] and NAS [39], [40], which are built on a combination of Bayesian optimization and Hyperband to automatically tune deep neural networks without human intervention.
The concept of meta-learning is integral to AutoML systems and enables them to learn from the meta-data of the learning elements [41]. AutoML systems perform faster and more efficiently on a new task with meta-learning techniques that replace human-engineered ML pipelines with data-driven pipelines [42]. Meta-learning techniques have been successfully implemented in the literature to automate various learning elements (feature engineering, architecture search, hyperparameter optimization) of AutoML systems [26], [43], [44], [45], [46], [47]. In this context, many studies have used meta-learning techniques to execute ML tasks in remote sensing (RS) applications. A nontrivial problem in deploying advanced ML algorithms in the RS domain is few-shot (low training samples) learning, for which meta-learning techniques have provided some significant solutions [48], [49], [50], [51]. Meta-learning techniques can efficiently handle multiscale RS data and provide efficient solutions to inversion modeling problems [52], [53]. These techniques have also been used to solve more specific RS problems; for example, the study in [54] focused specifically on a deep neural network model for the semantic segmentation of circular objects in satellite images. Apart from such specific problems, meta-learning techniques are capable of dealing with combinations of such problems to solve larger problems in RS. For example, a recent study [55] aimed at generating captions for RS images using meta-learning, a task that requires dealing with a combination of problems based on visual and textual features. Thus, meta-learning approaches make a significant contribution to solving complex RS problems and hold great potential for improving AGB modeling in diverse scenarios such as mixed tree species, forest types, and varying forest density.
A single modeling algorithm is unlikely to perform effectively across most types of AGB modeling scenarios. Ensemble models are combinations of base models built on the hypothesis that combining multiple (weak) models can produce a more reliable and accurate model than the individual base models [56]. Ensemble models are developed based on techniques such as bagging, boosting, and stacking. Bagging and boosting both involve homogeneous weak learners that are combined by a deterministic strategy; the difference is that bagging algorithms follow a parallel learning strategy (e.g., Random Forest), whereas boosting algorithms follow a sequential learning strategy (e.g., AdaBoost, XGBoost). Stacking, in contrast, involves heterogeneous weak learners that are combined using a meta-model with a parallel learning strategy. Ensemble models have been used for AGB modeling and prediction in various studies [8], [57], [58]. However, developing a stacked ensemble with manual trial-and-error approaches to model AGB is an inefficient and time-consuming task. Meta-learning-driven AutoML systems can be adequately used for developing optimal stacked ensemble models (SEMs) for AGB prediction [59]. AutoML systems use a collection of base models with hyperparameter tuning algorithms for producing SEMs [60]. The AutoGluon system [61] uses multilayer stack ensembling with K-fold bagging that stacks models in multiple layers and trains them layer-wise. The AutoSklearn system [28] employs a Bayesian optimization algorithm for searching through the hyperparameter space and meta-learning for warm-starting the search procedure. AutoSklearn 2.0 [35], an improvement upon AutoSklearn, uses portfolio learning after performing Bayesian optimization. Among the most recent AutoML systems, "H2O" AutoML [62] performs stacked ensembling with random forests, gradient boosting machines, linear models, and deep learning models using a super learning algorithm.
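As a minimal illustration of stacking heterogeneous base learners with a meta-model, the generic scikit-learn sketch below can be used; it is not the H2O AutoML configuration employed in this article, and the learners and data are placeholders.

```python
# Generic stacking sketch: heterogeneous base learners combined by a
# meta-model trained on their out-of-fold predictions.
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              StackingRegressor)
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=300, n_features=24, noise=10.0, random_state=0)

base_learners = [
    ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
    ("gbm", GradientBoostingRegressor(random_state=0)),
    ("svr", make_pipeline(StandardScaler(), SVR(C=10.0))),
]
# The meta-model (here a ridge regression) combines the base predictions.
stack = StackingRegressor(estimators=base_learners, final_estimator=Ridge(), cv=5)
print(cross_val_score(stack, X, y, cv=5, scoring="r2").mean())
```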
The primary objective of this article is the prediction of AGB using multitemporal multispectral (MT-MS) satellite remote sensing data. Prediction of AGB from satellite remote sensing requires a robust ML pipeline to deal with outliers in the training data and to increase the generalization ability of the model. Moreover, satellite remote sensing is generally used for large-scale AGB mapping and may require testing multiple models due to the spatially varying characteristics of the forest; a comprehensive evaluation framework for the candidate models is therefore required. We thus propose an AutoML system for the prediction of forest AGB that enables the training and evaluation of models within a single framework. In particular, we use a meta-learning-driven AutoML system that automates the selection and stacking of candidate base learners for modeling AGB. Additionally, we incorporate an SMBO procedure to automatically extract features from the MT-MS satellite data in order to minimize human bias in the proposed ML pipeline.
A. Study Area Description and Field Data
The study area is the province of Trento (6216 km²), situated in north-eastern Italy in the southern part of the Alps. The area is mountainous and has 60% forest cover, mostly owned by public institutions and managed under broad forest management goals (i.e., forest protection, species biodiversity, carbon storage, etc.). High-elevation mountains and landlocked valleys are suitable for species such as Norway spruce (Picea abies (L.) Karst.) and Swiss pine (Pinus cembra L.). Mountains at lower elevations, averaging 1000 m above sea level, are suitable for species such as Silver fir (Abies alba Mill.) and European beech (Fagus sylvatica L.). The areas below 1000 m are mainly characterized by broadleaf species such as Ostrya carpinifolia, Carpinus betulus, Fraxinus ornus, Quercus pubescens, Quercus petraea, etc. The geographical location of the study area and the distribution of the field plots are shown in Fig. 1.
The field data consist of 315 circular plots with a fixed radius of 15 m (see Fig. 1), comprising 98 broadleaf plots, 152 coniferous plots, and 65 mixed plots. A plot was labeled broadleaf or coniferous if more than 80% of its AGB derived from that tree type; otherwise, the plot was considered mixed. All trees inside a plot with a diameter at breast height (DBH) above 7 cm were geolocated, and their species, DBH, and height were measured. The AGB of each tree was computed using the allometric equations stated in [63], and the plot-level AGB was computed as the sum of the AGB of all measured trees inside the plot. The field-estimated plot-level AGB values ranged from 1.07 to 711.41 Mg ha⁻¹. The plot coordinates were recorded using a survey-grade GPS unit.
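A short sketch of the plot-level AGB aggregation is given below; the allometric form and its coefficients are hypothetical placeholders for the species-specific equations of [63], while the 7 cm DBH threshold and the 15 m plot radius follow the text.

```python
import math

def tree_agb(dbh_cm, height_m, a=0.05, b=2.0, c=1.0):
    """Hypothetical power-law allometry: AGB (kg) = a * DBH^b * H^c.
    The real study uses the species-specific equations of [63]."""
    return a * dbh_cm ** b * height_m ** c

def plot_agb_mg_ha(trees, radius_m=15.0):
    """trees: list of (dbh_cm, height_m); returns plot AGB in Mg ha^-1."""
    total_kg = sum(tree_agb(d, h) for d, h in trees if d > 7.0)  # DBH > 7 cm
    area_ha = math.pi * radius_m ** 2 / 10_000.0                 # plot area
    return total_kg / 1000.0 / area_ha

# Example: two measured trees plus one below the DBH threshold.
print(plot_agb_mg_ha([(25.0, 18.0), (32.0, 22.0), (6.0, 5.0)]))
```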
B. Remote Sensing Data
The study was performed using MT-MS images acquired by ESA's Sentinel-2A and 2B satellites. The acquisition dates of the Sentinel-2 images are stated in Table I. Sentinel-2 images comprise 13 spectral bands at 60, 20, and 10 m spatial resolution. We used ten spectral bands for our experiments, i.e., four bands (B, G, R, and NIR) at 10 m spatial resolution and six bands (three red-edge bands, the narrow NIR band, and two SWIR bands) at 20 m spatial resolution. The central wavelengths of the ten considered bands ranged from 490 to 2190 nm.
III. DESCRIPTION OF ALGORITHMS
The developed mechanism for achieving the proposed objectives was based on coupling two algorithms: 1) Tree-structured Parzen Estimator (TPE) [64] which is a hyperparameter optimization algorithm based on an iterative search process on a dynamic search space and 2) the Super Learner (SL) [65] which is a V-fold cross-validation based meta-learning algorithm that selects weights for combining candidate models. The following subsections provide a description of the two algorithms.
A. TPE for Automatic Feature Extraction
The TPE algorithm is based on an SMBO approach that overcomes the limitations of computationally expensive and inefficient random search and grid search algorithms. The inputs required by the TPE algorithm are the parameters (x) and the loss (y) from the prior search history, from which the hyperparameters for the next trial are deduced. The input pairs (parameters and loss) are split into two densities, $\ell(x)$ and $g(x)$, according to whether their loss falls below a threshold $y^{*}$ chosen as the $\gamma$ quantile of the search results, i.e., $p(y < y^{*}) = \gamma$. The TPE defines this split $p(x \mid y)$ using the two densities according to the following equations:

$$p(x \mid y) = \ell(x) \quad \text{if } y < y^{*} \qquad (1)$$

$$p(x \mid y) = g(x) \quad \text{if } y \geq y^{*} \qquad (2)$$

where $\ell(x)$ should be maximized, as it represents the density formed from the observations $x^{(i)}$ whose loss $y$ was smaller than the target performance $y^{*}$, whereas $g(x)$ should be minimized, as it represents the density formed from the remaining observations. Therefore, the output of the TPE algorithm can be characterized simply by the ratio between $g(x)$ and $\ell(x)$. The idea of the TPE algorithm is based on maximizing the expected improvement

$$EI_{y^{*}}(x) = \int_{-\infty}^{y^{*}} (y^{*} - y)\, p(y \mid x)\, dy \qquad (3)$$

based on the convergence of the loss $y$ and the target performance $y^{*}$ in subsequent trials, where $p(y \mid x) = p(x \mid y)\, p(y)/p(x)$ and $p(x)$ can be written as

$$p(x) = \int p(x \mid y)\, p(y)\, dy = \gamma\, \ell(x) + (1 - \gamma)\, g(x). \qquad (4)$$

Therefore, we can deduce

$$\int_{-\infty}^{y^{*}} (y^{*} - y)\, p(x \mid y)\, p(y)\, dy = \ell(x)\left(\gamma\, y^{*} - \int_{-\infty}^{y^{*}} y\, p(y)\, dy\right). \qquad (5)$$

Finally, combining (4) and (5), we obtain

$$EI_{y^{*}}(x) = \frac{\ell(x)\left(\gamma\, y^{*} - \int_{-\infty}^{y^{*}} y\, p(y)\, dy\right)}{\gamma\, \ell(x) + (1 - \gamma)\, g(x)} \propto \left(\gamma + \frac{g(x)}{\ell(x)}(1 - \gamma)\right)^{-1}. \qquad (6)$$

The expression in (6) is a product of two terms, of which the second (the integral term) is independent of x; hence, the expected improvement is directly proportional to the first term, which implies that points x with high probability under $\ell(x)$ and low probability under $g(x)$ are desirable. This expression also shows that the ratio $g(x)/\ell(x)$ should be minimized in order to increase the expected improvement.
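A toy numerical illustration of the TPE mechanism in (1)-(6) is given below: Gaussian kernel density estimates stand in for the Parzen estimators of $\ell(x)$ and $g(x)$, and candidates are ranked by the ratio $\ell(x)/g(x)$. All data are synthetic, and the γ value is arbitrary.

```python
# Toy TPE step: split past trials by the gamma quantile of their losses and
# pick the candidate maximizing l(x)/g(x), i.e., minimizing g(x)/l(x).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
x_hist = rng.uniform(0, 1, 100)                          # past trial parameters
y_hist = (x_hist - 0.3) ** 2 + rng.normal(0, 0.01, 100)  # past losses

gamma = 0.25
y_star = np.quantile(y_hist, gamma)               # gamma-quantile threshold
l_kde = gaussian_kde(x_hist[y_hist < y_star])     # density of "good" trials
g_kde = gaussian_kde(x_hist[y_hist >= y_star])    # density of the rest

candidates = l_kde.resample(50, seed=2).ravel()   # sample candidates from l(x)
scores = l_kde(candidates) / g_kde(candidates)    # maximize l/g
print(f"next trial point: {candidates[np.argmax(scores)]:.3f}")  # near 0.3
```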
B. SL Algorithm for Automated Stacked Ensemble Modeling
The SL algorithm can be used for ensemble modeling by considering a set of diverse modeling algorithms as base learners and selecting an optimal ensemble through V-fold cross-validation. We first define a library L of K modeling algorithms trained to solve the regression problem of estimating the expectation of the target Y given the observed covariates A. The parameter of interest $\psi_0(A)$ and the related loss function $L(O, \psi)$, which measures the difference between the observed and the predicted values (the squared error loss), can be written as

$$\psi_0(A) = E(Y \mid A) \qquad (7)$$

$$L(O, \psi) = \left(Y - \psi(A)\right)^2 \qquad (8)$$

where $O = (A, Y)$ denotes an observation. The estimates of the parameter of interest produced by the K individual algorithms in the library L are written $\hat{\psi}_k(A)$, with $k = 1, 2, \ldots, K$. The number of algorithms K to be considered in the library depends on the sample size of the data. Note that all the considered algorithms estimate the same parameter but may use different subsets of A, different basis functions, estimation procedures, and ranges of tuning parameters. The best predicting algorithm in the library over the data distribution $P_0$ is identified by minimizing the expected risk difference

$$d_n(\hat{\psi}_k, \psi_0) = E_{P_0}\!\left[L(O, \hat{\psi}_k) - L(O, \psi_0)\right]. \qquad (9)$$

Using the same data to estimate $\hat{\psi}_k(A)$ and $d_n(\hat{\psi}_k, \psi_0)$ introduces bias into the estimation of the true risk used to determine the best algorithm; thus, a V-fold cross-validation selector is used for an unbiased estimation of the risk. Given the empirical distributions of the training and validation sets for each fold, the cross-validated risk and the cross-validation selector can be defined as

$$\hat{R}_{CV}(k) = E_{C_n} \int L\!\left(o, \hat{\psi}_k\!\left(P^{0}_{n,C_n}\right)\right) dP^{1}_{n,C_n}(o) \qquad (10)$$

$$\hat{k} = \arg\min_{k} \hat{R}_{CV}(k) \qquad (11)$$

where $P^{0}_{n,C_n}$ and $P^{1}_{n,C_n}$ are the empirical distributions of the training set ($C_n(i) = 0$) and validation set ($C_n(i) = 1$), respectively, for the random split vector $C_n$.
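The V-fold machinery in (10) and (11) can be sketched as follows: out-of-fold predictions are collected for each candidate in the library, cross-validated risks are computed, and (as one common super learner implementation) non-negative weights are fitted on the out-of-fold predictions. The library, data, and weight-fitting choice are illustrative assumptions.

```python
# Super-learner sketch: V-fold CV risks per candidate plus NNLS stacking
# weights fitted on the out-of-fold prediction matrix Z.
import numpy as np
from scipy.optimize import nnls
from sklearn.base import clone
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

X, y = make_regression(n_samples=200, n_features=10, noise=15.0, random_state=0)
library = [LinearRegression(),
           RandomForestRegressor(n_estimators=100, random_state=0)]

kf = KFold(n_splits=5, shuffle=True, random_state=0)
Z = np.zeros((len(y), len(library)))               # out-of-fold predictions
for train, val in kf.split(X):
    for k, model in enumerate(library):
        Z[val, k] = clone(model).fit(X[train], y[train]).predict(X[val])

cv_risk = ((Z - y[:, None]) ** 2).mean(axis=0)     # squared-error risk per model
weights, _ = nnls(Z, y)                            # non-negative stacking weights
weights /= weights.sum()
print("CV risks:", cv_risk, "weights:", weights)
```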
IV. PROPOSED APPROACH
The flowchart of the proposed approach given in Fig. 2 shows the process of integrated implementation of TPE and SL algorithms to create an ML pipeline for AGB prediction. In the following subsections, the pre-processing of Sentinel-2 data, implementation of TPE and SL algorithms, and model explanation parameters are described in detail.
The extraction of targeted spectral features from Sentinel-2 data for forest AGB prediction requires a comprehensive understanding of the dynamic changes of optical properties with respect to AGB. Vegetation indices are standardized yet retain bias attributed to their pre-defined equations. The TPE algorithm provides flexibility to the spectral features and fits spectral bands to empirical equations based on their dynamics with AGB. The TPE algorithm supports spectral bands in its parameter search space and composes the spectral features in a computationally efficient manner. This makes the TPE algorithm an optimal choice for producing target-oriented spectral features from empirical equations. This process is followed by the deployment of the SL algorithm, which uses K-fold cross-validation to estimate the performance of multiple ML models (base learners) and of the same model under different hyperparameter settings. This creates an optimal weighted average of base learners (i.e., an ensemble model) that improves prediction accuracy, avoids overfitting, and minimizes parametric assumptions. Thus, the SL algorithm is a suitable choice for achieving the objective of stacked ensemble modeling for the prediction of forest AGB.
A. Data Preprocessing
The Sentinel-2 images were acquired in Level-1C (top of the atmosphere reflectance) format and converted to Level-2A (bottom of the atmosphere reflectance) format including atmospheric and terrain correction using the Sen2cor processor [66]. The spectral bands at 20 m spatial resolution were resampled at 10 m using the nearest neighbor sampling method for spatial consistency and performing computations. The preprocessed data were used for extracting plot reflectance and preparing season-wise analysis-ready data frames using "rgeos" and "rgdal" packages of R software. The analysis-ready data frames of each season consisted of plot reflectance values for each spectral band for all sample plots.
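The analysis-ready frames were prepared in R with the "rgeos" and "rgdal" packages; an equivalent extraction in Python with rasterio is sketched below for illustration, with the file names and the plot-coordinate layout assumed.

```python
# Illustrative plot-reflectance extraction; file names are hypothetical.
import rasterio
import pandas as pd

plots = pd.read_csv("plots.csv")                  # columns: plot_id, x, y

with rasterio.open("S2_summer_10bands_10m.tif") as src:
    coords = list(zip(plots["x"], plots["y"]))
    # sample() yields one array of band values per plot location
    samples = [vals for vals in src.sample(coords)]

bands = [f"B{i}" for i in range(1, 11)]           # ten considered bands
refl = pd.DataFrame(samples, columns=bands)
frame = pd.concat([plots["plot_id"], refl], axis=1)   # analysis-ready frame
print(frame.head())
```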
B. Implementation of Algorithms
The two core frameworks of the proposed approach are based on TPE and SL algorithms implemented sequentially to produce a robust ML pipeline for AGB prediction. First, the TPE algorithm generates automatically optimized features from the preprocessed analysis-ready data frames. Later, these optimized features and the response variable are used for training the SL algorithm that automates the process of training a large number of base models and performs stacked ensembling to produce a leaderboard of models. The TPE algorithm was implemented using "Optuna" framework that was developed in [67] and the SL algorithm was implemented using "H2O-3" framework that was developed in [62]. The sequential implementation of the two frameworks is explained in detail in the following paragraphs.
The implementation of the TPE algorithm to extract features from the input spectral bands requires a model, i.e., an empirical equation for which it can generate a set of parameters and select a combination of optimal spectral bands to accurately predict the target AGB; the candidate equations are listed in Table II. The library of 33 empirical equations (indexed as I_n), framed from an exhaustive database of 500+ spectral indices [68], was available from [69]. The TPE randomly selects an empirical equation and generates parameters depending on the number of spectral bands (N_SB) and the coefficients (CF = α, β, γ, ρ, and σ) in the equation. A generalized linear regression model is fitted to select the optimal spectral bands and coefficient values that maximize the coefficient of determination (the objective function). The coefficient of determination quantifies the proportion of the variance in the extracted optimized feature explained w.r.t. the target AGB values and should be maximized to determine the best fit in terms of spectral bands and coefficient values for the empirical equations. The TPE initialization provides pairs of parameters and loss, as stated in Section III-A, that are split into two densities (ℓ(x) and g(x)) as per (1) and (2). The ratio of the two densities should be minimized (thus maximizing the objective function, R², in our case) through an iterative process, and the best empirical equation is determined together with the related optimum set of parameters (spectral bands, coefficient values). However, the random initialization creates a selection bias in the TPE algorithm that favors the empirical equations with fewer parameters, i.e., empirical equations involving fewer spectral bands. This selection bias problem was resolved using a meta-learning strategy that performs the optimization process in groups. In total, the empirical equations were divided into five groups based on the number of spectral bands (N_SB = 2, 3, 4, 5, 6), and 1000 iterations were performed for each group. The best empirical equation in each group was identified, and after three repetitions of this process, the overall best empirical equation was identified. The selected empirical equation, optimal spectral bands, and coefficient values were used to compute the features, which were directed to the subsequent framework.
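A condensed sketch of this grouped TPE search using the Optuna framework is given below; the single normalized-difference equation with a tunable coefficient is a placeholder for the library of 33 indexed equations, and the data are synthetic.

```python
# Grouped TPE search sketch for one N_SB = 2 group; the empirical equation
# shown is a placeholder for the Table II library.
import numpy as np
import optuna
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
bands = rng.uniform(0.01, 0.5, (315, 10))   # (n_plots, n_bands) reflectances
agb = rng.uniform(1.0, 700.0, 315)          # plot AGB (Mg/ha), synthetic here

def objective(trial):
    # Search space for a two-band group: band indices plus one coefficient.
    i = trial.suggest_int("band_1", 0, 9)
    j = trial.suggest_int("band_2", 0, 9)
    alpha = trial.suggest_float("alpha", 0.1, 10.0)
    f = (bands[:, i] - alpha * bands[:, j]) / (bands[:, i] + alpha * bands[:, j] + 1e-9)
    # Fit a generalized linear model and return R^2, to be maximized.
    return LinearRegression().fit(f[:, None], agb).score(f[:, None], agb)

study = optuna.create_study(direction="maximize",
                            sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(objective, n_trials=200)     # 1000 per group in the article
print(study.best_params, study.best_value)
```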
The SL framework receives the TPE-optimized automatic features and the target response variable as its input training frame. As previously mentioned, we used the H2O-3 platform to deploy the SL algorithm, which draws on the library of base models stated in Table III. Additional details on the specifications of the base models and model parameters can be accessed from the H2O.ai documentation (https://docs.h2o.ai/). The SL algorithm exploits the base models and derives an optimal SEM that minimizes the expected risk difference as per (10) and (11). To achieve the defined objective of this study, we used all the available base models in the library (see Table III). All base models were trained until convergence with five-fold cross-validation on a Linux system with 128 GB of memory and an NVIDIA GeForce RTX 3090 GPU. However, the maximum number of models and the maximum runtime can be specified before the training process, depending on the available computational budget and time restrictions.
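A minimal sketch of this step with the H2O-3 AutoML interface is shown below; the file name, column names, and model cap are assumptions made for the example:

```python
import h2o
from h2o.automl import H2OAutoML

h2o.init()

# Hypothetical training frame: TPE-optimized features plus the AGB response.
train = h2o.import_file("tpe_features_agb.csv")
features = [c for c in train.columns if c != "AGB"]

aml = H2OAutoML(
    max_models=50,      # optional cap reflecting the computational budget
    nfolds=5,           # five-fold cross-validation, as in the study
    sort_metric="RMSE",
    seed=42,
)
aml.train(x=features, y="AGB", training_frame=train)

print(aml.leaderboard.head())   # ranked base models and stacked ensembles
sem = aml.leader                # best model (typically a stacked ensemble)
```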
C. Model Explanations
The sequential deployment of the TPE and SL frameworks in the proposed approach yields TPE-optimized features, model rankings, model predictions, and prediction assessment metrics. Model explanations such as model rankings, evaluation statistics, regression scatterplots, and a feature importance chart are provided as results. The performance of the base models and of the SEM was evaluated with several metrics and ranked according to the model agreement score (coefficient of determination). All models were cross-validated using the five-fold cross-validation method. The metrics used to evaluate the models are the coefficient of determination (R²_cv), root mean squared error (RMSE), root mean squared log error (RMSLE), mean absolute error (MAE), mean absolute percentage error (MAPE), and relative absolute error (RAE). R²_cv measures the goodness-of-fit of the regression model, RMSE the standard deviation of the residuals, RMSLE the standard deviation of the residuals on log-transformed values, MAE the mean of the absolute residuals, MAPE the prediction error expressed as a percentage, and RAE the total absolute error relative to that of a naive predictor that always outputs the mean of the observed values.
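For concreteness, these metrics can be computed from the (out-of-fold) predictions with the standard formulas, as in the following sketch (variable names are placeholders):

```python
import numpy as np

def evaluate(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Standard regression metrics used to rank the models."""
    resid = y_true - y_pred
    return {
        "R2":    float(1 - np.sum(resid**2) / np.sum((y_true - y_true.mean())**2)),
        "RMSE":  float(np.sqrt(np.mean(resid**2))),
        # RMSLE assumes non-negative targets and predictions (true for AGB)
        "RMSLE": float(np.sqrt(np.mean((np.log1p(y_pred) - np.log1p(y_true))**2))),
        "MAE":   float(np.mean(np.abs(resid))),
        "MAPE":  float(100 * np.mean(np.abs(resid / y_true))),
        # RAE: total absolute error relative to a predict-the-mean baseline
        "RAE":   float(np.sum(np.abs(resid)) / np.sum(np.abs(y_true - y_true.mean()))),
    }
```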
A. TPE Optimized Features and Feature Importance
The features for the SL framework were extracted based on the hyperparameter-optimization results of the TPE algorithm. The empirical equations, with the respective spectral bands and coefficient values selected for each season by the TPE optimization procedure, are given in Table IV. The selected empirical equations are referenced by the index numbers (I_n) of Table II, and the spectral bands selected for the respective equations are referenced by their central wavelengths (in nm); the corresponding Sentinel-2 spectral band for each central wavelength can be identified from the Copernicus website (https://sentinels.copernicus.eu/). Empirical equations with multiple selected combinations of spectral bands are referenced as "a," "b," and "c." In total, 24 extracted features were selected for training the base models for AGB prediction following the TPE-based optimization procedure: six features from autumn, seven from spring, six from summer, and five from winter. Interestingly, all extracted features included spectral bands from the vegetation red-edge (VRE) and short-wave infrared (SWIR) regions of the Sentinel-2 data. That the TPE algorithm, which conditions its optimization on the target variable, selected these bands suggests an important role of the VRE and SWIR spectral regions for modeling forest AGB. Precisely, the SWIR band at λc = 1610 nm was selected for 21 of the 24 extracted features, and the VRE contribution was distributed over λc = 705, 740, and 865 nm. The season-wise analysis indicated that VRE at λc = 740 nm and SWIR at λc = 1610 nm were selected for all autumn features. For the spring features, SWIR at λc = 1610 nm was selected for five features and λc = 2190 nm for the remaining two. For the summer season, VRE at λc = 705 and 865 nm was selected for five of the six features, and SWIR at λc = 1610 nm for all of them. Thus, the clearly observable pattern in the spectral composition of the extracted features reflects their target-oriented properties.
The stacked bar chart in Fig. 3 shows the computed feature importance for all the considered base models. The features are referenced as "index_season," and the scaled feature importance values of the base models are color coded for analyzing the percentage contributions. The highest average feature importance was observed for "12a_sum" (79%), which was extracted with λc = 705, 490, and 665 nm. This feature achieved a specific feature importance of 74% for DNN, 24% for DRF, and 100% for GBM, XGBoost, and GLM, indicating that "12a_sum" contributed significantly to most of the base models and plays a dominant role in modeling AGB. In total, five features for DNN (26_spr, 12a_sum, 8b_sum, 8a_sum, 12b_sum), two for DRF (28c_aut, 33_sum), five for GBM (12a_sum, 32_aut, 23_aut, 28b_aut, 7_sum), three for XGBoost (12a_sum, 7_sum, 12b_sum), and five for GLM (12a_sum, 7_sum, 33_sum, 23_aut, 17_aut) were characterized by a feature importance score greater than 50%. The feature "12a_sum" achieved the highest feature importance for three of the five base models (GBM, XGBoost, and GLM), while "26_spr" and "28c_aut" achieved the highest feature importance for DNN and DRF, respectively. All these features were associated with the summer, autumn, or spring season, indicating the comparatively low importance of winter features in predicting AGB. Overall, the summer features were the most dominant, accounting for 12 of the 20 features with a feature importance greater than 50%.
B. Model Leaderboard and Predictive Analysis
The H2O-3-based AutoML framework used to implement the SL algorithm trains all the base models (see Table III) and also produces an optimal SEM for predicting AGB. The total computational time for executing the SL algorithm to produce the modeling results was 1215 s.
The model ranking results and computed assessment metrics shown in Table V indicate that the SEM achieved the best overall performance for predicting forest AGB. The SEM achieved the highest agreement (R²_cv = 0.71) and the best prediction precision (RMSE = 74.44 Mg ha⁻¹). The RMSLE = 0.56 was equal for the top three ranked models despite a significant difference in RMSE, indicating the existence of outliers in the data. This explains the slightly higher MAE of the SEM despite its better model agreement compared to DNN and DRF; the same explanation applies to DNN and DRF, which show a significant difference in MAE and MAPE despite identical RMSLE scores. This suggests that the SEM is more robust to outliers and that the meta-learning process enabled better model fitting. The DNN model achieved the second-best performance on the leaderboard, and the overall results suggest that DNN is more prone to outliers than the SEM. The RAE score of the SEM was 0.43 and increased successively up to 0.75 for the other models on the leaderboard, indicating the lowest saturation for the SEM and successively stronger saturation for the other models when predicting AGB. The low RAE scores of the SEM and DNN models show that these models predict large AGB values more accurately than the other models. This is also evident from the regression scatterplots in Fig. 4, which clearly show that the SEM, DNN, and DRF models predicted large AGB values more efficiently than the GBM, XGBoost, and GLM models. Thus, the proposed automated ML pipeline yielded an SEM that outperformed the single base models and efficiently modeled AGB. The results also point to the crucial role of the meta-learning process in eliminating uncertainties associated with the data and producing robust models.
The forest plot-type-based analysis of model performance was carried out by independently computing the RMSE for broadleaf, coniferous, and mixed plots. The results (Table VI) indicate that the SEM, which showed the best overall performance, was sensitive to the type of forest plot. Overall, the lowest RMSE was observed for the mixed-type plots and the highest for the broadleaf plots. This behavior is consistent with previous studies, which obtained more accurate AGB estimates for conifers than for broadleaves using spectral data.
VI. DISCUSSION
In this article, we have proposed an automated ML pipeline for developing an SEM for the prediction of forest AGB using multitemporal Sentinel-2 data. The key elements of the ML pipeline were the TPE and SL algorithms that automated the process of feature extraction from the data and training a library of base models leading to the development of a stacked ensemble for modeling AGB. The automated ML pipeline was proposed for dealing with various issues identified in the literature related to the human-bias in AGB modeling and systematic evaluation of models. In this section, we extensively analyze our results with respect to the contemporary literature and precisely identify the advancements delivered by the study for AGB modeling using satellite remote sensing data.
The choice of features is crucial in any modeling process, and model performance depends strongly on their quality. The use of MS data for AGB modeling has led to the development and testing of various features; in practice, there are countless combinations of spectral bands that could be used to extract a feature from MS data for modeling AGB. Studies in the literature typically use a few standard vegetation indices as features for modeling and mapping forest AGB [70], [71], [72], [73].
A comparative analysis of these studies indicates an unstable response of vegetation indices depending on the spatial resolution, sensor specifications, and available spectral bands of the data. Put simply, a vegetation index identified as effective for a particular study area or for data with certain radiometric specifications may not be as effective for other study areas or radiometric specifications. Thus, human-intelligence-based selection of vegetation indices can be highly inefficient and ambiguous for AGB modeling. To overcome this issue, our study proposed an automated mechanism for extracting such features. The TPE-based optimization procedure extracts features that are highly target-oriented and require little human intervention, overcoming the shortcomings identified in the literature regarding the extraction and selection of effective features from satellite MS data for AGB modeling. Reducing human bias from the process enables the development of more robust and reliable features. Moreover, the procedure adapts the composition of the features to the specifications of the data, such as spatial, spectral, and radiometric resolution. Our study thus demonstrated the success of the proposed automated approach, which led to consistent and accurate modeling results.
The second component of the modeling process, after developing robust features, is the choice of a modeling algorithm, which substantially affects the precision and accuracy of the predictions. In the literature, several studies use different ML models for predicting forest AGB [7], [57], [58] and provide comparative assessments of the models in order to identify the best modeling algorithm for AGB prediction. Apart from the range of ML models that can be used, each model has associated hyperparameters that require tuning. Faulty hyperparameter tuning can adversely affect the performance of an ML model and restrict its generalization capability. Moreover, studies that use the same ML models but with different combinations of hyperparameters or architectures sometimes report contrasting performance on an identical task. For example, the study in [58] identified XGBoost as delivering the best performance compared to the other competing models; however, the XGBoost model has multiple associated hyperparameters, whose search ranges were defined manually before finding the best combination via grid search. This introduces a human bias into the process and reduces the chances of reproducing the results on other data or scenarios. To deal with this problem, our study proposed the use of an AutoML approach that effectively automates the iterative tasks associated with model development. Precisely, it automates hyperparameter selection, and a range of ML models is trained in the same pipeline, allowing an effective comparison and reproducible results.
Studies have identified that a combination of models (a stacked ensemble) can produce more efficient results than a single model [74], [75]. Only a few studies have focused on using SEMs for remote-sensing-based forest applications [6], [76], and these used a manual or semi-automatic approach for identifying an optimal combination of models for AGB prediction. An optimal SEM requires a library of diverse models, a systematic algorithm to evaluate model combinations with optimized hyperparameters, and a way to generate meaningful model explanations. The solution to this complex problem was provided by the proposed use of the SL algorithm implemented in the H2O-3 framework, which produced a stacked ensemble of base models. The SEM developed in this study delivered the best performance compared to the individual base models. Moreover, it limited the required number of user-defined parameters, reducing human bias in the ensemble selection. Thus, a reliable and automated pipeline for robust stacked ensembling and model training was established in this study.
VII. CONCLUSION AND FUTURE WORK
This article proposed an end-to-end ML pipeline for modeling forest AGB using MT-MS satellite remote sensing data. The results demonstrated that reducing human bias from the modeling process and deploying a comprehensive model evaluation strategy within a single framework can provide better model explanations. The derived model explanations can be instrumental in framing effective schemes for accurately mapping AGB over large areas with diverse forest characteristics. Moreover, instead of using predefined features for modeling, an automated optimization procedure can produce more effective features by giving greater weight to the spectral regions that hold greater importance in explaining the target AGB. Future developments of this work may aim at improving the robustness of the proposed pipeline by adding optimization elements and by replacing or improving the deployed meta-learning strategies. Advances in the latest AutoML systems could be incorporated to derive additional model explanations for framing better modeling schemes for large-scale AGB mapping. Such advanced AutoML systems could be deployed at a global scale to develop models that can handle a greater diversity of forest types, species, and geographical conditions.
| 8,596 | 2023-01-01T00:00:00.000 | [
"Environmental Science",
"Computer Science"
] |
Global classical solutions for mass-conserving, (super)-quadratic reaction-diffusion systems in three and higher space dimensions
This paper considers quadratic and super-quadratic reaction-diffusion systems for reversible chemistry, for which all species satisfy uniform-in-time $L^1$ a-priori estimates, for instance, as a consequence of suitable mass conservation laws. A new result on the global existence of classical solutions is proved in three and higher space dimensions by combining regularity and interpolation arguments in Bochner spaces, a bootstrap scheme and a weak comparison argument. Moreover, provided that the considered system allows for entropy entropy-dissipation estimates proving exponential convergence to equilibrium, we are also able to prove that solutions are bounded uniformly-in-time.
Introduction
In this article, we first consider the following quadratic reaction-diffusion system:
$$\partial_t u_i - d_i\,\Delta u_i = \varepsilon_i\,(u_1 u_2 - u_3 u_4) \quad \text{on } \Omega_T, \qquad \varepsilon := (-1,-1,+1,+1), \tag{1}$$
$$u_i(0,x) = u_{i0}(x) \quad \text{on } \Omega, \qquad n(x)\cdot\nabla_x u_i(t,x) = 0 \quad \text{on } \partial\Omega_T,$$
where $u_i := u_i(t,x) \ge 0$ for $i = 1,2,3,4$ denotes the non-negative concentration at time $t$ and position $x$ of four species $A_i$, subject to non-negative initial concentrations $u_{i0}(x) \ge 0$. Moreover, $d_i > 0$ are the corresponding positive and constant diffusion coefficients. We suppose $x \in \Omega$, where $\Omega$ is a bounded domain of $\mathbb{R}^N$ ($N \ge 1$) with sufficiently smooth boundary $\partial\Omega \in C^{2+\alpha}$, $\alpha > 0$. Finally, $n(x)$ is the outer unit normal vector at a point $x$ of $\partial\Omega$, and we denote $\Omega_T = [0,T] \times \Omega$ and $\partial\Omega_T = [0,T] \times \partial\Omega$ for any $T > 0$.
The above system (1) constitutes a mass-action-law model of the evolution of a mixture of four diffusive species $A_i$, $i = 1,2,3,4$, undergoing the single reversible reaction $A_1 + A_2 \rightleftharpoons A_3 + A_4$ until the unique positive detailed-balance equilibrium is reached in the large-time behaviour, see Section 2. For a derivation of system (1) or related mass-action-law reaction-diffusion systems from kinetic models or fast-reaction limits, we refer to [2,3,4,8].
Note that for the sake of readability, we have set the forward and backward reaction rates in (1) equal to one (the general case can be treated without any additional difficulty). Also, without loss of generality, we shall assume that $\Omega$ is normalised (i.e. $|\Omega| = 1$), which can always be achieved by rescaling the spatial variable.
The quadratic reaction-diffusion system (1) and in particular the question of global-in-time solutions has lately received a lot of attention, see e.g. [5,12,13].
In [12], for instance, a duality argument in terms of entropy density variables was used to prove the existence of global, non-negative L 2 -weak solutions in any space dimension, see also Section 2.
While the existence of global classical solutions in 1D follows already from Amann, see e.g. [1], the existence of global classical solutions in 2D was recently shown by Goudon and Vasseur in [13] by using De Giorgi's method. For higher space dimensions the existence of classical solutions constitutes in general an open problem, for which the Hausdorff dimension of possible singularities was characterised in [13].
The (technical) criticality of quadratic nonlinearities was underlined by Caputo and Vasseur in [6], where smooth solutions were shown to exist in any space dimension for systems with nonlinearities of sub-quadratic power law type, see also e.g. [1].
A further related result by Hollis and Morgan [14] showed that if blow-up (here, a concentration phenomenon, since the total mass is conserved) occurs in system (1) in one concentration $u_i(t,x)$ at some time $t$ and position $x$, then at least one other concentration $u_j$, $j \neq i$, also has to blow up (i.e. concentrate) at the same time and position. The proof of this result is based on a duality argument.
Recently in [5], an improved duality method made it possible to prove the existence of global classical solutions of system (1) in 2D in a significantly shorter and less technical way than De Giorgi's method.
In this work, we will prove, in higher space dimensions $N \ge 3$ and under a dimension-dependent closeness condition on the diffusion coefficients $d_i$, that the global $L^2$-weak solutions to (1) (see [12]) are in fact global classical solutions. More precisely, denoting by $a$ and $b$ the minimum and maximum of the diffusion coefficients, we shall write
$$\delta := b - a = \max_{i,j} |d_i - d_j| \tag{2}$$
for the maximal distance between the diffusion rates appearing in (1), and prove the following:

Theorem 1.1 (Global classical solutions and uniform-in-time bounds). Consider $N \ge 3$ and assume that $\delta$, $a$ and $b$ as given in (2) satisfy the smallness condition (3), where $C_{\frac{a+b}{2},2\Gamma}$ is defined in Proposition 2.1. Assume initial data $u_{i0} \in L^{3N/4}(\Omega)$ (which, with $3N/4 > 2\Gamma$, assumption (3), estimate (17) of Proposition 2.1 and the theory of global weak solutions in [12], also implies the existence of global weak solutions $u_i \in L^{2\Gamma}(\Omega_T)$ to (1) for any $T > 0$).
Then, any global weak solution $u_i \in L^{2\Gamma}(\Omega_T)$ for all $T > 0$ is a global classical solution to (1). Moreover, these solutions are bounded uniformly-in-time in the sense that for all $\tau > 0$ there exists a positive constant $C = C(\tau)$ such that
$$\sup_{t \ge \tau} \|u_i(t,\cdot)\|_{L^\infty(\Omega)} \le C(\tau), \qquad i = 1,2,3,4. \tag{4}$$

Remark 1.2. The actual novelty of Theorem 1.1 concerns the range $\Gamma \in \left(\frac{N+2}{2} - \frac{2N}{N+2}, \frac{N+2}{2}\right)$, because when $\Gamma \ge \frac{N+2}{2}$, the quadratic right-hand-side terms of (1) are bounded in $L^{\frac{N+2}{2}}(\Omega_T)$ and standard parabolic regularity estimates (which reflect the convolution with the heat kernel being in $L^{\frac{N+2}{N}-\varepsilon}$ for all $\varepsilon > 0$) imply $u_i \in L^\infty(\Omega_T)$ and thus classical solutions; see e.g. Proposition 2.3 from [5], where the more stringent $\delta$-smallness condition (21) was assumed, which directly yields $u_i \in L^{N+2}(\Omega_T)$.
In this paper, we are able to show classical solutions under the weaker δ-smallness condition (3) by combining regularity and bootstrap arguments in Bochner spaces with interpolation of Bochner spaces (see Lemma 4.1) and a further bootstrap based on a weak comparison argument (Lemma 3.6).
For example, for $N = 3$, our approach lowers the initially required regularity $u_i \in L^{2\Gamma}(\Omega_T)$ from the seemingly natural assumption $\Gamma = \frac{5}{2}$ (as assumed in Proposition 2.3 of [5]) down to $\Gamma > \frac{13}{10}$. Since global $L^2$-weak solutions to (1) exist in all space dimensions, Theorem 1.1 leaves open the gap $1 < \Gamma \le \frac{13}{10}$ in the case $N = 3$, for which it remains an open problem whether global weak solutions $u_i \in L^{2\Gamma}(\Omega_T)$ are also global classical solutions; this corresponds to systems (1) whose diffusion coefficients have a correspondingly larger spread $\delta$.

Remark 1.3. We remark that the assumption on the initial data $u_{i0} \in L^{3N/4}(\Omega)$ is not optimal, but is chosen for the sake of readability of the proof of Theorem 1.1. In fact, the proof of Theorem 1.1 works equally well for $u_{i0} \in L^{\mu_0}(\Omega)$ with $\mu_0 > \max\{\nu, \frac{(\nu-1)N}{2}\}$.

Remark 1.4. We further remark that the uniform-in-time $L^\infty$-bound (4) follows from an interpolation argument between the exponential convergence to equilibrium of system (1) (see Proposition 2.2 below) and the fact that the proof of Theorem 1.1 only involves regularity constants which grow at most polynomially in time.
Remark 1.5. There is an alternative argument to derive the initially required regularity $u_i \in L^{2\Gamma}(\Omega_T)$, provided a suitable $\delta$-smallness condition on the diffusion coefficients $d_i$ of system (1) holds. System (1) implies, for instance, that
$$\partial_t (u_1 + u_3) - \Delta\big(d_1 u_1 + d_3 u_3\big) = 0.$$
Then, by a duality estimate (see e.g. the proof of [18, Lemma 3.4]), it follows directly for any $p \in (1,+\infty)$ and some $C = C(p,T)$ that $\|u_1 + u_3\|_{L^p(\Omega_T)} \le C\,\|u_{10} + u_{30}\|_{L^p(\Omega)}$. Thus, with $\|u_3\|_{L^p(\Omega_T)} \le \|u_1 + u_3\|_{L^p(\Omega_T)}$, the required estimate for $\|u_3\|_{L^p(\Omega_T)}$ holds provided that $d_1$ and $d_3$ satisfy a corresponding closeness condition.

We emphasise that our proof of Theorem 1.1 relies only on the closeness assumption (3), on uniform-in-time $L^1$ a-priori estimates (which typically follow from mass conservation laws) and on the quadratic order of the nonlinearity of the reaction terms on the right-hand side of (1). Thus, our results can be applied in the same way to related mass-conserving reaction-diffusion systems with quadratic nonlinearities. Moreover, one can also easily generalise the proof to systems with super-quadratic reaction terms provided the more stringent closeness condition (7) holds, see the following corollary.

Corollary 1.6. Let $N \ge 3$ and $\nu \ge 2$, and consider the following nonlinear reaction-diffusion systems subject to homogeneous Neumann boundary conditions:
$$\partial_t u_i - d_i\,\Delta u_i = f_i(u) \quad \text{on } \Omega_T, \qquad i = 1,\dots,k, \tag{5}$$
where $u = (u_1,\dots,u_k) \ge 0$ denotes $k \in \mathbb{N}^+$ non-negative concentrations and, for all $i \in \{1,\dots,k\}$, the nonlinearities $f_i(u)$ are positivity-preserving polynomials of order $\nu$, i.e. $f_i(u) \le C \sum_{i=1}^k u_i^\nu$ for a constant $C > 0$, and satisfy conservation laws of the following form: for all $i \in \{1,\dots,k\}$, there exists a constant non-negative vector $0 \le (a^i_j)_{j\in\{1,\dots,k\}} \in \mathbb{R}^k$ with $a^i_i > 0$ such that $\sum_{j=1}^k a^i_j f_j(u) \le 0$ and thus
$$\Big\|\sum_{j=1}^k a^i_j\, u_j(t,\cdot)\Big\|_{L^1(\Omega)} \le M \quad \text{for all } t \ge 0. \tag{6}$$
Assume that $\delta$, $a$ and $b$ as given in (2) satisfy the smallness condition (7), where $C_{\frac{a+b}{2},2\Gamma}$ is defined in Proposition 2.1. Assume initial data $u_{i0} \in L^{\frac{(2\nu-1)N}{4}}(\Omega)$ (which, with $\frac{(2\nu-1)N}{4} > 2\Gamma$, assumption (7), estimate (17) of Proposition 2.1 and a theory of global weak solutions analogous to [12], also implies the existence of global weak solutions $u_i \in L^{2\Gamma}(\Omega_T)$ to (5) for any $T > 0$).
Then, any global weak solution $u_i \in L^{2\Gamma}(\Omega_T)$ for all $T > 0$ is a global classical solution to (5).
Remark 1.7. We remark that, in contrast to Theorem 1.1, Corollary 1.6 does not state a uniform-in-time bound analogous to (4). In fact, general systems (5) will not feature an entropy functional like that of system (1). Without the entropy method proving exponential convergence to equilibrium, our method only shows that global classical solutions to (5) grow (at most) polynomially in time. More precisely, it is due to the uniform-in-time $L^1$-bounds (6) that we are able to show global solutions with polynomially growing bounds for quadratic and super-quadratic systems.
For the existence of classical solutions to systems with sub-quadratic nonlinearities (i.e. ν < 2), we refer to Caputo and Vasseur [6] and the references therein.
Remark 1.8. Again, as in Remark 1.2, the novel range is $\Gamma \in \left(\frac{(\nu-1)N^2+4}{2(N+2)}, \frac{N+2}{2}\right)$, because when $\Gamma \ge \frac{N+2}{2}$, $L^\infty$-bounds and the existence of classical solutions follow from standard arguments. We remark that, depending on the polynomial order $\nu$, Corollary 1.6 only constitutes an improvement under the condition $(\nu-2)N < 4$ on the dimension $N$ (which guarantees that the above range of $\Gamma$ is non-empty), and this fails for large $\nu$. Thus, our approach does not lead to improvements in the case of strongly super-quadratic nonlinearities. However, for $N = 3$, we are able to lower the threshold sufficient to show the existence of classical solutions for systems with third-order and slightly higher nonlinearities.
Notation: Besides standard notations, we shall denote by $L^{p,q}(\Omega_T)$ the Bochner space $L^q\big([0,T];L^p(\Omega)\big)$ with space-time norm
$$\|u\|_{L^{p,q}(\Omega_T)} := \Big(\int_0^T \|u(t,\cdot)\|_{L^p(\Omega)}^q\,dt\Big)^{1/q},$$
with the usual modification for $q = \infty$; note that the first index refers to the integrability in space and the second to the integrability in time.

Idea of the proofs: For the convenience of the reader, we present a short outline of the proofs of Theorem 1.1 and Corollary 1.6. First, we use an improved duality estimate, as proven in [5], in order to derive the starting a-priori estimate $u_i \in L^{2\Gamma}(\Omega_T)$ (respectively $u_i \in L^{\nu\Gamma}(\Omega_T)$). Then, we apply Sobolev's embedding theorem, interpolation with the uniform-in-time $L^1$-bound and other classical inequalities in order to derive $L^p$-estimates for appropriately chosen $p$. Next, we use these estimates as the starting point of a bootstrap scheme, which improves the regularity of the solutions to $u_i \in L^{p_n,\infty}$, i.e. $u_i$ is $L^\infty$ in time with values in $L^{p_n}$ in space. Then, after bootstrapping $p_n$ sufficiently high, the regularity $u_i \in L^{p_n,\infty}$ allows us to use the weak comparison Lemma 3.6, which yields global-in-time $L^\infty$-bounds and, thus, classical solutions to system (1). Finally, uniform-in-time bounds for solutions of system (1) follow from an interpolation argument with the exponential convergence to equilibrium.
Outline: In Section 2, we first state basic properties of the system (1) and recall previous existence results and methods. Then, in Section 3, we first prove several lemmata required for the proof of Theorem 1.1, then prove Theorem 1.1, and finally prove Corollary 1.6. In the Appendix (Section 4), we provide a proof of Lemma 4.1 for the convenience of the reader.
Preliminaries
Non-negativity, mass conservation laws and equilibrium. The chemical reaction terms on the right-hand side of (1) satisfy the quasi-positivity property and thus ensure that solutions to (1) propagate the non-negativity of the initial data $u_{i0} \ge 0$; i.e., for all $T > 0$ we have $u_i(t,x) \ge 0$ on $\Omega_T$ for all $i = 1,2,3,4$.
Moreover, system (1) subject to homogeneous Neumann boundary conditions satisfies the following mass conservation laws:
$$\int_\Omega \big(u_j(t,x) + u_k(t,x)\big)\,dx = \int_\Omega \big(u_{j0}(x) + u_{k0}(x)\big)\,dx =: M_{jk}, \qquad t \ge 0, \tag{8}$$
where $j \in \{1,2\}$ and $k \in \{3,4\}$. Notice that only three of these four conservation laws are linearly independent.
The non-negativity of the solutions together with the mass conservation laws (8) provides natural uniform-in-time a-priori $L^1$-estimates for the concentrations, i.e.
$$\|u_i(t,\cdot)\|_{L^1(\Omega)} \le M \qquad \text{for all } t \ge 0,\ i = 1,2,3,4,$$
where $M$ denotes the total initial mass $M = M_{13} + M_{24} = M_{14} + M_{23}$.
System (1) features a unique positive detailed-balance equilibrium. Due to the homogeneous Neumann boundary conditions, this equilibrium $(u_{i,\infty})_{i=1,\dots,4}$ consists of the unique positive constants balancing the reversible reaction, i.e. $u_{1,\infty}\,u_{2,\infty} = u_{3,\infty}\,u_{4,\infty}$, and satisfying the conservation laws $u_{j,\infty} + u_{k,\infty} = M_{jk}$ for $j \in \{1,2\}$ and $k \in \{3,4\}$; see (10).

Duality estimates, entropy variables and global weak solutions. The system (1) can also be rewritten in terms of the entropy density variables $z_i := u_i \log(u_i) - u_i$ (compare with the entropy functional (18) below). Introducing the sum $z := \sum_{i=1}^4 z_i$, it holds (with $a$ and $b$ defined as in (2)) that $z$ satisfies a parabolic problem of the form (11). Then, by a duality argument (see e.g. [12,14,19] and the references therein), the parabolic problem (11) satisfies, for all $T > 0$ and all space dimensions $N \ge 1$, the a-priori estimate (12), where $C$ is a constant independent of $T$; see [5,12]. Thus, given $(u_{i0})_{i=1,\dots,4} \in L^2(\log L)^2(\Omega)$, we have $(u_i)_{i=1,\dots,4} \in L^2(\log L)^2(\Omega_T)$, and the quadratic nonlinearities on the right-hand side of (1) are uniformly integrable, which allows one to prove the existence of global $L^2$-weak solutions in all space dimensions $N \ge 1$, see [12]. Moreover, in 2D, and in higher space dimensions under a sufficiently strong $\delta$-smallness condition, the following duality lemma allows one to show global classical solutions.

Proposition 2.1 (Improved duality estimate, [5]). Let $\Omega$ be a bounded domain of $\mathbb{R}^N$ with smooth (e.g. $C^{2+\alpha}$, $\alpha > 0$) boundary $\partial\Omega$ and let $T > 0$. We consider a coefficient function $M = M(t,x)$ satisfying $0 < a \le M(t,x) \le b$ on $\Omega_T$. Then, any (sufficiently regular) function $u$ solving the parabolic problem $\partial_t u - \Delta(Mu) = 0$ on $\Omega_T$ with homogeneous Neumann boundary conditions and initial data $u_0 \in L^p(\Omega)$ satisfies (with $p' < 2$ denoting the Hölder conjugate of $p$), for all $T > 0$,
$$\|u\|_{L^p(\Omega_T)} \le C_T\,\|u_0\|_{L^p(\Omega)}. \tag{17}$$
Here, the constant $C_T$ depends polynomially on $T$, and the constant $C_{m,q} > 0$ in (14) is defined for $m > 0$ as the best constant in the maximal regularity estimate for the heat equation $\partial_t v - m\,\Delta v = f$ with homogeneous Neumann boundary conditions.

Specifically for system (1), we can rewrite the system in terms of $z = u_1 + u_2 + u_3 + u_4$ (it is not even necessary to consider entropy density variables), which satisfies $\partial_t z - \Delta(Mz) = 0$ with $M := \big(\sum_{i=1}^4 d_i u_i\big)/\big(\sum_{i=1}^4 u_i\big) \in [a,b]$, where $a$ and $b$ are defined as in (2), and obtain for all $T > 0$ and all $i = 1,2,3,4$ the a-priori estimate $u_i \in L^{2\Gamma}(\Omega_T)$.

Entropy functional, entropy method and exponential convergence to equilibrium. The detailed-balance structure of system (1) also ensures a non-negative entropy functional $E((u_i)_{i=1,\dots,4})$ and an associated entropy dissipation functional $D((u_i)_{i=1,\dots,4})$. It is easy to verify that the entropy dissipation law
$$\frac{d}{dt}\,E\big((u_i)(t)\big) = -D\big((u_i)(t)\big) \le 0$$
holds for all $t \ge 0$ (for sufficiently regular solutions $(u_i)_{i=1,\dots,4}$ of (1)). In [11], exponential convergence in $L^1$ towards the unique constant equilibrium (10) (with explicitly computable rates) was shown for global $L^2$-weak solutions in all space dimensions $N$. The proof of this statement was based on the so-called entropy method, where a quantitative entropy entropy-dissipation estimate of the form $D((u_i)) \ge C\,\big(E((u_i)) - E((u_{i,\infty}))\big)$, with an explicitly computable constant $C$, was established; this estimate uses only natural a-priori bounds of the system and thus significantly improved the results of [10].
Proposition 2.2 (Exponential convergence to equilibrium, [11]). Let $\Omega$ be a bounded domain with sufficiently smooth boundary such that Poincaré's and Sobolev's inequalities, and thus the logarithmic Sobolev inequality, hold, and let the initial masses $M_{jk} > 0$ be given by the conservation laws (8).
Then, any global solution $(u_i)_{i=1,\dots,4}$ of (1) which satisfies the entropy dissipation law above (this is true for any weak or classical solution as shown to exist in [5,12]) decays exponentially towards the positive equilibrium state $(u_{i,\infty})_{i=1,\dots,4} > 0$ defined by (10):
$$\sum_{i=1}^4 \|u_i(t,\cdot) - u_{i,\infty}\|_{L^1(\Omega)}^2 \le C_1\, e^{-C_2 t} \qquad \text{for all } t \ge 0,$$
where the constants $C_1$ and $C_2$ can be computed explicitly.
The following Proposition 2.3 from [5] states the consequences of Propositions 2.1 and 2.2 specific to system (1).
Existence of Global Classical Solutions
In this section, we first prove the existence of global classical solutions of system (1) under the $\delta$-smallness assumption (3), where $\delta$ (as defined in (2)) measures the maximal distance between the diffusion coefficients of system (1) on a domain $\Omega \subset \mathbb{R}^N$ for a given space dimension $N \ge 3$ (recall that global classical solutions of system (1) are known for $N = 1, 2$). As a consequence of the $\delta$-smallness assumption (3), estimate (17) of Proposition 2.1 provides a $u_i \in L^{2\Gamma}(\Omega_T)$ estimate for the concentrations of (1).
In particular, we are interested in the range $\Gamma \in \left(\frac{N+2}{2} - \frac{2N}{N+2}, \frac{N+2}{2}\right)$, since for $\Gamma \ge \frac{N+2}{2}$ the considered quadratic nonlinearities $|f| \le |u_1 u_2 - u_3 u_4| \in L^\Gamma(\Omega_T)$ allow a standard parabolic regularity bootstrap argument for the heat equation with Neumann boundary conditions, i.e.
$$\partial_t u - d\,\Delta u = f \ \text{on } \Omega_T, \qquad n \cdot \nabla_x u = 0 \ \text{on } \partial\Omega_T, \tag{24}$$
which implies for a right-hand side $f \in L^{\frac{N+2}{2}}(\Omega_T)$ that the solution satisfies an $L^\infty$ estimate with a constant $C_T$ that grows at most polynomially w.r.t. $T$ (see [5] for the polynomial dependence on $T$). In particular, the range $\Gamma \ge \frac{N+2}{2}$ was considered in Proposition 2.3, where $\delta$ was assumed to satisfy the more stringent $\delta$-smallness assumption (21), which yields $f \in L^{\frac{N+2}{2}}(\Omega_T)$.
Thus, we consider here the $\delta$-smallness assumption (3), and estimate (17) of Proposition 2.1 yields an a-priori estimate of the form $\|u_i\|_{L^{2\Gamma}(\Omega_T)} \le C_T$, where $C_T$ is a constant depending polynomially on the time $T$; note that $\frac{N+2}{2} - \frac{2N}{N+2} = \frac{N^2+4}{2(N+2)} \ge \frac{13}{10}$ for $N \ge 3$ (with equality at $N = 3$, since $\frac{5}{2} - \frac{6}{5} = \frac{13}{10}$).
As a first step in proving Theorem 1.1, we are faced with the fact that duality estimates like (25) address spaces $L^{2\Gamma,2\Gamma}(\Omega_T)$, which feature equal time- and space-integrability. However, as we shall see in the following, bootstrapping the problem (24) naturally leads to Bochner spaces with higher time- than space-integrability.
In this section, we present a bootstrap which allows us to conclude from $u_i \in L^{2\Gamma,2\Gamma}$ that $u_i \in L^{3N/4,\infty}$. Furthermore, we shall take care to ensure that all involved constants grow at most polynomially in $T$.

Lemma 3.1. Consider $N \ge 3$ and let $u \ge 0$ solve the heat equation (24) with right-hand side $f$. Assume the uniform-in-time bound
$$\|u(t,\cdot)\|_{L^1(\Omega)} \le M \quad \text{for all } t \ge 0 \tag{26}$$
(for instance as a consequence of a suitable mass conservation law) and consider initial data $u_0 \in L^p(\Omega)$ for a $p > 1$ to be chosen. Then (by using Sobolev's embedding theorem, the uniform-in-time bound (26), as well as Hölder's and Young's inequalities), $u$ satisfies an $L^p$ estimate of the form (27) for all $T > 0$, where $q = q(p,N) := \frac{pN}{N-2} > p$ for $N \ge 3$ and the constants $C'$ and $\tilde C$ only depend on $p$, $M$, $d$, $N$ and $C_s$, the constant from Sobolev's embedding theorem. Moreover, $\gamma, \gamma'$ and $\mu, \mu'$ are pairs of Hölder conjugates to be chosen.
Furthermore, by using once more the uniform-in-time bound (26), and provided that $1 \le \mu'(p-1) \le q$ holds, $u$ also satisfies the refined estimate (28).

Proof. By testing (24) with $p|u|^{p-1}\operatorname{sgn}(u)$ (more precisely, by testing with a smoothed version of the modulus $|u|$ and its derivative $\operatorname{sgn}(u)$, and then letting the smoothing tend to zero), we obtain by integration by parts, and with the constant $C_0(p,d) := \frac{4(p-1)\,d}{p}$,
$$\frac{d}{dt}\|u\|_{L^p(\Omega)}^p + C_0(p,d)\,\big\|\nabla |u|^{p/2}\big\|_{L^2(\Omega)}^2 \le p\int_\Omega |f|\,|u|^{p-1}\,dx.$$
Sobolev's embedding theorem
After adding $C_0 \int_\Omega |u|^p\,dx$ to both sides, we apply Sobolev's embedding theorem for $N \ge 3$, i.e.
$$C_0\Big(\big\|\nabla |u|^{p/2}\big\|_{L^2(\Omega)}^2 + \big\||u|^{p/2}\big\|_{L^2(\Omega)}^2\Big) = C_0\,\big\||u|^{p/2}\big\|_{H^1(\Omega)}^2 \ge \frac{C_0}{C_s}\,\|u\|_{L^{q}(\Omega)}^{p},$$
with Sobolev constant $C_s$ and $q = q(p) := \frac{pN}{N-2} > p$ for all $N \ge 3$.
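As a quick check of the exponent bookkeeping (a routine computation added here for convenience):

```latex
% Why q(p) = pN/(N-2): applying the Sobolev exponent 2^* = 2N/(N-2)
% to the function |u|^{p/2} gives
\[
\big\| |u|^{p/2} \big\|_{L^{2^*}(\Omega)}^{2}
= \Big( \int_\Omega |u|^{\frac{p}{2}\cdot\frac{2N}{N-2}}\,dx \Big)^{\frac{N-2}{N}}
= \Big( \int_\Omega |u|^{\frac{pN}{N-2}}\,dx \Big)^{\frac{N-2}{N}}
= \|u\|_{L^{q}(\Omega)}^{p},
\qquad q = \frac{pN}{N-2},
\]
% where the last equality holds because p/q = (N-2)/N, so that
% \|u\|_{L^q}^p = ( \int_\Omega |u|^q dx )^{p/q} matches the middle expression.
```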
Therefore, we get:
$$\frac{d}{dt}\|u\|_{L^p(\Omega)}^p + \frac{C_0}{C_s}\,\|u\|_{L^q(\Omega)}^p \le C_0\,\|u\|_{L^p(\Omega)}^p + p\int_\Omega |f|\,|u|^{p-1}\,dx. \tag{29}$$
Interpolation and the mass conservation property
Here we remark that integrating (29) by means of a Gronwall-type argument would lead to global, yet exponentially growing, estimates of $\|u\|_{L^p}^p$. However, such exponentially growing estimates can be avoided (and should be avoided in order to retain the possibility of interpolating a-priori estimates with the exponential convergence to equilibrium) by using the mass conservation laws (8), which provide a uniform-in-time $L^1(\Omega)$ bound of the form (26). More precisely, we interpolate the $L^p$ term $\|u\|_{L^p}^p$ on the right-hand side of (29) between $L^1$ and $L^q$, i.e. $\frac{1}{p} = \frac{\theta}{1} + \frac{1-\theta}{q}$ such that $\theta = \frac{q-p}{q-1}\,\frac{1}{p} \in (0,1)$, since $1 < p < q$. Thus, due to the uniform-in-time bound $\|u(t,\cdot)\|_{L^1} \le M$ for all $t \ge 0$, we obtain, with an exponent $s := (1-\theta)p < p$,
$$\frac{d}{dt}\|u\|_{L^p}^p + \frac{C_0}{C_s}\,\|u\|_{L^q}^p \le p\int_\Omega |f|\,|u|^{p-1}\,dx + C_0\,M^{\theta p}\,\|u\|_{L^q}^{s}. \tag{30}$$
Now, since $p > s$, the last term on the right-hand side of (30) can be controlled by the term $\|u\|_{L^q}^p$ on the left-hand side of (30), as in the following step.
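For completeness, the interpolation exponent can be verified by a short computation:

```latex
% Solving 1/p = \theta + (1-\theta)/q for \theta:
\[
\frac{1}{p} - \frac{1}{q} = \theta\Big(1 - \frac{1}{q}\Big)
\quad\Longrightarrow\quad
\theta = \frac{(q-p)/(pq)}{(q-1)/q} = \frac{q-p}{q-1}\cdot\frac{1}{p} \in (0,1),
\]
% so that \theta p = (q-p)/(q-1), and hence
% s = (1-\theta)p = p - (q-p)/(q-1) < p, confirming p > s.
```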
Young's and Hölder's inequalities, time integration and a further Hölder inequality
Next, we apply Young's inequality to the last term on the right-hand side of (30) to derive, for a $\delta > 0$ to be chosen sufficiently small,
$$\frac{d}{dt}\|u\|_{L^p}^p + \Big(\frac{C_0}{C_s} - \delta\Big)\,\|u\|_{L^q}^p \le p\int_\Omega |f|\,|u|^{p-1}\,dx + C(\delta, M, p). \tag{31}$$
In the following, we estimate the first term on the right-hand side with Hölder's inequality. In fact, this shall first be done for a general pair of Hölder conjugates $1 = \frac{1}{\mu} + \frac{1}{\mu'}$, i.e.
$$\int_\Omega |f|\,|u|^{p-1}\,dx \le \|f\|_{L^\mu(\Omega)}\,\|u\|_{L^{\mu'(p-1)}(\Omega)}^{p-1},$$
where $\mu$ can be chosen in order to match the necessary assumptions.
Remark 3.2. We remark at this point that we will mostly choose $\mu$ such that $\mu'(p-1) = q$, i.e. we shall choose $\mu$ according to the maximal space integrability which can be controlled by the left-hand-side term $\|u\|_{L^q}^p$. Next, we integrate over $(0,T)$ and apply Hölder's inequality again, now in time, which proves the estimate (27).
Further interpolation with the uniform-in-time L 1 -bound
We first recall the uniform-in-time $L^1$-bound (26). In order to further exploit the mass conservation property in cases such that $1 \le \mu'(p-1) \le q$, we interpolate the $L^{\mu'(p-1)}$-norm between $L^1$ and $L^q$ with a parameter $\theta \in [0,1]$ that satisfies $\frac{1}{\mu'(p-1)} = \frac{\theta}{1} + \frac{1-\theta}{q}$.
The estimate (27) still leaves the choice of $\mu$ and $\gamma$ open in our goal of deriving a-priori estimates for $u$. At this point, there are two basic options. First, we could aim to control the term $\|u\|_{L^{\mu'(p-1),\gamma'(p-1)}}^{p-1}$ entirely by the duality estimate (25), which we have in any case already used to ensure the integrability $f \in L^{\Gamma,\Gamma}$ and $u_i \in L^{2\Gamma,2\Gamma}$. Thus, in this first case, we choose $\mu$ and $\gamma$ such that $\mu'(p-1) = \gamma'(p-1) = 2\Gamma$ (32). If we consider $p = 2$, then (32) requires the $\delta$-closeness condition in Proposition 2.1 to be small enough that the duality estimate ensures $u_i \in L^{3,3}$ and $f \in L^{3/2,3/2}$. In the case $N = 3$, under such a $\delta$-closeness assumption, (32) then implies directly $u_i \in L^{2,\infty}$, and we can apply Lemma 3.6 in order to establish global classical solutions in 3D, see also [16]. In order to show classical solutions in higher space dimensions $N > 3$, we can proceed similarly by considering (32) for suitable exponents $p > 2$. However, the idea of directly controlling the term $\|u\|_{L^{\mu'(p-1),\gamma'(p-1)}}^{p-1}$ via the duality estimate (25) does not lead to results as general as those stated in Theorem 1.1. This will become clear from the following considerations.
From now onwards, we are interested in the second case, where the term $\|u\|_{L^{\mu'(p-1),\gamma'(p-1)}}^{p-1}$ cannot be controlled by the duality estimate (25) but has to be controlled by the term $\|u\|_{L^{q,p}}^p$ on the left-hand side of (27). In order to constitute an improvement over the argument described in the previous paragraph, we are interested in cases where $\mu'(p-1) > 2\Gamma$ holds.
Remark 3.3. In fact, we will see in the following that our method naturally considers $\mu \le \gamma$, i.e. that the time-integrability is larger than (or at least equal to) the space-integrability. This leads to two possible subcases, $\gamma'(p-1) \le 2\Gamma < \mu'(p-1)$ and $2\Gamma < \gamma'(p-1) \le \mu'(p-1)$, where the first subcase would in principle still allow one to partially use the duality estimate $\|u\|_{L^{2\Gamma,2\Gamma}} \le C_T$ in a bootstrap argument. However, by anticipating some details of the calculations below — in particular the starting value of the bootstrap exponent $p_0 = \frac{\Gamma(N+2)-2}{N-2}$ and $\gamma'(p_0-1) = p_0$ — we see that the arguments below successfully treat cases (in dimensions $N = 3, 4$) where $2\Gamma < \gamma'(p_0-1) < \mu'(p_0-1)$.
This means that the most general results, as stated in Theorem 1.1, are derived by entirely controlling the term $\|u\|_{L^{\mu'(p-1),\gamma'(p-1)}}^{p-1}$ by the term $\|u\|_{L^{q,p}}^p$ on the left-hand side of (27).
The following two Lemmata 3.4 and 3.5 develop the estimates of Lemma 3.1 further for two different choices of $\mu$ and $\gamma$. Lemma 3.4 takes into account that the duality estimate (25) only allows one to control $f$ in spaces with the same regularity in space and in time, i.e. $f \in L^{\Gamma,\Gamma}(\Omega_T)$; therefore, Lemma 3.4 sets $\mu = \gamma$.

Lemma 3.4 ($\mu = \gamma$ estimate). Consider $N \ge 3$, $f \in L^{\Gamma,\Gamma}(\Omega_T)$ with $\Gamma > \frac{N+2}{2} - \frac{2N}{N+2}$, and $u_0 \in L^p(\Omega)$ for some $p > 1$.
Then, with $q = \frac{pN}{N-2}$, we have the estimate (33): $f \in L^{\Gamma,\Gamma}$ and $u_0 \in L^p$ imply $u \in L^{p,\infty} \cap L^{q,p}$, with a constant $C$ which grows at most polynomially in time $T$.
Proof. Recalling (28) from Lemma 3.1 and setting $\gamma = \mu$, the first term on the right-hand side can be controlled by the term $\|u\|_{L^{q,p}}^p$ on the left-hand side of (28) by setting $\mu = \gamma = \Gamma$, provided we verify that the corresponding Hölder exponent satisfies $\mu' > 1$ (such that $\frac{1}{\mu'} < 1$) and that the interpolation of $L^{\mu'(p-1)}$ between $L^q$ and $L^1$ used in Lemma 3.1 is admissible. This is done by recalling that $q = \frac{pN}{N-2}$; straightforward calculations then yield the corresponding values of $\mu'$, $\mu$ and $\theta$ for all $p > 1$. Therefore, the estimate (34) becomes an estimate of $\|u\|_{L^{p,\infty}}$ and $\|u\|_{L^{q,p}}$ in terms of $\|u_0\|_{L^p}$ and $\|f\|_{L^{\Gamma,\Gamma}}$. Next, since $\mu', \mu > 1$, we estimate the term $\|u\|_{L^{q,p}}^{p\frac{1}{\mu'}}$ with Young's inequality and obtain, for a sufficiently small $\epsilon > 0$,
$$f \in L^{\Gamma,\Gamma} \ \text{and} \ u_0 \in L^p \ \Longrightarrow\ u \in L^{p,\infty} \cap L^{q,p},$$
where $q = \frac{pN}{N-2}$ and the involved constants grow at most polynomially in time.
The following Lemma 3.5 allows us to perform a bootstrap argument built on estimate (33) of Lemma 3.4 in order to improve the integrability of $f$.
We point out that estimate (33) naturally leads to a-priori estimates for $u$ which exhibit larger time- than space-integrability (see e.g. Lemma 4.1 for the interpolation of the spaces $L^{p,\infty}$ and $L^{q,p}$). Therefore, the following Lemma 3.5 provides a suitable variant of Lemma 3.4 based on the assumption that $f$ features higher time- than space-integrability. More precisely, we shall assume $f \in L^{r,p}$, where $r = r(p,N) < p$ is a specific space-integrability exponent.
Lemma 3.5 ($\mu < \gamma$ estimate). Consider $N \ge 3$. Assume for some $p > 1$ that $u_0 \in L^p$ and $f \in L^{r,p}$ with
$$r = r(p,N) := \frac{q}{q-p+1}, \qquad \text{where as above } q = q(p,N) = \frac{pN}{N-2} > p. \tag{36}$$
Then $u \in L^{p,\infty} \cap L^{q,p}$. Moreover, by using interpolation in Bochner spaces (see Lemma 4.1), we have, for any interpolation exponent $\theta \in (0,1)$, $u \in L^{\sigma,\tau}$ with $\frac{1}{\sigma} = \frac{\theta}{q} + \frac{1-\theta}{p}$ and $\tau = \frac{p}{\theta}$.

Proof. We return to the proof of Lemma 3.1 and consider $\mu < \gamma$ in order to derive a-priori estimates on $u$ based on $f$ featuring higher time- than space-integrability. In particular, we can set $\mu'(p-1) = q$ in (31), having in mind that the $L^q$-norm is the highest space-integrability which can be controlled by the left-hand side of (31), see also Remark 3.2. Thus, we get
$$\int_\Omega |f|\,|u|^{p-1}\,dx \le \|f\|_{L^{r}(\Omega)}\,\|u\|_{L^q(\Omega)}^{p-1}, \qquad \frac{1}{r} = 1 - \frac{p-1}{q}, \tag{37}$$
where the index $r$ denotes the minimal space-integrability requirement on $f$ such that the corresponding term $\|u\|_{L^q}^{p-1}$ can still be controlled by the left-hand-side term $\|u\|_{L^q}^p$. Next, we integrate (37) over $(0,T)$ for any $T > 0$ and apply Hölder's inequality in time on the right-hand side with $1 = \frac{1}{\gamma'} + \frac{1}{\gamma}$ to obtain
$$\int_0^T \|f\|_{L^r}\,\|u\|_{L^q}^{p-1}\,dt \le \|f\|_{L^{r,\gamma}}\,\|u\|_{L^{q,\gamma'(p-1)}}^{p-1}. \tag{38}$$
Thus, we set $\gamma'(p-1) = p$ and $\gamma = p$ in (38) and apply once more Young's inequality to the right-hand-side term $\|u\|_{L^{q,p}}^{p-1}$ such that, for a sufficiently small $\varepsilon > 0$,
$$\|u(T)\|_{L^p}^p + \hat C\,\|u\|_{L^{q,p}}^p \le \|u_0\|_{L^p}^p + C(\varepsilon)\,\|f\|_{L^{r,p}}^p,$$
where $\hat C = \tilde C - \varepsilon > 0$.
Altogether, if $\|f\|_{L^{r,p}}$ is bounded, i.e. $f \in L^{r,p}$, and $u_0 \in L^p$, we have
$$f \in L^{r,p} \ \text{and} \ u_0 \in L^p \ \Longrightarrow\ u \in L^{p,\infty} \cap L^{q,p}. \tag{39}$$
We remark that in $f \in L^{r,p}$ — since we have set $\gamma'(p-1) = p$ and $\gamma = p$ in (38) — the time-integrability index $p$ represents the minimal required time-regularity such that estimate (39) holds.
In the last step of the proof of Lemma 3.5, we use the Bochner-space interpolation Lemma 4.1 (see the Appendix) and get $L^{p,\infty} \cap L^{q,p} \hookrightarrow L^{\sigma,\tau}$, where $\sigma$ and $\tau$ are defined in terms of an interpolation parameter $\theta \in (0,1)$ by $\frac{1}{\sigma} = \frac{\theta}{q} + \frac{1-\theta}{p}$ and $\tau = \frac{p}{\theta}$.

Finally, the following Lemma 3.6 provides a bootstrap argument based on ideas of [15,16,21], which combines semigroup estimates with weak comparison arguments and allows one to bootstrap $L^\infty$-bounds that grow polynomially in time.

Lemma 3.6 (Weak comparison bootstrap argument). Consider $\nu \ge 2$ and let $u_i(t,\cdot)$ be weak solutions to the systems (1) or (5) for $i = 1,\dots,k$; in particular, $\nu = 2$ and $k = 4$ for solutions of system (1). Assume that for an exponent $\mu_0$ satisfying (40) and for all $i = 1,\dots,k$,
$$\|u_i(t,\cdot)\|_{L^{\mu_0}(\Omega)} \le C(t)$$
holds, where $C(t) > 0$ is a continuous function in time which grows polynomially. Then, the solutions $u_i(t,\cdot)$ satisfy
$$\|u_i(t,\cdot)\|_{L^{\infty}(\Omega)} \le C(t),$$
where $C(t) > 0$ is again a (different) continuous function on $(0,\infty)$ which grows polynomially in time.
Proof. We recall that each $u_i$ solves a heat equation of the form (24). In particular, the concentrations $(u_i)_{i=1,\dots,k}$ are non-negative and the considered nonlinearities $|f_i| \le C \sum_{j=1}^k u_j^\nu$ are super-quadratic in the $u_i$ of order $\nu \ge 2$. This motivates us, by neglecting the negative parts of $f_i$, to consider
$$\partial_t u_i - d_i\,\Delta u_i \le C\,u^\nu, \qquad u^\nu := \sum_{j=1}^k u_j^\nu \ \text{for simplicity},$$
where $d_i$ is the diffusion rate of $u_i$. Now, we define $g_i = \lambda u_i + C u^\nu$ for $\lambda > 0$ and all $i = 1,\dots,k$ and consider the linear comparison problem
$$\partial_t \overline{u}_i - d_i\,\Delta \overline{u}_i + \lambda\,\overline{u}_i = g_i \quad \text{on } \Omega_T, \qquad n\cdot\nabla_x \overline{u}_i = 0 \ \text{on } \partial\Omega_T, \qquad \overline{u}_i(0,\cdot) = u_{i0}, \tag{43}$$
where $g_i(t,\cdot)$ is considered as a given function satisfying $\|g_i(t,\cdot)\|_{L^{\mu_0/\nu}(\Omega)} \le C_1(t)$, and $C_1(t)$ has the same properties as $C(t)$.
Thus, by the comparison principle for weak solutions of parabolic equations with values in the reflexive Banach space $L^{\mu_0/\nu}(\Omega)$ for $\frac{\mu_0}{\nu} > 1$ (see e.g. [7, Theorem 11.9]), we have $0 \le u_i \le \overline{u}_i$. Next, we refer e.g. to Rothe [20] for the following semigroup estimate for the Laplace operator subject to Neumann boundary conditions, with a constant $C(q,r) > 0$, which implies for $L_i = d_i\Delta - \lambda$ subject to Neumann boundary conditions
$$\|(e^{tL_i} v)(t,\cdot)\|_{L^r} \le C(q,r)\,e^{-\lambda t}\,\max\big\{1,\, t^{-\frac{N}{2}(\frac{1}{q}-\frac{1}{r})}\big\}\,\|v\|_{L^q}.$$
By taking the $L^r$-norm of the Duhamel formula for (43) and applying the above semigroup estimate with $q = \frac{\mu_0}{\nu}$, we obtain
$$\|\overline{u}_i(t)\|_{L^r} \le \|e^{tL_i} u_{i0}\|_{L^r} + C(q,r)\int_0^t e^{-\lambda(t-s)}\,\max\big\{1,\,(t-s)^{-\frac{N}{2}(\frac{\nu}{\mu_0}-\frac{1}{r})}\big\}\,C_1(s)\,ds, \tag{44}$$
and at this point, in order for the right-hand side to be integrable near the singularity $s = t$, we need
$$\frac{N}{2}\Big(\frac{\nu}{\mu_0} - \frac{1}{r}\Big) < 1, \quad \text{i.e.} \quad \frac{1}{r} > \frac{\nu}{\mu_0} - \frac{2}{N} \quad \text{whenever } \frac{\mu_0}{\nu} < \frac{N}{2},$$
while for $\frac{\mu_0}{\nu} = \frac{N}{2}$ we can choose any $r < \infty$, and for $\frac{\mu_0}{\nu} > \frac{N}{2}$ we can set $r = \infty$. In all these cases, we can estimate (44) accordingly, and the $L^r$-norm is bounded by a polynomial of order $n+1$ if $C_1$ is a polynomial of order $n$. Recalling that $0 \le u_i \le \overline{u}_i$, we conclude that
$$\|u_i(t,\cdot)\|_{L^r(\Omega)} \le C_2(t), \tag{45}$$
where $C_2(t)$ grows polynomially in time.
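For the reader's convenience, the convergence of the singular integral above under this condition can be checked directly:

```latex
% With \alpha := (N/2)(\nu/\mu_0 - 1/r) < 1, substituting \tau = t - s gives
\[
\int_0^t e^{-\lambda(t-s)}\,(t-s)^{-\alpha}\,ds
= \int_0^t e^{-\lambda\tau}\,\tau^{-\alpha}\,d\tau
\le \int_0^\infty e^{-\lambda\tau}\,\tau^{-\alpha}\,d\tau
= \frac{\Gamma(1-\alpha)}{\lambda^{1-\alpha}} < \infty,
\]
% whereas for \alpha \ge 1 the integral diverges at \tau = 0 (i.e. s = t),
% which is exactly why the restriction on r is imposed.
```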
In order to bootstrap the estimates (43)–(45), we require $\mu_1 := r > \mu_0 > \nu$, which is possible provided that (40) holds. Thus, we find that assumption (40) allows us to start a bootstrap argument which, after finitely many steps, leads to
$$\|u_i(t,\cdot)\|_{L^{\mu_n}(\Omega)} \le C_{n+1}(t) \qquad \text{with } \frac{\mu_n}{\nu} \ge \frac{N}{2},$$
where $C_{n+1}(t)$ grows polynomially in time. Thus, after one or two (if $\frac{\mu_n}{\nu} = \frac{N}{2}$) further bootstrap steps, we reach
$$\|u_i(t,\cdot)\|_{L^\infty(\Omega)} \le C(t) \qquad \text{for all } t \ge \tau > 0,$$
where for all $\tau > 0$ the bound $C(t)$ grows polynomially in time.
We can now present the proof of our main Theorem.
Proof of Theorem 1.1 (Global existence of classical solutions to system (1)). The first part of the proof establishes a bootstrap argument based on Lemmata 3.4 and 3.5 in order to prove the following gain of regularity: provided that $\Gamma > \frac{N+2}{2} - \frac{2N}{N+2}$ and the initial data satisfy $u_{i0} \in L^{3N/4}(\Omega)$, then
$$u_i \in L^{2\Gamma,2\Gamma}(\Omega_T) \ \Longrightarrow\ u_i \in L^{3N/4,\infty}(\Omega_T).$$
Once we know that $u \in L^{3N/4,\infty}$, Lemma 3.6 implies $u \in L^{\infty,\infty}$ with constants which grow at most polynomially in time. In a final step, we then interpolate the polynomially growing a-priori estimates with the exponential convergence of solutions towards equilibrium in order to prove uniform-in-time boundedness of the solutions.
Initial step: a-priori estimate via the duality Proposition 2.1. In order to start our argument, we recall assumption (3) and estimate (17) of Proposition 2.1, which ensure that $u \in L^{2\Gamma,2\Gamma}$ for $\Gamma > \frac{N+2}{2} - \frac{2N}{N+2} > 1$ for $N \ge 3$. We remark again that for $\Gamma \ge \frac{N+2}{2}$, the statement of Theorem 1.1 follows from well-known parabolic regularity results, which directly ensure $u \in L^\infty(\Omega_T)$ for all $T > 0$, see e.g. [5], where also the polynomial growth of the constants in $T$ is proven.
Thus, Gagliardo–Nirenberg interpolation between $L^1$ and $H^k$ for $k > \frac{N}{2}$ yields a uniform-in-time $L^\infty$-bound, i.e. there exists a $\theta \in (0,1)$ such that for all $T > 0$
$$\|u_i - u_{i,\infty}\|_{L^\infty(\Omega)} \le C\,\|u_i - u_{i,\infty}\|_{L^1(\Omega)}^{\theta}\,\|u_i - u_{i,\infty}\|_{H^k(\Omega)}^{1-\theta},$$
where we have used the exponential convergence to equilibrium stated in Proposition 2.2 and the polynomial-in-time dependence of the constant $C_T$ to obtain the uniform-in-time $L^\infty$-bound (4). This concludes the proof of Theorem 1.1. In fact, we expect in general that $\sigma_0 < 2\Gamma < \tau_0$.
Proof of Corollary 1.6
Proof. First, we remark that, thanks to the conservation laws assumed in Corollary 1.6, for every $i = 1,\dots,k$ the sum $z_i := \sum_{j=1}^k a^i_j u_j$ with $a^i_i > 0$ satisfies a problem of the form
$$\partial_t z_i - \Delta\Big(\sum_{j=1}^k a^i_j d_j u_j\Big) = \sum_{j=1}^k a^i_j f_j(u) \le 0.$$
Thus, the $\delta$-smallness condition (7) and Proposition 2.1 ensure for all $T > 0$ the formal a-priori estimate $u_i \in L^{\nu\Gamma}(\Omega_T)$, and thus $f_i(u) \in L^\Gamma$ for all $i = 1,\dots,k$, where it is easily verified that $\Gamma > 1$. The existence of global weak solutions to system (5) can then be shown as the limit of global solutions to suitable approximating systems: by using, for instance, approximating systems which truncate the nonlinearities $f_i(u)$, following the lines of e.g. Michel Pierre [17], we can pass to the limit in system (5), and in particular in the nonlinear terms $f_i(u) \in L^\Gamma$ with $\Gamma > 1$ on the right-hand side. This yields global weak solutions of (5) satisfying $f_i(u) \in L^\Gamma$.
In the following, we prove that these global weak solutions are indeed classical solutions. This is done analogously to the proof of Theorem 1.1: we start by proving that $u \in L^{\nu\Gamma,\nu\Gamma}$ with initial data $u_{i0} \in L^{(2\nu-1)N/4}(\Omega)$ can be bootstrapped to $u \in L^{(2\nu-1)N/4,\infty}$, provided that
$$\Gamma > \frac{(\nu-1)N^2 + 4}{2(N+2)}.$$
| 8,856.4 | 2015-11-13T00:00:00.000 | [
"Mathematics"
] |
Tuning Nanopore Diameter of Titanium Surfaces to Improve Human Gingival Fibroblast Response
The aim of this study was to determine the optimal nanopore diameter of titanium nanostructured surfaces to improve the human gingival fibroblast (hGF) response, with the purpose of promoting gingiva integration to dental implant abutments. Two TiO2 nanoporous groups with different diameters (NP-S, ~48 nm, and NP-B, ~74 nm) were grown on Ti foils by electrochemical oxidation in an organic electrolyte containing fluoride, varying the applied voltage and the interelectrode spacing. The surfaces were characterized by scanning electron microscopy (SEM), atomic force microscopy (AFM), and contact angle measurements. hGF were cultured onto the different surfaces, and metabolic activity, cytotoxicity, cell adhesion, and gene expression were analyzed. Larger pore diameters (NP-B) were obtained by increasing the voltage used during anodization; to obtain the smallest diameter (NP-S), apart from lowering the voltage, a smaller interelectrode spacing was needed. The greatest surface area and number of peaks were found for NP-B, despite these samples not being the roughest as defined by Ra. NP-B showed a better cellular response than NP-S, although these effects depended significantly on the cell donor. In conclusion, nanoporous surfaces with pore diameters in the range of 74 nm induce a better hGF response, which may be beneficial for effective soft tissue integration around the implant.
Introduction
The main challenge in developing a new generation of dental implants is to combine improved osseointegration with greater soft tissue integration around the implant, which will promote the durability of the implant [1,2]. An effective soft tissue barrier, with gingival tissue attached to the implant abutment, may improve protective functions, preventing periodontal disease (inflammation of the supporting tissues of the teeth with progressive attachment loss and bone destruction) [3][4][5][6].
To enhance soft tissue compatibility, researchers try to manipulate the implant surface structure at the nanolevel [7], attempting to mimic the dimensions and properties of the physiological extracellular matrix [8,9]. Current nanostructuration methods allow the formation of biocompatible surfaces and the production of distinct morphologies, for instance by changing the electrochemical conditions when using electrochemical anodization, one of the most frequently used methods [10].
Cell behavior on biomaterial surfaces depends fully on their biocompatibility and surface properties [2,11]. Interaction between cells and nanostructures offers the possibility of controlling the cell culture in order to achieve the desired biological responses [8,9,12]. Modulation of cell behavior through physicochemical surface modification may be mediated by a phenomenon called mechanotransduction, the conversion of mechanical signals into biochemical signals [13,14]. In fact, the cell fate of mesenchymal stem cells can be modulated by material interaction through nanosize integrin receptors [14].
Here, we wanted to determine the optimal nanopore diameter of Ti nanostructured surfaces that exhibits a beneficial effect on fibroblast cell culture. For this purpose, human gingival fibroblast (hGF) culture assays were carried out to analyze the effects of nanopore diameter on cytotoxicity, cell adhesion, metabolic activity, and the expression of genes related to the synthesis and organization of the extracellular matrix and collagen deposition.
Surface Characterization
Anodic oxidation is generally a simple, versatile, and low-cost technique to produce nanostructures on the surface of Ti [9]. To achieve highly ordered porous systems, it is crucial to use optimized anodization parameters, such as anodization time, temperature and potential, as well as electrolyte composition and properties (pH, conductivity, or viscosity) [9,15,16]. The topographical features of the obtained structure are strongly affected by these parameters, and they can therefore be adjusted to obtain the desired nanostructure.
As shown in the supporting information (Table S1), changes in parameters such as the electrolyte, voltage, or time altered the obtained nanostructure, proving the importance of using the right parameters to achieve the desired well-ordered nanostructure. These results also illustrate the difficulty of obtaining a regular and defined nanostructure in anodization assays, since many factors can alter the assay, including room temperature and humidity. After this optimization, two anodizing conditions were selected for the present study.
As shown by the scanning electron microscope (SEM) images (Figure 1), two different groups of well-aligned porous structures were obtained by changing the anodization voltage and the interelectrode spacing (NP-S and NP-B). A bigger pore size was obtained when using a higher voltage and a larger interelectrode spacing, as shown in Table 1. Our results are in agreement with previous reports showing that the diameter of TiO2 nanotubes increases linearly with increasing anodizing voltage [17][18][19][20]. Nevertheless, in our experience, changing only the voltage was not enough to obtain a different pore diameter; it was also necessary to modify the interelectrode spacing, demonstrating the important role of this parameter, which particularly affects the electrolyte conductivity and concentration during anodization, especially in organic electrolytes [21]. It is also worth mentioning the importance of using a pre-anodized ("aged") electrolyte. Some studies suggest that the electrolyte needs an aging process before being used in the anodization [20]. Aging the electrolyte is another anodization factor of importance in improving the quality of the imprint pattern and in obtaining a well-defined initiation of tube growth [22].
The topographical features of the obtained nanostructured surfaces were evaluated by atomic force microscopy (AFM), and wettability by contact angle measurements, as shown in Table 1 and Figure 1. To describe roughness, parameters such as the average surface roughness (R_a), root mean square surface roughness (R_q), and maximum surface roughness (R_max) are commonly used; however, these parameters might be insufficient to describe the nanoarchitecture of surfaces, as they give no indication of the shape or spatial density of peaks [23]. For this reason, additional horizontal parameters are used: skewness (R_sk), kurtosis (R_kur), surface area difference (R_sa), and peak count (R_pc) [23].
From the topographical parameters evaluated, no significant differences were found for R_a, R_sk, and R_kur. In contrast, R_sa and R_pc values increased as the pore diameter did. The greatest surface area and number of peaks were found for NP-B, being significantly different from the control and NP-S surfaces. In accordance with the roughness parameters, visual inspection of the AFM scans revealed similar surface features and good agreement with the SEM images, while cross-sectional profiles revealed a great difference between the NP-B surface and the Ti control and NP-S surfaces. R_sa is defined as the percentage increase of the three-dimensional surface area over the two-dimensional surface area, which accounts for both the magnitude and the frequency of surface features and provides a good measure of surface roughness [24]. In our study, the greatest surface area and number of peaks were found for the NP-B surface, despite these samples not being the roughest as defined by R_a. This difference between R_a and R_sa when assessing roughness is explained by their dependence on the frequency and distribution of surface projections; while an increase in peak count may not significantly affect the average roughness, it can represent an increase in surface area difference [24].
These topographical changes at the nanolevel are reflected in surface wettability. The water contact angle results (Table 1) indicated that the initially hydrophilic surface of the Ti samples was changed by the nanostructures, but still retained a hydrophilic character (<90°). NP-S showed the highest contact angle compared to the Ti control and the NP-B group, making it the most hydrophobic of the studied surfaces [25]. After anodic modification, NP-B surfaces showed the same hydrophilic character as the Ti control, pointing to the importance of pore size in surface wettability. A bigger pore size could lead to the most favorable surface in terms of hydrophilicity, in accordance with previous studies in which fibroblasts showed greater attachment and spreading on hydrophilic surfaces compared to hydrophobic ones [11].
Cell Response to Nanoporous Surfaces
Abutment soft tissue integration is conditioned by the ability of its principal cells, fibroblasts, to adhere and spread on implant surfaces [11]. Fibroblasts are of mesenchymal origin and play a major role in the development, maintenance, and repair of gingival connective tissues. Their principal function is to synthesize and maintain the components of the extracellular matrix of the connective tissue [26]. It is generally agreed that cell and tissue responses are sensitive to the chemical and physical features of the implant surface [1,27,28]. Thus, a combination of an optimal surface and the mechanical properties of titanium could lead to successful dental implants [12].
To test the cell response to the different surfaces of the study, cell cytotoxicity was analyzed first. Figure 2a shows that all surfaces gave cytotoxicity values lower than the 30% limit established for medical implants according to ISO 10993-5 [29]. This demonstrates that the anodization treatment was safe and nontoxic for cells, indicating that electrochemical oxidation of the Ti sheets did not alter the excellent biocompatibility of the Ti control surfaces for primary hGF, one of the most important factors in selecting dental abutment materials [6,30]. Previous studies have shown that tube diameters of approximately 100 nm induce programmed cell death [15,31]. Hence, our results underline the importance of nanostructure diameter for cell cytotoxicity.
We also evaluated the release of nanoparticles from the developed surfaces. Curiously, only the NP-B group, the more biocompatible of the tested surfaces, showed nanoparticle release, exhibiting two peaks between 50 and 1000 nm (Figure 2b), with a mean size of 568.3 nm and a concentration of 2.85 × 10⁸ particles/mL. These results disagree with previous results from our group [32], where NP surfaces obtained on Ti disks were toxic for the cells due to high nanoparticle release, at levels of 1.08 × 10⁹ particles/mL. It should be highlighted that the method for obtaining those structures differed from the one used here; for instance, Ti disks 2 mm in height were used instead of the 0.127 mm thick Ti foils used here, and the electrolyte was not aged, in contrast to the one used in the present work. Our results underline the importance of all the methodological details of nanostructure fabrication for the final results achieved. In fact, the use of two-step procedures in anodization methods might also lead to the presence of impurities [2], which can be harmful for cultured cells [19,33].
Increased bone cell functions may rely on the degree of nanostructured surface roughness [9]. It has been hypothesized that osteoblasts recognize different surface roughness through the interaction of proteins in the extracellular matrix [27]. Likewise, studies have suggested that the addition of rough surface features may increase connective tissue attachment [34], and that it is required for the formation of a stable soft tissue seal around implant abutments [35,36]. Moreover, fibroblasts are also sensitive to changes in surface roughness and hydrophilicity [37]. Such events are possibly mediated by nanotopography-induced mechanotransduction pathways. Nanostructuring of appropriate size and arrangement may provide the physical cues that cell receptors require to organize the cytoskeleton and to propagate mechanical signals towards the nucleus, modulating cell fate [14]. It has been suggested that mechanotransduction mechanisms could play a role in the relation between adhesion and osteogenic differentiation [13].
Therefore, due to its highly defined geometry combined with a large surface area, NP-B was expected to present better cell interactions than the smaller nanoporous NP-S. Previous work on size-dependent cell interactions has shown that mesenchymal stem cells (MSCs) respond in a pronounced way to the diameter of nanotubes [31].
The nanostructuring of implant surfaces provides a mechanism to encourage and direct cell adhesion to the implant surface [27], providing an effective substrate for cell contact and proliferation [12,38]. As illustrated in Figure 3a, cell adhesion after 30 min was increased significantly by surface nanostructuration compared to the Ti control, with a tendency toward better adhesion for larger pore diameters, although significance was only found relative to the Ti control.
Cell metabolic activity at days 7 and 14 was also evaluated (Figure 3b,c). When comparing the different surfaces with Ti, the results showed a different response depending on the hGF donor. Donor B showed increased metabolic activity when seeded on the NP-B group at day 14. In contrast, donor A seeded on the NP-B group showed higher metabolic activity at day 7, while no differences were found at day 14. According to previous studies, TiO2 nanotubular surfaces with a tube diameter of ∼120 nm improved adhesion and proliferation of hGFs [1]. However, others have reported better results with diameters lower than 100 nm, with 15 nm being the best pore size [8,31]. All in all, the literature remains controversial because there is no consensus on the optimal geometric parameters of nanostructures [39], and although many studies of different pore sizes have been carried out, comparable studies are lacking [8]. Moreover, it has been suggested that various cell types respond differently to similar nanostructures [19], and here we show that differences are found even among different donors of the same cell type.
Thus, our results show that the NP-B surface induces the best cell adhesion and metabolic activity over time, and topographically we have shown that it differs from NP-S and the Ti control in surface area, in agreement with other studies that have demonstrated a relation between surface roughness, cell proliferation, and cell adhesion [40,41]. Previously, the generation of nanotopographies and surface roughness has been used to improve osteoblast cell attachment and osseointegration of oral dental implants [42]. In the present study, we have demonstrated that the formation of TiO2 nanoporous structures could also represent a technique for improving the initial attachment and spreading of cells on the abutment implant surface, for better soft tissue attachment.
We also analyzed the expression levels of different genes related to the extracellular matrix (ECM), and the total amount of collagen deposited after 14 days of cell culture (Figures 4 and 5). We wanted to evaluate the effect of the nanoporous structures in promoting a collagen-rich cell culture, a key factor for ECM synthesis and, consequently, for an effective connective tissue after installation of a dental implant [43]. As shown in Figure 4, a differential response to the surfaces depending on the cell donor was again found. For donor B, we observed increased expression levels of the ECM components COL1A1, COL3A1, and DCN (decorin) when seeded onto the nanostructured surfaces, with a tendency toward higher expression levels with increasing nanopore diameter. Donor A, in contrast, responded better to NP-S surfaces, with a decrease in COL1A1 and COL3A1 mRNA expression levels when seeded onto NP-B surfaces; the same tendency was found for DCN. Differences between donors could be a consequence of their difference in age (20 years apart) and sex. In fact, effects of age and sex differences on the collagen turnover profile have been reported, underscoring the importance of addressing both age and sex when interpreting ECM data [44].
However, we found higher collagen deposition in cells cultured onto NP-B surfaces for both donors, although statistical significance was only achieved for donor B. One interpretation of the discrepancy between gene expression and collagen deposition is that the extracellular collagen results from the total accumulation over the whole cell culture period, whereas the gene expression levels correspond only to those found at day 14 of cell culture. Thus, gene expression may follow a changing pattern during cell culture, and the levels found at day 14 need not match the final extracellular collagen deposition.
Preparation of Nanoporous Layers on Ti Foil
Highly ordered TiO2 nanoporous structures were grown on Ti foils by a two-step electrochemical anodization process. Prior to anodization, Ti foils (99.7% purity, 0.127 mm thick, Sigma-Aldrich, St. Louis, MO, USA), pre-cut into 0.7 cm × 8.5 cm pieces, were degreased by sonication in acetone, ethanol, and distilled water for five minutes each in an ultrasonic bath (Branson 5510, Sigma-Aldrich). Finally, the samples were dried under a nitrogen flow.
The anodizations were conducted in a two-electrode cell with a potentiostat (Metrohm Autolab BV, Utrecht, The Netherlands). The Ti foil was used as the anode and a platinum electrode as the cathode (Metrohm). The electrodes were kept parallel and submerged in the electrolyte solution in a Teflon beaker (Sigma-Aldrich). Table S1 (Supplementary Material) shows the different anodization conditions and protocols tested to achieve nanostructures with different pore diameters. To obtain the nanostructures used in this study, an organic electrolyte containing 0.1 M NH4F (Sigma-Aldrich) and 1 M H2O in ethylene glycol (99.5% pure, Sigma-Aldrich), previously aged for six and a half hours, was used as the anodizing electrolyte.
Experiments were conducted at room temperature under agitation. After the first anodization, the created oxide films were removed by mechanical detachment using Scotch® Magic™ adhesive tape (3M, Cergy-Pontoise, France), and the Ti foil was reassembled with the electrolytic cell for a second anodization. Table 2 shows the voltage conditions and the interspace between the electrodes used to obtain the nanoporous structures with different pore diameters. Voltage and time conditions were modified and monitored with Nova 2.0 software (Metrohm). The samples were rinsed with ethanol, dried under a nitrogen flow, cut with scissors into 1 × 0.7 cm pieces, and disinfected with dry heat (90 °C, 30 min) before cell seeding.
Characterization of Ti Nanopore Arrays
SEM
For the structural and morphological characterization of the nanoporous layers, a scanning electron microscope (SEM, HITACHI S-3400N, Hitachi High-Technologies Europe, Krefeld, Germany) was used to acquire images, using secondary electrons, low-vacuum conditions, and an accelerating voltage of 15 kV. The samples were sputter-coated with gold prior to analysis. The pore diameter (nm) was calculated using ImageJ software (v1.51k, Rasband, W.S., ImageJ, US National Institutes of Health, Bethesda, MD, USA).
AFM
The surface roughness of the nanopores was examined by atomic force microscopy (AFM, Veeco MultiMode, Veeco, Plainview, NY, USA) in air tapping mode, with a scan size of 5 µm, in combination with HQ: NSC35/Al probes (Mikromasch, Lady's Island, SC, USA) with a nominal spring constant of 16 N/m and a resonant frequency of 300 kHz. Topographical analysis was performed by importing the resulting AFM data files into Nanoscope software (v5.10, Veeco) and selecting the roughness tool.
Contact Angle or Surface Wettability
Surface wettability was evaluated using a static sessile water-drop method. The experiment was performed using 2 µL of ultrapure water as the wetting agent. Images were taken using a Nikon D3300 camera (AF-P DX 18-55 mm lens, Tokyo, Japan). The contact angle was calculated using ImageJ software (Rasband, W.S., ImageJ, US National Institutes of Health, Bethesda, MD, USA).
Nanoparticle Release
To measure nanoparticle release, the different nanoporous samples and the Ti control were incubated at 37 °C in ultrapure water for seven days. NP suspensions were analyzed by dynamic light scattering (DLS) using a Zetasizer ZS90 (Malvern Panalytical Ltd., Malvern, UK). The nanoparticle concentration was analyzed by nanoparticle tracking analysis (NTA) using a PMX 120 Scanning ZetaView® (Particle Metrix Inc., Mebane, NC, USA). Experiments were run in duplicate for each group. Nanoparticle concentration was measured only for those samples that presented nanoparticle release, as detected by DLS.
Cell Culture
Primary human gingival fibroblasts (hGF) from two different donors were purchased from Provitro (GmbH, Berlin, Germany): hGF-A (27 years, female, Caucasian, 313X100401) and hGF-B (47 years, male, Caucasian, 323X070501). Provitro assures that the cells were obtained ethically and legally, and that all donors provided written informed consent.
Cells were seeded at a density of 7.0 × 10³ cells/well, using the flexiPERM® micro12 system (growth area 0.3 cm²) (Sarstedt, Nümbrecht, Germany) on the different nanoporous and Ti control groups.
Cell Cytotoxicity
To estimate the cytotoxicity of the Ti nanoporous samples, the presence of lactate dehydrogenase (LDH) in the culture media after 48 h of cell incubation on the different surfaces was used as an index of cell death. Following the manufacturer's instructions (Cytotoxicity Detection Kit, Roche Diagnostics, Mannheim, Germany), LDH activity was determined spectrophotometrically after 30 min of incubation at room temperature (RT) of 50 µL of culture media with 50 µL of the reaction mixture, by measuring the oxidation of nicotinamide adenine dinucleotide (NADH) at 490 nm in the presence of pyruvate.
The results are presented relative to the LDH activity of the media of cells seeded on tissue culture plastic (TCP) without treatment (low control, 0% cell death) and of cells grown on TCP treated with 1% Triton X-100 (high control, 100% cell death), using the following equation: Cytotoxicity (%) = (experimental value − low control)/(high control − low control) × 100. The experiment was run with six sample replicates (n = 6) for each group and donor.
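For clarity, the cytotoxicity equation can be expressed as a small function; this is a minimal sketch with illustrative absorbance values, not measured data.

```python
# Minimal sketch of the LDH cytotoxicity equation; absorbance values are
# illustrative placeholders, not measured data.
def cytotoxicity_percent(exp_value, low_control, high_control):
    """Cytotoxicity (%) relative to the 0% (untreated TCP) and
    100% (1% Triton X-100) controls."""
    return (exp_value - low_control) / (high_control - low_control) * 100.0

# A hypothetical sample reading between the two controls:
print(cytotoxicity_percent(exp_value=0.42, low_control=0.30, high_control=1.10))
# -> 15.0, i.e. well below the 30% limit of ISO 10993-5
```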
Cell Adhesion and Metabolic Activity
Cell adhesion and metabolic activity were quantified using PrestoBlue® (ThermoFisher, Waltham, MA, USA), a live-cell resazurin-based viability reagent (Life Technologies, Carlsbad, CA, USA). To determine cell adhesion, six sample replicates of one donor (hGF-B) (n = 6) were used. Thirty minutes after seeding, the medium was aspirated, the cells were washed twice with PBS (phosphate-buffered saline), and 100 µL of fresh culture medium with 10 µL of PrestoBlue were added to each well. After an overnight incubation, the absorbance of the medium was read at 570 and 600 nm. A standard was grown under the same conditions in parallel to the samples.
Metabolic activity was determined at days 7 and 14 of hGF culture on the surfaces, using the PrestoBlue reagent following the manufacturer's protocol. The results are presented relative to the Ti control.
Gene Expression by Real-Time Polymerase Chain Reaction (RT-PCR)
Total RNA was isolated after 14 days of cell culture using Tripure® reagent (Roche Diagnostics, Mannheim, Germany) according to the manufacturer's protocol, and quantified at 260 nm using a NanoDrop spectrophotometer (NanoDrop Technologies, Wilmington, DE, USA).
For reverse transcription to cDNA, a High Capacity RNA-to-cDNA Kit (Applied Biosystems, Foster City, CA, USA) was used, with the same amount of RNA (140 ng) for all samples. Each cDNA was diluted, and aliquots were frozen (−20 °C) until the RT-PCRs were carried out.
Real-time PCR was performed in the LightCycler 480® system using SYBR Green detection (Roche Diagnostics), for two reference genes and five target genes. The primer sequences used are detailed in Table 3. Each reaction contained 7 µL of master mix (LightCycler 480 SYBR Green I Master, Roche Diagnostics), the sense and antisense specific primers (0.5 µM), and the cDNA sample (3 µL), in a final volume of 10 µL. The amplification program consisted of a preincubation step for denaturation of the template cDNA (5 min, 95 °C), followed by 45 cycles consisting of a denaturation step (10 s, 95 °C), an annealing step (10 s, 60 °C), and an extension step (10 s, 72 °C). After each cycle, fluorescence was measured at 72 °C. A negative control without cDNA template was run in each assay [45].
To allow relative quantification after PCR, standard curves were constructed from standard reactions for each of the target and reference genes. The crossing point readings for each unknown sample were used to calculate the amount of the target or reference relative to the standard curve, using the Second Derivative Maximum Method provided by the LightCycler 480 analysis software, version 1.5 (Roche Diagnostics, Mannheim, Germany). All samples were normalized by the mean of the expression levels of the reference genes [46].
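The following is a minimal sketch of standard-curve relative quantification of the kind described above, assuming a log-linear relation between crossing point (Cp) and input amount. It is not the Roche Second Derivative Maximum implementation, all Cp values are illustrative, and a single standard curve is reused across genes for brevity, whereas the study fit one curve per gene.

```python
# Sketch of standard-curve relative quantification; Cp values and dilution
# series are illustrative, and one curve is reused across genes for brevity.
import numpy as np

def fit_standard_curve(log10_amounts, cp_values):
    """Fit Cp = slope * log10(amount) + intercept to a dilution series."""
    slope, intercept = np.polyfit(log10_amounts, cp_values, 1)
    return slope, intercept

def relative_amount(cp, slope, intercept):
    """Invert the standard curve to recover a relative input amount."""
    return 10.0 ** ((cp - intercept) / slope)

slope, intercept = fit_standard_curve(np.log10([1.0, 0.1, 0.01]),
                                      [20.1, 23.5, 26.8])

target = relative_amount(24.0, slope, intercept)                # target gene Cp
reference = np.mean([relative_amount(22.0, slope, intercept),   # ref gene 1
                     relative_amount(21.5, slope, intercept)])  # ref gene 2
print(target / reference)  # target expression normalized to reference genes
```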
Collagen Quantification
After 14 days of cell culture, the cells were washed with H2O, dried for 1 h at RT, and then incubated for 1 h at −80 °C. The cells were then incubated overnight at 37 °C in a humidified atmosphere, followed by 24 h at 37 °C in a dry atmosphere. Collagen was stained with 0.1% Sirius Red F3BA (Sigma-Aldrich) in saturated picric acid (Sigma-Aldrich) for 1 h at RT. Unbound dye was removed with 10 mM HCl washes, and the bound dye was solubilized with 0.1 M NaOH. Absorbance was measured at 540 nm. Readings were compared with a calf-skin collagen standard (calf skin type I collagen, Sigma-Aldrich) included in the assay. The experiment was run in triplicate (n = 3) for each group and donor.
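A minimal sketch of reading a sample's collagen content off such a standard curve by linear fit follows; all amounts and absorbances are illustrative placeholders, not values from the assay.

```python
# Sketch of reading collagen content off a calf-skin collagen standard
# curve by linear fit; all values are illustrative placeholders.
import numpy as np

std_amounts = np.array([0.0, 5.0, 10.0, 20.0, 40.0])     # ug collagen
std_abs = np.array([0.02, 0.11, 0.21, 0.40, 0.78])       # A540 of standards
slope, intercept = np.polyfit(std_abs, std_amounts, 1)   # amount vs. A540

sample_abs = 0.33                                        # hypothetical sample
print(f"{slope * sample_abs + intercept:.1f} ug collagen")
```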
Statistical Analysis
All data are presented as mean values ± standard error of the mean (SEM). The Kolmogorov–Smirnov test was used to assess normality and to decide between parametric and non-parametric analyses. Depending on the distribution, differences between groups were assessed by the Mann–Whitney U test or by two-way analysis of variance (ANOVA), followed by post-hoc pairwise comparisons using the LSD test. SPSS software (version 24.0, Chicago, IL, USA) and GraphPad Prism (version 7, La Jolla, CA, USA) were used. Results were considered statistically significant at p < 0.05.
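As a rough illustration of this decision flow, the sketch below tests two simulated groups for normality and then picks the test accordingly. It deliberately simplifies the analysis described above (one-way rather than two-way ANOVA, no LSD post-hoc); the original analysis was run in SPSS and GraphPad Prism.

```python
# Sketch of the test-selection flow; the two groups are simulated and the
# comparison is simplified to one-way ANOVA vs. Mann-Whitney U.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(1.0, 0.2, 6)   # e.g. Ti control readings
group_b = rng.normal(1.3, 0.2, 6)   # e.g. NP-B readings

# Kolmogorov-Smirnov normality check on standardized values
normal = all(stats.kstest(stats.zscore(g), "norm").pvalue > 0.05
             for g in (group_a, group_b))

if normal:
    _, p = stats.f_oneway(group_a, group_b)       # parametric route
else:
    _, p = stats.mannwhitneyu(group_a, group_b)   # non-parametric route
print("significant" if p < 0.05 else "not significant", f"(p = {p:.4f})")
```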
Conclusions
The present work shows the potential of anodic oxidation of Ti surfaces to induce an improved cell response, which can be attributed to the improvement of surface properties. We established the electrochemical conditions needed to achieve two different pore diameters by controlling the applied potential and the electrode interspacing, showing how these two parameters determine the pore diameter.
Larger pore diameters were obtained by increasing the voltage used during anodization. To obtain the smallest diameter, apart from lowering the voltage, a smaller interelectrode spacing was needed. Our results show clearly different topographical characteristics, reflected in the surface area, for the larger nanoporous structures of ~74 nm compared with the nanoporous structures of ~48 nm or the Ti control.
These changes in surface properties are reflected in the improved gingival fibroblast response on nanoporous structures of ~74 nm compared to unstructured Ti control surfaces or nanoporous surfaces with a smaller ~48 nm pore structure. With the nanoporous surfaces we stimulated cell growth and proliferation, obtaining the best results for the larger pore diameter of ~74 nm, probably due to the higher surface area and number of peaks. We therefore found the optimal pore diameter for improving the human gingival fibroblast response, although these effects depended significantly on the cell donor, probably influenced by the age and sex of the donors.
Thus, although further work is needed, we believe that the possibility of controlling the pore diameter and modifying the surface with just two simple parameters holds great potential for clinical applications, improving soft tissue integration and increasing dental implant efficacy.
Figure 1.
Figure 1. Surface characterization by scanning electron microscope (SEM) and atomic force microscopy (AFM) images of the nanoporous arrays (NP-S, NP-B) and the Ti control surface. (a) SEM images of the surfaces (scale bar = 2 µm). (b) Three-dimensional reconstructions based on 5 µm × 5 µm scans obtained by AFM. (c) Two-dimensional images and cross-sectional profiles obtained by AFM. Red triangles mark the highest and lowest positions of the surface.
Figure 2.
Figure 2. Analysis of the cytotoxicity of human gingival fibroblast cells seeded on Ti and TiO2 nanoporous surfaces. (a) Lactate dehydrogenase (LDH) activity measured from the culture media of hGF cells seeded on the different surfaces after 48 h, used to evaluate cytotoxicity. The positive control (100%) was cell culture media from cells seeded onto tissue culture plastic and incubated with 1% Triton X-100. The negative control (0%) was cell culture media from cells seeded on tissue culture plastic without any treatment. Mean ± SEM (n = 12) are represented. Differences between groups were assessed by ANOVA and post-hoc LSD test: * p < 0.05 versus Ti. (b) Particle-size distribution (nm) versus percentage of particle intensity determined by dynamic light scattering (DLS) for the NP-B nanoparticle release test.
Figure 3.
Figure 3. (a) Cell adhesion of hGF-B on the different surfaces. Number of cells adhered to the control and modified surfaces after 30 min. Data represent the mean ± SEM (n = 6 for donor hGF-B). Differences between groups were assessed by ANOVA and post-hoc LSD test: * p < 0.05 versus Ti. (b) Metabolic activity of hGF-A (plain) and hGF-B (striped) at day 7 and (c) day 14 of cell culture on the different surfaces. Data are expressed as a percentage of the Ti control for each day, which was set to 100%. Values represent the mean ± SEM (n = 6). Significant differences were assessed by ANOVA and post-hoc LSD test: * p ≤ 0.05 versus Ti.
Figure 4.
Figure 4. mRNA levels of gingival fibroblast differentiation markers after 14 days of cell culture. Plain bars correspond to donor hGF-A, and striped bars correspond to donor hGF-B. Data represent fold changes of the target genes normalized to the reference genes (Gapdh and β-Actin), expressed as a percentage of the control, which was set to 100%. Values represent the mean ± SEM (n = 3) for each donor. Differences between groups were assessed by ANOVA and post-hoc LSD test: * p < 0.05 versus Ti; † p < 0.05 versus NP-S.
Figure 5.
Figure 5. Total collagen content after 14 days of cell culture. Plain bars correspond to donor hGF-A, and striped bars correspond to donor hGF-B. Values represent the mean ± SEM (n = 3) for each donor. Significant differences were assessed by ANOVA and post-hoc LSD test: † p < 0.05 versus NP-S.
Table 1.
Summary of the TiO2 nanoporous and Ti control surface characteristics.
¹ Pore diameter results represent the mean ± SEM (standard error of the mean), with n = 120 for each group.
Table 2.
Conditions for the two-step anodizing process: the voltage applied and the interspace between the electrodes used during anodization.
Table 3.
Sequences of the sense (S) and antisense (A) primers used in the real-time polymerase chain reaction (PCR) for the reference and target genes. Base pairs (bp).
"Materials Science",
"Medicine",
"Engineering"
] |
Topological data analysis model for the spread of the coronavirus
We apply topological data analysis, specifically the Mapper algorithm, to the U.S. COVID-19 data. The resulting Mapper graphs provide visualizations of the pandemic that are more complete than those supplied by other, more standard methods. They encode a variety of geometric features of the data cloud created from geographic information, time progression, and the number of COVID-19 cases. They reflect the development of the pandemic across all of the U.S. and capture the growth rates as well as the regional prominence of hot-spots. The Mapper graphs allow for easy comparisons across time and space and have the potential of becoming a useful predictive tool for the spread of the coronavirus.
Introduction
Topological data analysis (TDA) is a method for understanding data clouds that attempts to gain insight into the data by treating it as a geometric object and extracting information based on its "shape". There are several TDA instruments available, and the one we use in this paper is called the Mapper algorithm. Introduced by Singh, Mémoli, and Carlsson [14], Mapper is a way to simplify data while preserving many of its topological features. The idea is to project complicated data in a way that makes the projection tractable, use topology to cover the projection with certain sets, and then look at the preimages of those sets under the projection map. This information is then used to construct a graph that retains much of the topological information about the original data set, such as clustering, connectedness, and 1-dimensional holes. This graph lends itself more readily to analysis than the original data set, but is still rich enough to provide more insight than some other dimension-reduction and clustering techniques such as principal component analysis. Mapper has been used in a variety of situations such as breast cancer data [10], image processing [13], and patent development [7].¹ For overviews on Mapper and TDA more generally, see [1,2].
In this paper, we apply the Mapper algorithm to the United States COVID-19 data. We consider the 4-dimensional data cloud consisting of the longitudes and latitudes of the U.S. counties and territories, number of days elapsed in the pandemic (starting on January 22, 2020), and the number of cumulative COVID-19 cases recorded by that day in each county.
The resulting Mapper graphs are rich in information and more complete than other, more standard methods of visualization. They reflect important aspects of the development and spread of the COVID-19 pandemic across the U.S. and capture dramatic growth rates, trends, and the regional prominence of hot-spots. In particular, compared with traditional ways of data visualization that similarly provide real-time monitoring of the spread of COVID-19 across the U.S. (e.g. Johns Hopkins University's COVID-19 dashboard), Mapper visualization has the following advantages:
• It captures the entire developmental process of the outbreak in the U.S., including the emergence of hot-spots across time. Instead of displaying a plain snapshot of all the COVID-19 cases in the U.S. at a certain point in time or time series data for a single location, a Mapper representation captures the evolution of the spread in the entire country.
• The Mapper graph makes it easy to compare data across time and space. A certain U.S. county's position in the graph is in part determined relative to the surrounding counties, so when irregular patterns appear in the visualization, it is because the data they represent has some volatility in comparison with the neighboring counties. This makes it easier to spot regional hot-spots while retaining an overarching view of the United States.
In Section 2.1, we give background on the Mapper algorithm, trying to keep the technical aspects to the minimum for clarity and readability. Sections 2.2 and 2.3 explain how we apply the Mapper algorithm to the U.S. COVID-19 data.
Section 3 contains the analysis of the various Mapper images we obtain. We discuss the meaning of connected components in Section 3.1 and explain how each of the coordinates (time, geography, and the number of cases) influences the connectedness of the graph. We turn to the other most prominent feature of the Mapper graph, its branches, in Section 3.2, and discuss how these structures indicate the appearance and development of COVID-19 hot-spots. Finally, in Section 3.3, we take a look at the evolution of the Mapper graph with respect to time and examine how the succession of graphs provides useful information about the development of the pandemic across time and space.
In Section 4, we make some remarks about the many ways in which the work in this paper could be extended and generalized. For example, the Mapper could be overlaid with dates of stay-at-home orders so that the effectiveness and timeliness of such directives could be assessed. Socio-economic factors could also be read into the Mapper so that the COVID-19 spread could be correlated with such data. In addition, our analysis can be replicated for any region in the world, and, for some of them, travel restrictions between countries and border closings could also be incorporated into the data to gauge their efficacy. In addition, the branches that indicate hot-spots could be given more rigorous graph-theoretic structure that could lead to a robust predictive model. Finally, another TDA tool called persistent homology could be employed and used in conjunction with Mapper to gain an even deeper understanding of the spread of the COVID-19 pandemic.
Due to its relatively recent emergence, topological data analysis has not yet been applied extensively to epidemiology. The idea that this methodology could be useful in the study of viral evolution was put forth in [3]. The paper [15] provides a general framework for using TDA to study contagion across networks. Applications to particular diseases can be found in [4], which studies the spread of influenza, and [8], which does the same for Zika. All of the above papers use the persistent homology arm of TDA. The Mapper algorithm does not appear to have been used in the context of epidemiology yet, except in [5], which uses the Ball Mapper to examine economic and COVID-19 case data in England; this also seems to be the only article that studies the coronavirus pandemic through TDA to date.
2.1. The Mapper algorithm. Given a data cloud X, the Mapper graph is constructed in four steps:
1. Choose a projection (or filter) function f : X → Z from the data to a simpler parameter space Z, often R or R^n.
2. Cover the image f(X) by a collection of open sets U = {U_i}, i ∈ I. Here I is some finite indexing set. The sets U_i are also sometimes called bins.
3. Apply some clustering algorithm to each preimage f⁻¹(U_i) (preimages are also called fibers). This produces j_i clusters C_i^1, C_i^2, ..., C_i^{j_i} in the i-th preimage.
4. Create a graph whose vertices or nodes consist of the set of all clusters; an (unoriented) edge between nodes C_i^j and C_l^k is added if and only if C_i^j ∩ C_l^k ≠ ∅.
This collection of nodes and edges is the Mapper graph, denoted by M(X, U, f).² Figure 1 illustrates this procedure on a simple dataset X that lives in R², with the projection function f : X → R mapping to the real line by forgetting the first coordinate.
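To make the four steps concrete, here is a minimal, self-contained Python sketch of the construction on a toy two-dimensional point cloud in the spirit of Figure 1, with f forgetting the first coordinate. The cover parameters, clustering threshold, and sample size are illustrative choices, and single-linkage clustering stands in for a generic clustering algorithm.

```python
# Toy Mapper on a noisy circle in R^2 with f(x, y) = y, an interval cover
# of f(X), single-linkage clustering of each fiber, and edges between
# clusters that share a point. All parameters are illustrative.
import numpy as np
from itertools import combinations
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 300)
X = np.column_stack([np.cos(theta), np.sin(theta)])
X += rng.normal(0.0, 0.05, X.shape)

f = X[:, 1]                          # step 1: forget the first coordinate
n_bins, overlap = 5, 0.25            # step 2: overlapping interval cover
lo, hi = f.min(), f.max()
w = (hi - lo) / n_bins
bins = [(lo + i * w - overlap * w, lo + (i + 1) * w + overlap * w)
        for i in range(n_bins)]

def cluster_fiber(idx, eps=0.3):
    """Step 3: single-linkage clustering of one fiber at threshold eps."""
    if len(idx) < 2:
        return [list(idx)] if idx else []
    labels = fcluster(linkage(X[idx], method="single"),
                      t=eps, criterion="distance")
    return [[i for i, l in zip(idx, labels) if l == k] for k in set(labels)]

nodes = []
for a, b in bins:
    fiber = list(np.where((f >= a) & (f <= b))[0])
    nodes.extend(frozenset(c) for c in cluster_fiber(fiber))

# Step 4: an edge joins two nodes exactly when their clusters intersect.
edges = [(u, v) for u, v in combinations(range(len(nodes)), 2)
         if nodes[u] & nodes[v]]
print(len(nodes), "nodes,", len(edges), "edges")  # a cycle-like graph
```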
The projection function depends on the context and on the aspect of the data that is deemed most relevant. It could simply forget coordinates, but most useful projections are statistically more meaningful, such as the kernel density estimator, distance to measure, or graph Laplacian. In the applications of Mapper to the study of cancer, for example, a Healthy State Model which decomposes tumor cells into the "disease" and "healthy" components is used to project to the disease part [10]. The domain of the projection is often R, i.e. the projection simply assigns a real number to each data vector.
The image f (X) is usually covered by hypercubes (products of intervals), but open covers of any shape are a priori allowed. The number and the amount of overlap of the open sets determines many features of the graph, including the number of its connected components.
² If one were to create the more complicated simplicial complex that carries even more information than the Mapper graph, then one would also look at all n-fold intersections for n > 1 to determine the n-simplices of the complex.
The task of a clustering algorithm is to determine which subsets of data points in each preimage appear to be closer to each other than to others. This is essentially the discrete version of identifying the connected components of a topological space. Clustering can be done by selecting an appropriate metric or distance function (Euclidean, correlation, etc.) on the data and using it to decide "closeness" via a procedure such as single-linkage hierarchical clustering. In fact, there is no need for the distance function to satisfy the triangle inequality, so technically only a semimetric is required. Clustering can also be done in the image f(X), but some information loss due to projection is possible.
The size of a node can also be displayed; it corresponds to the number of data points in that cluster. Nodes can also be colored according to chosen characteristics of the data. For example, in the applications of Mapper to breast cancer data [10], the nodes are colored according to survival rates.
Note that there are many choices left to the user: the projection function, the number of open sets in the cover of f(X) and the amount of their overlap, and the clustering algorithm. This ad hoc nature of the algorithm is in many ways an advantage, because it allows great generality and flexibility in the types of data that can be treated and the questions that can be asked about it.
Mapper is an unsupervised data analysis method, but is more subtle than other ones such as principal component analysis. It performs dimensionality reduction and clustering while preserving the shape of data. This is possible because Mapper clusters locally but then extrapolates back to global data via the edges. The graph is often a lot easier to interpret than other visualizations like scatterplots and it effectively captures many facets of the original, high-dimensional data.
There are various free technical implementations of the Mapper algorithm, such as Python Mapper [9], TDAmapper (R package) [12], and Kepler Mapper (which we use in this paper) [16].
2.2. COVID-19 data. The data we use in this study comes from the COVID-19 Data Repository by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University.³ Specifically, we use a collection of time series data on the number of confirmed COVID-19 cases reported by 3155 U.S. counties (and territories) starting on 1/22/20. The end date for most of our analysis will be 6/19/20, but some of it will go up to 7/24/20 (see Figures 9 and 10 in particular). Each data point encodes information on the geographic location of a certain U.S. county, a date, and the number of cumulative confirmed COVID-19 cases reported in that county on that date.
More precisely, suppose a U.S. county is located at latitude x and longitude y. Here x is positive when it represents a northern latitude and negative when it represents a southern one (e.g. American Samoa). Similarly y is positive if it is an eastern longitude (e.g. Guam) and negative otherwise.
Let the coordinate w encode the date information relative to the first date in our time series, namely 1/22/20, which is itself given the value 0. Lastly, let z be the number of cumulative confirmed COVID-19 cases in that county on date w. Then our cloud consists of data points p = (x, y, z, w) = (latitude, longitude, cumulative confirmed cases, day).
2.2.1. Normalization. Since the four coordinates of our data vector use varying systems of measurement, which cause their numerical values to have different orders of magnitude, modeling them directly in a 4-dimensional Euclidean space as the basis for Mapper clustering would give the four coordinates disproportionate weights in determining the shape of the Mapper graph. More specifically, coordinates with relatively small numerical values, such as the time coordinate, would barely be evident in the Mapper graph. We hence need to offset the distortion induced by the numerical discrepancies among the coordinates of our data points. We achieve this by separately scaling each coordinate of the input row vectors to column-wise unit norm: if we square a chosen coordinate of all the scaled vectors and sum over the data set (i.e. within that feature), the total is 1.
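A sketch of building and normalizing this point cloud is given below. The CSV file name and column layout are assumptions based on the JHU CSSE repository (eleven metadata columns, including Lat and Long_, followed by one column per date); sklearn's normalize with axis=0 performs exactly the column-wise unit-norm scaling described above.

```python
# Sketch of building and normalizing the point cloud. The file name and
# column layout (11 metadata columns, then one column per date) are
# assumptions based on the JHU CSSE repository.
import numpy as np
import pandas as pd
from sklearn.preprocessing import normalize

df = pd.read_csv("time_series_covid19_confirmed_US.csv")
date_cols = list(df.columns[11:])          # date columns follow the metadata
long_df = df.melt(id_vars=["Lat", "Long_"], value_vars=date_cols,
                  var_name="date", value_name="cases")
long_df["day"] = (pd.to_datetime(long_df["date"])
                  - pd.Timestamp("2020-01-22")).dt.days

# p = (latitude, longitude, cumulative confirmed cases, day)
X = long_df[["Lat", "Long_", "cases", "day"]].to_numpy(dtype=float)

# Column-wise unit norm: the squared entries of each feature sum to 1.
X_norm = normalize(X, norm="l2", axis=0)
assert np.allclose((X_norm ** 2).sum(axis=0), 1.0)
```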
From now on, we will be working with this normalized point cloud. Note that, as the time interval is modified (by changing the end date), the values of the coordinates of the vectors change under the normalization as well, i.e. the same vector might be normalized to different values after the time frame is extended to a later date.
2.3. Application of the Mapper algorithm to the COVID-19 data. Recall the general description of the Mapper algorithm from Section 2.1. To construct a Mapper graph for our point cloud X, we use the Python implementation of Mapper, KeplerMapper, by Van Veen and Saul [16], along with the following specifications (a code sketch of this pipeline follows the list):
1. The projection function is the identity map, namely f : X → R⁴ simply sends a vector to itself. That is, since our data is not high-dimensional, nor are we after any particular statistical features of the data cloud, there is no need to project it.
2. To cover the data (equivalently, the image of f), we use the standard Euclidean metric in R⁴ and the default KeplerMapper cover procedure with Euclidean 4-dimensional cubes and the parameter n = 10. This means that the projection of the data cloud onto each of the axes in R⁴ is covered by 10 overlapping intervals, and each cube is formed as the cartesian product of those intervals. The degree of overlap is δ = 8%. These values are chosen because empirically they appear to give the most informative Mapper graphs. These parameters will be modified slightly for some of the pictures in Section 3.3.
3. For the clustering algorithm, we use the default DBSCAN⁴ clustering offered by KeplerMapper. The advantage of DBSCAN is that it identifies clusters of any shape (unlike other methods, which only look for convex clusters).
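The sketch below shows how these specifications translate into KeplerMapper calls, assuming the normalized array X_norm from Section 2.2.1; the output file name is arbitrary. Passing the data itself as the lens realizes the identity projection.

```python
# Sketch of the Mapper pipeline with the stated parameters, assuming the
# normalized 4-column array X_norm from Section 2.2.1.
import kmapper as km
from sklearn.cluster import DBSCAN

mapper = km.KeplerMapper(verbose=1)
graph = mapper.map(
    lens=X_norm,                                    # identity projection
    X=X_norm,
    cover=km.Cover(n_cubes=10, perc_overlap=0.08),  # n = 10, delta = 8%
    clusterer=DBSCAN(),                             # default DBSCAN
)
mapper.visualize(graph, path_html="covid_mapper.html",
                 title="U.S. COVID-19 Mapper graph")
```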
The colors in our Mapper graphs do not have any mathematical meaning. The nodes are colored according to how the data is ordered, and this is done based on the geographic information, so that nearby counties are colored similarly. We can thus use the colors to add coherency among graphs and to help distinguish geographic locations.
A representative Mapper visualization of our data can be found in Figure 2. This is a visualization of the COVID-19 data from all 3155 U.S. counties and territories as of 6/19/20. 2.3.1. Filtering. As mentioned in Section 2.2.1, each of the four vectors consisting of a particular coordinate of each data point is scaled to unit norm. The resulting normalization for a data point is determined by the relative numerical significance of each coordinate within their own coordinate peers. This normalization is hence substantially affected by outliers in the data set, which can obscure a significant amount of data that would otherwise be numerically distinguishable. For example, looking at To solve this issue, we filter our data set to produce different graphs either with or without these outliers. In the rest of the paper, we refer to data sets that include the aforementioned outlier counties as "unfiltered" and the ones without as "filtered". For instance, Figure 2 makes it visually obvious that New York, Cook and Los Angeles "stand out" in the form of branches in comparison with rest of the data that congregate in the main trunk. On the other hand, Figure 5 provides the Mapper graph representing the same data set, but filtered to exclude data from Los Angeles, Cook, IL and the entire New York state. It is evident from this graph that more branches emerge as a result of filtering. This provides more helpful information on places that are not top-ranked in terms of total COVID-19 cases but still display worrisome trends or regional significance, such as King, WA and Maricopa, AZ. The rest of the paper works with both unfiltered and, for the most part, filtered data sets. We mostly use the former for New York-related analysis and the latter for other prominent U.S. COVID-19 clusters like the Maricopa County in Arizona.
For the filtered and unfiltered graphs that contain more recent dates (up to 7/24/20), see Figures 9 and 10.
Results
In this section, we introduce several significant topological features that could be observed from the Mapper graph, including components and branches, and illustrate what could be inferred from the presence and structure of these features.
3.1. Components. It is clear from Figure 2 that the original point cloud clusters into several disjoint components under the Mapper algorithm. Most of the data lives in the main "trunk," but there are also several smaller isolated components. These disjoint components are formed either by a single isolated node or by a group of nodes connected by edges. Recall that an edge is created between two nodes when there is a data point living in both, and that a node is formed by covering the projections of the point cloud onto each axis in R⁴ with overlapping, equal-length intervals and then clustering the data points within each 4-dimensional cube determined by these intervals. Connected clusters of nodes are thus formed when the data in neighboring nodes display close similarity or proximity, and disjoint components emerge when the data are numerically dissimilar, so that the data points are "far away," i.e. there are gaps between the projections of the data points onto one of the four dimensions, resulting in empty overlaps between the covering sets. Significant variations in any of the four coordinates among data points therefore break the nodes or clusters of nodes into disjoint components.
Because the projection of the point cloud is linear along the time dimension before normalization, this distribution does not result in empty overlaps when the covers are applied. Therefore, instead of breaking the Mapper graph into isolated components, the time dimension accounts in part for the connectedness of the graph and, furthermore, for the internal distribution of nodes within each component. We can thus usually explain the existence of isolated components by variations in the other three coordinates, which concern two factors: (a) geographic variations of the counties represented by the data points; and (b) variations in the number of cumulative confirmed cases reported by each county.
We will discuss the ramifications of these two factors below in Sections 3.1.2 and 3.1.3. Before that, we will say more about how the time component plays a role in keeping the nodes connected and in shaping the internal structure of most components.
3.1.1. Time. The time coordinate is unique in the sense that its values follow a linear progression before normalization. Because of this linearity, the normalized coordinates stay relatively dense, and empty overlaps will thus almost never emerge when covers are applied to the projection of the data onto the time axis, for most choices of n and δ. This implies that time hardly explains the disconnectedness of the graph; in fact, it produces a considerable number of the edges between nodes and clusters.
Moreover, when covered by equal-length intervals, this dense distribution results in highly predictable, segmented internal structures within most clusters in the Mapper graph, so that the main trunk and most other substructures could even be seen as having been formed by weaving together clusters or nodes with data from successive time segments. To see this, let us first remove the time coordinate and produce a Mapper graph with data from a single day as in Figure 6. Without the time component, we can clearly observe the impact of geography on the shape of the graph. If we compare this graph with Figure 7, which is generated with the time coordinate, we see that the latter graph could be regarded as having been formed by horizontally juxtaposing and connecting graphs produced on consecutive dates, each looking like Figure 6, and then applying the clustering algorithm along the time axis to further cluster the points.
The result, as we can observe in Figure 7, is that our Mapper graph displays the evolution of cases through the progression of time in an intuitive manner, where the trunk and most other substructures appear to have "grown" in accordance with time data like the trunk of a tree, building up on data from the first few days in our data set. The right side of the trunk contains most of the U.S. counties at the beginning of the time period in January, and the time progresses toward the left. The left end of the trunk contains the data for June. The "branches" that stick out of the main trunk also follow this pattern. The nodes that make up the branches in Figure 7 are clearly arranged in accordance with the progression of time. Hence each branch represents the development of COVID-19 cases within a certain period of time. We will study these branches in depth in Section 3.2.
3.1.2. Geography. Geographic information is an important factor that affects the distribution of the data throughout the Mapper graph and is readily reflected in this visualization. For example, the nodes representing Hawaii, Alaska, Guam, Puerto Rico and American Samoa in Figure 2 naturally form independent clusters. This is explained by their geographic separation from the U.S. mainland, which is for the most part represented in the Mapper graph as the main trunk. Specifically, the geographic separation of these regions is reflected in the data as a relative numerical disparity in the first two coordinates. With appropriate values of n and δ, this numerical disparity gives rise to disjoint clusters lying outside of the main trunk in the Mapper graph, as they do in Figure 2.
The influence of geography is also more transparent when we remove the time coordinate and produce a Mapper graph for a single day only, as we do in Figure 6. The grouping of data points into clusters correlates strongly with U.S. geography. However, we can also spot an isolated node in the graph representing several counties in New York, Pennsylvania and Massachusetts. This result clearly cannot be attributed to geography, and we thus know that geographic information does not fully determine the shape of the Mapper graph. Since we have already removed the time component from our data before generating this graph, the only coordinate left that could explain this distribution is the number of cases reported.
3.1.3. Number of COVID-19 cases.
As explained in Section 2.2, the third coordinate of our data points records the cumulative number of COVID-19 cases reported daily by each county. In comparison with the geographic and temporal information, this coordinate usually has the largest standard deviation among the four features. Before normalization, its numerical value can range from a few hundred for less-affected areas to several hundred thousand for hot-spots like New York City or, more recently, some areas in Florida and Texas. This coordinate therefore also adds the most variability to the shape of the Mapper graph, which is evident in two main ways. Similar to what happens with the geographic information, a large disparity in the numerical values of this coordinate can distribute data points into disjoint components outside of the main trunk. For example, we see in Figure 6 that there is an isolated node in the upper left corner whose presence cannot be explained by geographic isolation. In fact, this node houses several counties with the highest numbers of cumulative COVID-19 cases reported by the time this graph was generated. The Mapper representations of such counties thus stand out from those of places that report only a moderate number of cases. This type of distribution is also observed in ordinary Mapper graphs such as Figure 2, where temporal data is included.
For the most part, however, elevations in COVID-19 cases are reflected through the emergence of "branches" sticking out of the main trunk. From this point of view, the free-floating clusters or nodes discussed above can be seen as fragmented segments of an otherwise connected branch. We will examine these branches in more detail next.
3.2. Branches. We see in Figure 2 that the data for Los Angeles, CA, Cook, IL and New York, NY presents itself in the form of chains of nodes branching off the main trunk. These three counties ranked as the top three in terms of COVID-19 cases reported at the time this graph was generated. As discussed in previous sections, because the geographic and temporal data have relatively small standard deviations, they keep the graph connected in these dimensions rather than producing features like these branches. This is evident in the "branch-free" part of the trunk, which represents data from the early days of the outbreak, when there was no significant disparity in the numbers of cases reported from different regions. It is therefore the high number of cases reported in these places that forces the nodes representing these counties to depart from the ones representing their geographic neighbors.
With appropriate choices of n and δ, we can thus produce Mapper graphs where regions with relatively large numbers of COVID-19 cases are no longer bound to others through edges in the geographic dimensions but remain connected solely in the time dimension, resulting in the branching feature that we see in Figure 2. Hence, the growth of these branches tracks the incremental increase of cases in these counties.
Additionally, because different segments of the main trunk are built from their own timestamps, as discussed in Section 3.1.1, the place where each branch starts to grow is indicative of the onset of the outbreak that it represents. That is to say, a branch that emerges later in the trunk signifies a more recent outbreak. In this way, we can see that the Mapper graph encodes the development of the pandemic in these hot-spots in an intuitive manner. In the following sections, we offer several case studies on branches of different shapes and elaborate on what can be learned from them.
3.2.1. Segmented branches: New York and others. As we just explained, the emergence of branches signifies potential or existing COVID-19 hot-spots. However, some hot-spots, like New York, are more prominent than others; this empirical phenomenon also has a representation in the Mapper graph. Namely, looking at Figure 2, we observe that the branch representing data from New York County is in fact broken into several segments, each representing COVID-19 cases reported during a certain period of time. This distribution can be regarded as reflecting several stages in the development of the pandemic in New York, during which dramatic spikes of cases occurred.
In particular, 4/9/20, 4/13/20, and 4/24/20 are the critical dates underlying these disruptions in the branch. As is visible in Figure 8, these dates (labeled in red) correspond either to spikes in the number of daily new cases reported in New York County or to a nadir before the next peak. Hence, the disruptions in the branch usually occur as a result of an extended period of elevated daily incidence, which forces the projections of the succeeding series of data points along the "number of cases" dimension to land in hypercubes far enough away to break the continuity established by the densely populated data points along the time dimension. Additionally, we notice that earlier peaks of daily new cases did not break up the branch. Instead, they contributed to the growth in the "length" of the branch by adding a new node. That is, several such peaks that occurred on 3/26/20, 3/31/20, and 4/4/20 (labeled in yellow) correspond to the critical dates that caused a new node to be added to the branch. Therefore, many noteworthy dates in the development of the pandemic in New York County are reflected in the graph either in the form of new nodes or as disruptions of the branch. Several snapshots of the dynamic process through which segments of this branch gradually emerged can be found in Figure 13. In the Mapper graphs generated from filtered data sets, some other prominent COVID-19 hot-spots are likewise exposed in the form of segmented branches, namely Miami-Dade, FL and Maricopa, AZ, visible in Figure 5. Like New York, these places also experienced aggressive growth in daily incidence. Broken branches therefore generally symbolize high levels of daily growth and signify alarming trends in the regions they represent.
3.2.2. Branch complex. A branch complex is formed when a number of closely situated counties report similarly high numbers of cases and display elevated daily incidence. In this case, each county is represented as a branch that departs from the main trunk. However, because of the geographic and temporal proximity of these outbreaks, nodes in their respective branches tend to be connected through the geographic dimension as well. This explains the various entangled branching structures emanating from the main trunk in Figure 5. For example, one can see the complex formed by a number of counties in the Northeast and Wayne, Michigan, as well as a less complicated one formed by counties in California, Nevada, and Arizona. Another small complex is formed by two counties in Florida, namely Palm Beach and Broward.
Because of their prominence, these complexes tend to develop into trunks of their own so that a resurgence of the pandemic in these existing hot-spots will be reflected in the Mapper graph in the form of new branches that grow out of the branch complex instead of the main trunk. On the other hand, the emergence of a new hot-spot in nearby regions enlarges the complex by entangling a new branch into it.
3.2.3. More recent hot-spots. As mentioned in Section 3.2, the timing of the onsets of regional flare-ups is indicated by the position of the resulting branches relative to the main trunk. For example, we can see from Figure 2 that the outbreak in New York occurred prior to the ones in Cook and Los Angeles, because its branch emerged earlier in time. We can similarly distinguish more recent clusters of outbreaks from older ones in this way. For instance, it is clear in Figure 5 that the outbreaks in Dallas, Texas and in Nevada occurred after the ones in Palm Beach and Broward, Florida or in King, Washington, as their branches sit closer to the end of the trunk that corresponds to later dates. They therefore represent a new generation of COVID-19 hot-spots, distinct from the more mature ones in the Northeast U.S. or Washington state.
To ensure coherence in our analysis, we only studied graphs produced with data collected prior to 6/20/20 in the previous sections. For a more recent update of these graphs, see Figures 9 and 10. It is noteworthy that these graphs make several new hot-spots readily visible, including Maricopa County in Arizona, Harris and Dallas Counties in Texas, and several places in Alaska. Additionally, there appear to be resurgences in several existing hot-spots, so that their branches now become more distinguishable in the graph. Such places include Miami-Dade and Broward Counties in Florida and several Californian counties.
These updates thus show that, while embodying a coherent developmental progression of the pandemic, Mapper graphs are themselves fluid objects capable of change. We can obtain a more holistic picture of the entire process by generating Mapper graphs with real-time data and following their evolution. In the next section, we discuss the evolution of Mapper graphs and their branches.
3.3. Evolution of Mapper graphs. Because our data incorporates temporal and geographic information, the resulting Mapper graphs are capable of conveying useful information regarding the gradual development of the COVID-19 pandemic across time and space. In Section 3.2, we studied this by inspecting different clusters as well as the branches sticking out of the main trunk of a particular Mapper graph. In this section, we demonstrate how one can obtain a more holistic picture of the growth of COVID-19 cases in the U.S. by studying the evolution of these features across Mapper graphs generated at different points in time.
For example, Figure 11 shows the evolution of Mapper graphs for unfiltered data through a series of graphs generated from case data reported by 3/6/20, 4/10/20, 5/22/20, and 7/3/20, respectively. Figure 12 shows the evolution of Mapper graphs for filtered data. Since the total number of data points in each graph varies significantly, we adopt different resolution levels n and degrees of overlap δ to preserve relative visual coherence among the graphs.
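The paper does not state which Mapper implementation was used to produce these snapshots; one plausible way to generate such a per-date graph is with the open-source KeplerMapper library, sketched below with a hypothetical input file name and illustrative n_cubes and perc_overlap values.

import numpy as np
import kmapper as km
from sklearn.cluster import DBSCAN

# Hypothetical file of normalized (longitude, latitude, cumulative cases, day)
# rows for all county-day records reported up to a given snapshot date.
data = np.load("covid_counties_2020-07-03.npy")

mapper = km.KeplerMapper(verbose=0)
lens = data                                   # identity lens on R^4, as in the text
graph = mapper.map(
    lens, data,
    cover=km.Cover(n_cubes=10, perc_overlap=0.5),   # resolution n and overlap delta
    clusterer=DBSCAN(eps=0.1, min_samples=1),
)
mapper.visualize(graph, path_html="mapper_2020-07-03.html")

Running the same script on data truncated at different dates, with cover parameters adjusted per snapshot, would yield a sequence of graphs comparable to those in Figures 11 and 12.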
As the number and distribution of U.S. COVID-19 cases evolve over time, the corresponding Mapper graphs evolve accordingly. New components might break off from the main trunk as the regions they represent experience aggressive increases in the number of reported cases. Existing isolated components might also reconnect to the main trunk through the creation of new edges, due to a reduced disparity between the number of cases reported in those regions and their neighbors. Additionally, branches might grow longer as new cases accumulate.
To illustrate these features, we will in the following sections shift the focus to a more local level and provide some detail on the evolution of the Mapper graphs representing the local development of the pandemic in two places.
3.3.1. New York. Figure 13 gives a few evolutionary snapshots of the branch representing hot-spots in New York State. In the earlier snapshots (Figures 13(A)-13(D)), the branch is unstable, with new disconnected nodes emerging frequently with each update, so that the branch appears more fragmented. This reflects the acceleration of the pandemic in New York during that period. Each update to the data set brings significantly higher case numbers, so that the variability of the data projected onto the "number of cases" dimension causes breaks in the branch.
In ensuing phases, however, the number of cumulative cases is so large that the relative relevance of each date's numbers is reduced as a result of the normalization (see Section 2.2.1). It thus becomes harder for new data to affect the shape of the branch. As a result, only the most impactful developments are picked up and reflected through each update. Hence, when the cases stabilize in New York, as we see from Figures 13(D)-13(F), the shape of its branch also becomes less volatile. Instead of causing the branch to break into further segments as in Figures 13(B) and 13(C), new nodes continue to build on the latest segment. This shows that the pandemic, although still growing at a slower pace, is no longer accelerating. In the end, the structures that remain in the most recent graph, Figure 13(F), mark the most numerically relevant turning points in the development of the pandemic in New York, thus presenting a stable picture documenting the entire process.
Figure 11. The evolution of Mapper graphs over time for unfiltered data.
In addition, it is clear that, in contrast to the earlier graphs in which New York is the only prominent branch, we can spot other branching structures in later graphs, easily discernible in Figure 11. This displays not only the development of the pandemic in other areas in parallel with New York, but also the declining relevance of New York on the national scale. Nevertheless, unlike other hot-spots whose branches almost disappeared from the graph because of their diminished relevance (e.g., most Connecticut hot-spots and King County in Washington), New York's branch remains highly visible throughout the updates. This is due to the absolute numerical relevance of New York State's data, which, by 7/23/20, ranks top in the U.S. in terms of state-level cumulative COVID-19 cases. Similarly, the branches representing California and Florida hot-spots also remain visible throughout the updates because of their absolute national relevance.
3.3.2. Massachusetts-New Jersey complex. As elaborated in Section 3.2.2, because of their geographical proximity and national prominence, hot-spots in states like Massachusetts, Connecticut and New Jersey are reflected in the graph in the form of an entangled system of branches. The evolution of this system can be found in Figure 14. Similar to New York, these hot-spots have also gone through a process of acceleration and decline. In contrast, however, the size of this system is significantly smaller in the later snapshots (see Figures 14(E) and 14(F)). This is because of the reduced national relevance of most of the hot-spots in this complex, so that their nodes no longer stand out from the main trunk, as they are no longer distinguishable from their neighboring areas. As a result, only the most severe and longest-lasting hot-spots remain in this agglomeration, now found among more recently developed systems of branches (see Figure 14(F)).
Conclusions and future work
The present paper provides a case study on the application of the Mapper algorithm to COVID-19 data collected in the United States. We have shown that Mapper captures a number of trends in the spread of COVID-19 and provides a more complete picture than those offered by more standard techniques of data visualization. The existence of (segmented) branches indicates hot-spots, and the emergence of such branches correlates with troubling trends in the development of the pandemic in the regions they represent. Geographic proximity between hot-spots is also taken into account, so that branch complexes arise when nearby counties experience similarly significant increases in the number of cases. The Mapper algorithm is also capable of tracking the gradual development of COVID-19 because it incorporates time data, supplying a more geographically and temporally complete picture of the spread of the virus. Continued observation of the evolution of the Mapper graphs is therefore clearly desirable.
There are various directions in which future analysis could proceed. For example, one could replicate the same study for data from different parts of the world. In addition, one could take into account border closures between countries, for example within the EU or between the EU and the rest of Europe.
One could additionally correlate stay-at-home orders issued by each U.S. state with critical changes observed in the Mapper graphs, in the hope of evaluating the impact and effectiveness of such orders. Data about travel could also be included, in the hope that the Mapper reveals something new about how COVID-19 is spread by travelers through the U.S. In addition, one could overlay socio-economic data onto the Mapper graphs, along the lines of what was done in [5], providing important insight into how economic factors correlate with the spread of the disease across the U.S.
Unfortunately, the visualizations that we have so far produced through the Mapper algorithm cannot offer rigorous and accurate predictions of the future development of the pandemic, since such predictions require the consideration of more real-life factors than we studied in this paper. But those factors can be incorporated by imposing more structure onto the Mapper. For example, we could define branches more rigorously using graph-theoretic notions. The length of the branches and the degree of the nodes in them can be assigned real-life meaning, along the lines of what was done by Escolar et al. in [7]. This would provide more insight into the development of hot-spots, and could lead to a more predictive model based on the Mapper.
Another direction is to apply a different popular version of topological data analysis called persistent homology. In this setup, the data is turned into a topological space and one then studies the space's connected components as well as "holes" of various dimensions. Persistent homology has been used to great effect in a number of settings, including biology, neuroscience, medicine, materials science, sensor networks, financial networks, and many others. For introductions to persistent homology and more details about its applications, see [1,2,6,11]. The usefulness of persistent homology as a predictive epidemiological tool was demonstrated by Lo and Park [8] in their study of the Zika virus; emulating this approach might also lead to the construction of a useful predictive model for the coronavirus.
A decision-making model for non-traditional machining processes selection
Article history: Received March 14, 2014; Accepted July 25, 2014; Available online July 31, 2014
Non-traditional machining (NTM) refers to a variety of thermal, chemical, electrical and mechanical material removal processes, developed to generate complex and intricate shapes in advanced engineering materials with high strength-to-weight ratios. Selection of the optimal NTM process for generating a desired feature on a given material requires the consideration of several factors, among which the type of the work material and the shape to be machined are the most significant. The presence of a large number of NTM processes, along with their complex characteristics and capabilities, and the lack of experts in the NTM process selection domain call for the development of a structured approach to NTM process selection for a given machining application. Thus, the objective of this paper is to develop a decision-making model in Visual BASIC 6.0 to automate the NTM process selection procedure with the help of a graphical user interface and visual decision aids. It is also integrated with the quality function deployment technique to correlate the customers' requirements (product characteristics) with the technical requirements (process characteristics). Four illustrative examples are provided to demonstrate the potential of the developed model in solving NTM process selection problems. © 2014 Growing Science Ltd. All rights reserved.
Introduction
Non-traditional machining (NTM) processes are defined as a group of processes that remove excess material from the workpiece surface by various techniques involving mechanical, thermal, electrical or chemical energy, or combinations of these energies (Pandey & Shan, 1981). They do not use sharp cutting tools of the kind needed for conventional machining processes. The material removal rate of the conventional processes is constrained by the mechanical properties of the workpiece material. In conventional machining processes, the relative motion between the tool and workpiece is typically rotary or reciprocating. Thus, the shape of the work surfaces is limited to circular or flat shapes, and except in CNC systems, machining of three-dimensional surfaces is still a difficult task. In contrast, NTM processes harness other energy sources, and material removal is accomplished by electrochemical reactions, high-temperature plasma, and high-velocity jets of liquids and abrasives. In these processes, as there is no physical contact between the tool and the workpiece, they can easily deal with present-day difficult-to-cut materials, such as ceramics and ceramic-based tool materials, fiber-reinforced materials, tungsten carbides, stainless steels, high-speed steels, carbides and titanium-based alloys, for generating small cavities, slots, slits, and blind or through holes at the micro- and even nano-level (Jain, 2005). Conventional machining processes are now being substituted by these NTM processes in response to increased demands in industry for better, more consistent workpiece quality; higher production efficiency in processing hard, tough materials and workpieces with unusual finishing requirements; and the capability of machining parts with complex shapes that require processing beyond the normal capabilities of conventional machining processes.
In order to exploit the full potential of the NTM processes to generate complex and intricate shape features with the required dimensional accuracy, tolerance and surface finish on difficult-to-machine materials, it is always recommended that the best NTM process be selected for the given machining application. Selecting the most appropriate NTM process for a given shape feature and work material combination is often a time-consuming and challenging task, as it requires consideration of several conflicting criteria (such as maximization of material removal rate and minimization of surface roughness, or maximization of efficiency and minimization of power requirement) and a vast array of machining capabilities and characteristics of NTM processes. A particular NTM process found suitable under the given conditions may not be equally efficient under other conditions. Therefore, a careful selection of the NTM process for a given machining problem is essential, considering the following important attributes: a) physical and operational characteristics of NTM processes, b) capability of machining different shapes of work material, c) applicability of different processes to various types of materials, and d) economics of various NTM processes. Thus, the selection procedure involves identifying the relevant possible alternatives among the NTM processes and grading them according to their performance. As NTM process selection is quite difficult, requiring human expertise and being affected by several criteria, there is always a need for a structured approach to appropriate NTM process selection for a given machining application. In this paper, a decision-making model is framed and developed in Visual BASIC 6.0 to automate the NTM process selection procedure for a specified work material and shape feature combination. It is also integrated with the quality function deployment (QFD) technique to take into account the customers' requirements (product characteristics) as well as the technical requirements (process characteristics) for a given NTM process selection problem.
Literature review on NTM process selection
Coğun (1993) used an interactively generated 16-digit classification code to eliminate unsuitable NTM processes from consideration and rank the remaining efficient processes. Yurdakul and Coğun (2003) presented a multi-attribute selection procedure integrating the technique for order preference by similarity to ideal solution (TOPSIS) and the analytic hierarchy process (AHP) to help manufacturing personnel in determining suitable NTM processes for given application requirements. Chakraborty and Dey (2006) designed an AHP-based expert system with a graphical user interface for NTM process selection. It relied on a logic table to discover the NTM processes lying in the acceptability zone, and then selected the best process having the highest acceptability index. Chakraborty and Dey (2007) proposed the use of a QFD-based methodology to ease the optimal NTM process selection procedure. Das Chakladar and Chakraborty (2008) developed an expert system combining TOPSIS and AHP methods for selecting the most appropriate NTM process for a specific work material and shape feature combination. Edison Chandrasselan et al. (2008) developed a web-based knowledge base system for identifying the most appropriate NTM process to suit specific circumstances based on the input parameter requirements, such as material type, shape applications, process economy and some of the process capabilities, e.g. surface finish, corner radii, width of cut, length-to-diameter ratio and tolerance. Das Chakladar et al. (2009) presented a digraph-based approach to solve NTM process selection problems with the help of a graphical user interface and visual aids. Das and Chakraborty (2011) developed an analytic network process (ANP)-based approach to select the most appropriate NTM process for a given machining application, taking into account the interdependency and feedback relationships among various criteria affecting the NTM process selection decision. An ANP solver was also developed to automate the entire NTM process selection procedure. Sadhu and Chakraborty (2011) applied the input-minimized Charnes, Cooper and Rhodes (CCR) model of data envelopment analysis to shortlist the efficient NTM processes for a given application, and then employed a weighted-overall efficiency ranking method to rank those efficient processes. Chakraborty (2011) employed the multi-objective optimization on the basis of ratio analysis (MOORA) method to select the most suitable NTM process for a given work material and shape feature combination.
Karande and Chakraborty (2012a) integrated the PROMETHEE (preference ranking organization method for enrichment evaluation) and GAIA (geometrical analysis for interactive aid) methods for NTM process selection for a specific machining application. Karande and Chakraborty (2012b) applied a reference point approach for choosing the most suitable NTM processes for generating cylindrical through holes on titanium and through cavities on ceramics. Chatterjee and Chakraborty (2013) explored the applicability of the evaluation of mixed data (EVAMIX) method for solving NTM process selection problems with the help of three demonstrative examples. Temuçin et al. (2014) proposed a decision support model to assess the potential of seven distinct NTM processes for cutting carbon structural steel plate with a width of 10 mm. Although past researchers have designed and augmented different expert system models and decision support systems for selecting the best NTM processes for varying machining applications, no attempt has been made to date to identify the most desirable characteristics of the selected NTM processes and guide the process engineers by providing the possible parametric settings of those processes. In this paper, a decision-making model is thus developed to reduce the gap between the prediction of the best NTM processes and real-time machining requirements.
Development of a QFD-based NTM process selection framework
QFD is a 'method to transform user demands into design quality, to deploy the functions forming quality, and to deploy methods for achieving the design quality into subsystems and component parts, and ultimately to specific elements of the manufacturing process', as described by Akao (1990). QFD is thus a way to assure the design quality while the product is still in the design stage. From this definition, QFD can be seen as a process in which the consumer's voice is valued and carried through the whole process of production and services in order to achieve the highest customer satisfaction. Thus, QFD helps in bringing the customer's voice into the production process to reduce unnecessary cost and time by designing the product right the first time. Customers' requirements and their relationships with technical requirements are the driving force of the QFD-based methodology. It enables an organization to build quality into a product or service. The primary tool used in QFD is the House of Quality (HOQ), because of its ability to be adapted to the requirements of a particular product. The HOQ employs a series of matrices to quantify customer requirements, product ratings and technical requirements. The HOQ looks like a house, made up of six major components. These include customers' requirements – what customers want, the phrases customers use to describe products and product characteristics; technical requirements – how customers' needs can be achieved; a planning matrix – showing the weighted importance of each requirement that the organization attempts to fulfill; an interrelationship matrix – establishing a connection between the customer's requirements and the technical requirements of a product; a technical correlation matrix – referred to as the roof of the matrix, depicting relationships among technical requirements; and a technical priority matrix – showing the priorities assigned to technical requirements (Hauser & Clausing, 1988; Govers, 1996). The importance weight of each technical requirement can be calculated through a simple mathematical expression, while identifying the correlation among all these factors (Chan & Wu, 2005). An excellent overview of the applications of the QFD technique in diverse fields of engineering and management can be found in Chan and Wu (2002). Although the HOQ matrix may take different forms depending on the type and complexity of the problem, a simplified form of the HOQ matrix is considered here, taking into account only the prioritized technical requirements at the base of the matrix.
The opening window of the developed decision-making model is shown in Fig. 1; it guides the process engineers in selecting the most suitable NTM process for a given work material and shape feature combination. In the HOQ matrix, as shown in Fig. 2, type of the work material, machining cost, toxicity/contamination, machining time, shape feature, accuracy, aesthetics, power consumption, ease of use, availability of the consumables and tool wear are shortlisted as the major customers' requirements (product characteristics). These customers' requirements are placed along the rows of the HOQ matrix. On the other hand, along the columns of the same HOQ matrix, tolerance (in mm), surface finish (in μm), surface damage (in μm), corner radii (in mm), taper (in mm/mm), power requirement (in kW), material removal rate (in mm³/min), work material, safety and cost are considered as the technical requirements (process characteristics). These product characteristics and process characteristics for the developed HOQ matrix for NTM process selection were shortlisted only after considering the valuable opinions of process experts and after a detailed review of past research works. Among these process characteristics, work material (1-3), safety (1-5) and cost (1-9) are expressed using qualitative scales, while the remaining ones have absolute numerical values.
In the HOQ matrix, the beneficial or non-beneficial character of each customers' requirement is identified by the corresponding improvement driver value (+1 for beneficial criteria and -1 for non-beneficial criteria). Thus, among the considered product characteristics, machining cost, power consumption and tool wear, being non-beneficial attributes, always require minimum values for the selection of the NTM process. On the other hand, in the HOQ matrix, power requirement and cost are identified as the non-beneficial process characteristics. In this HOQ matrix, the relative importance (priority) of the product characteristics can be evaluated using a fuzzy priority scale with triangular membership functions, with scale values set as 1 - not important, 2 - important, 3 - much more important, 4 - very important and 5 - most important. For filling up the HOQ matrix and developing the interrelationship matrix between product characteristics and process characteristics, a fuzzy scale is again proposed as 1 - very very weak relation, 2 - very weak relation, 3 - weaker relation, 4 - weak relation, 5 - moderate relation, 6 - strong relation, 7 - stronger relation, 8 - very strong relation and 9 - very very strong relation. These triangular fuzzy numbers for providing the relative importance of product characteristics and process characteristics are later defuzzified using the centroid method. Once the HOQ matrix is filled up with the necessary information, the weight for each process characteristic is computed using the following equation:

w_j = Σ_{i=1}^{n} Pr_i × ID_i × c_ij,  (1)

where w_j is the weight for the j-th process characteristic, n is the number of product characteristics, Pr_i is the defuzzified priority assigned to the i-th product characteristic, ID_i is the improvement driver value for the i-th product characteristic, and c_ij (the correlation index) is the defuzzified value obtained from the HOQ matrix with respect to the j-th process characteristic for the i-th product characteristic. Instead of setting the priorities of the product characteristics and inputting the relative association between product characteristics and process characteristics in terms of triangular fuzzy numbers by a single user, the developed QFD-based decision-making model can also be used in a group decision-making environment involving the opinions of three process engineers. In that case, the aggregated preferences are obtained using a simple arithmetic averaging technique.
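As a concrete illustration of the defuzzification and weighting step (the original tool was implemented in Visual BASIC 6.0; Python is used here only for brevity, and the priorities, drivers and correlation values below are made-up examples, not the paper's HOQ entries), a minimal sketch of Eq. (1) with centroid defuzzification of triangular fuzzy numbers might look as follows.

import numpy as np

def centroid(tfn):
    """Defuzzify a triangular fuzzy number (a, b, c) by its centroid."""
    a, b, c = tfn
    return (a + b + c) / 3.0

# Illustrative inputs (hypothetical, not the paper's HOQ matrix):
# three product characteristics x two process characteristics.
priorities = [(4, 5, 5), (2, 3, 4), (1, 2, 3)]   # fuzzy priorities Pr_i
drivers = np.array([+1, -1, +1])                  # improvement drivers ID_i
relations = [[(6, 7, 8), (2, 3, 4)],              # fuzzy correlation indices c_ij
             [(8, 9, 9), (4, 5, 6)],
             [(1, 2, 3), (6, 7, 8)]]

Pr = np.array([centroid(p) for p in priorities])
C = np.array([[centroid(r) for r in row] for row in relations])

# Eq. (1): w_j = sum over i of Pr_i * ID_i * c_ij
weights = (Pr * drivers) @ C
print(weights)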
In this NTM process selection model, the following NTM processes, work materials and shape features are considered based on which the best NTM process is to be chosen for a given machining application.
Shape feature: a) deep through cutting, b) shallow through cutting, c) double contouring, d) surface of revolution, e) precision small holes (diameter ≤ 0.025 mm), f) precision small holes (diameter > 0.025 mm), g) standard holes with L/D ratio ≤ 20 (L/D = slenderness ratio), h) standard holes with L/D ratio > 20, i) precision through cavities, and j) standard through cavities.
Demonstrative examples
In order to demonstrate the application modality of the QFD-based decision-making model, developed in Visual BASIC 6.0 on an Intel® Core™ i5-2450M CPU @ 2.50 GHz with 4.00 GB RAM, the following four NTM process selection examples are cited.
Example 1: standard holes on super alloys
In this example, where standard holes are to be generated on super alloys, the process engineer first needs to fill up the HOQ matrix, taking into account the interrelations between the various product characteristics and process characteristics using the adopted fuzzy scale, as exhibited in Fig. 2. Now, when the user clicks the 'Weight' functional key, the priority weights of all the process characteristics are automatically calculated based on Eq. (1). These priority weights are subsequently used for the final ranking and selection of the feasible NTM processes. This selection procedure is based on the computation of the performance scores of the feasible NTM processes, applying the following expression:

Performance score (i) = Σ_{j=1}^{n} w_j × (normalized value)_ij, for i = 1, 2, …, m,

where m is the number of feasible NTM processes and n is the number of process characteristics. The normalized values are obtained from the decision matrix of a given NTM process selection problem.
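Continuing the same hypothetical sketch, the ranking step can be expressed as a weighted sum over a normalized decision matrix; note that the decision-matrix values and weights below are invented, and the max-normalization is an assumption, since the exact normalization scheme is not spelled out in this excerpt.

import numpy as np

# Hypothetical decision matrix: rows = feasible NTM processes,
# columns = selected process characteristics (same order as the weights).
processes = ["AJM", "EBM", "ECM", "LBM"]
decision = np.array([
    [3.0, 0.05, 500.0],
    [1.0, 0.02, 2.0],
    [2.0, 0.03, 400.0],
    [2.0, 0.05, 0.1],
])
weights = np.array([-0.8, 1.2, 0.9])   # from the HOQ matrix; negative = non-beneficial

normalized = decision / decision.max(axis=0)   # assumed linear (max) normalization
scores = normalized @ weights                  # weighted sum per feasible process
ranking = [processes[i] for i in np.argsort(scores)[::-1]]
print(list(zip(processes, scores.round(3))), ranking)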
In Fig. 3, the work material is chosen as super alloys, and the generation of standard holes (diameter = 0.9 mm and depth = 1.1 mm) is the required machining operation. Now, when the 'Feasible NTM process(es)' functional key is pressed, AJM, CHM, EBM, ECM, EDM, LBM and USM are selected as the candidate NTM processes that can generate standard holes on super alloys. In this window, the user also needs to choose a list of criteria from the drop-down menu for the final selection of the NTM process for the specified application. In this example, cost, material removal rate, taper, surface damage, surface finish and work material are identified as the six influencing criteria affecting the NTM process selection decision. Now, on pressing the 'Next' key, the final NTM process selection window of Fig. 4 is displayed along with the corresponding decision matrix showing the relevant characteristics of the shortlisted NTM processes. In this figure, the performance scores and ranks of the feasible NTM processes, as computed using the QFD methodology, are also provided. It is observed that for generating standard holes of the specified dimensions on super alloys, EBM is the most appropriate NTM process, followed by ECM. AJM and USM processes have limited capabilities for this machining application. For the same machining operation, using an integrated TOPSIS and AHP method, Yurdakul and Coğun (2003) derived the ranking of the feasible NTM processes as ECM-LBM-EBM-CHM-AJM-USM-EDM. On the other hand, using the developed QFD-based model, the ranking of the feasible NTM processes is EBM-ECM-CHM-LBM-EDM-AJM-USM. In Fig. 4, it is observed that there is a negligible difference in the performance scores of the EBM and ECM processes, and thus these two NTM processes can be treated as having almost the same capability to generate standard holes on super alloys. Now, when the 'Display' functional key is pressed, the feasible NTM processes are graphically ranked, the detailed characteristics of the finally chosen NTM process (EBM) are shown, a typical EBM setup is displayed, and the parametric settings of the EBM process are provided to guide the concerned process engineer. In this example, it is found that the finally selected EBM process has the characteristic values cost = 1 (minimum), material removal rate = 2 mm³/min, taper = 0.02 mm/mm, surface damage = 100 μm, surface finish = 3 μm and work material = 2 (moderate). The output of the developed model also guides the process engineers to set the values of different EBM process parameters as pulse duration = 0.05-15 ms, beam current = 0.02-1 A, accelerating voltage = 150-200 kV and energy per pulse = 50-150 J. These process parameter values are only tentative settings of the EBM process; the final parametric combinations would entirely depend on the requirements of the process engineers and the technical specifications of the EBM setup. Finally, the process engineer has to fine-tune all these settings to attain the optimal machining performance of the EBM process.
Example 2: standard through cavities on ceramics
Here, the process engineer wants to select the most suitable NTM process in order to generate standard through cavities (with t/w < 10, where t = through cavity depth and w = through cavity width) on ceramic work materials. The corresponding HOQ matrix is exhibited in Fig. 5, from which the priority weights of all the technical requirements (process characteristics) of the NTM processes are obtained. Negative priority weights for the power requirement and cost criteria signify that, amongst the ten process characteristics, these two are of the non-beneficial type, always requiring lower values. In the next window, as shown in Fig. 6, the user needs to input the type of the work material (ceramics) and sub-shape feature (standard through cavity) combination. The developed model then automatically identifies AJM, CHM, EBM, LBM, USM and WJM as the feasible NTM processes for generating standard through cavities on ceramics. In Fig. 6, the user then selects corner radii, cost, tolerance, material removal rate, surface finish and work material as the final set of criteria based on which the most suitable NTM process would be identified from the set of six feasible processes. In Fig. 7, the final decision matrix for this NTM process selection problem, as developed from the database of the model, is exhibited. It is shown that USM is the best NTM process, followed by AJM, for the generation of standard through cavities on ceramics. Among the six feasible processes for the given application, CHM is the least preferred choice. For the same machining application, Karande and Chakraborty (2012b) observed the ranking of the feasible NTM processes as USM-AJM-WJM-LBM-EBM-CHM while employing a reference point-based approach. On the other hand, this QFD-based model provides a rank ordering of the NTM processes as USM-AJM-EBM-LBM-WJM-CHM. It is interesting to note that in both cases, the best and the worst choices of NTM process exactly match.
A photograph of a USM setup, along with the values of its various process parameters (grit size = 10-70 μm, slurry concentration = 25-50%, power rating = 40-60% and feed rate = 0.75-1.5 mm/min), is also provided in Fig. 7. These are only tentative settings of the USM process parameters, intended to guide the process engineers. For attaining the desired machining performance, the identified USM process parameter settings need to be chosen more precisely. The list of the identified process parameters may also vary with the specifications and make of the USM setup.
Example 3: deep through cutting on titanium
In this example, a deep through cutting operation needs to be performed on titanium. In Fig. 8, when the process engineer selects the material type as titanium, and the shape and sub-shape feature combination as deep through cutting, the developed model automatically shortlists AJM, CHM, EBM, ECM, EDM, LBM, PAM and USM as the feasible NTM processes for the specified machining application. Then, based on the capabilities of those NTM processes and the end requirements of the final product, the desired process characteristics are chosen from the technical requirements list as cost, material removal rate, power requirement, surface finish, tolerance and work material, which ultimately drive the selection of the most appropriate NTM process. In Fig. 9, the corresponding decision matrix along with the performance scores and ranks of the shortlisted NTM processes are automatically generated. In this case, cost and power requirement are the non-beneficial attributes having negative priority weights. It is observed that for performing a deep through cutting operation on titanium, PAM is the most suitable NTM process, followed by the EDM process. The capabilities of the PAM process are also extracted as cost = 2 (low), material removal rate = 7500 mm³/min, tolerance = 1.3 mm, power requirement = 50 kW, surface finish = 100 μm and work material = 2 (moderate). This output from the developed model would also guide the process engineers in selecting the related PAM process parameters as plasma velocity = 250-750 m/s, nozzle diameter = 0.5-10 mm, electrode gap = 1-5 mm and gas flow rate = 5-20 l/min, although fine-tuning of those displayed process parameters is necessary to obtain enhanced machining performance.
Example 4: double contouring on duralumin
Here, a double contouring operation on duralumin (an aluminium alloy) needs to be performed using a suitable NTM process. Double contouring is a sub-feature of the surfacing operation. In Fig. 10, after selecting the suitable work material and shape feature combination for the given machining application, when the user presses the 'Feasible NTM process(es)' functional key, ECM, EDM and USM are shortlisted as the feasible NTM processes capable of performing the specified machining operation. The performances of these three NTM processes are then evaluated based on the cost, material removal rate, surface damage, surface finish, tolerance and work material criteria, as set by the user in Fig. 10. Pressing the 'Next' key takes the user to the final NTM process selection window, as provided in Fig. 11. The priority weights of the identified criteria are automatically supplied from the corresponding HOQ matrix. A negative priority weight for the cost criterion identifies it as a non-beneficial attribute for the given problem. The original decision matrix, extracted from the database, is also shown in Fig. 11 along with the calculated performance scores and ranks of the three feasible NTM processes. It is observed that ECM is the best method, amongst the three NTM processes, for the double contouring operation on duralumin alloy. A photograph of a typical ECM setup is also provided in Fig. 11. The process engineer may now select the ECM process along with its various machining parameters, suggested as electrolyte concentration = 10-75 g/l, electrolyte flow rate = 5-15 l/min, inter-electrode gap = 0.1-1 mm, applied current = 200-300 A and applied voltage = 5-35 V. However, fine-tuning of all these ECM process parameters is finally required for achieving the desired machining performance.
Conclusions
In this paper, a decision-making model integrating quality function deployment is developed for selecting the most suitable NTM process from a large number of available alternatives for generating a desired shape feature on a given work material. It also acts as an expert system to ease and automate the NTM process selection procedure. It not only helps in selecting the best NTM process but also provides a comparative study among the alternative processes. Its main advantage is that it does not require the process engineers to have any in-depth technological knowledge regarding the applicability of the various NTM processes. Moreover, it relieves the process engineers from committing errors in the decision-making procedure while considering the process and product characteristics related to the selection of the optimal NTM process. It can also be implemented in a group decision-making environment involving the opinions of three process engineers/decision makers. It can be made more dynamic and versatile by including the hybrid NTM processes, shape features and materials yet to come in the near future.
Global Responsibility in a Historical Context
Contemporary theories of globalization seldom mention history. This is surprising, because 'globalization' is essentially a historical term, describing as it does a historical process. There is less mention still of the philosophy of history, especially given that it has been discredited. And yet, if one probes the accounts in question more deeply, there is no overlooking that nearly all of the relevant discourses operate more or less explicitly with patterns of interpretation borrowed from the philosophy of history. The authors speculate upon which general tendencies of globalization are recognizable, and whether it is more indicative of 'progress' or of the 'downfall' of human civilization. Moreover, the questions of when globalization actually began, what is 'new' about the state of globality achieved thus far and what developments can be expected in future cannot possibly be answered without reflecting on history. After all, the ethical problem of global justice, which demands compensatory measures to alleviate historic harms, requires us to take into account the course of history thus far. Such topics underline that recourse to history, with all of its historico-philosophical implications, is essential if we are to resolve the problems resulting from globalization.
Globalization and history
Considering the phenomenon of globalization from a philosophical viewpoint, one must first note that the global has always been a theme in philosophy (Figuera 2004, p. 9; cf. Negt 2001, p. 42; Toulmin 1994, p. 281). The search for universal concepts and principles that could claim validity for all of humankind is part of the philosophical tradition. From the (early) modern period onward, philosophically grounded human rights were intended to apply to all of the earth's inhabitants equally and universally. In particular, the history of philosophy as it has developed since the Enlightenment proclaimed the existence of a universal or world history in which all peoples and cultures participate (Rohbeck 2010, p. 54; Brauer 2012, p. 19; Roldán 2012, pp. 83–84). This also applies to subsequent philosophies of history that distanced themselves from the ideas of progress and teleology, and even to the later position of posthistoire, which posits the 'end' of history.
Contemporary theories of globalization seldom mention history. There is less mention still of the philosophy of history, especially given that it has been discredited. And yet, if one probes the accounts in question more deeply, there is no overlooking that nearly all of the relevant discourses operate more or less explicitly with patterns of interpretation borrowed from the philosophy of history. The authors speculate upon which general tendencies of globalization are recognizable, and whether it is more indicative of 'progress' or of the 'downfall' of human civilization (Scholte 2005, p. 49; versus Hardt / Negri 2003, p. 296; Kehoane / Joseph 2005, p. 76; Baudrillard 2007, p. 22; Groß 2007, p. 16). This shows that globalization is largely understood as a historical process. The very questions of when globalization actually began, what is 'new' about the state of globality achieved thus far, and what developments can be expected in future cannot possibly be answered without reflecting upon history.
This global history perspective in turn changes the way history is viewed. In traditional theories of history, the focus was on historical time, whose concepts and structures the authors explored (Koselleck 1979/2004; Ricœur 1984). History was equated with 'temporalization', and corresponding studies focused on historical times with their continuities and ruptures, as well as changes in the tempo of history such as stagnation and acceleration. In the context of globalization, the focus is increasingly on historical spaces, so that history is not merely 'temporalized' but also 'spatialized' (Osterhammel 1998, p. 374; Schlögel 2003, pp. 12–13). When we analyze how economic, political, social and cultural spaces are created with time, history comes to appear as a spatial-temporal construct.
My thesis is that the ethics of globalization, too, could benefit from the reflections of historiography and the philosophy of history. For there can be no doubt that catastrophic climate change and global poverty, which are to some degree connected, were 'made' by human beings in the course of their history. From this we may draw the ethical conclusion that the harms caused should be rectified through compensatory measures. The current debate over such measures shows what a central role the treatment of history plays. Those who generally reject the industrial nations' moral duty towards the poor countries already consider the historical context to be irrelevant. But even those who believe that rich countries have an obligation to help make their arguments independent of history. A farther-reaching responsibility that includes compensation for the effects of harmful behavior, in contrast, can only be justified with reference to the course of history thus far. For that reason, I call this type of responsibility 'historical responsibility'. It follows, in turn, that the recourse to history, with all of its historico-philosophical implications, is indispensable for a resolution of the problems resulting from globalization.
Historical responsibility
Leaving aside extreme libertarian and nationalist positions, there is a consensus that people living in rich countries have an obligation to help the needy in poor countries. This expressly also applies to states on a global scale. To be sure, one can distinguish between certain degrees of remedial responsibility, allowing for special obligations towards members of one's own family or nation, which produces a graduated conception of justice (Walzer 1999, p. 38; Zurbuchen 2005, p. 139; Nida-Rümelin/Rechenauer 2009, pp. 314, 319). It does not follow, however, that there is no basis for farther-reaching obligations towards people who live in distant parts of the world. The objection that such a redistribution of goods from rich to poor countries presupposes a 'world state', with all its potential for abuses (Nusser 1997, p. 92), is also not convincing, because, as was explained, individual states and transatlantic organizations are also in a position to do this.
Nevertheless, we need to ask why people are obligated to help other people. Opinions on this question differ: on the one side, we have the position of so-called remedial duties, based on the argument that human beings as such are obliged to help others to the extent that they are able (previous cooperation or even historic connections between these people expressly play no role here); and on the other side, we find the position of outcome responsibility, which assumes that the plight of individuals in poor countries should be viewed as the 'consequence' of acts performed by the inhabitants of rich or powerful countries (at this point the historical aspect comes into play, for such an outcome responsibility is, after all, rooted in a historical process that led to great injustices in the past). I propose that global responsibility for unjustly treated people and peoples is, in turn, in need of both historical and historico-philosophical reflection.
If one examines the argumentation on remedial duties more closely, its position on history appears paradoxical. One argument is that human suffering and death is something fundamentally bad that needs to be overcome in all cases, without creating a practical-historical relationship between the givers and seekers of assistance (Singer 2007, p. 39; Schlothfeldt 2007, p. 77; Schaber 2007, p. 139). Those with a duty to assist function merely as witnesses capable of helping, who observe the needy from afar. Because what is at stake is ultimately an anthropological principle (and thus the unity of the human species, which has a duty to assist), this is a case of abstract cosmopolitanism. The meta-ethical approach of universalist morality, which calls for general assent to the moral norm of remedial measures from an impartial perspective, offers a similar argument (Birnbacher 2007, p. 139). According to this approach, the global moral community assumes responsibility, as efficiently as possible, for the well-being of all human beings, whose standard of living must not fall below the minimum subsistence level.
A further argument states that people are obligated to prevent or alleviate suffering, whatever its origins, as far as possible. The pragmatic boundary consists in limiting remedial duties to preventing the bad rather than promoting the good, so as to demand no excessive sacrifices. Repressed history reappears in this argument; for the power to assist even distant people is largely dependent on the technological means of communication and transport as they have evolved (Singer 2007, p. 40). It is therein that the real conditions for the possibility of global remedial duties exist. And because these conditions change over the course of history, the position of the remedial duty assumes an unanticipated historical dimension. In contrast to traditional ethics, which was limited to the narrow circles of family, region or nation-state, a new ethics of global remedial duty is emerging.
Reversing this argumentation, one could also formulate it as follows: because people have access to novel technological means for assisting very distant people, they should do so. To the extent that one assumes that the alleviation of suffering is desirable in general, this entails no naturalistic fallacy, but it does imply recognition that the new technological instruments create new moral aims or historically conditioned norms, which amounts to a technologically mediated transformation of values. Thus, from this position, the historical refers not to the previous history of the sufferer's plight, but rather to the historically evolved power of the helper.
The second position of outcome responsibility raises the question of whether the argument of pure remedial duties is sufficient, and whether farther-reaching duties cannot be justified.
On the side of the helpers, the problem already exists in the subjects who are obligated to assist. The impression arises here that it is primarily individuals who decide to provide assistance without any previous agreement. Moreover, there is no social differentiation among the affluent or assignment to social systems. One could object that collective actors have far greater significance in global aid actions. Even an appeal for donations to which individuals spontaneously respond represents a coordinated action (Schlothfeldt 2007, p. 77). This applies all the more to states and trans-state organizations, which take action as social institutions.
On the side of those requiring assistance, the problem is that the needy appear solely as victims. They remain passive and anonymous sufferers (moral strangers), to whom no particular relationship of responsibility exists (Birnbacher 2007, p. 132). They figure as mere objects of a donation, which therefore risks becoming authoritarian and arbitrary. It is striking above all that while this assigns particular obligations to the wealthy, it accords no rights to the poor. Duties are thus accompanied by no rights that could be asserted in the form of legitimate demands. There is in fact no preceding interaction between givers and takers. This also cancels all criteria of commutative justice.
If, however, one insists that the poor have certain rights beyond universal human rights, one is referred to concrete contexts associated with the social imbalance that stands to be remedied or alleviated. People frequently speak of cooperation here, which can establish a global obligation (Nida-Rümelin/Rechenauer 2009, p. 316). Those who baulk at referring to interactions with the poor as cooperation (Birnbacher 2007, p. 135) can speak neutrally of an action context that precisely encompasses discrimination against foreign peoples, including their exploitation. Positive or negative cooperation forms the basis of rich, industrialized nations' moral responsibility towards poor, developing countries.
This relationship yields obligations that extend beyond mere remedial duties. To be sure, wealthy citizens and states have the positive moral obligation to assist people in conditions of life-threatening poverty. At the same time, however, they have the negative obligation to minimize the harms they cause (Pogge 2003, p. 243). Because the world order is not just, the poor must be compensated for the disadvantages they suffer. Such compensation is not aid, but the lessening of harm; it is not redistribution from the rich to the poor, but a corrective to an unjust social structure between poor and rich. Not helping the disadvantaged is less reprehensible than denying them justified profits by exploiting their disadvantaged condition (Pogge 2003, p. 244).
This conception of outcome responsibility, however, has historical implications, so I would like to speak of a historical responsibility; for behind this argument is the recognition that world poverty is the consequence of a historical process, which since colonialization has included enslavement, genocide, and exploitation (Pogge 2003, p. 222). To limit oneself, in contrast, to mere assistance is to overlook the roots of the West's enormous economic superiority over centuries of common history that devastated four continents (Nida-Rümelin/Rechenauer 2009, p. 300). The great majority of property rights came about in an unacceptable manner, through violence, conquest and oppression. From this derives the call to demand farther-reaching duties to compensate for the injustices suffered. The legal principle of fault-based liability applies here: those who actively cause distress are responsible for alleviating it, for the creation of an evil generates a particularly high degree of responsibility.
A number of objections have been raised to the historical argument in particular. The scope of the duty to compensate is allegedly limited. On the one hand, it presupposes that the previous harms are actually demonstrable. On the other, it must be proven that past damages continue to affect the present. The first objection states that present-day governments in the developing countries are as much to blame for current poverty as past colonialists. Moreover, the duty to compensate for harmful behavior applies only to those states that actually participated in the injustices committed at an earlier time (Birnbacher 2007, p. 136). Finally, there is the problem of moral subjects, if one holds only individuals and not collective actors accountable.
A particularly widespread objection cites so-called internal causes, stating that the essential factors for global poverty should be sought within present-day developing countries and thus in domestic difficulties. This objection seems to be underlined by the significant differences between the developing countries stemming from local factors, so that in the end the entirety of local factors were the cause of global poverty (quoted in Pogge 2003, pp. 224, 229; Rawls 2002, p. 134; Nagel 2005, p. 123). This appears to be confirmed by the frequency of brutal and corrupt regimes in developing countries today. By implication, the successes of some developing and newly industrialized countries seem to demonstrate that the harmful effects of past colonialization have since faded. Aside from the fact that the thesis of 'internal causes' serves as an excuse for the rich countries, this impression is also the product not least of historians' and sociologists' tendency to focus more on national and regional factors than on worldwide developments.
Conversely, the duty to compensate is justified by the fact that local circumstances cannot adequately account for global poverty. The world economic order, with all its inequities, remains responsible for the failure to thrive of economies in developing countries. Thus, even with continuous economic growth, Africa today has no chance of catching up with Europe's lead of 30 to 1. Given this massive advantage, current inequality is not simply the effect of free choice. In addition, so-called domestic factors are themselves conditioned by the global order, because the current world order contributes significantly to corruption and oppression in developing countries (Pogge 2003, p. 233). This includes the international resource privilege, the disequilibrium between rich natural resources and economic growth, as well as dependence on the global realm (Pogge 2003, p. 235; Kesselring 2005, p. 48). The consequence is that it is not merely a matter of distributing goods fairly, but above all also of eliminating unfair conditions of production.
If nothing else, we need to repudiate the fallacy that the duty to compensate calls remedial responsibilities into question, as if the two types of duties were mutually exclusive. Naturally all rich countries have a duty to help, even if they feel no guilt, or do not accept the concept of outcome responsibility. But according to the farther-reaching argument, those states that were involved in past harm have a particular duty to compensate in the present.
The whole debate underlines how essential the historical aspect is. After all, it rests on the elementary insight that global poverty was 'made' by certain people in the course of history. And ultimately, it is precisely this manner of 'making' history that is subject to debate. Even historical details about which global and local factors should be weighted to what extent and with which spatial and temporal scope play a role here. This harnesses the entire arsenal of historical research, down to methodological questions about the explanatory function of certain data. Even the method of counterfactual explanation comes into play, for example in discussions of whether people in developing countries are better off today than they would have been had they persisted in a fictional state of nature, and had colonialization processes never taken place (Pogge 2003, p. 237).
Finally, there is the pragmatic question of to what extent referring to the past produces quantifiable compensatory measures. With respect to actual assistance, it is far removed from any practical dimension where the theoretical distinction between remedial and outcome responsibilities might play a role. But in fact, the historical aspect is important in the discourses because it leads to payments that, while not truly compensations, can still be understood as partial acknowledgements of injustices suffered. This is clearly connected with the intention to heighten awareness of specifically historical responsibility. Even if such an argument is not directly reflected in appropriate figures, it can contribute to doing more for the poor countries than strictly required in a given situation. Above all, the historical argument calls upon us not just to alleviate suffering, but also to attack its causes by promoting a more just world than the one that has evolved over the course of history. This presents us with the task of seeking alternative possibilities for global developments in the future.
Conclusion
The problem with most ethics devoted to the problem of global poverty is that they are conceived without reference to history. This lack of history represents a double difficulty. On the one hand, universal principles provide no sufficient argument for the remedial duties invoked. On the other, these principles remain so abstract that it is impossible to differentiate both those who need help and those who have a duty to provide it. The fight against global poverty should not be mistaken for universalism. Rather, to assume the global ethical standpoint is to analyze the historical process of globalization, and to derive concrete and differentiated responsibilities from the conditions in which global poverty arises.
As we have seen, the globalization process takes place in diverse historical spaces and times. In the field of the world economy, this means envisioning the new spatial economy of capitalism with the development of natural resources, the increase in sales markets, and the search for favorable conditions of production in distant lands. While it is legitimate to point to local factors as well when making international comparisons, this should not tempt us to underestimate the influence of the world economic system. In the political arena, this means that the role of nation-states needs differentiation. This also applies in particular to developing countries, which were and still are dependent in specific ways on the old colonial powers. Thus, if one points to the inadequacies of these states in order to shift the blame for global poverty onto them, one should equally recall that these domestic deficiencies are also conditioned by the global order. In order to overcome hardship, associations of states are increasingly emerging within the groups of developing and newly industrialized countries that fulfill a similar function to the European Union. In these cases, too, the national and transnational levels overlap to form a supra-territorial space-time structure. It is precisely these new assemblages that require special support from the old industrial nations.
Thus when we consider the history of globalization, we no longer need only to enquire into the historical beginnings of the process, but also to determine more precisely where and when to seek the roots of global poverty. If this question is in the foreground, we can surely place the beginning of globalization earlier: if not already in the earliest phases of colonialization from the sixteenth to the eighteenth centuries, then at least in nineteenth-century colonialism. And if we declare the emergence of electronic networks in the twentieth century as the birth of modern globalization, we need to ask to what degree these networks promote or hinder development in the poor countries. There is no question that this development, too, disadvantages or excludes entire regions.
Finally, addressing the long-term effects of these developments on living conditions in poor countries raises the historico-philosophical question of the continuities and discontinuities in the globalization process. It makes sense that postulating historical responsibility is possible only if one assumes a minimum of historical continuity, for those who treat the ruptures in the globalization process as absolute run the risk of underestimating their long-term effects and playing into the hands of those who deny any historical responsibility. Adhering to sustained development in no way means naively believing in a linear process of modernization, let alone in homogeneous progress. Affirming a history with shared responsibility is perfectly compatible with criticism of the downsides of the course taken by history. Ultimately, the struggle against global poverty presupposes a conviction that living conditions can be improved in the long term. The world's poor are not the only ones who consider such improvements 'progress'. An ethics of global responsibility is quite inconceivable without this regulative idea. These considerations show the fundamental nature of historical patterns of interpretation, and demonstrate just how indispensable historical and historico-philosophical reflections on globalization are. | 5,566.4 | 2018-06-11T00:00:00.000 | [
"Philosophy",
"History",
"Political Science"
] |
Mono-Component Feature Extraction for Condition Assessment in Civil Structures Using Empirical Wavelet Transform
This paper proposes a methodology to process and interpret the complex signals acquired from the health monitoring of civil structures via the scale-space empirical wavelet transform (EWT). The scale-space EWT first separates the signal into individual components; the FREEVIB method, a widely used technique for identifying instantaneous modal parameters, then determines the structural characteristics from these components. The scale-space EWT turns the detection of frequency boundaries into a problem on the scale-space representation of the Fourier spectrum, so that finding meaningful modes becomes a clustering problem on the lengths of the minima scale-space curves. Otsu's algorithm is employed to determine the threshold for this clustering analysis. To retain time-varying features, the EWT-extracted mono-components are analyzed by the FREEVIB method to obtain the instantaneous modal parameters and the linearity characteristics of the structures. Both simulated and real SHM signals from civil structures are used to validate the effectiveness of the method. The results demonstrate that the proposed methodology can separate signal components, even those closely spaced in the frequency domain, with high accuracy, and can extract structural features reliably.
Introduction
The data acquired in Structural Health Monitoring (SHM) of civil structures contain essential information on their condition, and extracting instantaneous features from them is desirable for further structural condition assessment. However, under operating conditions, the dynamic responses of civil structures are usually stored in the time domain and are non-stationary because of the complex excitation. They include complex components induced by distinct loads and intricate load-structure interactions. In addition, the acquired data include unavoidable noise, spikes and trends, further challenging the extraction of useful information.
Time-frequency (TF) approaches were developed to obtain instantaneous features from time-varying signals. These methods provide information on signals in both the time and frequency domains, making them well suited to processing SHM data. Various TF methods have been proposed, including the Short Time Fourier Transform (STFT) [1], the Wigner-Ville Distribution (WVD) [2], the Wavelet Transform (WT) [3], and Empirical Mode Decomposition (EMD) [4]. Although these methods can produce useful results, their limitations are also clear. For instance, the window length fixes the spectral resolution of the STFT. Although the WVD provides good resolution, cross-term interference restricts its application. With the merit of multi-resolution, the WT has been one of the most widely used. Simulated signals with different levels of noise and real SHM signals from two civil structures, namely a high-rise building and a footbridge, are used to demonstrate the effectiveness of the proposed signal processing procedure. The usefulness and accuracy of the scale-space EWT in signal decomposition are validated with the synthetic signals. To show the advantages of the scale-space EWT, the TF representations are compared with those from SWT and EMD. In the experimental study, the extracted instantaneous structural features are confirmed by comparison with the results of previous studies.
Empirical Wavelet Transform
The EWT aims to extract the individual modes of a signal using an adaptively designed wavelet filter bank. The modes extracted by EWT are amplitude-modulated-frequency-modulated (AM-FM) signals with compactly supported Fourier spectra [17]. Separating the different modes amounts to first dividing the Fourier spectrum and then performing WT with the empirical wavelets constructed on the detected supports.
The Fast Fourier Transform (FFT) is applied to the signal s(t) to obtain its frequency spectrum s(ω). Segmenting the Fourier spectrum is the crucial step for the adaptability of EWT. The local maxima of s(ω) are estimated first. Then, the Fourier axis is segmented into individual portions corresponding to different modes, each centered around a local maximum. The number of contiguous segments into which the Fourier axis is divided is denoted N. The limits between segments are denoted ω_i (with ω_0 = 0 and ω_N = π), and each segment is Λ_i = [ω_{i-1}, ω_i]. A transition phase T_i of width 2τ_i is defined around each ω_i; the simplest choice of τ_i is to take it proportional to ω_i, i.e., τ_i = γω_i with 0 < γ < 1 [17]. Similar to the construction of the Littlewood-Paley and Meyer wavelets [3], the empirical scaling function and the empirical wavelets are then defined by Equations (2) and (3), respectively.
The signal is reconstructed by

$$s(t) = W_s^{\varepsilon}(0,t) \star \phi_1(t) + \sum_{k=1}^{N} W_s^{\varepsilon}(k,t) \star \psi_k(t),$$

where $W_s^{\varepsilon}(0,t)$ and $W_s^{\varepsilon}(k,t)$ denote the approximation and detail coefficients obtained with the empirical scaling function and wavelets. The empirical mode $s_k$ is given by $s_k(t) = W_s^{\varepsilon}(k,t) \star \psi_k(t)$.
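As a concrete illustration of the band-splitting idea, the following sketch partitions the Fourier spectrum at given boundaries and inverts each band to obtain the empirical modes. It is a deliberately idealized (boxcar) version: the actual EWT uses smooth Meyer-type transition bands of width 2τ_i, and the boundary frequencies here are assumed inputs rather than products of the detection step described below.

```python
import numpy as np

def extract_modes(x, boundaries_hz, fs):
    """Split x into mono-components by partitioning its Fourier spectrum.

    Idealized (boxcar) stand-in for the EWT filter bank: the true empirical
    wavelets use smooth Meyer-type transitions of width 2*tau_i around each
    boundary, but the band-splitting principle is the same.
    """
    n = len(x)
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    edges = [0.0] + sorted(boundaries_hz) + [fs / 2.0]
    modes = []
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        if i == len(edges) - 2:  # include the Nyquist bin in the last band
            mask = (freqs >= lo) & (freqs <= hi)
        else:
            mask = (freqs >= lo) & (freqs < hi)
        modes.append(np.fft.irfft(np.where(mask, X, 0.0), n))  # mode s_k(t)
    return modes  # summing the modes reconstructs x
```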
Methodology
The flowchart of the methodology is shown in Figure 1. The steps are illustrated as follows.
Scale-Space Boundary Detection
As the most important step in EWT, the boundary detection that builds the wavelet filter bank provides the adaptability to the analyzed signal. Gilles [17] estimated the boundaries from the local maxima and minima of the signal's Fourier spectrum. However, the Fourier spectrum is very sensitive to noise, as is usually the case for SHM signals of civil structures. Such noise may produce redundant local maxima, leading to false boundaries, and Gilles' method requires the number of frequency bands to be predefined. Other spectra that are immune to noise [29,30] can be employed, but special attention must be paid to the risk of missing useful modes. This study takes an alternative route and uses a method other than the original local-maxima-minima one to segment the Fourier spectrum: the scale-space approach, which automatically detects meaningful modes in a spectrum based on the behavior of local minima in a scale-space representation [32]. Finding meaningful modes is then equivalent to a binary clustering problem on the lengths of the minima scale-space curves. To reduce the influence of noise, a wavelet-based approach [7] is employed for denoising before the EWT procedure.
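The wavelet-based denoising of [7] is not spelled out in the text; purely as a hedged stand-in, the following is a minimal universal-threshold sketch with PyWavelets, where the wavelet choice, decomposition level, and soft-threshold rule are illustrative assumptions rather than the method of [7].

```python
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db4", level=4):
    """Soft-threshold wavelet denoising with the universal threshold."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # Noise scale estimated from the finest detail coefficients (MAD estimator).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]
```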
Scale-Space Representation of a Spectrum
Let the function f(ω) be defined over an interval [0, ω_max]. Its discrete scale-space representation is

$$L(\omega, \sigma) = \sum_{n=-M}^{M} f(\omega - n)\, g(n; \sigma), \quad \text{with} \quad g(n; \sigma) = \frac{1}{\sqrt{2\pi\sigma}}\, e^{-n^2/(2\sigma)}, \qquad (11)$$

where g(n; σ) is a sampled Gaussian kernel and M is large enough that the approximation error of the Gaussian is negligible. The scale parameter σ is sampled such that √σ_k = k√σ_0, where k = 1, . . . , k_max are integers, √σ_0 is set to 0.5, and √σ_max equals ω_max.
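A minimal sketch of this scale-space construction follows, assuming the sampling rule √σ_k = k√σ_0 given above; the curve tracker is a simplified nearest-neighbor linkage rather than the exact algorithm of [32].

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def scale_space_minima(spectrum, k_max):
    """Local minima of L(w, sigma_k) at each sampled scale.

    The Gaussian standard deviation is sqrt(sigma_k) = k * sqrt(sigma_0)
    with sqrt(sigma_0) = 0.5, as in the text.
    """
    minima_per_scale = []
    for k in range(1, k_max + 1):
        L = gaussian_filter1d(spectrum, 0.5 * k)
        idx = np.where((L[1:-1] < L[:-2]) & (L[1:-1] < L[2:]))[0] + 1
        minima_per_scale.append(idx)  # the minima count decreases with scale
    return minima_per_scale

def curve_lengths(minima_per_scale, tol=2):
    """Life span L_i of each initial minimum, via nearest-neighbor linkage
    across consecutive scales (a simplification of the tracking in [32])."""
    lengths = []
    for m0 in minima_per_scale[0]:
        pos, life = int(m0), 1
        for idx in minima_per_scale[1:]:
            if len(idx) == 0:
                break
            j = int(idx[np.argmin(np.abs(idx - pos))])
            if abs(j - pos) > tol:
                break
            pos, life = j, life + 1
        lengths.append(life)
    return np.asarray(lengths)
```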
Definition of Meaningful Modes
The number of minima of L(ω, σ) with respect to ω is a decreasing function of the scale parameter σ [32]. In the scale-space plane, each initial minimum produces a curve. Let N_0 denote the number of initial minima and C_i (i ∈ [1, N_0]) the corresponding 'scale-space curves', each with a length L_i that indicates the life span of minimum i. A mode in a spectrum is defined as meaningful if its support is delimited by two local minima whose scale-space curves C_i exceed a certain length T [32]. Consequently, detecting meaningful modes is a two-class clustering problem on the lengths L_i, and the key point is to determine the threshold T automatically.
Determination of Threshold
The Otsu's method separates a spectrum into two classes of C 1 and C 2 , and finds T that maximizes the between class variance where and Details of the Otsu's method can be found in [34].
Time-Frequency Representation of Extracted Modes
The modes extracted by the EWT are AM-FM signals s_j(t) = S_j(t) cos(φ_j(t)) (j = 0, 1, . . . , N). Following the Hilbert-Huang transform (HHT), the HT of a function s_j(t) is defined as

$$H[s_j](t) = \frac{1}{\pi}\, \mathrm{p.v.} \int_{-\infty}^{+\infty} \frac{s_j(\tau)}{t - \tau}\, \mathrm{d}\tau,$$

where p.v. denotes the Cauchy principal value. The analytic form s_{ja}(t) of s_j(t) is derived from the HT as

$$s_{ja}(t) = s_j(t) + i\, H[s_j](t) = S_j(t)\, e^{i \varphi_j(t)},$$

from which, for AM-FM signals, the instantaneous amplitude S_j(t) and the instantaneous frequency φ'_j(t) can be extracted. The TF representation of the signal is obtained by plotting each curve φ'_j(t) with intensity S_j(t) in the TF plane, from which the time variation of the frequency and amplitude of each mode can be observed.
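In code, the instantaneous amplitude and frequency of each extracted mode can be obtained from the analytic signal, for example:

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_features(mode, fs):
    """Envelope S_j(t) and instantaneous frequency (Hz) of one EWT mode,
    via the analytic signal s_a(t) = s(t) + i*H[s](t)."""
    sa = hilbert(mode)
    envelope = np.abs(sa)
    phase = np.unwrap(np.angle(sa))
    inst_freq_hz = np.gradient(phase) * fs / (2.0 * np.pi)
    return envelope, inst_freq_hz
```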
Modal Characteristics
For a time-varying SDOF structure under free vibration, if its parameters vary more slowly than the dynamic response, both the natural frequency ω_0(t) and the damping coefficient h_0(t) are slowly varying functions of time. They can be evaluated by [33]

$$\omega_0^2(t) = \omega^2 - \frac{\ddot{A}}{A} + \frac{2 \dot{A}^2}{A^2} + \frac{\dot{A} \dot{\omega}}{A \omega} \qquad (19)$$

and

$$h_0(t) = -\frac{\dot{A}}{A} - \frac{\dot{\omega}}{2 \omega}, \qquad (20)$$

where A and ω are the instantaneous amplitude and frequency of the response, respectively. For a time-varying SDOF system under forced vibration, the instantaneous frequency of the vibration signal contains, in addition to a slowly varying part, a second term that for ambient vibration is a zero-mean, fast time-varying function (Equation (21)) [35]. The instantaneous frequency and amplitude of the dynamic signals therefore yield time-varying parameters.
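A sketch of Equations (19) and (20) follows, assuming the envelope A(t) and instantaneous angular frequency ω(t) have already been obtained from the analytic signal; the finite-difference derivatives are an implementation choice, not part of the method's definition.

```python
import numpy as np

def freevib_parameters(A, omega, fs):
    """Instantaneous natural frequency and damping per Equations (19)-(20):
        omega0^2 = omega^2 - Add/A + 2*Ad^2/A^2 + Ad*wd/(A*omega)
        h0       = -Ad/A - wd/(2*omega)
    A: envelope, omega: instantaneous angular frequency (rad/s)."""
    dt = 1.0 / fs
    Ad = np.gradient(A, dt)        # first derivative of the envelope
    Add = np.gradient(Ad, dt)      # second derivative of the envelope
    wd = np.gradient(omega, dt)    # derivative of the instantaneous frequency
    omega0_sq = omega**2 - Add / A + 2.0 * Ad**2 / A**2 + Ad * wd / (A * omega)
    h0 = -Ad / A - wd / (2.0 * omega)
    return np.sqrt(np.abs(omega0_sq)), h0
```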
The FREEVIB method [33] relies on the HT. It can obtain the stiffness and damping characteristics and identify the instantaneous modal parameters of SDOF systems under free vibration. The method includes the following steps [36]: (1) taking the HT of the measured dynamic responses and calculating the envelope and the instantaneous frequency; (2) identifying the instantaneous parameters; (3) low-pass filtering the modal parameters and scaling the smoothed modal parameters; and (4) plotting the backbones of the frequency, the damping curves, the frequency response functions (FRF), and the static force characteristics.
Backbone and Damping Curve
From Equations (19) and (20), the instantaneous modal parameters are functions of the first and second derivatives of the signal envelope and of the instantaneous frequency of the dynamic response. Plotting the modal frequency against the envelope yields a skeleton curve, or backbone; similarly, plotting the modal damping against the envelope yields a damping curve. Backbones and damping curves are a traditional instrument of nonlinear vibration analysis [36].
For small and slow nonlinear variations, the instantaneous modal frequency of the system will be close to the instantaneous frequency of the dynamic response, and the instantaneous damping coefficient equals the ratio between the envelope and its derivative.
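Plotting the FREEVIB outputs against the envelope yields the backbone and damping curve directly; a minimal sketch follows (the axis choices are illustrative).

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_backbone_and_damping(A, omega0, h0):
    """Backbone (frequency vs. envelope) and damping curve (h0 vs. envelope).
    Near-vertical backbones indicate amplitude-independent, linear behavior."""
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
    ax1.plot(omega0 / (2.0 * np.pi), A, ".", markersize=2)
    ax1.set(xlabel="frequency (Hz)", ylabel="envelope A")
    ax2.plot(h0, A, ".", markersize=2)
    ax2.set(xlabel="damping coefficient h0", ylabel="envelope A")
    fig.tight_layout()
    plt.show()
```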
Numerical Study
A synthetic signal consisting of three frequency components at 1, 3 and 6 Hz is used in this section to investigate the advantages of EWT. White Gaussian noise is added to the signal to study the noise effect, and an exponential function is embedded to simulate the attenuation of the signal with time. This simulated signal is expressed in Equation (23), where n(t) is the white Gaussian noise. High-level noise at three different signal-to-noise ratios (SNRs), i.e., −2, −6 and −10 dB, is used to test the efficacy of the scale-space EWT method. The 20-s simulated signal with an SNR of −2 dB is shown in Figure 2a. The sampling frequency is 50 Hz. The boundaries for the spectrum segmentation are detected using the scale-space method described in Section 3.1.1, and the result is shown in Figure 2b. The threshold for the boundary detection is calculated to be 14 by Otsu's method; thus, minima scale-space curves longer than 14 mark the boundaries. The empirical wavelets are defined according to Equations (2) and (3) based on these boundaries. Using these basis functions, WT is applied to decompose the signal. The extracted signal components, x_rec1(t) to x_rec3(t), are displayed in Figure 3. All three components are extracted without redundant modes.
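For reference, a signal of this kind can be reproduced along the following lines; the component amplitudes and decay rate are not stated in the text, so the values below are illustrative assumptions.

```python
import numpy as np

fs, duration = 50.0, 20.0                      # 50 Hz sampling, 20-s record
t = np.arange(0.0, duration, 1.0 / fs)
clean = np.exp(-0.1 * t) * (np.sin(2 * np.pi * 1 * t)
                            + np.sin(2 * np.pi * 3 * t)
                            + np.sin(2 * np.pi * 6 * t))

def add_noise(x, snr_db, seed=0):
    """Add white Gaussian noise n(t) at the requested SNR (dB)."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(x.size)
    noise *= np.sqrt(x.var() / 10.0 ** (snr_db / 10.0)) / noise.std()
    return x + noise

x = add_noise(clean, snr_db=-2.0)              # also tested at -6 and -10 dB
```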
Performing the HT on each extracted component, their instantaneous frequencies are obtained. The TF plane is shown in Figure 4a, where the brightness of the instantaneous-frequency lines represents the amplitude of the corresponding components. The time variation of the frequencies and amplitudes of each component can be noted. The added noise makes the frequencies fluctuate continuously, and the brightness of the lines decays with time, meaning that the magnitudes of the components decrease with time. This coincides with the fact that the signal components attenuate with time due to the exponential function in Equation (23).
For comparison, SWT and EMD are also employed. In SWT, the analytic Morlet wavelet is used as the prescribed basis function. The EMD self-adaptively decomposes the signal into N+1 IMFs f_k(t), which are AM-FM components under the assumption that S_j(t) and φ'_j(t) vary much more slowly than φ_j(t). The TF planes obtained by these two methods are shown in Figure 4b,c. The three signal components are separated by SWT with satisfactory frequency resolution. However, in the EMD results the instantaneous frequencies of the second and third components are not as legible, and redundant modes below 1 Hz are produced. Applying the FREEVIB method to the extracted signal components, the instantaneous frequencies and damping ratios for each mode are obtained. To compare the three methods of EWT, SWT, and EMD, the most probable values of these two parameters are selected as the indicators; the results are shown in Table 1. Because the accuracy of the three methods differs, the coefficients of variation (CVs) are also listed in the table for comparison.
It can be observed that all three methods identify the modal frequencies with values very close to the theoretical ones. Larger discrepancies are found between the identified and theoretical damping ratios. Nonetheless, EWT performs better than the other two methods in the value estimation, especially for the third mode. As to the CVs, EWT is superior for the frequencies but less so for the damping ratios. It should be admitted that accurately estimating the damping ratio is difficult for all three methods; an appropriate treatment of the noise may improve their performance.
To test the immunity of the method to noise, higher-level noise with SNRs of −6 dB and −10 dB is added to the signal. The thresholds for spectrum detection are 74 and 155, respectively. The detected frequency boundaries are shown in Figure 5, and the TF planes for the extracted mono-components are shown in Figure 6. The accuracy of the EWT method is degraded by the noise, but the boundary detection shown in Figure 5 remains reliable, and the instantaneous frequencies of the three modes can still be well separated even when the SNR is −10 dB.
To test the immunity of the method to noises, higher-level noises with an SNR of −6 dB and -10 dB are added to the signal, respectively. The thresholds for spectrum detection are 74 and 155 for each. The detected frequency boundaries are shown in Figure 5. As well, the TF planes for the extracted mono-components are shown in Figure 6. The accuracy of the EWT method degrades by the noises. The boundary detection shown in Figure 5 is still reliable. The instantaneous frequencies of the three modes can be well separated even when the SNR is −10 dB. the value estimation, especially for the third mode. As to the CVs, the EWT is superior in those of the frequency, but not so excellent in the damping ratio. It should be admitted that estimating the damping ratio accurately is difficult for all the three methods. An appropriate processing of the noises may improve the performance of the methods.
To test the immunity of the method to noises, higher-level noises with an SNR of -6 dB and -10 dB are added to the signal, respectively. The thresholds for spectrum detection are 74 and 155 for each. The detected frequency boundaries are shown in Figure 5. As well, the TF planes for the extracted mono-components are shown in Figure 6. The accuracy of the EWT method degrades by the noises. The boundary detection shown in Figure 5 is still reliable. The instantaneous frequencies of the three modes can be well separated even when the SNR is −10 dB.
A High-Rise Building
The Canton Tower, a 610 m high TV tower in Guangzhou, China, is another test bed for the feasibility of EWT. It is composed of a 454 m high main tower and a 156 m high antenna mast, as shown in Figure 7. The main tower has a tube-in-tube geometry consisting of a reinforced concrete inner structure and a steel lattice outer structure. Its construction was completed in May 2009. A long-term SHM system was deployed on the tower for real-time monitoring of the structure [37]. As part of this system, 20 uniaxial accelerometers (Tokyo Sokushin AS-2000C) are installed on eight different cross-sections of the main tower (Figure 7) to measure the structural dynamic responses. Accelerometers 01, 03, 05, 07, 08, 11, 13, 15, 17, and 18 collect the response in the short-axis direction, while the others measure in the long-axis direction. The sampling frequency for the acceleration data is 50 Hz. The 24-h data (18:00, 19 January 2010 to 18:00, 20 January 2010) recorded during one construction stage are provided for a benchmark study [38].
This study uses the data collected by accelerometer 11 from 05:00 a.m. to 05:10 a.m. on 20 January 2010. The time history of the data and the corresponding frequency boundaries detected by the scale-space approach are shown in Figure 8. The first five components extracted by EWT are displayed in Figure 9. Applying the FREEVIB method to the extracted mono-components, the instantaneous modal parameters of the tower are derived. The instantaneous frequencies of these modes are shown in Figure 10a. For comparison, SWT and EMD are also used to analyze this signal; their results are displayed in Figure 10b,c, respectively. The frequency resolution of EWT is much higher than that of SWT and EMD, and the instantaneous frequencies identified by EWT are clearly more concentrated. The second and third modes, which are close to each other, are clearly discriminated by EWT. In contrast, the instantaneous frequencies obtained by the other two methods are relatively scattered, especially those determined by EMD. The performance of SWT is much better than that of EMD, but it cannot separate the two closely spaced modes as clearly as EWT.
The histograms of the instantaneous frequencies and damping ratios of the extracted modes are shown in Figure 11a,b, respectively. The most probable value for each mode is taken as the modal parameter of the tower; these values are listed in Table 2 with the corresponding CVs. In [39], the CVs for the modal frequencies are less than 0.005, and those for the damping ratios are less than 0.90. The CVs in Table 2 are much larger. The main reason is that in this study they are derived directly from a randomly selected signal with a duration of only 10 min, which may contain disturbances from the environment, loads and so on. In contrast, the previous study used data segments long enough (one hour) to reduce the noise effects; moreover, the 24 h of one-hour measurements were not used directly but were first decomposed into 70 overlapping one-hour data sets with a 20-min shift. Another observation from Table 2 is that the CVs of the damping ratios are much larger than those of the modal frequencies, mainly because the identified damping ratio of a structure is usually not as stable as the natural frequency.
Figure 11. Histograms for the instantaneous modal parameters of Canton Tower identified by EWT: (a) Instantaneous frequency; (b) damping ratio.
For verification, the identified modal frequencies and damping ratios are compared with those obtained by the vector autoregressive (ARV) technique [40], the data-driven stochastic subspace identification (SSI-DATA) method [40], the enhanced frequency domain decomposition (EFDD) algorithm [41], and an improved automatic modal identification method based on NExT-ERA [41]. The comparison is shown in Figure 12 using bar plots. The modal frequencies identified by EWT agree well with those obtained by the other four methods. Although relatively large differences exist among the damping ratios identified by the different methods, the values obtained from EWT accord satisfactorily with those determined from ARV and SSI-DATA. The backbones estimated by the FREEVIB method for each mode and the corresponding damping curves are displayed in Figure 13a,b, respectively. The identified instantaneous frequencies barely vary with the vibration amplitude. The damping coefficients show no obvious trend with amplitude either, although they are more scattered than the instantaneous frequencies. This implies that under the analyzed condition the Canton Tower can be considered a linear system, which is affirmed by the identified elastic-force characteristics illustrated in Figure 13c.
A Footbridge
A vibration-based continuous SHM system with eight accelerometers and ten thermocouples has been deployed on the Dowling Hall Footbridge (DHF) at Tufts University in Medford. The footbridge is 44 m long and 3.7 m wide. As shown in Figure 14a, it is a two-span continuous steel frame bridge. The eight uniaxial accelerometers were permanently installed on the underside of the bridge [42] to measure the structural vibration under ambient excitation, as displayed in Figure 14b. More details about the bridge and its monitoring system can be found in [42,43].
Data from seventeen weeks have been released to the public; a 300-s record was acquired once an hour at a sampling frequency of 2048 Hz. This study analyzes acceleration data collected by accelerometer number 1 (Figure 14b). The frequency boundaries detected by the scale-space approach are shown in Figure 15, and Figure 16 displays the first six components extracted by EWT.
The instantaneous modal characteristics of the bridge corresponding to the six extracted modes are obtained by the FREEVIB method. The identified instantaneous frequencies are shown in Figure 17, compared with those from SWT and EMD. All six modes are clearly separated by EWT, even the two (modes 5 and 6) that lie quite close to each other. In contrast, no clear and reliable modes are extracted by SWT or EMD. That is, the frequency resolution of EWT is much higher than that of SWT and EMD, implying that it is better suited to separating the SHM signals of a civil structure into meaningful mono-components.
Figure 16. The first six components extracted from the acceleration signal of DHF by EWT (The Acc. represents acceleration).
Figure 18 shows histograms of the instantaneous frequencies and damping ratios of the extracted modes obtained by the FREEVIB method. As in the Canton Tower case study, the most probable value corresponding to each mode is adopted as the modal parameter of the structure. The modal parameters derived in this way are listed in Table 3, together with the corresponding CVs. The CVs of the fifth and sixth modal frequencies are much larger than the others because these two modes are more sensitive to disturbances, and the corresponding instantaneous frequencies fluctuate relatively strongly with time, as shown in Figure 17a. As with the CVs of the damping ratios for the Canton Tower, those of the DHF are also very large because of the inherent instability of damping identification.
To verify the modal frequencies identified by the proposed methodology, they are compared with those from a preliminary dynamic test conducted on 4 April 2009 [44], in which the modal parameters were identified by the SSI-DATA method. The comparison bar charts are shown in Figure 19. The EWT results are very close to those determined from the modal test, and the closely spaced modes, i.e., the fifth and sixth, are well identified by EWT. In contrast, the SSI-DATA method requires careful selection of the order.
Using the extracted mono-components, the backbones, damping curves and elastic forces corresponding to each mode of the bridge are also analyzed by the FREEVIB method (Figure 20). The backbones can be regarded as vertical lines, implying that the identified instantaneous frequencies do not vary with the vibration amplitude. Although the damping coefficients are quite scattered, they also do not change noticeably with amplitude. It is deduced that the DHF can be considered a linear system under the analyzed condition, which is further validated by the identified elastic-force characteristics shown in Figure 20c.
Figure 18. Histograms for the instantaneous modal parameters of DHF identified by EWT: (a) Instantaneous frequency; (b) damping ratio.
Discussion
Due to their large scale and complexity, together with intricate structure-load interactions, the SHM signals acquired from civil structures are complex but contain rich information about the structural condition. For structural evaluation, it is desirable to extract instantaneous structural features from these complex signals. EWT is a promising tool because it is an advanced TF method that combines the merits of both EMD and WT. However, in the processing of real SHM signals, the original EWT method proposed by Gilles [17] is very susceptible to noise, leading to false signal decompositions. Another problem is that after separating the signals, the traditional approach converts the mono-components into free decaying vibration functions even though the structure is subjected to ambient vibrations; consequently, the time-varying features of the structure are ignored. This paper proposes a systematic procedure to extract the instantaneous structural features based on the improved EWT. The accuracy and boundary-detection capability of the scale-space method using Otsu's algorithm are demonstrated by both the numerical study and the experimental studies on real civil structures (Figures 2, 4-6, 8, 10, 15 and 17). Its immunity to noise is also tested with synthetic signals at different noise levels. Compared with EMD and SWT, the proposed scale-space EWT extracts more accurate modes: all the modes of interest were separated from the studied signals, without extra ones. In contrast, as seen in Figures 4, 10 and 17, EMD produces redundant modes that are difficult to interpret, because EMD forces the extraction of IMFs through an ad hoc process even when the underlying components are not intrinsic mode functions. The frequency resolution of EWT is much higher than that of traditional WT methods, including SWT, itself a smart utilization of the output of the classical WT (Figures 4, 10 and 17). Even closely spaced modes can still be well discriminated by the scale-space EWT.
By applying FREEVIB to the mono-components extracted from the vibration signals of civil structures, the structural features, including the instantaneous modal parameters and the linearity characteristics, are derived. In the case studies, the obtained modal parameters proved reasonable in comparison with those identified by traditional methods such as SSI-DATA and EFDD. In contrast with the usual approach that treats the extracted modes as free decaying vibrations, FREEVIB retains the instantaneous features of these parameters. This means that the time-varying performance of a structure can be tracked, which provides a new perspective for SHM-based structural condition assessment: if there is any anomaly in the structure, these characteristics, or derived quantities such as the parameters of their statistical distributions, may change correspondingly. In addition, the linearity of the structural system can be judged from the FREEVIB results.
Spectra other than the Fourier spectrum, such as the standardized autoregressive power spectrum [29] and the pseudo-spectrum obtained by the multiple signal classification approach [30,31], have been used in some studies to reduce the disturbance of noise on boundary detection in EWT. Nevertheless, it is not certain that no useful structural information is overlooked by these alternatives. This study still uses the Fourier spectrum to build the wavelet filter bank, although it is susceptible to noise. Here, the improvement of EWT lies in the boundary detection method, i.e., the definition of the boundaries: the Fourier spectrum is transformed into the scale-space, and the boundaries are taken as those corresponding to scale-space curves above a certain length.
In this study, some noise embedded in the signal was removed at the beginning of the EWT procedure, and the results are as expected. However, the denoising method is trial-based. A thorough investigation into how to remove noise effectively is needed in the future. The best spectrum segmentation method for extracting different modes, including but not limited to the spectra to be used and the algorithms for boundary definition, also remains an open question.
Although the instantaneous structural characteristics of civil structures are obtained by the proposed methodology, how to take advantage of these results in structural condition assessment is another direction to explore.
Conclusions
This paper proposes a systematic methodology to extract instantaneous features of the structural condition from SHM signals of civil structures. The signal is first decomposed into individual components using a scale-space EWT method. Subsequently, the FREEVIB method is applied to the extracted mono-components to obtain the time-varying structural characteristics.
The scale-space EWT is an EWT variant that detects the frequency boundaries for the wavelet filter bank in the scale-space representation of the traditional Fourier spectrum. The boundaries are defined as those whose scale-space curves exceed a certain length threshold. To find the threshold, Otsu's method is adopted in this study. The scale-space EWT aims to improve the original EWT method proposed by Gilles [17], which is sensitive to noise and requires a predefined number of boundaries.
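To make the boundary-detection step concrete, the following minimal Python sketch tracks how long each local minimum of the Fourier magnitude spectrum persists under Gaussian smoothing of increasing scale and thresholds those persistence counts with Otsu's method. It simplifies the full formulation by using the persistence of a fixed frequency bin as a proxy for the scale-space curve length (true scale-space curves drift across bins); the function name and parameters are illustrative, not taken from the original paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from skimage.filters import threshold_otsu

def scale_space_boundaries(x, fs, n_scales=60):
    """Detect spectrum-segmentation boundaries for an EWT filter bank.

    Minima of the Fourier magnitude spectrum that persist over many
    smoothing scales trace long scale-space curves, while noise-induced
    minima vanish quickly; Otsu's method separates the two groups
    without a user-defined threshold.
    """
    N = len(x)
    spectrum = np.abs(np.fft.rfft(x))
    persistence = np.zeros(spectrum.size, dtype=int)

    for k in range(1, n_scales + 1):
        s = gaussian_filter1d(spectrum, sigma=0.5 * k)
        interior = (s[1:-1] < s[:-2]) & (s[1:-1] < s[2:])
        is_min = np.r_[False, interior, False]
        persistence[is_min] += 1          # curve-length proxy per bin

    lengths = persistence[persistence > 0].astype(float)
    thr = threshold_otsu(lengths)         # automatic long/short split
    return np.flatnonzero(persistence > thr) * fs / N  # boundaries in Hz
```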
For modal identification of civil structures, the traditional way to utilize the decomposed signal components is to regard them as free decaying vibrations; however, the time-varying behavior of the modal parameters is thereby ignored. This paper employs the FREEVIB method to process the EWT-extracted mono-components, by which the instantaneous modal parameters are obtained. Moreover, the method can assess the linearity characteristics of the structure based on the backbones, damping coefficients, and elastic forces.
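A minimal sketch of the FREEVIB computation for a single mono-component is given below, following Feldman's formulation for a free vibration x'' + 2h(t)x' + w0(t)^2 x = 0. Edge effects of the numerical derivatives and envelope values near zero are left unhandled here, and the function name is illustrative.

```python
import numpy as np
from scipy.signal import hilbert

def freevib(x, fs):
    """Instantaneous modal parameters of a mono-component via FREEVIB."""
    z = hilbert(x)                                  # analytic signal
    A = np.abs(z)                                   # envelope A(t)
    w = np.gradient(np.unwrap(np.angle(z))) * fs    # inst. frequency [rad/s]
    Ad = np.gradient(A) * fs                        # A'(t)
    Add = np.gradient(Ad) * fs                      # A''(t)
    wd = np.gradient(w) * fs                        # w'(t)

    # Feldman's relations for the undamped natural frequency and damping.
    w0 = np.sqrt(w**2 - Add / A + 2 * Ad**2 / A**2 + Ad * wd / (A * w))
    h = -Ad / A - wd / (2 * w)
    return w0, h, A   # plotting A against w0 gives the backbone curve
```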
Both numerical and experimental studies are conducted to validate the proposed method. Different levels of noise are added to the simulated signals to test its noise immunity. The boundary detection performance of the scale-space EWT and the accuracy of the extracted instantaneous frequencies are verified against the results of EMD and SWT. Real SHM signals from a high-rise building and a footbridge are then analyzed, and the identified instantaneous modal parameters are consistent with the results reported in previous studies.
The proposed signal processing procedure is effective in identifying the instantaneous modal parameters and obtaining the linearity characteristics of civil structures. Moreover, the method can handle signals with high noise levels. Studies on optimal spectrum segmentation and on the utilization of the obtained instantaneous features in structural condition assessment will be carried out in the future.
"Engineering"
] |
A Timeline of Biosynthetic Gene Cluster Discovery in Aspergillus fumigatus: From Characterization to Future Perspectives
In 1999, the first biosynthetic gene cluster (BGC), synthesizing the virulence factor DHN melanin, was characterized in Aspergillus fumigatus. Since then, 19 additional BGCs have been linked to specific secondary metabolites (SMs) in this species. Here, we provide a comprehensive timeline of A. fumigatus BGC discovery and find that initial advances centered around the commonly expressed SMs where chemical structure informed rational identification of the producing BGC (e.g., gliotoxin, fumigaclavine, fumitremorgin, pseurotin A, helvolic acid, fumiquinazoline). Further advances followed the transcriptional profiling of a ΔlaeA mutant, which aided in the identification of endocrocin, fumagillin, hexadehydroastechrome, trypacidin, and fumisoquin BGCs. These SMs and their precursors are the commonly produced metabolites in most A. fumigatus studies. Characterization of other BGC/SM pairs required additional efforts, such as induction treatments, including co-culture with bacteria (fumicycline/neosartoricin, fumigermin) or growth under copper starvation (fumivaline, fumicicolin). Finally, four BGC/SM pairs were discovered via overexpression technologies, including the use of heterologous hosts (fumicycline/neosartoricin, fumihopaside, sphingofungin, and sartorypyrone). Initial analysis of the two most studied A. fumigatus isolates, Af293 and A1160, suggested that both harbored ca. 34–36 BGCs. However, an examination of 264 available genomes of A. fumigatus shows up to 20 additional BGCs, with some strains showing considerable variations in BGC number and composition. These new BGCs present a new frontier in the future of secondary metabolism characterization in this important species.
Introduction
Filamentous fungi are renowned for the synthesis of small bioactive compounds commonly called secondary metabolites (SMs) or natural products. Although the biological role of SMs in the producing fungus was originally dismissed or, at best, unknown, studies in the last two decades have shown that SMs are critical fitness factors for fungal success in varied environments. For example, considering the opportunistic pathogen A. fumigatus alone, its SMs play various roles [1] in defending against or killing host cells [2], acquiring essential micronutrients [3], mediating interactions with other microorganisms [4,5], and protecting against environmental extremes such as UV radiation [6].
Each fungal SM is typically synthesized by a specific biosynthetic gene cluster (BGC), in which the genes required for the SM are co-regulated and arranged contiguously in a locus [7]. The current understanding of BGCs was only realized with the genome sequencing of filamentous fungi, first observed in Aspergillus spp. [8]. Depending on genus and species, the number of BGCs in any fungus may range from 0 to 80, with filamentous Ascomycetes containing, on average, the highest numbers [9]. In most fungi, the products of their BGCs are unknown but are of interest due to potentially useful bioactivities. Several Aspergillus spp. stand out as having the most defined BGCs, with A. fumigatus and A. nidulans having the highest percentage of chemically defined BGC products [10,11].
Efforts to identify the products of A. fumigatus BGCs first centered on those metabolites commonly expressed in growth media and deemed to be important in invasive aspergillosis, such as DHN-melanin, gliotoxin, and fumitremorgin [12]. However, many A. fumigatus BGCs and their products were found to be silent under standard laboratory conditions, and later studies employed either endogenous overexpression strategies, the use of heterologous hosts, or more creative growth conditions such as co-culture with bacteria to activate these BGCs. Through efforts from multiple laboratories, 20 A. fumigatus BGCs have now been defined.
Here, we present a historical account of when each of these 20 BGCs was linked to its product. We posit that the currently most commonly used A. fumigatus isolates, Af293 and A1163, are unlikely to yield new compounds without considerable genetic manipulation and demonstrate that common parameters known to impact SM production, such as an epigenetic mutation or changes in temperature and media composition, primarily alter the titers of known SMs. However, an analysis of BGC composition across 264 sequenced isolates of A. fumigatus suggests that new BGC products may be obtained from diverse strains of this fungus.
Strains and Culture Conditions
The A. fumigatus strains used in this study (A1160 and ∆sirE [13]) were maintained in glycerol stocks and activated on glucose minimal medium (GMM) [14] supplemented with 0.12% (w/v) uracil/uridine. Czapek yeast extract agar (CYA) (Thermo Fisher Scientific, Waltham, MA, USA), regular GMM, and GMM in which the nitrogen source NaNO3 was replaced with NH4Cl were used as the three media for secondary metabolite extraction.
Secondary Metabolite Extraction and LC-MS/MS Analysis
For metabolomic analysis, 1 × 10^5 spores were point-inoculated on GMM, CYA, and NH4Cl agar in triplicate and cultivated at 37 °C for 7 days and at 25 °C for 14 days. After culturing, the total contents of each Petri dish were freeze-dried, lyophilized, and extracted with 30 mL methanol (Sigma Aldrich, St. Louis, MO, USA). The supernatant was then vacuum-filtered, and the solvent was removed under reduced pressure. The same procedure was performed for the control culture media. The final extracts were stored at −20 °C. For sample preparation, HPLC-grade methanol was added to reach 5 mg/mL for each extract, and the samples were sonicated until complete dissolution. LC-MS was performed on a Thermo Fisher Scientific Vanquish UHPLC system coupled with a Thermo Q-Exactive HF hybrid quadrupole-orbitrap high-resolution mass spectrometer equipped with a HESI ion source (Thermo Fisher Scientific, Waltham, MA, USA). Each sample was analyzed in negative and positive ionization modes using an m/z range of 100 to 1500. A Waters XBridge BEH-C18 column (2.1 × 100 mm, 1.7 µm) (Waters, Milford, MA, USA) was used with 0.1% formic acid in acetonitrile (organic phase) and 0.1% formic acid in water (aqueous phase) as solvents at a flow rate of 0.2 mL/min. A volume of 5 µL of each sample was injected. A 20-minute solvent gradient was used: starting at 5% organic, increasing linearly to 98% organic over 10 min, holding at 98% organic for 5 min, then returning to and holding at 5% organic for 5 min.
Feature Detection and Characterization
LC-MS RAW files were converted to mzXML format (centroid mode) using RawConverter (v.1.2.0.1) (The Scripps Research Institute, San Diego, CA, USA), followed by analysis with XCMS. Positive and negative ionization mode data were processed separately. XCMS features with p-values > 0.05 were filtered out, and the volcano plots were created using Excel (2024). The intensities of m/z values of known SMs were extracted from the mass spectra, and graphs were plotted using GraphPad Prism version 10.2.0.
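As a sketch of the feature-filtering logic (performed here with XCMS and Excel), the following Python snippet applies the same significance and fold-change cutoffs used later in the volcano plots (p < 0.05, |log2 fold change| > 1). The file name and column names are hypothetical placeholders for an exported XCMS feature table.

```python
import numpy as np
import pandas as pd

# Hypothetical export: one row per m/z-retention-time feature with group
# means for the mutant and WT plus a per-feature t-test p-value.
features = pd.read_csv("xcms_features.csv")

features["log2fc"] = np.log2(features["mean_mutant"] / features["mean_wt"])
significant = features[(features["pvalue"] < 0.05) &
                       (features["log2fc"].abs() > 1)]
print(f"{len(significant)} features pass p < 0.05 and |log2FC| > 1")
```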
Dataset Generation and Secondary Metabolite Annotation
All annotated and publicly available A. fumigatus genomes (264 genomes) were downloaded from the NCBI database on 1 December 2022 using NCBI's Datasets tool (v14.15.0). To generate the BGC predictions, we ran fungal antiSMASH (v6; default settings) [15] on every A. fumigatus genome. For all 19 known clusters, cBlaster (v1.3.18) [16] was used to check for homologous loci across the other A. fumigatus genomes. All antiSMASH predictions can be found in this paper's Supplementary Materials.
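A minimal sketch of this prediction step is shown below, driving antiSMASH and cBlaster from Python. The directory layout and query file are hypothetical, and the command-line flags are written from memory of these tools' interfaces, so they should be checked against the installed versions.

```python
import subprocess
from pathlib import Path

GENOME_DIR = Path("genomes")        # hypothetical: one GenBank file per isolate
OUT_DIR = Path("antismash_out")

# Run fungal antiSMASH (default settings) on every genome.
for gbk in sorted(GENOME_DIR.glob("*.gbk")):
    subprocess.run(["antismash", "--taxon", "fungi",
                    "--output-dir", str(OUT_DIR / gbk.stem), str(gbk)],
                   check=True)

# Build a local cBlaster database and search it with one known cluster
# (here a hypothetical GenBank extract of the gli BGC).
subprocess.run(["cblaster", "makedb",
                *map(str, sorted(GENOME_DIR.glob("*.gbk"))),
                "--name", "afum_db"], check=True)
subprocess.run(["cblaster", "search", "--mode", "local",
                "--database", "afum_db.sqlite3",
                "--query_file", "gli_bgc.gbk"], check=True)
```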
Grouping the A. fumigatus BGC Predictions into Gene Cluster Families
Homologous BGCs are thought to produce identical or closely related SMs and are grouped into gene cluster families (GCFs) [17]. To determine which of our detected BGCs were members of shared GCFs, all antiSMASH predictions were run through BiG-SCAPE (Biosynthetic Gene Similarity Clustering and Prospecting Engine; v1.1.5) [18]. Seven BiG-SCAPE cutoff values between 0.1 and 0.7, in increments of 0.1, were tested. Values greater than 0.5 were found to be too relaxed, leading to major merging of large GCFs that were separated at lower cutoffs. In the end, an optimal cutoff value of 0.3 (also the default value) was chosen for generating the initial GCF classifications. A network visualization was created with Cytoscape (v3.9.1) [19] for each natural product class and can be seen in Supplementary Figure S2. Each GCF classification was hand-checked using the produced visualizations to resolve bridge clusters or closely related GCFs.
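The cutoff sweep can be scripted in a few lines, since BiG-SCAPE accepts several cutoff values in a single run and writes one clustering per value. The paths are placeholders, and the flags are from memory of the v1 command line, so they should be verified locally.

```python
import subprocess

# Test seven distance cutoffs (0.1-0.7); BiG-SCAPE emits one GCF
# clustering per cutoff in the output directory.
cutoffs = [f"{c / 10:.1f}" for c in range(1, 8)]
subprocess.run(["bigscape.py", "-i", "antismash_out",
                "-o", "bigscape_out", "--cutoffs", *cutoffs],
               check=True)
```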
Creating the Estimated Aspergillus fumigatus Species Complex Phylogeny
We employed a coalescent model-based approach to construct the phylogeny from the 264 A. fumigatus genomes due to this method's scalability and ability to handle missing data [20]. Two outgroup species, Aspergillus lentulus and Aspergillus fischeri, were added to the dataset based on their known close relation to Aspergillus fumigatus [21]. All gene trees were built using the following pipeline. BUSCOs (i.e., 'Benchmarking Universal Single-Copy Orthologs'; 4194 genes in total) were identified using Orthofinder v2.5.2 (default settings) [22,23]. A smaller set of 700 highly conserved BUSCOs was filtered from the larger database for computational scalability. These 700 shared BUSCOs were aligned using MAFFT (v7.520) with the '--auto' parameter [24]. All alignments were trimmed using trimAl (v1.2) with the '-gappyout' parameter [25]. The optimal model of sequence evolution was chosen using the built-in version of ModelFinder in IQ-TREE (v2.2.0) [26]. Lastly, the phylogenetic trees were constructed using IQ-TREE [27] with 1000 ultrafast bootstrap replicates. The resulting 700 gene trees were used as the input for ASTRAL (Accurate Species TRee ALgorithm; v5.7.8) [28], which was run with default settings. All trees created for this publication can be found in Supplementary Table S3.
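The per-gene portion of this pipeline can be sketched as a simple driver script; paths are hypothetical, and the exact binary names (e.g., 'iqtree2', the ASTRAL jar) depend on the local installation.

```python
import subprocess
from pathlib import Path

for d in ("aln", "trim"):
    Path(d).mkdir(exist_ok=True)

# Align, trim, and build one ML gene tree per BUSCO sequence set.
for fasta in sorted(Path("buscos").glob("*.faa")):
    aln, trim = f"aln/{fasta.stem}.aln", f"trim/{fasta.stem}.trim"
    with open(aln, "w") as out:                       # MAFFT writes to stdout
        subprocess.run(["mafft", "--auto", str(fasta)], stdout=out, check=True)
    subprocess.run(["trimal", "-in", aln, "-out", trim, "-gappyout"], check=True)
    # -m MFP invokes the built-in ModelFinder; -B 1000 = ultrafast bootstraps.
    subprocess.run(["iqtree2", "-s", trim, "-m", "MFP", "-B", "1000"], check=True)

# Concatenate the gene trees and estimate the species tree with ASTRAL.
with open("gene_trees.nwk", "w") as out:
    for tre in sorted(Path("trim").glob("*.treefile")):
        out.write(tre.read_text())
subprocess.run(["java", "-jar", "astral.jar",
                "-i", "gene_trees.nwk", "-o", "species.tre"], check=True)
```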
Timeline of Linkage of Natural Products to Specific BGCs
Dihydroxynaphthalene (DHN) melanin was the first metabolite to be linked to a specific BGC (Figures 1 and 2, Table S1). DHN-melanin is a negatively charged, hydrophobic pigment that coats the asexual spore cell wall, and similar pigments are found in plant pathogenic fungi [29]. A requirement for this pigment for full virulence in A. fumigatus was first reported in 1997 [30]. The relationship of melanin biosynthesis across fungi was noted when it was observed that the agricultural fungicide tricyclazole inhibited the conidial pigmentation of A. fumigatus [31]. The first gene, encoding a polyketide synthase, pksP/alb1, was identified in 1998 [32], and the whole BGC, consisting of six genes, was discovered in 1999 [33]. Since then, scores of papers have been published on the role of DHN-melanin in virulence [34], as well as its role as a UV protective agent [6].
Three BGCs were identified in 2005, one being the gliotoxin (GT) BGC (gli BGC) (Figures 1 and 2, Table S1). GT was first detected from A. fumigatus in 1944 [35]. GT is a toxic epidithiodioxopiperazine owing to a disulfide bridge in its structure that is responsible for generating reactive oxygen species (ROS) [36,37]. The gli BGC was identified via a homology search using the sirodesmin BGC, sirodesmin being a similarly structured epidithiodioxopiperazine produced by the plant pathogenic fungus Leptosphaeria maculans [38,39]. The gli BGC consists of twelve genes, including the positive-acting transcription factor GliZ [40]. The toxic nature of GT provides a protective shield for A. fumigatus not only against immune cells during pathogenesis [41] but also against amoebae and bacteria in the environment [42,43].
The fumigaclavine BGC (fga/eas BGC) (Figures 1 and 2, Table S1) has a discovery history similar to that of gliotoxin, in that the BGC was found in 2005 via similarities to a known ergot alkaloid BGC in the plant pathogen Claviceps purpurea [44,45]. Biosynthetic studies took longer to complete: at least two genes, pesL and pes1, were suggested to be necessary for fumigaclavine C synthesis in 2012 [46], but a fuller characterization of the BGC was not completed until 2020 [47]. Fumigaclavine C synthesis is required for full virulence in an insect model of invasive aspergillosis [48].
The fumitremorgin BGC (ftm BGC) (Figures 1 and 2, Table S1) was also identified in 2005 via similarity to a known BGC from C. purpurea [49]. Fumitremorgin is a prenylated indole alkaloid mycotoxin with a diketopiperazine core structure. Initially, the ftm BGC was reported to contain ten genes [50], but four years later, in 2009, genome-mining data [51] corrected this to nine ftm genes [52]. These types of molecules can inhibit cell cycle progression and demonstrate neurotoxicity [53,54], although no direct study of this metabolite in invasive aspergillosis has been conducted. The first genome sequence of A. fumigatus in 2005 [51] noted an NRPS/PKS hybrid, a type of enzyme found in bacteria [55] that had not been widely reported in fungi at the time [56,57]. In 2007, deletion and overexpression of the encoding gene (later called psoA/nrps14) showed it to be involved in the production of pseurotin A [58]. However, it was not until 2013 that the full BGC (pso BGC) was characterized (Figures 1 and 2, Table S1) [59]. In this study, it was found that the genes for fumagillin and pseurotin were physically intertwined in a single supercluster. Pseurotin is not described as having a role in aspergillosis, but interestingly, this metabolite has recently been found to exhibit antibacterial properties and to mediate the bacterial composition of the cheese rind microbiome [60].
The helvolic acid BGC (hel BGC) was characterized in 2009 (Figures 1 and 2, Table S1) [61]. Helvolic acid, first identified from A. fumigatus in 1943 [62], is a terpene-derived mycotoxin [63]. This metabolite is an effective antibacterial agent against Gram-positive bacteria [64,65] that, similarly to pseurotin, appears to mediate fungal domination over bacteria in specific microbiomes [66]. There are no reports of helvolic acid affecting the virulence of A. fumigatus, although the molecule has been found to slow ciliary beat frequency in mammalian cell lines, a result speculated to potentially influence colonization of the airways [67].
Fumiquinazoline was first discovered in 1995 (Figure 2) [68], and its BGC (fmq BGC) was identified in 2010 (Figure 1, Table S1) by searching for an anthranilate-activating NRPS in the A. fumigatus genome [69]. The fmq BGC was expanded to five genes in 2014 [70]. This metabolite selectively accumulates in A. fumigatus conidia [70,71], where it provides some protection against UV radiation [6]. Although there are no studies on the potential impact of this metabolite on virulence, one study reported that fumiquinazoline could inhibit phagocytosis by both the soil amoeba Dictyostelium discoideum and murine macrophages [72].
In 2010, the BGC responsible for synthesizing pyripyropene A (pyr BGC) was identified (Figures 1 and 2, Table S1) [73]. Pyripyropene A was initially identified as a potent inhibitor of acyl-CoA cholesterol acyltransferase, a mammalian intracellular enzyme of the endoplasmic reticulum [74,75]. This metabolite also shows promise as an insecticide, although its mode of action is not fully understood [76]. A recent study shows that insects may evolve resistance to pyripyropene-like insecticides [77].
Another BGC encoding a conidial SM, endocrocin, was identified in 2012 (Figures 1 and 2, Table S1) [78]. The BGC (enc BGC) was identified by searching for products encoded by non-reducing PKSs that lack a thioesterase/Claisen cyclase domain. Endocrocin belongs to a common chemical class called anthraquinones, which is noted for its industrial uses [79]. Endocrocin was shown to inhibit neutrophil migration both in vitro, using human neutrophils, and in vivo, using the zebrafish model [80].
Three BGCs were identified and linked to metabolites in 2013 (Figure 1, Table S1). Fumagillin was identified in 1951 (Figure 2) [81], when it was found to possess amoebicidal activity. This meroterpenoid is a known inhibitor of methionine aminopeptidase 2 [82] and is used as a protectant against Nosema disease of honeybees [83]. As mentioned earlier, the biosynthetic genes encoding fumagillin synthesis are intertwined with those encoding pseurotin production [59]. The resulting 'supercluster' is regulated by the BGC-specific factor FumR/FapR [59,84]. This supercluster is conserved in the distantly related insect pathogen Metarhizium, where, together, the metabolites demonstrate antibacterial activity [85]. Fumagillin is involved in host cellular damage and appears to protect the fungus from phagocytosis [86].
The second BGC characterized in 2013 encoded the iron-chelating non-ribosomal peptide hexadehydroastechrome (Figures 1 and 2, Table S1) [87]. Hexadehydroastechrome is a key player in iron homeostasis in A. fumigatus [88] and also impacts the expression of several other SMs, such as gliotoxin. Overexpression of hexadehydroastechrome increased the virulence of A. fumigatus in a murine model of aspergillosis [87].
The third BGC characterized in 2013, a silent BGC, synthesized neosartoricin, a prenylated polyphenol (Figures 1 and 2, Table S1) [89,90]. This metabolite was found the same year by another research group, which called the compound fumicycline [91]. The neosartoricin (or fumicycline) BGC (nsc/fcc BGC) is conserved in dermatophytic fungi. In one study, the dermatophyte BGC was heterologously expressed in the model fungus A. nidulans [90], and in another study, the pathway-specific TF NscR/FccR was activated in A. fumigatus using the constitutive gpdA promoter, which induced expression of the other BGC genes [89]. In the third study, all genes in the nsc/fcc BGC were activated, and neosartoricin/fumicycline was produced, by co-culturing with Streptomyces rapamycinicus [91]. Bioactivity assays showed that the purified metabolite exhibited moderate activity against S. rapamycinicus, leading the authors to speculate that neosartoricin/fumicycline may contribute to fungal defense.
A fourth conidial metabolite, trypacidin, was linked to its BGC (tpc/tyn BGC) in 2015 by two groups (Figures 1 and 2, Table S1) [92,93]. Trypacidin was first discovered in 1963, and its antibiotic properties were reported at that time [94]. In addition to its antibiotic and antiprotozoal properties, trypacidin possesses anti-phagocytosis activity against macrophages and amoebae [92]. Trypacidin is a spirocoumaranone [95], and endocrocin is actually synthesized as an early precursor in the long trypacidin pathway [93]. Trypacidin is an example of a temperature-regulated natural product [96].
In 2016, a BGC encoding an unusual core synthase was characterized [97,98]. The fumisoquin BGC contains an NRPS-like enzyme that lacks the condensation domain found in canonical NRPSs (Figures 1 and 2, Table S1). This BGC was found by searching for orphan BGCs that are under the control of LaeA [99]. Fumisoquin biosynthesis is notable for its carbon-carbon bond formation between two amino acid-derived moieties, directly analogous to isoquinoline alkaloid biosynthesis in plants, supporting a view of divergent evolution in fungi and plants to synthesize similar end chemistries [97].
The year 2018 marked the addition of a relatively obscure chemical class to the compendium of characterized A. fumigatus natural products: the isocyanides, also known as isonitriles, which are characterized by the bioactive functional group R-N+≡C− (Figures 1 and 2, Table S1) [100]. The first characterized isocyanide compound, xanthocillin, was discovered in 1948 from a culture of Penicillium notatum [101]. In 2011, a co-culture between A. fumigatus and Streptomyces peucetius led to the production of the isocyanide xanthocillin analog BU-4704 [102], and in 2018, a BLAST search targeting bacterial isocyanide synthases (ICSs) unearthed four prospective ICS homologs distributed across three BGCs in A. fumigatus [100]. Subsequently, it was revealed that one of these ICS BGCs encoded the tyrosine-derived xanthocillin. Overexpression of the BGC TF, xanC, greatly increased product synthesis, allowing the discovery of the copper-chelating properties of this compound, which were related to its antibacterial activity [103].
In 2019, fumihopaside A was characterized as a new hopane-type glucoside (Figure 2). Fumihopaside A is synthesized by the four-gene afum BGC (Figure 1, Table S1). This cluster was identified via a search for natural products with glycosyl modifications [104]. One of the four genes, afumC, encodes a glycosyltransferase, which was the key to finding this glucoside. The products of the afum BGC, fumihopasides A and B, were confirmed both by deleting members of the cluster in A. fumigatus and by expressing the cluster in A. nidulans. This study demonstrated that fumihopaside A enhances the thermotolerance and UV resistance of A. fumigatus.
Similar to the findings that S. peucetius induced BU-4704 synthesis [102] and that S. rapamycinicus induced fumicycline A production [91], the α-pyrone polyketide fumigermin was identified in 2020 (Figures 1 and 2, Table S1) during A. fumigatus/S. rapamycinicus co-culture [105]. Fumigermin was shown to inhibit the germination of S. rapamycinicus spores. Co-culturing with other Streptomyces spp., such as S. iranensis, S. coelicolor, or S. lividans, also induced fumigermin production. This finding added to the accruing data showing that some SMs and their BGCs are activated as defense molecules in microbiome settings.
The year 2022 brought the characterization of two more isocyanide products, fumivaline and fumicicolin, both generated from the copper-responsive metabolite (crm) BGC consisting of four genes (Figures 1 and 2, Table S1). The crm cluster is activated by low copper concentrations. Within the crm BGC, there is a core gene encoding a multi-domain ICS-NRPS-like enzyme called CrmA. This enzyme modifies L-valine into (S)-2-isocyanoisovaleric acid, commonly referred to as valine isocyanide, which serves as a crucial intermediate in the synthesis pathways of both fumivaline and fumicicolin [106]. While fumivaline A and fumicicolin A lack individual antibacterial/antifungal properties, their combined treatment synergistically inhibits the growth of some bacteria and fungi.
In addition to the isocyanide products, the sphingofungin BGC was also identified in 2022 (Figures 1 and 2, Table S1) [107]. Sphingofungin is a polyketide-derived compound that was first isolated from A. fumigatus in 1992 [108]. Sphingofungin is known to inhibit serine palmitoyltransferase (SPT) [109], which plays a crucial role in the biosynthesis of sphingolipids (SLs) [110]. Recently, using confocal microscopy, it was shown that the synthesis of sphingofungins and SLs is partially co-compartmentalized in the ER and ER-derived vesicles [111].
The most recent A. fumigatus SM to be linked to a BGC is sartorypyrone (Figures 1 and 2, Table S1) [112]. Sartorypyrone was produced by heterologously expressing its six-gene BGC in a production strain of A. nidulans. Like fumicycline A, sartorypyrone is a meroterpenoid, a chemical class derived from hybrid polyketide (or non-polyketide) and terpenoid biosynthesis. Sartorypyrones are known metabolites of other fungi and have been reported to exhibit antibacterial activity against bacteria including S. aureus, B. subtilis, E. coli, and P. aeruginosa [113,114].
Shifts in Secondary Metabolite Profiles Mediated by Epigenetics, Temperature, and Media Alter Titers of Commonly Expressed Compounds
Many BGC/SM linkage discoveries were made in one of two commonly assessed A. fumigatus isolates, Af293 or A1163, both of which have fully annotated genomes [115]. As seen in the timeline, the first BGCs to be fully characterized were those associated with the metabolites (e.g., melanin, gliotoxin, fumigaclavine, fumitremorgin, fumagillin, helvolic acid) produced under standard laboratory conditions. Greater efforts, including co-culture, unusual growth conditions (metal extremes), and molecular manipulations in heterologous hosts, were needed to characterize additional BGCs and their metabolites (e.g., fumihopaside, fumivaline, fumicicolin, fumigermin, sphingofungin, sartorypyrone). Thus, it appears that a restricted set of SMs is preferentially synthesized by A. fumigatus, at least in isolates Af293 and A1160, unless the fungus is challenged by more extreme abiotic or biotic factors.
To address this observation, we grew the WT and a sirtuin E deletion strain (∆sirE) in different media and at different temperatures to analyze any shifts in SM production. The ∆sirE strain was chosen because the loss of SirE, a lysine deacetylase, had previously been reported to impact secondary metabolite profiles [13]. Volcano plots of all conditions demonstrate that each variable, namely strain, media, and temperature, altered the SM profile (Figure 3). Significance was set at p < 0.05, and log2 fold change cutoffs were set at 1 and −1 (2-fold higher and lower). Known and putative SMs produced by A. fumigatus in the different strains were identified based on analyses with the MAVEN, SIRIUS, and XCMS platforms (Figures 3 and 4). About 27 known metabolites were identified and are listed in Table S2; their m/z values were confirmed either by comparison with standards or via searches of the PubMed database and previous papers. Their abundance was determined from the height intensity of the m/z values (precursor ions) displayed in the ion chromatograms and mass spectra.
Of the known SMs, only nine showed significant differences across the six treatments: fumigaclavine A, fumigaclavine C, fumitremorgin C, fumagillin, fumiquinazoline A, pyripyropene A, pseurotin A, gliotoxin, and terezine D (the stable precursor to hexadehydroastechrome) (Figures 3 and 4). The overall observation was that the pattern of regulation changed with each condition; i.e., different metabolites were up- or downregulated in the ∆sirE strain compared to the WT, depending also on media and temperature. For instance, fumigaclavine A was upregulated in ∆sirE in GMM at 37 °C but downregulated at 25 °C. The most noteworthy observation from the entire study was that only those SMs commonly produced under standard laboratory conditions were reliably measurable regardless of treatment.
On the other hand, when analyzing the global features produced in these treatments, there were large differences dependent on strain, temperature, and media. Features (unknown metabolites) were characterized as analytes with specific mass-to-charge (m/z) values and retention times observed when the samples were subjected to LC-MS. Whether these features could represent new SMs is unknown. Additionally, principal component analysis (PCA) was performed to visualize the relationships between the different strains, media, and temperature conditions. The scores plot (Figure S1) illustrated that the A. fumigatus WT and ∆sirE strains were grouped into five main clusters. Three main clusters were apparently grouped based on media composition: all strains grown with NH4Cl as the nitrogen source clustered together. The scores plot also suggested that the WT strain cultivated at different temperatures in the same media shared the same feature characteristics. The most striking observation was that the ∆sirE strain grown at 25 °C on both CYA and GMM clustered individually and separately from the rest of the growth conditions, suggestive of a suite of metabolites unique to ∆sirE at these temperatures.
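A minimal sketch of the PCA step, performed here on the LC-MS feature intensities, is shown below; the input file and its layout (samples in rows, feature intensities in columns) are hypothetical placeholders.

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical matrix: rows = samples (strain/media/temperature
# combinations), columns = LC-MS feature intensities.
X = pd.read_csv("feature_matrix.csv", index_col=0)

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
for sample, (pc1, pc2) in zip(X.index, scores):
    print(f"{sample}\tPC1={pc1:.2f}\tPC2={pc2:.2f}")  # basis of the scores plot
```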
Variation in BGCs in A. fumigatus Strains
Could the SM output of A. fumigatus vary across different isolates? As mentioned above, almost all BGC characterization has been performed using isolates Af293 or A1160, which have typically been reported to harbor ca. 34-36 BGCs [116,117]. However, recent studies in other fungi have shown significant variation in BGC composition among isolates. For example, a recent analysis of 94 isolates of A. flavus showed that BGCs could be grouped into 'core' BGCs (contained by all or most strains) and 'accessory' BGCs that grouped with phylogenetic populations [118]. These A. flavus genotype differences also correlated with chemotypic differences in SM production. Further, previous work revealed as many as five unique variations in BGCs across 66 strains of A. fumigatus [117]. These studies indicated that more chemical diversity might exist within the A. fumigatus species complex than can be seen by studying Af293 or A1160.
We aimed to reproduce this analysis with the 264 publicly available annotated A. fumigatus genomes. The popular genome mining program antiSMASH [15] was run on every genome to generate initial BGC predictions. For all 20 characterized BGCs, the presence or absence of these clusters in the genomes was manually double-checked using cBlaster [16], a tool that identifies homologous, co-localizing sets of genes. In such cases, we looked for the presence of the entire reference cluster, not only the key backbone synthase/synthetases. Lastly, we ran BiG-SCAPE on all the antiSMASH-generated predictions to organize the BGCs into families of evolutionarily related BGCs. Such families are often referred to as gene cluster families (GCFs) and are thought to produce the same or highly similar classes of SMs. Our generated GCFs were compared to those detected by our group in 2017 [117] to determine what new BGCs could be detected with more modern genome mining algorithms and a larger dataset of A. fumigatus genomes.
Figure 5 illustrates the presence and absence of the 20 defined BGCs described above, as well as any uncharacterized GCFs from our analysis that matched those previously identified by Lind et al. (2017) [117]. Overall, we found that SM-producing BGCs within the A. fumigatus species complex were remarkably conserved across different populations (Figure 5). The two exceptions appeared to be the fumigermin and fumihopaside BGCs. While the sporadic distribution patterns of these clusters could indicate a more accessory role within the ecology of A. fumigatus, further work is needed to verify that the observed pattern is not a consequence of the fumigermin or fumihopaside genes not always co-localizing within genomes, a pattern that would violate the assumptions made by antiSMASH and cBlaster in detecting the SM-producing genes. Additionally, as the quality of the genomic sequences varies among the 264 published genomes, some false negatives in our dataset could be the result of highly fragmented assemblies and poor annotations. In addition to the previously characterized BGCs, our re-analysis identified 20 novel GCFs (Figure 6) not previously found by Lind et al. (2017) [117]. While the majority of these novel GCFs are present in the genomes of both Af293 and A1160, some, such as terpene-8, are found only in specific populations of A. fumigatus. Overall, 11 of the new GCFs were found in nearly all of the 264 isolates, while the remaining 9 were absent from certain populations or were present in a much smaller number of genomes. We believe that the increase in GCFs can be largely attributed to six years of advancements and software updates to the antiSMASH program [15,119]. Additionally, the identification of some novel population-specific GCFs underscores the importance of including more non-traditional isolates in natural product studies.
Discussion
Filamentous fungi, as diverse and adaptable organisms, are prolific producers of secondary metabolites, which contribute to their survival, reproduction, and interactions with other organisms in their environment [7]. Among the vast array of fungal species, A. fumigatus stands out as a significant pathogenic fungus with high mortality rates and increasing antifungal resistance, leading it to be placed on the WHO pathogen list of critical importance [120]. Several of its SMs have been shown to mediate various virulence traits of this pathogen [1,121]. Some, if not all, of these metabolites are also important in protection from various abiotic (e.g., UV, copper starvation) and biotic (e.g., microbial competitors) stressors [1,121]. The bioactivities of some A. fumigatus SMs have even led to practical applications, such as fumagillin treatment of the Nosema disease of honeybees [83]. Pyripyropene A and its analogs, potent and selective sterol O-acyltransferase 2 (SOAT2)/acyl-coenzyme A:cholesterol acyltransferase 2 (ACAT2) inhibitors, hold promise for the treatment of atherosclerotic disease [74] and for use as insecticides [122].
The linkage of 20 BGCs to specific products places A. fumigatus, along with A. nidulans [10], among the most thoroughly vetted species with regard to an understanding of its natural product potential. Not surprisingly, the first defined BGCs were those that correlated with commonly produced SMs. Several SMs were first chemically characterized, and the encoding BGC was then identified either by gene similarity to known biosynthetic genes in other fungi (e.g., DHN melanin [33], gliotoxin [39], fumigaclavine [44,45], and fumitremorgin [49]) or by chemical logic (e.g., if the SM was a peptide, researchers looked for an NRPS), which led to BGC discovery for pseurotin A [55], helvolic acid [61], and fumiquinazoline [69].
Where does this leave the field of SM discovery in A. fumigatus? Most laboratory methods (use of mutants, different media, temperature shifts) to assess SM production in this fungus merely result in titer changes in the commonly expressed metabolites, as illustrated not only in this work (Figures 3 and 4) but in many other studies [123][124][125][126]. A. fumigatus has been reported to contain 34-36 BGCs, but with advanced computational tools, we found the species to harbor considerably more BGCs, nearing 50 in some strains (Figures 5 and 6). Furthermore, many of the BGCs, both the characterized 20 and the uncharacterized ones, are missing in some A. fumigatus isolates or contain mutations that result in loss of product [93,117,127,128]. To characterize the full set of BGCs in the two commonly used laboratory strains, Af293 and A1160, overexpression technologies, be they endogenous or heterologous, will likely be required. But the promise of additional and unknown chemistries resides in the large number of new putative BGCs in other isolates of this fungus (Figure 6).
Figure 1. Timeline of BGC characterization in Aspergillus fumigatus. Yellow dots indicate secondary metabolites linked to BGCs and the years when their linkages were identified.
Figure 2. Structures of secondary metabolites linked to characterized A. fumigatus BGCs.
Figure 3. Volcano plots of significant features produced in the A. fumigatus ∆sirE strain compared to the WT when grown in diverse media and at two temperatures. Co-regulation of numerous SMs in both strains varied with media composition (GMM, CYA, NH4Cl) and temperature (25 and 37 °C). Features with log2 fold change <−1 or >1 and significant differences (p < 0.05) are shown.
Figure 4. Intensities of m/z values of known SMs found in the A. fumigatus WT and ∆sirE strains. Different metabolites were up- or downregulated in the ∆sirE strain compared to the WT, depending on media and temperature.
Figure 5. The BGCs of previously characterized secondary metabolism-producing genes in different isolates of A. fumigatus. The species tree was estimated using a coalescent model on a dataset of 700 single-copy ortholog gene trees. The heatmap shows the presence (blue) or absence (cream) of various secondary metabolite GCFs across 264 A. fumigatus isolates. The yellow lines indicate Af293 and A1160.
Figure 6. Novel BGCs identified in this study. The species tree is identical to that in Figure 5. The heatmap shows the presence (red) or absence (cream) of all novel/unknown GCFs found in the A. fumigatus isolates.
"Biology",
"Chemistry",
"Environmental Science"
] |
A Novel Design and Evaluation of a Low-Power, Efficient Fault-Tolerant Reversible ALU Using QCA: Applications of Nanoelectronics
: Reversible logic based on quantum-dot cellular automata (QCA) is a key requirement for achieving nano-scale architectures that promise significantly high device integration density, high-speed computation, and low power consumption. The arithmetic logic unit (ALU) is the core component of a processor for processing and computing. The primary objective of this work is to develop a multi-layer fault-tolerant arithmetic logic unit using reversible logic in QCA technology. The reversible ALU is divided into an arithmetic unit (RAU) and a logic unit (RLU). A reversible 2:1 MUX using the Fredkin gate has been implemented to select either the arithmetic or the logical operations. In addition, to improve the efficiency of the arithmetic operations, a novel QCA reversible full adder is implemented. Fault-tolerant reversible logic gates are used to build the ALU. The proposed reversible multilayer QCA ALU is designed to carry out eight arithmetic and sixteen logical operations with a minimum number of gates, constant inputs, and garbage outputs compared to the existing works. The functional verification and simulation of the presented circuits are assessed with the QCADesigner tool.
Introduction
Following Moore's law, CMOS technology has reached feature sizes of tens of nanometers. The scaling of transistor size results in short-channel effects, leakage power, high power dissipation, low packaging density, and physical and economic challenges. Researchers have proposed quantum-dot cellular automata (QCA) as a substitute for CMOS VLSI technology. The potential advantages of this technology are ultra-low power dissipation, less area, high packaging density, fast computing, and high operating frequency compared to CMOS technology. In conventional technologies, data is carried by electric current, whereas in QCA it is carried by polarization states. According to Landauer, an irreversible operation dissipates an energy of KTln2 joules for each bit of information lost in the transition from input to output, where the temperature T is measured in kelvin and K represents Boltzmann's constant [1]. To avoid this energy loss during signal transitions, Bennett introduced reversible circuits made up of reversible gates [2]. The energy dissipation in reversible logic gates is avoided by mapping each n-bit input to a unique n-bit output. The main factors to be considered while designing circuits with reversible logic gates are the quantum cost, the number of garbage outputs, the number of constant inputs, and the number of gates, all of which should be minimized. Researchers have introduced reversible logic gates such as Fredkin, FG, F2G, Toffoli, RUG, KMD, PG, MKG, IG, HNG, and RM [6][7][8][9][10][11][12] for designing reversible logic circuits, but only a few of these gates have the fault-tolerance property. The ALU (arithmetic logic unit) is the basis of most quantum computers, and reliability in reversible computation is a major concern. A fault-tolerant system can keep its behavior correct even in the presence of failures. The purpose of this paper is to integrate the fault-tolerance property into the architecture of an ALU using parity-preserving reversible logic gates. This not only minimizes hardware overhead but also avoids additional design effort for constructing a fault-tolerant ALU from parity-conservative logic gates. The ALU designs suggested so far need improvement in terms of performance metrics such as quantum cost, number of gates, number of garbage outputs, constant inputs, complexity, area, and number of operations. Therefore, in this paper we propose a high-performance reversible fault-tolerant ALU in QCA with more functions and fewer gates. The essential contributions of this work are as follows: Proposes a cost-effective reversible 2X1 MUX circuit in QCA with a parity-preserving gate.
Presents a novel reversible fault-tolerant full adder circuit in QCA to optimize the design of the ALU circuit.
Designs a single-bit ALU from the suggested QCA full adder and the 2X1 MUX, with an improved number of operations, number of gates, garbage outputs, and constant inputs.
Compares the proposed ALU with the related existing works.
The remainder of the article is structured as follows. A review of QCA technology and reversible logic is given in Section 2. Related works are presented in Section 3. Section 4 introduces novel QCA-based reversible 2:1 multiplexer and single-bit full adder circuits. The proposed reversible QCA-based ALU is presented in Section 5. The simulation results and a comparison with similar works are analyzed in Section 6. The conclusions are given in Section 7.
Preliminaries
Basics of QCA technology and reversible logic are discussed in this section.
QCA basics
The basic cell used in designing QCA circuits is a QCA cell, which consists of four quantum dots in a square-shaped nanostructure, as can be seen in Figure 1. For a QCA cell, the polarization P is formulated as

P = ((e1 + e3) − (e2 + e4)) / (e1 + e2 + e3 + e4) (1)

where ei denotes the electronic charge on dot i. A QCA cell with polarization −1 represents logic '0', and one with polarization +1 represents logic '1'. When a QCA cell is positioned near a carrier cell that has a fixed polarization, the cell matches its polarization with that of the carrier cell. Hence, data can be transmitted between neighboring QCA cells across a series of cells, as shown in Figure 2.
Insert Fig. 1
Insert Fig. 2
The basic logic gates used in QCA are the majority gate and the inverter; examples are displayed in Figure 3. The function of a 3-input majority gate with inputs p, q, r and output Y is given by

Y = M(p, q, r) = pq + qr + rp (2)
Insert Fig. 3
An AND gate is constructed from the majority gate by fixing one of the inputs at logic 0. The expression for an AND gate with two inputs p, q and output Y is given by

Y = M(p, q, 0) = pq (3)

In the same way, an OR gate is constructed by fixing one of the inputs at logic 1. The 2-input OR gate is given by

Y = M(p, q, 1) = p + q (4)
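These majority-gate constructions are easy to verify exhaustively; the short Python sketch below checks equations (2)-(4) over all input combinations (the function names are illustrative).

```python
def maj(p, q, r):
    """Three-input majority vote: Y = pq + qr + rp (equation 2)."""
    return (p & q) | (q & r) | (r & p)

def qca_and(p, q):
    return maj(p, q, 0)   # fixing one input at logic 0 yields AND (eq. 3)

def qca_or(p, q):
    return maj(p, q, 1)   # fixing one input at logic 1 yields OR (eq. 4)

for p in (0, 1):
    for q in (0, 1):
        assert qca_and(p, q) == p & q
        assert qca_or(p, q) == p | q
```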
The two wire crossings available in QCA technology are coplanar and multilayer crossings. A coplanar crossover uses both standard and rotated cells; the two kinds of cells do not interfere with one another if they are correctly positioned. Multilayer crossovers are implemented by using additional layers and are more reliable than coplanar crossovers. QCA circuits operate correctly only if they are clocked properly. In QCA, clocking not only regulates the flow of data but also acts as the power source. Four clocking zones are available in QCA: clocking zone 0, clocking zone 1, clocking zone 2, and clocking zone 3. A typical clocking scheme used in QCA is shown in Figure 4.
Insert Fig. 4
2.2 Reversible logic
Computers available now are smaller, quicker, and more complex. All the logical operations in these computers are irreversible: some data are erased each time a logic operation is performed, which increases power consumption. Thus, the implementation of reversible logic circuits, which do not erase information, is one promising future computing technology for reducing power consumption. A reversible system must be able to work in the reverse direction; that is, the inputs can be recovered from the outputs. The measurable parameters of reversible circuits are:
Garbage outputs
Garbage outputs are required to make a function reversible. They are not the actual outputs of the system.
Constant inputs
Constant inputs are the inputs with a fixed value that are required to make a function reversible.
Logical calculations
Logical calculations of a reversible circuit give the total number of NOT, AND, and XOR operations needed for realizing a function.
Quantum Cost
The quantum cost (QC) of a reversible logic circuit is the total number of 2x2 quantum primitives required to construct an equivalent quantum circuit.
The fault-tolerant reversible gates utilized in synthesizing the proposed circuits are described as follows:
1. RUG
RUG is a 3X3 fault-tolerant reversible gate. The three inputs (A, B, C) are mapped to the three outputs (P, Q, R). The block diagram of the RUG is displayed in Figure 5 and its schematic diagram in Figure 6. The QCA realization of the RUG is shown in Figure 7, and Figure 8 depicts its simulation result.
2. F2G
The double Feynman gate (F2G) is a 3X3 fault-tolerant reversible gate. The block diagram of the F2G and its corresponding schematic diagram are depicted in Figure 9a,b. The QCA realization of the F2G is displayed in Figure 9c, and Figure 10 shows its simulation result.
3. FRG
The Fredkin gate (FRG) is a 3X3 fault-tolerant reversible gate. The block diagram of the FRG is shown in Figure 11 and its corresponding schematic diagram in Figure 12. The QCA realization of the FRG is presented in Figure 13 and its simulation output in Figure 14.
4. FG
The Feynman gate (FG) is a 2X2 reversible gate. The block diagram of the FG and its corresponding schematic diagram are shown in Figure 15. The QCA realization and simulation result of the FG are displayed in Figures 16 and 17.
5. UPPG
The universal parity preserving gate (UPPG) is a 4×4 reversible gate. The structure of the UPPG is shown in Figure 18 and its corresponding schematic diagram in Figure 19. The QCA realization and simulation result of the UPPG are presented in Figures 20 and 21, respectively.
3. Related work
Different approaches have been implemented in recent years to enhance the performance of the elements of the ALU; however, only a few architectures have been suggested for designing a QCA ALU. The authors of [3] suggested a 4-bit QCA ALU. This approach uses three layers, 420 QCA cells, a latency of three clock zones, and an area of 0.85 μm². The layout does not use fault-tolerant reversible gates and requires a large number of QCA cells. The authors of [4] suggested a method to implement a QCA ALU capable of performing 12 logic and arithmetic operations. However, this approach uses 485 QCA cells, an area of 0.79 μm², and a latency of five clock zones. It has several drawbacks: the absence of reversibility, the exclusion of fault tolerance, high cell consumption, and high latency. Trailokya Nath Sasamal et al. [5] designed a reversible ALU in QCA using coplanar crossing, but the proposed design uses a large number of QCA cells and performs only 20 ALU operations. In [13], the authors constructed a reversible QCA ALU using a reversible MUX; however, the design can perform only 16 operations and has a large number of constant inputs and garbage outputs. Sasamal et al. [14] propose a QCA ALU using reversible logic, taking the 3×3 RUG fault-tolerant reversible gate as the fundamental element in synthesizing the reversible ALU design, but it can perform only 16 ALU operations. The authors of [15] propose an integrated fault-tolerant QCA ALU using KMD reversible gates; the suggested design can perform only 18 operations, with an increased number of constant inputs, gates, and garbage outputs.
Reversible MUX
Using the Fredkin reversible logic gate, a novel 2:1 MUX has been constructed in QCA technology. The key benefit of the proposed multiplexer is its reversible logic, which is not present in a traditional MUX. A crucial goal when designing logic circuits with reversible gates is to reduce the count of reversible gates and garbage outputs. The schematic diagram of the proposed MUX is shown in Figure 22 and its QCA layout in Figure 23. The designed circuit utilizes only 75 QCA cells, a single gate, and an area of 0.08 μm². Figure 24 depicts the simulation result of the suggested 2:1 MUX.
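The logic of this construction can be sketched behaviorally: the Fredkin gate is a controlled swap, so with the select line on its control input, its second output already realizes a 2:1 multiplexer while the remaining outputs act as garbage. The snippet below is a behavioral model only (it does not capture the 75-cell QCA layout); it also checks the parity-preserving property that underlies the fault-tolerance claim.

```python
from itertools import product

def fredkin(s, b, c):
    """Fredkin gate (FRG): controlled swap of B and C under select S."""
    return (s, b, c) if s == 0 else (s, c, b)

for s, b, c in product([0, 1], repeat=3):
    p, q, r = fredkin(s, b, c)
    # Q acts as a 2:1 MUX: B when S = 0, C when S = 1 (P and R are garbage).
    assert q == (b if s == 0 else c)
    # Parity preserving: XOR of the inputs equals XOR of the outputs.
    assert (s ^ b ^ c) == (p ^ q ^ r)
print("Fredkin output Q implements MUX(S, B, C) and parity is preserved")
```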
Full adder
The proposed reversible fault-tolerant full adder circuit is designed using the RUG and F2G gates. The schematic diagram and QCA layout of the suggested circuit are shown in Figures 25 and 26, respectively. It utilizes only 107 QCA cells, fewer than earlier designs. The circuit is fault-tolerant and area-efficient, as it is implemented with the parity-preserving reversible gates RUG and F2G. The simulation result of the suggested full adder is depicted in Figure 27.
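At the behavioral level the adder computes the usual sum and carry. The sketch below models only these Boolean functions, not the RUG/F2G gate decomposition or the QCA layout reported above; it verifies the truth table against the arithmetic identity A + B + Cin = Sum + 2·Cout.

```python
from itertools import product

def full_adder(a, b, cin):
    """Behavioral full adder: Sum = A XOR B XOR Cin, Cout = majority(A, B, Cin)."""
    s = a ^ b ^ cin
    cout = (a & b) | (b & cin) | (cin & a)
    return s, cout

for a, b, cin in product([0, 1], repeat=3):
    s, cout = full_adder(a, b, cin)
    assert a + b + cin == s + 2 * cout
print("full adder truth table verified for all eight input combinations")
```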
Proposed fault tolerant reversible ALU
An arithmetic and logic unit is an essential part of many computing systems. With minimal hardware cost, an ALU should execute the maximum allowable number of operations. The suggested fault-tolerant reversible ALU is separated into two sub-components, (1) a reversible arithmetic unit (AU) and (2) a logic unit (LU), as shown in Figure 28. The signals used for this ALU are the three inputs A, B and Cin, the constant inputs, and the selection lines (S0, S1 and S2). A 2:1 MUX is used to choose the output from either of the two components.
Arithmetic Unit
The reversible fault-tolerant arithmetic unit (RAU) using the suggested full adder is shown in Figure 29. It incorporates one each of the FG, Fredkin, RUG, and F2G gates. It performs eight arithmetic operations, including transfer, increment, decrement, copy, addition without carry input, addition with carry input, and addition with complement, as depicted in Table 1. The reversible arithmetic module has five inputs (A, B, Cin, S0 and S1), two actual outputs (Sum and Cout), and six garbage outputs (G1, G2, G3, G4, G5 and G6). The expressions for the two outputs Sum and Cout are given by equations 5 and 6. The synthesized QCA layout of the suggested reversible AU is depicted in Figure 30.
Logic unit
The reversible fault-tolerant logic unit (RLU) is shown in Figure 31. It incorporates one F2G gate and two gates of UPPG and Fredkin. It performs sixteen logical operations, including copy A, AND with complemented B, XOR, OR, constant, AND, copy B, AND with complemented A, NOT, OR with complemented A, and XNOR, as depicted in Table 2. The reversible logic module has two inputs (A, B), four selection inputs (Cin, S0, S1 and S2), one actual output (Y), and twelve garbage outputs. The expression for the output Y is given by equation 7. The synthesized QCA layout of the suggested reversible LU is depicted in Figure 32.
6. Results and discussion
The parameters used with QCA Designer-E to generate the simulation results of the proposed circuit, along with the energy dissipation, are listed; these are the essential parameters for simulating any QCA circuit with the QCA Designer-E software. The dimension of a single QCA cell is 18 nm and the gap between adjacent QCA cells is 2 nm. The paper introduces a novel multilayer reversible QCA ALU layout that performs both arithmetic and logical operations. Furthermore, a new QCA fault-tolerant reversible 2:1 multiplexer is used to select between the arithmetic and logic units. The proposed reversible 2:1 multiplexer is built with 75 QCA cells, a 0.08 μm² area, and a latency of four clock cycles, all lower than the best earlier design. Also, an efficient fault-tolerant reversible full adder has been suggested to minimize the complexity of the arithmetic unit. Table 3 shows that the suggested full adder has 107 QCA cells and an area of 0.08 μm², less than the best existing single-layer design.
Insert Table 3
The factors measured for comparing the ALU with existing designs are the number of operations, constant inputs, garbage outputs, number of reversible gates, and total number of logical calculations. Table 4 presents the comparison of the suggested ALU with the best earlier works. We may note from Table 4 that our fault-tolerant reversible ALU requires only five constant inputs, ten reversible gates, and eighteen garbage outputs, which are lower than in the existing works. Also, the suggested ALU performs eight arithmetic and sixteen logical operations. Therefore, with regard to the parameters of area and speed, our proposed structure is more appropriate for implementing a reversible QCA design.
Insert Table 4
Table 5 provides the improvement of the suggested ALU over the previous best works. It shows an improvement of 25% in the number of operations, 29% in constant inputs, 18% in garbage outputs, and 9% in the number of gates used when compared to the KMD approach 2. The improvement of the proposed ALU over the earlier works is represented graphically in Figure 33. The comparison of the performance of the n-bit ALU is given in Table 6.
7. Conclusion
This paper introduces a novel multilayer fault-tolerant reversible arithmetic logic unit using QCA technology. The ALU has been separated into a reversible arithmetic unit and a logic unit. Moreover, an area-efficient reversible 2:1 multiplexer has been proposed to select either the arithmetic or the logical operations. The simulation results show that, compared to the best previous model, the suggested 2:1 multiplexer is built with 75 cells, a 0.08 μm² area, a latency of four clock cycles, and fault tolerance. Furthermore, an efficient fault-tolerant reversible QCA full adder with parity-preserving gates has been proposed to decrease the complexity of the arithmetic operations. The suggested ALU architecture is fault-tolerant, as the circuit itself has been designed using parity-preserving gates. The comparison results demonstrate that the suggested full adder, with 107 cells and a 0.08 μm² area, is much smaller than the best earlier models. The proposed QCA ALU shows an improvement of about 25% in the number of operations, 29% in constant inputs, 18% in garbage outputs, and 9% in the number of gates utilized compared to the earlier best design.
Funding: NA. Conflicts of interest: The authors have no conflicts of interest to declare that are relevant to the content of this article. | 4,068.4 | 2021-02-11T00:00:00.000 | [
"Computer Science"
] |
Field-free spin-orbit torque-induced switching of perpendicular magnetization in a ferrimagnetic layer with a vertical composition gradient
Current-induced spin-orbit torques (SOTs) are of interest for fast and energy-efficient manipulation of magnetic order in spintronic devices. To be deterministic, however, switching of perpendicularly magnetized materials by SOT requires a mechanism for in-plane symmetry breaking. Existing methods to do so involve the application of an in-plane bias magnetic field, or incorporation of in-plane structural asymmetry in the device, both of which can be difficult to implement in practical applications. Here, we report bias-field-free SOT switching in a single perpendicular CoTb layer with an engineered vertical composition gradient. The vertical structural inversion asymmetry induces strong intrinsic SOTs and a gradient-driven Dzyaloshinskii–Moriya interaction (g-DMI), which breaks the in-plane symmetry during the switching process. Micromagnetic simulations are in agreement with experimental results, and elucidate the role of g-DMI in the deterministic switching processes. This bias-field-free switching scheme for perpendicular ferrimagnets with g-DMI provides a strategy for efficient and compact SOT device design.
COMMENTS TO AUTHOR:
Reviewer #1: Zheng et al reported field-free spin-orbit torque (SOT) switching of a single ferrimagnetic layer with a vertical composition gradient. The composition gradient induces a non-zero SOT and Dzyaloshinskii-Moriya interaction (DMI). However, it is presumably assumed that there is no lateral symmetry breaking in the layer. It has been well known (and is also well described in the introduction of the manuscript) that lateral symmetry breaking is required for field-free SOT switching. Interestingly, the authors observed that single ferrimagnetic layers without such lateral symmetry breaking show field-free SOT switching. An effective in-plane field (= -20 Oe, in Fig. 3c) is close to the DMI field, so the authors claimed that the DMI provides the additional symmetry breaking. This claim is in line with the modeling results of Refs. [33][34][35] and their own modeling result. This is an interesting result, but I could not understand why field-free SOT switching is allowed even though there is no lateral symmetry breaking. Any physical phenomenon must obey the symmetry rule regardless of microscopic details.
Symmetry-wise, the investigated structure is the same as ferromagnet/heavy metal bilayers, which do not allow field-free switching unless the lateral symmetry is broken. The authors argued that the system is laterally symmetric in equilibrium, but the symmetry is broken during dynamics. I cannot agree with this argument. The authors must show a simple symmetry argument to explain their observation without relying on the numerical simulations. Without a clear symmetry argument (how the DMI breaks the lateral symmetry), it is difficult to trust the modeling result, because numerical modeling in a system with DMI would be highly nonlinear and the DMI boundary condition for actually stair-case-shaped edges would be problematic.
Overall I do not support publication of this manuscript in Nat. Comm. unless a simple and clear symmetry argument supporting the field-free SOT switching is provided.
Authors' Response:
Thank you very much for this insightful comment. We performed additional experiments to further clarify the nature of the observed deterministic switching, and correspondingly, to determine the symmetry requirements that would need to be met, to be consistent with the experimental data. These new results and the associated discussion of the experiment's symmetry are discussed below. We hope that this analysis and the related updates to the revised manuscript will address your concern.
Summary of the new experimental results and their symmetry properties: Before presenting the details of the new results, we would like to summarize the main findings and conclusions: 1. We performed additional measurements on a series of newly constructed devices, all of which consistently exhibited the same type of switching reported in the original manuscript. Notably, we confirmed that the switching is both deterministic and directional in all of these samples, i.e. that a particular direction of current favors a particular direction of out-of-plane magnetization.
2. We further performed this same experiment on a series of devices on the same wafer, made at different angles with respect to the initial device axis. All of these devices exhibited the same directionality in the current-induced switching, regardless of the device angle. Collectively, these two experiments provide strong evidence that (i) the deterministic and directional switching is robust and reproducible, and (ii) that the directionality of the switching is not a result of possible unintentional asymmetries in the device structure (e.g. due to the fabrication process), since such unintentional asymmetries would not show the same directionality of the switching across a large number of devices.
3. The key remaining question, then, is what associates a particular direction of in-plane current with a particular direction of out-of-plane magnetization, if there is no structural asymmetry in the device? Our experiments, as described below, showed that the direction of the initial current pulse sent through the device (i.e. the training during the first measurement of a current-induced switching) determines this directionality. This training effect itself breaks the symmetry of the experiment. In other words, no symmetry requirements are violated when the history of the first applied current is considered in conjunction with the directionality of the loops in our devices, so that different directions of the training pulse are consistently associated with different directionalities of the subsequent current-induced switching loops.
In what follows, we first describe the experiments that support the above-stated conclusions, followed by a qualitative discussion of the possible micromagnetic explanation of these results.
Details of new experimental results:
We fabricated new samples and performed a series of additional experiments, which are described below. The first question we sought to answer was whether the field-free SOT-induced switching in our devices is indeed directional, i.e., whether each current direction favors a particular final state (up or down), and whether this directionality is consistent across a large number of devices.
To do so, we first performed a time-domain switching measurement, as illustrated in Fig. R1. The purpose of this experiment was to illustrate that the final state of the device after switching is determined by the direction of the writing current, but not by the number of consecutive current pulses sent in the same direction, i.e. that the device is not undergoing toggle switching.
The results can be seen in Fig. R1. Next, we performed additional experiments to verify whether any (unintentional) in-plane asymmetry may play a role in our observations. Direct evidence would be whether the in-plane device orientation impacts the switching polarity. To answer this question, we fabricated new samples in which devices with different angles are arrayed on the same chip (see Fig. R2). To verify that this inverted switching polarity is repeatable in a single device, we randomly chose a 45 degree device on the chip and measured it several times with both configurations, i.e., the 45 degree and 225 degree setups shown in Fig. R3. As shown in Fig. R4, the directionality of the switching is clearly repeatable and therefore cannot be attributed to any random event.
It cannot be emphasized enough that the results of Fig. R3 indeed seem to defy the symmetry of the structure. However, our next experiment showed that this is not the case, and in fact all symmetry considerations are satisfied when the experiment is considered in its entirety, including the training effect from the initial current pulse applied to the device.
The only possible symmetry-breaking element between the first four measurements (left panel in Fig. R3) and the second four measurements (right panel in Fig. R3) is that in the second four measurements, current pulses have already been injected in the device before, i.e., the devices have been trained by the first current pulse which has been applied in the positive direction of the device, as defined on the left panel of Fig. R3.
To control for this training variable, we chose several new devices to which no current had been applied after device fabrication. Fig. R5 shows the corresponding SOT switching loops for three of these devices, which stand in sharp contrast to those of the previously trained devices. We hypothesize that in our devices, the first current pulse applied to the device creates a chiral magnetic texture due to the simultaneous effect of SOT and g-DMI.
For example, this could be in the form of the nucleation of initial DWs with a chirality that depends on the g-DMI, at the edges of the device. These DWs are possibly pinned by defects and maintain their existence during magnetization switching.
Similar to Ref. [1.4], their presence can break the micromagnetic in-plane symmetry
of the experiment and in principle allows for deterministic switching to occur. The experimental data as a function of the in-plane field support this interpretation.
The chirality of these DWs is fixed by a competition between the g-DMI and the external in-plane field. In fact, it can be noted that reversed loops are observed for an in-plane field larger than the g-DMI field. When the two are comparable (e.g. -20 Oe in Fig. 3b and 15 Oe in Fig. 4b), the current does not bring about a deterministic change in the resistance because of the Bloch type DWs stabilized by the magnetostatic field.
Stated differently, the creation of these chiral textures at the edge of the device is a local symmetry-breaking event related to g-DMI and SOTs. Because of it, deterministic and directional field-free SOT-induced switching is in principle allowed by symmetry, even though it cannot be explained by the global symmetry of the device structure.
A remaining question, of course, is the reconfigurability of the observed training effect, i.e. whether the devices could possibly be re-trained to exhibit an opposite directionality of the switching loops, e.g. by driving a larger current through them, opposite to the initial training current pulse. We were not able to observe such a re-training effect in our devices, which may be because of the practical limit on the maximum current amplitude that we could apply before breaking the devices.
While the simulation in Fig. 5 of the revised main text explains the detailed switching process, we note and appreciate your concern over the implementation of the DMI boundary condition for stair-case-shaped edges in the numerical modeling. We claim that this procedure is in fact not problematic, since it has been implemented by introducing a shape function which properly identifies the boundary between the ferromagnetic and non-ferromagnetic regions. As stated in Eq. (S6), the boundary conditions for the first sublattice incorporate this shape function, and a similar expression applies for the second sublattice; the numerical implementation of the boundary condition for the right edge follows directly from it. Here, we would like to sincerely thank you again for your insightful comments that allowed us to carefully examine and better understand our experimental data.
Below is the summary of the modifications on this point in the revised manuscript.
Indication of changes in the revised manuscript: 1) We added the new experimental results as Supplementary Note S9.
2) Main text: Page 13. We added "In addition, we also studied the role of device orientation and applied current history in the field-free SOT switching. These experiments are described in Supplementary Note S9. They show that device orientation does not influence the field-free switching polarity, confirming that the switching is independent of conventional lateral asymmetry. Moreover, we found that the applied writing current history provides a training effect that determines the polarity of its subsequent current-induced switching loops."
Reviewer #2: In this manuscript, the authors investigated spin-orbit torque (SOT) switching of perpendicular magnetizations in a ferrimagnetic layer with a vertical composition gradient. Using the combination of SOT and gradient-driven Dzyaloshinskii-Moriya interaction (g-DMI), field-free magnetization switching in a single ferrimagnetic layer was demonstrated through experiments. The method used for in-plane symmetry breaking is novel. The study is timely and thus it is interesting to the spintronic research community. However, there are several points that still need to be clarified.
Authors' Response:
We greatly appreciate your positive assessment of this work, and your constructive suggestions which helped us improve the quality of this manuscript.
1. The field-free switching demonstrated in the manuscript is for a micron-sized Hall-bar structure. Is it possible for it to scale down to a nano-sized pillar structure? Please justify.
Authors' Response:
The switching is not shape-dependent; thus, we believe that scaling down to nano-sized pillars should not hinder the field-free switching. We have added information regarding the simulated dynamics on a circular (400 nm diameter) device in Supplementary Note S7, which is also reproduced below for the sake of clarity. Please see Fig. R6 for the detailed switching dynamics; the writing current pulse is applied from 0 ns to 20 ns. Thus, our conclusions are not affected when considering a pillar geometry.
2. The switching mechanism is yet to be clearly understood. In the micromagnetic simulation, the authors assumed an initial reversed domain at the edge of the sample, which makes the field-free switching possible.
Indication of changes in the revised manuscript:
We updated the above-mentioned figures, which correspond to Fig. 5 in the main text and Fig. S7 in Supplementary Note S7.
3. Since the experiment was conducted at finite temperature, has the temperature effect been considered in the simulation?
Authors' Response: Yes, thermal effects were considered. A stochastic thermal field is added to the effective field in the simulation, with k_B, T and V being the Boltzmann constant, the temperature (set to 300 K in our case) and the computational cell volume, respectively. In this expression, η is a vector whose Cartesian components are random numbers following a Gaussian distribution.
Indication of changes in the revised manuscript: We added the above description of the thermal effects in Supplementary Note S6.
The main material parameters used in the simulations are as follows:
- M_S1: this work (see Fig. 1d). We assumed the Co magnetization to be independent of the Tb composition.
- M_S2: 609 kA/m. This work (see Fig. 1d). We assumed the Tb magnetization to be independent of the Co composition.
- K_u1; K_u2: 20 kJ/m³. This work (see Table S1 in Supplementary Note S8), assuming K_u1 = K_u2. Note that the effective value is the sum.
- D: 16 μJ/m². This work (see Fig. 3).
Indication of changes in the revised manuscript: We added these detailed parameters in Supplementary Note S6.
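As an illustration of how such a stochastic term can enter a simulation, the Python sketch below draws a Gaussian thermal field using one common stochastic-LLG convention for the prefactor. This convention, and all values marked as assumed (damping, saturation magnetization, time step), are assumptions for illustration and are not taken from the manuscript, whose exact expression is given in its Supplementary Note S6.

```python
import numpy as np

kB = 1.380649e-23          # Boltzmann constant [J/K]
mu0 = 4e-7 * np.pi         # vacuum permeability [T*m/A]
gamma = 1.76e11            # gyromagnetic ratio [rad/(s*T)]
alpha = 0.05               # Gilbert damping (assumed value)
Ms = 6.0e5                 # saturation magnetization [A/m] (assumed, near M_S2 above)
T = 300.0                  # temperature [K], as in the simulations
V = 10e-9 * 10e-9 * 6e-9   # cell volume [m^3] for the 10x10x6 nm cells
dt = 1e-13                 # integration time step [s] (assumed value)

def thermal_field(n_cells, rng):
    """Gaussian thermal field, one common stochastic-LLG convention:
    H_th = eta * sqrt(2 alpha kB T / (mu0 gamma Ms V dt)),
    where eta has independent standard-normal Cartesian components."""
    sigma = np.sqrt(2.0 * alpha * kB * T / (mu0 * gamma * Ms * V * dt))
    return sigma * rng.standard_normal((n_cells, 3))

rng = np.random.default_rng(0)
H_th = thermal_field(n_cells=60 * 40, rng=rng)  # one field sample per grid cell
print(H_th.shape, H_th.std())
```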
5. The authors discretized the system with tetragonal cells of 10×10×6 nm³ in their simulation. What is the system size they were simulating?
Authors' Response:
As shown in the following Fig. R9, the arm width is 200 nm, the arm length is 600 nm, and the thickness is 6 nm. We used a discretization grid of 60×40×1 cells.
Indication of changes in the revised manuscript:
We included the geometrical details in the Supplementary Note S6 (Fig. S6).
Authors' Response:
Thank you for pointing out this typo. In fact, the correct equation should be j_CoTb·θ_DL = j_CoTb·(θ_DL,top − θ_DL,bottom).
Indication of changes in the revised manuscript:
Authors' Response: Thank you for pointing out this typo.
Indication of changes in the revised manuscript:
We corrected this mistake in the revised main text, Page 11.
Equation S5 is not clearly shown.
Authors' Response:
Thank you for pointing this out. We have double-checked the new version to make sure this equation displays correctly.
Indication of changes in the revised manuscript:
We corrected this mistake in Supplementary Note S6.
Reviewer #3: This is a very good paper which reports bias-field-free SOT switching in a single perpendicular CoTb layer with an engineered vertical composition gradient.
The authors experimentally show that the vertical structural inversion asymmetry induces SOTs and a gradient-driven Dzyaloshinskii-Moriya interaction (g-DMI). The g-DMI breaks the in-plane symmetry and makes the bias-field-free SOT switching possible. Especially, they show that the sign of g-DMI depends on the sign of vertical composition gradient, which is a strong proof that the observed DMI originates from the composition gradient.
The paper is well written and the experimental result is original and clear. I recommend the publication of this paper in Nature Communications after revisions.
Authors' Response:
Thank you very much for the recognition of our work and for recommending publication of this manuscript in Nature Communications.
My only concern is the initial magnetic state in the micromagnetic simulation. There is a domain wall at the edge, and the magnetization reversal proceeds by the motion of this domain wall. What happens if the simulation starts from a fully saturated state without a domain wall?
Regarding this concern, I would like to request the authors to add the simulation for circular shape structure which is more relevant to a real device structure than the present device shape in the manuscript.
Authors' Response:
Thank you for this helpful suggestion. We would like to clarify that the initial state in our simulation is indeed the fully saturated state, without any domain at the edge.
Other modifications:
1) We noticed that Ref. [39] in the main text has recently been published in Advanced Materials. Thus, the citation has been changed from the arXiv reference to a journal citation.
2) We added two more funding sources in the Acknowledgements section.
Reviewers' Comments:
Reviewer #1: Remarks to the Author: The revised manuscript includes additional switching experiments showing that the first current pulse (e.g., its polarity) has a definite role in the symmetry breaking and field-free switching. However, a concrete description of the symmetry breaking mechanism associated with the "first current pulse" is still ambiguous. The authors speculate that the creation of a chiral magnetic texture due to the simultaneous effect of SOT and g-DMI breaks the symmetry. To my view, a speculation about the underlying mechanism is not sufficient to be publishable in a high-profile journal like Nature Communications.
Q1. Is it not possible to experimentally prove the creation of a chiral magnetic texture to break the symmetry?
Q2. Why is the Co/Tb special for this speculated symmetry breaking mechanism? Widely studied ferromagnet/heavy metal bilayers (e.g., Co/Pt) also have the interfacial DMI having the same symmetry with the g-DMI of this work. Why does not the speculated mechanism work for Co/Pt? Is there any specific role of two sublattices of Co/Tb in the field-free switching?
Q3. Figure 1f shows that there is a local variation of concentration gradient in Co/Tb. For instance, both Co and Tb concentrations increase with the position Z for 11.5 nm < Z < 14 nm, whereas the Co (Tb) concentration increases (decreases) for 14 nm < Z < 17 nm. Then Co and Tb concentrations decrease for 17 nm < Z < ~18 nm. This local variation of the concentration ratio would result in a local variation of the magnitude and sign of both the SOT and the g-DMI. How does this local variation affect the field-free switching? Is it not a source of the symmetry breaking, instead of the speculated creation of a chiral magnetic texture?
A clear explanation about the symmetry breaking associated with satisfactory answers to all above questions must be given to deserve publication in Nature Communications. Without such clarity of the mechanism, I cannot recommend publication of this work. It is suited to a more specialized journal.
One minor comment: In p4 of the main text, it was stated that "… may enable deterministic switching in the absence of an external field [33-35]. However, this type of field-free combined DMI-SOT switching has not been experimentally observed to date." This statement gives an impression that the current work experimentally proves the prediction in Refs. [33][34][35]. However, in p4 of the response letter, it was stated that "Note that, this behavior is qualitatively different from what Refs. [1.1-1.3] (Refs. [33][34][35] in the main text) predicted from micromagnetic simulations." These two statements contradict each other and thus must be corrected.
Reviewer #2: Remarks to the Author: The authors have addressed all my concerns in this revision. I now support this manuscript to be published in Nature Communications.
Reviewer #3: Remarks to the Author: I think that the authors have addressed the concerns and questions from reviewers, and that the manuscript is ready for publication.
COMMENTS TO AUTHOR:
Reviewer #1 (Remarks to the Author): The revised manuscript includes additional switching experiments showing that the first current pulse (e.g., its polarity) has a definite role in the symmetry breaking and field-free switching. However, a concrete description of the symmetry breaking mechanism associated with the "first current pulse" is still ambiguous. The authors speculate that the creation of a chiral magnetic texture due to the simultaneous effect of SOT and g-DMI breaks the symmetry. To my view, a speculation about the underlying mechanism is not sufficient to be publishable in a high-profile journal like Nature Communications.
Q1. Is it not possible to experimentally prove the creation of a chiral magnetic texture to break the symmetry?
Authors' Response: Thank you for this suggestion. We performed new experiments, as described below, which provide additional evidence for the creation of a chiral magnetic texture (CMT) in response to training by a current pulse. Specifically, we hypothesize that if indeed the presence of a CMT is responsible for the observed training effect, it should be possible to destroy the effect of this training using a sufficiently large magnetic field applied to the device. This, in turn, would allow one to subsequently retrain the device by applying a new training current pulse in the opposite direction. In addition, if this hypothesis is correct, there should be a range of smaller external magnetic fields which would not be sufficient to destroy the CMT, and hence the training effect.
Our new experiments clearly confirmed this hypothesis, as described below. The results of this reset and retraining experiment were as follows:
1. Large H1 field: Fig. R1a shows the results for the case where the applied reset magnetic field H1 is 8 T. Clearly, after going through an 8 T field and being retrained by a current along the -x direction, the device exhibits the opposite switching directionality compared to the original training.
2. Small H1 field: Next, we repeated the same sequence of experimental steps in another (also previously untrained) device, with the only difference being that the applied reset field H1 was chosen to be smaller (0.2 T). Fig. R1c shows the results for this case, which clearly indicate that the retraining process does not work with this smaller field. In other words, the device keeps its original training and switching directionality after going through a 0.2 T field.
We interpret these results as a strong confirmation that a CMT is responsible for the observed training effect and directional switching in our devices. This CMT can be destroyed by a large field of 8 T, but not by a smaller field of 0.2 T. It is worth noting that in principle, even without applying an external 8 T field, a large enough current pulse in the opposite direction (compared to the initial training current) should be able to directly modify the CMT and retrain the device to exhibit an opposite switching directionality. In our experiments, however, we were not able to apply sufficiently large currents to do so without burning the device.
We also note that, while these experiments clearly indicate that a chiral magnetic texture is responsible for the field-free deterministic and directional switching observed in this work, we did not attempt to microscopically investigate or image the structure of these CMTs, which we believe goes beyond the scope of the present work.
We hope that our findings will motivate subsequent work in the magnetism community to further elucidate the detailed microscopic nature of the CMTs and their effect on the magnetization switching characteristics.
Finally, it is worth noting that the concept of a directional anisotropy, defined by the history of applied fields or torques to the magnetization, has previously been observed in other material systems. Specifically, the "nonvolatile chirality printing" seen here is in some ways similar to the so-called "triad anisotropy", previously studied in disordered magnetic alloys (spin glasses) exhibiting DMI induced by heavy impurities with large spin-orbit coupling [1.1-1.5]. In those cases, however, the emergence of a directional "triad anisotropy" was the result of the history of applied magnetic fields in conjunction with DMI, unlike our present case where the training occurs in response to SOT and DMI.
Indication of changes in the revised manuscript: (1) Supplementary Note S9: We added text and Fig. S14 to describe the above-mentioned experiments.
Q2. Why is the Co/Tb special for this speculated symmetry breaking mechanism?
Widely studied ferromagnet/heavy metal bilayers (e.g., Co/Pt) also have the interfacial DMI having the same symmetry with the g-DMI of this work. Why does not the speculated mechanism work for Co/Pt? Is there any specific role of two sublattices of Co/Tb in the field-free switching?
Authors' Response: In principle, it should be possible to realize a similar field-free deterministic switching in conventional ferromagnet / heavy metal material systems, such as the Co/Pt system.
However, there are reasons why it could be more difficult to do so compared to the CoTb gradient material system presented here, which we summarize below.
First, it is worth noting that the simulation papers [1.6-1.8], which predicted DMI-induced field-free deterministic (but not directional) switching, all focused on heavy metal / ferromagnet bilayers. However, to achieve field-free switching in such systems, the switching dynamics required a strict balance between the interfacial DMI value and the spin current density (in other words, the applied current density), which resulted in a very narrow operation window. This narrow operation window, which is highly material-dependent, could hinder the experimental realization of deterministic switching in heavy metal / ferromagnet bilayer systems.
Second, while it is in principle possible that interfacial DMI and SOT in an appropriately optimized Co/Pt-based material system could result in current-induced CMTs similar to our CoTb gradient samples (and thus also achieve directional field-free switching), this process could be more difficult due to the stronger dipolar interactions in ferromagnets [1.9-1.10]. Given that the interfacial DMI in systems like Co/Pt and the g-DMI in our ferrimagnetic samples are comparable, this difference in the magnetostatic dipole energy of ferromagnetic and ferrimagnetic systems could play an important role in whether chiral textures controlled by DMI can be stabilized during the training process.
Q3. Figure 1f shows that there is a local variation of concentration gradient in Co/Tb.
Authors' Response:
First, we note that the type of symmetry breaking required to explain the field-free directional switching is an in-plane breaking of the inversion symmetry, which, as noted before, can be created by chiral magnetic textures. The intermixing of Co and Tb with adjacent layers only results in an overall reduction of the out-of-plane (vertical) composition gradient in the film. Thus, while it weakens the symmetry breaking along the growth (i.e., vertical) direction, it cannot by itself explain the emergence of field-free directional switching in the device. However, it should indeed be possible to further increase the magnitude of the g-DMI and SOT, and thus the switching efficiency of our devices, by developing deposition and processing techniques that suppress this intermixing effect. We hope that this work will motivate further research towards the development of such optimized g-DMI material structures.
Second, we note that the observed field-free directional switching is also not a result of any unintentional in-plane symmetry breaking, as already demonstrated in Supplementary Note 9 through measurements on multiple devices with different orientations on the same chip.
Last but not least, we performed additional experiments to investigate the device-to-device uniformity not only within a single chip, but also across a wafer. We performed field-free switching experiments on devices randomly picked from a full 100 mm (4 inch) silicon wafer. The results are shown in Fig. R2, where parts 2, 4, 6 and 8 on the wafer (indicated in Fig. R2a) were selected for measurements. Fig. R2b plots the field-free current-induced switching loops of the devices on these different chips using the same measurement setup. Clearly, all measurements show the same switching polarity as well as similar critical switching currents. This device-to-device uniformity over a large area further reveals that very little in-plane concentration variation can exist in our samples. These results are also encouraging for the translation of this field-free SOT switching scheme to industrial manufacturing on larger wafer sizes.
Indication of changes in the revised manuscript: (1) The results of Fig. R2, along with the related discussion, were added in a new Supplementary Note S10.
(2) In the main text, page 13, we added a reference to the new Supplementary Note 10, along with the sentence "This behavior was consistent for multiple devices measured across different locations on a 100 mm wafer."
A clear explanation about the symmetry breaking associated with satisfactory answers to all above questions must be given to deserve publication in Nature Communications. Without such clarity of the mechanism, I cannot recommend publication of this work. It is suited to a more specialized journal.
One minor comment: In p4 of the main text, it was stated that "… may enable deterministic switching in the absence of an external field [33-35]. However, this type of field-free combined DMI-SOT switching has not been experimentally observed to date." This statement gives the impression that the current work experimentally proves the prediction in Refs. [33-35]. However, in p4 of the response letter, it was stated that "Note that, this behavior is qualitatively different from what Refs. [1.1-1.3] (Refs. [33][34][35] in the main text) predicted from micromagnetic simulations." These two statements contradict each other and thus must be corrected.
Authors' Response:
Thank you for pointing this out. We have modified the corresponding sentence on page 4 to address this point.
Indication of changes in the revised manuscript:
Main text, page 4: "However, this type of field-free combined DMI-SOT switching has not been experimentally observed to date" was replaced by "However, field-free deterministic switching due to the combined action of DMI and SOT has not been reported to date." | 6,907.6 | 2021-01-21T00:00:00.000 | [
"Physics"
] |
Divulgence of Additional Capital Requirements in the EU Banking Union
: The European Central Bank, as a supervisory authority, set additional capital requirements, known as Pillar 2, on top of the EU-wide Level 1 capital requirements, for 118 significant credit institutions. Disclosure of Pillar 2 requirements is not compulsory, although many credit institutions choose to report them. We estimate a logit model to investigate whether being listed on a stock exchange, size, profitability and credibility affect the probability of divulgence. We estimate sixteen specifications of our model to compare the explanatory power of different measures of size, profitability and credibility. The legal form and the size are statistically significant in all specifications, while profitability and credibility are significant only in some of them.
Introduction
Inadequate financial regulation is widely seen as one of the sources of the Global Financial Crisis (Stiglitz 2010; Dagher 2018). The role of financial regulation is especially important for monetary unions because, as famously stated by the financial trilemma, financial stability, financial integration and national financial policies are incompatible: only two of these three aims can be achieved simultaneously (Schoenmaker 2011; Rey 2018). After the Global Financial Crisis, European policy makers chose to transfer important competences in the area of banking supervision to the European level through the introduction of the banking union. The aim of the banking union is to ensure that the credit institutions of the (so far) 19 euro area member states will be subject to a single supervision, a single resolution, and a common deposit insurance system (Schäfer 2016).
The European Central Bank set the additional capital requirements (known as Pillar 2) for 118 significant credit institutions responsible for almost 82% of banking assets in the euro area (ECB 2018a). The disclosure of Pillar 2 Requirements (P2R) is not compulsory, although, as our analysis identified, nearly 70% of supervised entities disclosed them.
An increased level of accounting openness benefits the public and investors, as they can read financial-statement disclosures to gauge crucial elements of the economic performance of any entity. This is why the disclosure of financial data has been an important research topic for decades (Bens et al. 2011; Berger 2011; Gad 2015). However, the determinants of divulgence of the additional capital requirements set for banks in the Eurozone have not been investigated before.
Our contribution to economic research is twofold. We collect information on the disclosure and value of P2R for 2018 and, as far as we know, we are the first to publish such detailed descriptive statistics on P2R requirements at the country level. We then estimate a logit probability model to identify factors affecting the disclosure of P2R.
Institutional Settings
The Single Supervisory Mechanism is one of the core elements of the banking union. Since 2014, the European Central Bank (ECB) directly supervises significant credit institutions in the Eurozone. This means that the ongoing supervision of significant banks (significant credit institutions) is carried out by Joint Supervisory Teams (JSTs). Each significant bank has a dedicated JST, comprising staff of the ECB and the national supervisors (National Competent Authority, NCA).
Less significant credit institutions are supervised by the NCAs in close cooperation with the ECB. Formally, all final decisions are signed by the President of the ECB, even though their substance was prepared by the NCA. It should be noted that the ECB, in accordance with the SSM Regulation (Regulation (EU) No 1024/2013), may at any time, on its own initiative or upon the request of an NCA, decide to directly supervise one or more less significant institutions to ensure the consistent application of high supervisory standards. Since the SSM came into existence, the ECB has not taken over direct supervision of any less significant bank (Götz et al. 2019).
The list of significant credit institutions is determined on the basis of the value of total assets, the size of total assets relative to country GDP, and the scope of cross-border activities. Moreover, the three biggest credit institutions in each country of the Eurozone are directly supervised by the ECB even if they do not fulfil the above-mentioned quantitative criteria.
Basic or minimum capital requirements (known as Pillar 1) for credit, market and operational risk are fully harmonized within the European Union. According to art. 92 of the CRR (Regulation (EU) No 575/2013), all European banks shall at all times satisfy the following own funds requirements: (a) a Common Equity Tier 1 capital ratio of 4.5%; (b) a Tier 1 capital ratio of 6%; (c) a total capital ratio of 8%.
All capital ratios are expressed as a percentage of the total risk exposure amount, i.e., risk-weighted assets, RWA (Ojo 2015).
The Basel 2 accord introduced the Supervisory Review Process as a tool for banking supervisors to monitor and evaluate additional banks' risks, and the possibility and necessity of using such a tool are underlined by the Basel 3 package. Supervisory authorities can transform their findings into supervisory decisions, including capital requirements (Basel Committee on Banking Supervision 2006). Additional capital requirements, known as Pillar 2, cover bank-specific risks that are not adequately covered by the Pillar 1 capital requirements applying to all banks. Under EU law, Pillar 1 obligations are set by means of Level 1 legislation, i.e., an EU Regulation, while Pillar 2 depends on supervisory actions.
The possibility of imposing Pillar 2 requirements was implemented in European law by the Capital Requirements Directive IV (Directive 2013/36/EU). According to art. 104 of the CRDIV, the competent authority shall have the power to require credit institutions to hold own funds in excess of the requirements set out in Regulation (EU) No 575/2013 relating to elements of risks not covered by that Regulation. The level of the capital ratio depends on the results of the supervisory review. According to art. 97 of the CRDIV, the competent authority responsible for banking supervision shall review the arrangements, strategies, processes and mechanisms implemented by the banks to comply with this directive and the CRR Regulation. The banking supervisors are obliged to evaluate: (a) risks to which the credit institutions are or might be exposed; (b) risks that a bank poses to the financial system as a whole (systemic risk); (c) risks revealed by stress testing, taking into account the nature, scale and complexity of a bank's activities.
Since the Single Supervisory Mechanism came into existence, the ECB is legally treated as a competent authority, so all the above-mentioned rules apply to the ECB too.
Taking into account art. 104 and art. 97 of the CRDIV, the European Central Bank conducts each year the Supervisory Review and Evaluation Process (SREP) to assess significant banks' risks. Each bank is assessed according to a common methodology and decision process allowing for peer comparison and transversal analyses. The overall SREP score is based on four elements: viability and sustainability of the business model, adequacy of governance and risk management, risks to capital, and risks to liquidity and funding (ECB 2017). On the basis of the SREP results, the European Central Bank communicates Pillar 2 requirements to all significant credit institutions. Pillar 2 is separated into two parts: the compulsory Pillar 2 requirement (P2R), which impacts the Maximum Distributable Amount (MDA) trigger, and the not legally binding Pillar 2 guidance (P2G). Nonetheless, the ECB expects credit institutions to comply with the Pillar 2 guidance too. Pillar 2 ratios are always expressed as a Common Equity Tier 1 (CET1) capital ratio.
The ECB neither prevents nor dissuades credit institutions from disclosing the Pillar 2 requirement, while it does not expect credit institutions to disclose the Pillar 2 guidance. The Market Abuse Regulation (MAR, Regulation (EU) No 596/2014) obligates all listed entities to disclose price-sensitive inside information, although it is not clear whether the P2R constitutes price-sensitive inside information.
On the one hand, the P2R impacts the Maximum Distributable Amount (MDA) trigger, and if capital requirements are not met, the amounts of dividends are limited (Ebner 2018). On the other hand, almost all directly supervised entities have capital levels above their capital requirements (ECB 2018b). The European Securities and Markets Authority, responsible for supervisory convergence on the European capital market, did not provide market participants with a clear interpretation of the MAR requirements, but only advised that banks should assess on a case-by-case basis whether Pillar 2 requirements constitute price-sensitive inside information (ESMA 2017).
Methodology and Data
The divulgence of Pillar 2 requirements is not standardized and can take many forms. We collected data on P2R by manually checking bank websites and press releases, annual/semi-annual/quarterly financial statements, investor presentations and bond issuance prospectuses. Nearly all credit institutions in our sample publish these materials in English, although we also checked sources available only in national languages. Only in a few cases were the P2R disclosures unavailable to the international public or investors. Our investigation was carried out in December 2018. The most important information and descriptive statistics on the Pillar 2 requirements are reported in Table 1.
The main aim of our empirical research is to investigate the most important determinants of the Pillar 2 disclosures. In formal language, we want to predict the probability of a dichotomous outcome: a bank either publishes information about its Pillar 2 requirement or it does not. The most popular quantitative technique used for such predictions is logistic regression (Weisberg 2005; Dieguez et al. 2015). We decided to estimate a multivariate logistic model:

p = F(Z) (1)

where:

F(Z) = e^Z / (1 + e^Z) (2)

and:

Z = β0 + β1X1 + β2X2 + ... + βkXk (3)

where p is the probability, X1, X2, ..., Xk are the explanatory variables, and F is the logistic distribution function (link function).

By means of the logistic distribution function we get:

p = e^(β0 + β1X1 + ... + βkXk) / (1 + e^(β0 + β1X1 + ... + βkXk)) (4)

The logistic distribution function transforms the regression (3) into the interval (0, 1). Further defining logit(p) = ln(p / (1 − p)), the model can be rewritten as:

logit(p) = β0 + β1X1 + β2X2 + ... + βkXk (5)

Taking into account the purpose of our research, we estimated the following logit model:

logit(p(disclosure)) = β0 + β1·listed + β2·size + β3·profitability + β4·credibility (6)

Data on the legal form of each credit institution and information about being listed on a stock exchange, the value of total assets, return on average assets, risk-weighted assets, return on risk-weighted assets, return on equity, credit ratings, and the ratio of impaired loans to risk-weighted assets have been collected on the basis of bank websites, financial statements and the Moody's Analytics BankFocus database. We also used the results of the Comprehensive Assessment, which have been published by the European Central Bank (ECB 2014). Financial data used in our model refer to the end of 2018.
We use accounting data at the highest level of consolidation, as these reflect the entire scale of operation of the significant credit institutions, including the operations carried out by subsidiaries. If we instead chose to use accounting data on the parent company of each observation (solo level), we might miss part of the operations; moreover, our estimation would be prone to international differences in the organizational structure of significant credit institutions. Table 2 presents descriptive statistics of the banks included in our sample. Significant credit institutions are quite diverse: for 17 of them the value of assets is smaller than 10 billion euro, while for six of them it is bigger than 1 trillion euro. This diversity remains important also at the country level; in many countries the mean value of assets is smaller than or similar to the standard deviation of the value of assets. Profitability indicators are also quite diverse. Only bank ratings are relatively similar at the country level. This is not surprising, because a bank's rating partly reflects the risk profile of the country in which the bank operates.
Some of the significant credit institutions are subsidiaries of other credit institutions. This is particularly common in the Baltics. Each bank directly supervised by the ECB is treated as an independent observation, even when the given credit institution is a subsidiary of another credit institution.
The binary variable "listed" is equal to one if the given institution is listed on a stock exchange or the parent company of the given institution is listed on a stock exchange. This variable is equal to one even if the parent company of the bank is listed in a country that is not part of the banking union. Listed significant credit institutions sometimes disclose the P2R information on subsidiaries that are not listed on their own. Notes: Assets and risk-weighted assets (RWA) are given in billions of euro. Ratings have been transformed to a numeric variable increasing with the rating: the value of 1 corresponds to Moody's C, while 21 stands for Moody's Aaa. Standard deviation in parentheses. Source: websites, annual, semi-annual and quarterly reports and bond prospectuses of 118 credit institutions directly supervised by the European Central Bank. We estimate the model (6) in four specifications. In each of the specifications, the variable "listed" is a binary variable equal to 1 if the credit institution is quoted on a stock exchange and equal to 0 otherwise. In the first specification, the logarithm of total assets is used as a measure of bank size, return on average assets stands for bank profitability, and bank credibility is given by the bank rating. As is usually done in financial econometrics, ratings have been transformed into a numeric variable increasing with the rating (Cantor and Packer 1996; D'Apice et al. 2016). In our case the value of 1 corresponds to Moody's C, while 21 stands for Moody's Aaa.
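As a sketch of how specification (1) can be estimated in practice, the snippet below fits a logit with statsmodels on synthetic data. The column names (disclosed, listed, log_assets, roaa, rating) are placeholders for the variables described above, and the generated numbers are purely illustrative, not the authors' dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 118  # number of significant credit institutions in the sample

# Synthetic stand-in for the real data (illustration only)
df = pd.DataFrame({
    "listed": rng.integers(0, 2, n),        # 1 if the bank (or its parent) is listed
    "log_assets": rng.normal(3.5, 1.5, n),  # size: log of total assets
    "roaa": rng.normal(0.4, 0.3, n),        # profitability: return on average assets
    "rating": rng.integers(1, 22, n),       # credibility: 1 = Moody's C, ..., 21 = Aaa
})
# Synthetic outcome loosely tied to the regressors
z = -2.0 + 1.2 * df["listed"] + 0.5 * df["log_assets"]
df["disclosed"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-z)))

# Specification (1): listed, log assets, ROAA, rating
X = sm.add_constant(df[["listed", "log_assets", "roaa", "rating"]])
result = sm.Logit(df["disclosed"], X).fit(disp=False)
print(result.summary())
```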
In the second specification, size is expressed by the logarithm of risk-weighted assets, while profitability is measured by the return on risk-weighted assets. The measurement of the remaining variables did not change.
In the third specification, the logarithm of total assets is used as the measure of bank size and return on average assets stands for bank profitability, while bank credibility is measured on the basis of the ECB Comprehensive Assessment (ECB 2014, 2015). One of the very important components of this analytical exercise was a stress test performed to check the resilience of banks' balance sheets to adverse market developments. We used the spread between the bank-reported (current) CET1 ratio and the CET1 ratio estimated after 3 years in the adverse scenario as the proxy for bank credibility.
In the last specification, the logarithm of risk-weighted assets is used as the measure of bank size and return on risk-weighted assets stands for bank profitability, while bank credibility is measured on the basis of the ECB Comprehensive Assessment.
We also estimated twelve additional specifications in which return on equity is used as the measure of bank profitability, while the ratio of impaired loans to risk-weighted assets or the CET1 ratio after 3 years in the adverse scenario of the ECB Comprehensive Assessment is used as the measure of bank credibility. All these variables, however, turned out to be statistically insignificant, and we decided not to present them in the paper for brevity.
Empirical Results and Discussion
Using the above assumptions and data, we conducted the logistic regressions. The estimated logit models are summarized in Table 3.
In each specification the impact of being listed on a stock exchange and bank size on the disclosure probability is statistically significant. The level of statistical significance of both variables is similar in all specifications.
The second specification is the only one in which the impact of bank profitability on the disclosure probability is statistically significant. The impact of the return on risk weighted assets on the disclosure probability is statistically significant at 10 percent level, although only marginally. The impact of the bank credibility on the disclosure probability is statistically significant when it is measured by the spread between the bank reported CET1 ratio and the CET1 ratio after 3 years in the adverse scenario of the ECB Comprehensive Assessment, while the impact of bank ratings is not statically significant. Table 3. Estimated logit models.
The spread between the CET1 ratio and the CET1 ratio after 3 years in the adverse scenario of the ECB Comprehensive Assessment is higher for more vulnerable banks, which may be seen as less credible. The finding that lower credibility increases the probability of disclosure may seem surprising. It is possible that the management of more vulnerable banks more often perceives the P2R as price-sensitive information because of the worse financial condition of those credit institutions, and therefore shows a higher propensity to disclose even when publishing the information is not legally required. According to our model, being listed on a stock exchange and bank size are the crucial determinants of the divulgence of Pillar 2 requirements: 78% of listed banks disclosed P2R information, in contrast to 57% of non-listed banks. Our findings suggest that listed credit institutions interpret the Market Abuse Regulation requirements conservatively and have a strong preference for disclosing P2R information. In most cases they have little to lose from disclosure, while in the opposite scenario they face the risk of legal sanctions if an undisclosed P2R were later recognized by competent authorities as price-sensitive inside information.
The explanatory power of credit institution size is a robust result in research on the divulgence of financial data. Bigger and more significant credit institutions enjoy economies of scale, because the cost of disclosure is to a large extent exogenous to credit institution size, making disclosure relatively cheaper for larger banks.
One might expect that more profitable and more credible institutions would have a preference for disclosing P2R. The impact of bank profitability on the divulgence probability is, however, statistically significant only if profitability is measured by the return on risk-weighted assets, which is in general a less popular measure of bank profitability than return on equity or return on average assets. Similarly, the impact of bank credibility on the disclosure probability is statistically significant only if measured on the basis of the ECB Comprehensive Assessment, and the relationship is the inverse of what was expected.
Conclusions
Disclosure of financial data is an important topic in economic research. We estimated logistic regressions to measure the impact of the characteristics of credit institutions on the probability of divulgence of Pillar 2 capital requirements. We used a sample of 118 significant credit institutions, responsible for 82% of bank assets in the Eurozone. These institutions are directly supervised by the European Central Bank according to a common methodology and decision process.
The legal form associated with being listed on a stock exchange and the size of the credit institution are the crucial determinants of the disclosure of Pillar 2 requirements. Being listed on a stock exchange is linked with a much higher probability of P2R disclosure. This result indicates that credit institutions conservatively interpret the disclosure requirements resulting from the Market Abuse Regulation (MAR). Nevertheless, a more specific interpretation of the legal requirements on the disclosure of Pillar 2 capital requirements by the European Securities and Markets Authority would probably be appreciated by market participants. The easiest way to achieve this would be an update of the Questions and Answers on the MAR (ESMA 2017) clarifying the nature of Pillar 2 disclosures. In our opinion, P2R divulgence should at least be clearly recommended by the European Supervisory Authorities, not only ESMA but also the EBA (European Banking Authority).
One of the basic principles of a well-developed capital market is transparency of securities issuers. The vast majority of significant credit institutions in the euro zone are quoted on stock exchanges, and our results show that listing is the most important determinant of the disclosure of information on additional capital requirements, despite the lack of a legally binding obligation. This policy of the management of the biggest European banks increases post-trade (post-IPO) transparency of capital markets. We strongly support the view that high transparency is a crucial condition of well-functioning capital markets (Thuesen 2004; Brochet 2019) and benefits not only investors, but indirectly also consumers, taxpayers and the entire economy.
Among the other characteristics of credit institutions, size measured by the logarithm of total assets is the most important predictor of disclosure. Large institutions face important scale effects, which make disclosure relatively cheaper. Institution credibility and profitability have a limited impact on the disclosure probability, although in some specifications their impact is statistically significant.
At the beginning of our research we decided to take into account determinants with relatively clear quantitative representations. However, the market effects of the divulgence of Pillar 2 requirements, as well as the impact of organizational culture and management quality on the disclosure probability, are promising directions for future research. Do credit institutions benefit from the disclosure of capital requirements? To what extent is voluntary and full divulgence the effect of organizational culture and management quality? These and similar questions remain unanswered.
Author Contributions: Conceptualization, investigation, data curation, writing-original draft preparation, M.W.; formal analysis, supervision, validation, writing-review and editing; funding acquisition, M.K. All authors have read and agreed to the published version of the manuscript.
Funding: The APC was funded by Vistula University, Warsaw, Poland.
Turn-key mapping of cell receptor force orientation and magnitude using a commercial structured illumination microscope
Many cellular processes, including cell division, development, and cell migration require spatially and temporally coordinated forces transduced by cell-surface receptors. Nucleic acid-based molecular tension probes allow one to visualize the piconewton (pN) forces applied by these receptors. Building on this technology, we recently developed molecular force microscopy (MFM) which uses fluorescence polarization to map receptor force orientation with diffraction-limited resolution (~250 nm). Here, we show that structured illumination microscopy (SIM), a super-resolution technique, can be used to perform super-resolution MFM. Using SIM-MFM, we generate the highest resolution maps of both the magnitude and orientation of the pN traction forces applied by cells. We apply SIM-MFM to map platelet and fibroblast integrin forces, as well as T cell receptor forces. Using SIM-MFM, we show that platelet traction force alignment occurs on a longer timescale than adhesion. Importantly, SIM-MFM can be implemented on any standard SIM microscope without hardware modifications.
1. Imaging protocols, parameters, and data analysis: I would highly recommend that the authors prepare a step-by-step protocol with a visual guide on the implementation of SIM-MFM on the Nikon N-SIM system for image acquisition and post-acquisition image processing steps and publish it in a citable repository such as the Nature Protocol Exchange. In my opinion, these protocols form the core of this paper and hence will be relevant for users who intend to start testing SIM-MFM on their own.
2. Can the authors comment on the influence of changes in the local microenvironment at the ligand-receptor interface on the plasma membrane, especially in platelets that actively secrete their acidic granular content during spreading, on the transition dipole moment of Cy3B, and how it may influence ϕ, θ and Imax? Are there other fluorophores that show a similar transition dipole moment to Cy3B? If yes, is it possible to combine two or more DNA tension probes with unique ligands to target different receptors within the same experiment, or are there limitations? Please discuss.
3. In the current study the authors used cRGDfK as a high-affinity ligand for αvβ3 and α5β1 integrin, which also bind fibronectin with high affinity. Previously it has been shown that platelets secrete or "self-deposit" their own ECM proteins such as fibronectin (and maybe other ligands) while spreading in vitro, and this potentiates αIIbβ3-mediated platelet adhesion/spreading beyond the available ligand-covered surface (PMID: 25964667). Can the authors provide additional comments on how this phenomenon may influence the current observations on the observed differences in force tension points and their orientation along lamellipodia and in the inner regions of a spreading platelet during the course of timelapse measurements?
Reviewer #2:
6. The raw data should be evaluated with SIMcheck and the reconstruction parameters should be further optimized to reduce reconstruction artifacts. In Fig. 3d, the SIM super-resolution result shows an apparent illumination pattern in one single direction. I suggest the authors check the raw data with SIMcheck, to see whether the modulation depth and the distribution between the diffraction orders are within a reasonable range. Normally this should not happen if the system is well calibrated.
Likewise, Fig. 3b shows strong dotted noise, which might be partially due to low SNR. And Fig. 3e3 shows a severe honeycomb structure across the entire image. I suggest the authors increase the imaging time for better SNR and better reconstruction. 7. Why is the error in orientation measurement larger in SIM-MFM than in the fluorescence polarization microscopy of Ref. 16? The authors think it may be caused by illumination intensity fluctuation, so further image pre-processing/calibration should be performed to improve measurement accuracy. 8. The calculated tilt angle seems to be around 45° and shows little difference between different samples or different regions of the same sample. The effectiveness of the method of tilt angle calculation should be further evaluated. 9. In Fig. 5 and Fig. 6, we could not intuitively see what new information the SIM has brought after the resolution is enhanced. It seems that these analyses could be completed with MFM alone. The authors should compare the analysis results of SIM-MFM and MFM. And is Fig. 5a displaying widefield images? It does not look like a super-resolution image reconstructed by SIM. 10. Equation 6a contains 17, 77, and 137 degrees. These values are due to the angle between the illumination and the CCD detectors, making the expression apply only to this very specific case. To make it general, I suggest the authors change them to 0, pi/3, and 2pi/3. 11. Eq. (8) lacks a physical model, reference, or a well-defined explanation. 12. In Fig. 6b, why do the three data sampling spots have different intervals? 13. In the Fig. 6 caption, <A>/(<c>+1) is very hard to follow. From Fig. 6b, A is greater than 1 and c is less than 1. So it is unclear to me how it is normalized. And the figure legend of Fig. 6c is expressed as <A>/<c+1>.
Minor comments: 1. The Figure 2 and Figure 6 legends are missing the description of the scale bar. 2. Some letters in the Figure 2 legend are capitalized, such as B, C, F, G. 3. The platelet results in Figure 6a look overexposed and the contrast should be readjusted. Figure 6a is also missing the intensity bar. 4. In lines 335-336, a cross-reference error appears. 5. In line 631, there is a format error in references 55 and 56.
Reviewer #3: Remarks to the Author: In this study the authors combine DNA-hairpin based fluorescent molecular force sensors with polarized structured illumination microscopy (SIM) to generate traction maps of platelets, 3T3 mouse fibroblasts, and mouse T-cells adhering to glass coverslips. Polarized SIM yields traction maps with approximately 100 nm spatial resolution, along with estimates of the polar and azimuthal angles of the force vector. The authors report that "platelets dynamically re-arrange the orientation of their integrin forces during activation," that the focal adhesions of fibroblasts have subregions with different levels of traction stress, and that forces in T-cells adhering to an anti-CD3 functionalized surface are not polarized. Summary: Unfortunately the paper is not up to the standards of Nature Communications and, more broadly, is not ready for publication. The technical advance reported here is potentially useful but incremental. A few aspects of the data analysis require clarification and improvement. The presentation of the work is not consistent with scholarly standards.
1) The combination of MTFM and polarized SIM is potentially helpful but incremental: other research groups have used super-resolution techniques to improve the spatial resolution of TFM and related measurements (see below). For that reason I feel that this paper is better suited to a specialized journal, for example Biophysical Journal, Cytoskeleton, or perhaps the Journal of Microscopy.
2) The authors need to present a balanced analysis of prior work in the field. For example, the spatial resolution of traction force microscopy is, not surprisingly, improved when the technique is combined with super-resolution microscopy techniques: see for example Stubb.
3) Relatedly, the authors have a tendency to hype their work. The use of the term "turn-key" in the title is arguable: all of the studies cited above used microscopy technology that is readily available in commercial form (SIM, STED, or TIRF) and could therefore be termed "turn-key" by the same logic. The claim that MTFM yields a "three orders-of-magnitude improvement in force magnitude resolution" is not supported. The technique averages over all of the DNA hairpins in a pixel, just as TFM averages over the forces exerted by integrins within a given area. Further, MTFM is not comparable to TFM in important ways: because the signal from the DNA hairpins is switch-like, it does not provide accurate measurements of local stresses. Further, there is a two-fold degeneracy in the measurement of the azimuthal (phi) angle, a limitation that is only indirectly acknowledged in the discussion. The authors do not provide a rigorous estimate of the spatial resolution of their force maps, but instead rely on the best-case resolution for SIM as a proxy. The authors' tendency to over-sell their work does a disservice both to their field and, over the longer term, to themselves.
4) The authors should provide evidence that the SIM maps presented are not subject to reconstruction artifacts. For example, the reticular structures observed in Figure 3e are reminiscent of the artifacts previously noted when using Wiener deconvolution (Fan et al. Biophys Rep. 2019) and low signal intensities.
5) The authors report that Monte Carlo simulation suggests that estimates for the polar angle theta are poor at low photon counts; presumably this is why they discard low intensity pixels in their analysis. For this reason it is important that they quantify and report the distribution of photon counts and the cutoff applied for their analyses.
6) The polar angle for the force vector of 45 degrees measured for 3T3s seems large relative to previous measurements (Liu PNAS 2015). It seems plausible that this may reflect the tendency of the measurement used here to overestimate low theta values (Fig. S5E). Liu et al. is, incidentally, another relevant prior publication that is not cited.
7) The rationale for splitting platelets into "two groups (28 increasing alignment cells and 22 cells with non-increasing alignment)" is not clear. The authors should provide evidence that these cells are distinguishable in an independent, biologically relevant way. Otherwise, the parsimonious interpretation is that all the cells come from the same distribution, of which half happen to fall above the measurement threshold for force vector alignment. This ambiguity makes this section of the paper difficult to evaluate.
8) The null result reported for T-cells (no obvious force polarization) is likewise difficult to interpret. The three possibilities listed by the authors are reasonable. However, it also seems possible that the force threshold for the sensor used may be too low or too high to measure the forces that are most relevant for T-cell activation. Another possibility is that the choice of antibody might matter: tangential force transmitted through 17A2 resulted in T-cell activation, whereas other mAbs did not (Kim et al. JBC 2009). (So far as I can tell the authors do not report which antibody they used.) All of these ambiguities make this section of the paper likewise difficult to interpret.
9) The method used to derive confidence bands in Figure 5h is ad hoc, and needs to be replaced with a more rigorous approach.
Reviewer 1:
Reviewer summary: The manuscript by Blanchard et al. is a systematically designed, meticulously performed, and well-written proof-of-concept study on the feasibility of 3D-SIM in combination with molecular force microscopy to assess forces generated at receptor-ligand interfaces and, importantly, the nanoscale organization and orientation of these forces based on the transition dipole moment of a fluorophore conjugated to a DNA tension probe. In my opinion, the study documented in this paper is of great interest not only to the light microscopy imaging community but across broad interdisciplinary fields spanning cell biology, mechanobiology/mechanochemistry, and biophysics. Using SIM-MFM, Blanchard et al. provide biologically relevant key takeaway messages with their studies on human blood platelets, primary mouse T-cells, and an in-vitro cell line (3T3 fibroblasts), supported by solid datasets and detailed analysis. In addition, the authors provide extensive details on the implementation of SIM-MFM for potential future users. In summary, the current version of the manuscript needs no additional experiments to substantiate the findings, as the observations and the interpretations thereof are well within the scope of the manuscript. While I am limiting myself to the platelet part of the manuscript, my specific queries to the authors are as follows:
Response: We thank the reviewer for their favorable and insightful review of our work. We also thank the reviewer for their numerous helpful suggestions for future work.
Comment 1: Imaging protocols, parameters, and data analysis: I would highly recommend that the authors prepare a step-by-step protocol with a visual guide on the implementation of SIM-MFM on the Nikon N-SIM system for image acquisition and post-acquisition image processing steps and publish it in a citable repository such as the Nature Protocol Exchange. In my opinion, these protocols form the core of this paper and hence will be relevant for users who intend to start testing SIM-MFM on their own.
Response: We greatly appreciate this suggestion and have begun preparing a protocol for publication on the Nature Protocol Exchange. We believe that the methods section is sufficiently detailed for SIM users to implement SIM-MFM, but we agree that a detailed protocol that walks researchers through all steps, from probe synthesis to image processing, would greatly improve the ease of entry to this research area. Unfortunately, due to time and personnel constraints, we were not able to submit the protocol prior to resubmission of this work. Notably, our group has recently submitted a protocol to JOVE that describes the surface preparation protocol used in this work 1. We have included a citation to this article in the revised manuscript.
Comment 2:
Can the authors comment on the influence of changes in the local microenvironment at the ligand-receptor interface on the plasma membrane, especially in platelets that actively secrete their acidic granular content during spreading, on the transition dipole moment of Cy3B, and how it may influence ϕ, θ and Imax? Are there other fluorophores that show a similar transition dipole moment to Cy3B? If yes, is it possible to combine two or more DNA tension probes with unique ligands to target different receptors within the same experiment, or are there limitations? Please discuss. Response: We thank the reviewer for these thoughtful considerations. Based on previous studies, cyanine dyes, which have a wide range of excitation and emission wavelengths, are in general thought to exhibit the base-stacking interactions necessary for SIM-MFM 2, 3. In contrast, we previously tested Alexa dyes and showed that these dyes do not align their transition dipole moments with the nucleobases 4. Several studies have also shown that the fluorescence intensity of cyanine dyes such as Cy3B is still maintained under acidic conditions with pH 4-6 (e.g. see https://help.lumiprobe.com/p/44/fluorescence_cyanine_dyes), similar to the pH of stress granules. Therefore, we find it unlikely that degranulation would alter fluorescence anisotropy. Moreover, degranulation events are highly transient, and pH should rapidly reach equilibrium with the surrounding media over subsecond timescales 5.
We have expanded our discussion section to acknowledge that there are many unknowns regarding the local microenvironment under the cell (see response to comment 3, below). We have also included new commentary on the potential of imaging multiple probes with distinct fluorophores simultaneously: "…of molecular force orientation. In the future, it may be possible to perform two-color SIM-MFM using probes with other duplex terminus-stacking dyes such as cyanine 5 (Cy5). Such an advance could enable simultaneous mapping of forces applied through probes that bind to different surface receptors, or probes that exhibit different F1/2 values. We expect that ongoing…"
Comment 3:
In the current study the authors used cRGDfK as a high-affinity ligand for αvβ3 and α5β1 integrin, which also bind fibronectin with high affinity. Previously it has been shown that platelets secrete or "self-deposit" their own ECM proteins such as fibronectin (and maybe other ligands) while spreading in vitro, and this potentiates αIIbβ3-mediated platelet adhesion/spreading beyond the available ligand-covered surface (PMID: 25964667). Can the authors provide additional comments on how this phenomenon may influence the current observations on the observed differences in force tension points and their orientation along lamellipodia and in the inner regions of a spreading platelet during the course of timelapse measurements? Response: We appreciate the reviewer for bringing these fascinating observations to our attention and have briefly commented upon them in our discussion: "This finding raises important questions about the dynamic processes governing platelet mechanics. Future work could use timelapse SIM-MFM, along with two-color imaging of structural proteins such as tubulin, actin, and vinculin 6, to investigate relationships between cytoskeletal network re-arrangement, focal adhesion growth, and tension alignment. Moreover, SIM-MFM could be a powerful tool for studying platelet-related diseases such as Glanzmann thrombasthenia or Wiskott-Aldrich syndrome (in which actomyosin contractility is impeded) 7. It may also be necessary to investigate the dynamic effects of platelet biochemical activity on tension probe-functionalized surfaces; it was previously shown that platelets secrete ECM proteins such as fibronectin while spreading in vitro 8, which can potentiate αIIbβ3-mediated adhesion. Because cRGDfK is specific to αvβ3 and α5β1, the gradual potentiation of αIIbβ3 signaling (driven by the accumulation of ECM proteins secreted onto the surface surrounding/below platelets) could introduce temporal effects on platelet mechanics. Finally, SIM-MFM is well-suited to investigating the effects of physiologically relevant shear flow rates on platelet mechanics and dynamics 9." In addition, we would like to note that, in previous work, we added exogenous fluorescent fibronectin to platelets and found no diminishment of tension signal 10. These findings suggest that secreted ECM minimally alters our observations of αvβ3 and α5β1 tension, potentially because integrins are sufficiently abundant to simultaneously bind secreted ECM molecules and tension probes. That said, we passivate our surface for 45 minutes with 1 mg/mL BSA (see methods) to minimize the adsorption of exogenous proteins and ECM.
Comment 4:
To quote the authors: "An interesting finding of this analysis is that spreading occurs on a significantly faster timescale than alignment; τ_spread=1.7 min, while τ_align=5.0 min (p<0.001, Wilcoxon rank-sum test) (Fig. 5g). This result suggests that platelet activation displays a characteristic mechanical progression that includes three phases. In the first phase, the platelet spreading area and the traction forces grow until reaching a steady-state plateau. In the second phase, forces within the platelet re-organize and realign along an axis. Finally, the platelet abruptly terminates contractility and detaches from the surface." The authors make a very important observation here, which in my opinion needs to be further elaborated during manuscript revision in the context of the link between the magnitude of force generation during platelet activation by spreading and the dynamic mechanotransduction processes involving platelet actomyosin contractility/depolymerization of the marginal band tubulin ring. An interesting future follow-up topic of translational value could be an investigation using SIM-MFM with platelets with defective actomyosin contractility or Wiskott-Aldrich syndrome (WAS) patient platelets. Response: We thank the reviewer for this comment and for recognizing the significance of our observation.
In addition to an expanded discussion addressing some of these points (see response to comment 3, above), we have expanded the relevant results section to include mention of these processes, which we hope to investigate in greater detail in future work: As a preview for upcoming work, we have included a preliminary set of images below. An RICM image shows locations of close contact between two platelets and a tension-probe functionalized surface. A fluorescent, cell-permeable tubulin stain (Tubulin Tracker) shows that the tubulin networks in both platelets have largely collapsed at the time of acquisition, colocalizing primarily to the tension-free centroid of the platelets. It also appears that the collapsed tubulin network is ellipsoidal in shape, with the long axis of the ellipsoid parallel to what appears to be the axis of tension. The dynamics of tubulin rearrangement in relation to the dynamics of spreading and the dynamics of tension alignment will be the subject of future work. Furthermore, a similar stain for actin will allow near-simultaneous visualization of tension and both cytoskeletal networks.
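The per-cell timescales quoted above (τ_spread ≈ 1.7 min, τ_align ≈ 5.0 min) can be illustrated with a minimal curve-fitting sketch. The manuscript's actual model (equation 8) is more elaborate, so the saturating-exponential form, variable names, and synthetic data below are illustrative assumptions only, not the authors' analysis code.

```python
# Minimal sketch: extract per-cell timescales tau by fitting a saturating
# exponential y(t) = y_max * (1 - exp(-t / tau)) to spreading-area and
# force-alignment time series.
import numpy as np
from scipy.optimize import curve_fit

def saturating_exp(t, y_max, tau):
    return y_max * (1.0 - np.exp(-t / tau))

rng = np.random.default_rng(0)
t = np.linspace(0, 20, 60)  # minutes, hypothetical timelapse sampling
area = saturating_exp(t, 25.0, 1.7) + rng.normal(0, 0.5, t.size)   # synthetic area
align = saturating_exp(t, 0.8, 5.0) + rng.normal(0, 0.02, t.size)  # synthetic alignment

(area_max, tau_spread), _ = curve_fit(saturating_exp, t, area, p0=(20.0, 2.0))
(align_max, tau_align), _ = curve_fit(saturating_exp, t, align, p0=(1.0, 4.0))
print(f"tau_spread ~ {tau_spread:.1f} min, tau_align ~ {tau_align:.1f} min")
```

Comparing the two fitted tau values per cell, as in the quoted Wilcoxon rank-sum comparison, is then straightforward.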
Comment 5:
Since the current experiments were performed under static conditions, can the authors comment on whether platelet adhesion under hydrodynamic shear may alter the force orientation vector of the integrin receptor based on the shear rate? Response: We agree with the reviewer that the effect of flow would be interesting to investigate using SIM-MFM. We have added commentary and cited relevant literature in the discussion section at the end of the manuscript (see response to comment 3, above).
Reviewer summary: In this paper, Blanchard et al. present a novel method to map cell receptor force orientation and magnitude with a commercial SIM system. Taking advantage of the polarization super-resolution imaging capability of the commercial SIM system, the authors have upgraded their 3D orientation mapping technique to super-resolution force mapping. As biological force mapping is crucial to understanding a series of interactions, this work can potentially be deployed widely for studying cell forces at super-resolution. Overall, the authors present their idea clearly and the manuscript is well-written. However, the super-resolution results should be further optimized to eliminate factors that could be misleading, and the advantages of using SIM as a super-resolution imaging technique should be demonstrated. I have the following comments: Response: We thank the reviewer for their favorable assessment of our work, and for recognizing the potential for our methods to be widely deployed. Based on the reviewer's suggestions, we have implemented several updates detailed below.
Comment 1:
My biggest concern is that, despite the spatial super-resolution offered by SIM, SIM-MFM does not show new insights or significantly better results over conventional MFM. Is this because of poor SNR, which limits the resolution/accuracy, or does it reflect that the MFM field does not have super-resolved features beyond the diffraction limit? This should be clearly distinguished.
Response: To better illustrate the improved spatial resolution of SIM-MFM, we have conducted a more detailed analysis summarized in new supplemental figures. The new figure illustrates a systematic, automated method for comparing the resolution of conventional MFM and SIM-MFM using the lamellipodial edge of platelet tension as the common structure being compared. The figure also shows that applying this method to a set of 19 platelets consistently yields a significant enhancement in spatial resolution of ~82 nm, which is similar to (but less than) the expected ideal enhancement of ~95 nm. A two-fold improvement, as is generally expected for ideal implementations of SIM, would produce a resolution of ~96 nm, which is a resolution improvement of ~95 nm. As such, our implementation of SIM reconstruction with SIM-MFM, which yields an ~82 nm improvement in resolution, comes close to the ideal resolution enhancement.
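As a rough sanity check of these numbers, the arithmetic below reproduces the quoted values from the Abbe limit. The emission wavelength of ~572 nm is an assumption (chosen to be Cy3B-like), while NA = 1.49 is the objective value mentioned elsewhere in this review.

```python
# Back-of-the-envelope check of the quoted resolution figures.
wavelength_nm = 572.0   # assumed Cy3B-like emission peak
na = 1.49               # objective NA quoted in the review

r_widefield = wavelength_nm / (2 * na)   # Abbe diffraction limit: ~192 nm
r_sim_ideal = r_widefield / 2            # ideal two-fold SIM gain: ~96 nm

print(f"widefield ~{r_widefield:.0f} nm, ideal SIM ~{r_sim_ideal:.0f} nm, "
      f"ideal improvement ~{r_widefield - r_sim_ideal:.0f} nm")
```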
In order to highlight the quantitative improvement afforded by SIM-MFM, we have also added the boxplot shown in subfigure h to Fig. 2. In addition, to offer more examples of the spatial resolution improvement offered by SIM-MFM, we have included a new supplemental figure (Fig. S7) that shows additional images of 3T3 fibroblasts in widefield and super-resolution. These additional examples also include linescans that demonstrate the ability to resolve additional features. These images provide qualitative depictions of resolution enhancements, complementing the quantitative proof in Fig. S6.
However, as the reviewer mentions, the SNR for tension maps is indeed weaker than that obtained in past work using fixed and stained cytoskeletal structures. Our data are all collected on live, dynamic cells, which offers exciting opportunities for studying dynamics but inevitably results in lower SNR, particularly because of the highly transient nature of biophysical forces in cells. In our response to comment 6 below, we discuss our use of SIMcheck to evaluate our data, which revealed that our SIM-MFM acquisitions are in the "low-to-moderate quality" range. Accordingly, improvement of the signal to noise is a crucial target area of future MFM development. In the discussion section of our revised manuscript, we now discuss these limitations and future directions: "The moderate signal to noise ratio (SNR) is a key factor limiting super-resolution reconstruction of SIM-MFM images. The primary factor constraining SNR is photobleaching of the tension probes' fluorophores; the need for several high-quality images per SIM acquisition (and for several acquisitions during timelapse imaging) means that moderate laser powers and short exposure times must be used to ensure that substantial photobleaching does not occur between images. Accordingly, techniques that improve photostability (e.g. the incorporation of DNA fluorocubes 11 and/or the use of in-solution oxygen scavenging systems 12) could enable higher SNR, longer imaging durations, and higher frame rates." Finally, we would also like to emphasize that the attainment of super-resolution is not the sole advance reported in this manuscript; there are many features of SIM-MFM highlighted in our manuscript, including the ability to use SIM microscopes in turn-key fashion to implement MFM, the ability to pair SIM-MFM with SIM imaging of tagged biomolecules (e.g. GFP-paxillin, Fig. 3), and the unprecedented ability to acquire parallel MFM timelapses (Fig. 5). We also present novel biological findings about platelet dynamics (Fig. 5) and the organization of T-cell forces (Fig. 6), which do not require super-resolution reconstruction. To better ensure that the breadth of these advances is accounted for, we have made edits to slightly de-emphasize the super-resolution aspect of our work, most notably by removing the term "super-resolution" from the title.
Comment 2: Here in this work, the axial orientation of the 2D dipole (or the "disk" dipole) is inferred from the polarization modulation depth, which is also related to the extinction ratio of the polarized excitation. The authors need to measure the polarization states of the excitations as suggested in Ref. 44, and to discuss whether depolarization affects their measurement accuracies.
Response: We thank the reviewer for raising this concern. To address it, we have included a measurement (and a description of the measurement) of the extinction ratio in the methods section, at the end of the subsection on structured illumination microscopy: "To confirm linear polarization of the excitation laser, a linear optical polarization filter (WP25M-VIS, Thorlabs, NJ) on a rotatable mount (RSP1/M, Thorlabs, NJ) was placed between the objective and an optical power meter (7Z02621, Ophir Starlite, Ophir Photonics, PN) set to 532 nm detection mode. The polarization axis of the polarization filter was aligned normal to the polarization axis of light exiting the objective at 0° by rotating the mount until the reading of the power meter was minimized.
The 532 nm laser was set to operate at 100% of its full power in SIM mode while the polarization filter was rotated in 20° increments, measuring the power of the light exiting the objective at each increment. Three replicates of this experiment showed that the laser intensity varied sinusoidally with respect to the polarizer angle (Fig. S1), with the intensity minimized at zero, thus confirming linear polarization of the laser." We also included a new supplemental figure that illustrates full extinction when the polarizer is crossed with the excitation polarization, thus confirming that the excitation beam is linearly polarized. Polarization verification was performed as described in the second paragraph of the methods subsection "Structured illumination microscopy (SIM)". a) Microscope excitation path diagram adapted from Fig. 1a depicting the arrangement of the optical power meter and rotatable linear polarizer. The polarization modulator (PM) and diffraction grating (DG) used to create the SIM striped interference pattern were removed from the light path for this measurement. b) Plots of three experimental replicate measurements at various linear polarizer angles. The black curve shows the best-fit sin² curve fit to the triplicate averages. Fitting was performed using MATLAB's built-in fmincon function and was constrained to a non-negative sinusoid minimum (not below zero). Accordingly, the best-fit sinusoid had a minimum of zero, corresponding to linearly polarized light. Elliptically polarized light would have produced a positive sinusoid minimum.
We also include a brief reference to this measurement in the main text: "…previous implementation of MFM 4, which required ~3.6 sec per acquisition. We also verified that the microscope's illumination laser was linearly polarized using a photometer and a linear polarizer (Fig. S1).
We next tested whether SIM could…" In addition, our measurement shows that the excitation beam is nearly perfectly linearly polarized before interference, which is consistent with the calibration performed by Zhanghao et al. in the article mentioned by the reviewer. The polarization at the sample plane cannot be measured as easily. Accordingly, Zhanghao et al. semi-quantitatively characterized the polarization depth in SIM mode by imaging a standard sample (phalloidin-labeled actin filaments in U2OS cells). They observed that the polarization factor was similar to what had been observed in previous studies of similar samples with verifiably linearly polarized excitation light. Their analysis suggested that switching the excitation beam to the interfering SIM mode did not significantly distort the polarization of their laser beam. We performed a similar control experiment by imaging DiI-labeled microspheres. In our previous MFM work, we also imaged this type of sample, and we recorded similar polarization factors (≈1.5) in both studies.
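For readers who prefer Python over MATLAB's fmincon, a minimal re-implementation of the Malus-law fit described above might look as follows. The data values are synthetic stand-ins for the triplicate power-meter readings, and the bound on the offset term mirrors the non-negative-minimum constraint described in the caption.

```python
# Sketch: fit I(theta) = I0 * sin^2(theta - theta0) + b to power-meter readings
# taken every 20 degrees; b near zero indicates linearly polarized excitation.
import numpy as np
from scipy.optimize import curve_fit

def malus(theta_deg, i0, theta0_deg, b):
    return i0 * np.sin(np.radians(theta_deg - theta0_deg)) ** 2 + b

theta = np.arange(0.0, 360.0, 20.0)                 # polarizer angles (degrees)
rng = np.random.default_rng(1)
power = malus(theta, 1.0, 0.0, 0.0) + rng.normal(0, 0.01, theta.size)  # synthetic

# b >= 0 mirrors the "sinusoid minimum not below zero" constraint.
(i0, theta0, b), _ = curve_fit(
    malus, theta, power, p0=(1.0, 5.0, 0.05),
    bounds=([0.0, -90.0, 0.0], [np.inf, 90.0, np.inf]),
)
print(f"fitted minimum b = {b:.3f}; values near zero indicate linear polarization")
```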
Comment 3:
The uneven illumination is calibrated with thick fluorescent slides; however, the illumination pattern will possibly change when changing the focus. Therefore, we suggest the authors calibrate the uneven illumination with fluorescent beads or with a thin layer of fluorophore. Response: We thank the reviewer for this astute observation. As mentioned in our methods, we used two different techniques in this work: "…images were generated on the day of the experiment by collecting at least six SIM acquisitions with the microscope focused onto the bottom of a Chroma slide or a surface with 100% open tension probes, which were prepared by adding an "opening strand" (complementary to the full hairpin) during the DNA strand hybridization step. Each acquisition …" While the first method (Chroma slide) may have some penetration depth-related issues, as the reviewer points out, the second method is essentially a thin layer of fluorophore, as suggested. However, the first method is much less labor intensive and produces much smoother images with reduced localized noise.
To determine whether the two methods could be expected to produce similar results, we generated background illumination profiles using both methods from images collected on the same day. We then took the ratio of these two background illumination profiles. The resulting ratio was essentially a flat field with no noticeable global variations. This analysis suggests that both methods are equally effective at accounting for long-range variations in intensity (i.e. the Gaussian illumination profile), as well as variations between individual images. We have included a supplemental figure (Fig. S5) to demonstrate this analysis. We have also included a reference to this figure in the methods section.
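A minimal sketch of the flat-field logic described in this response and in the methods excerpt above is shown below; the array shapes, function names, and normalization-to-unit-mean convention are assumptions, not the authors' exact code.

```python
# Sketch of flat-field correction and of comparing two background profiles.
import numpy as np

def flat_field_correct(raw: np.ndarray, background_stack: np.ndarray) -> np.ndarray:
    """Divide a raw frame by a normalized illumination profile averaged from
    repeated acquisitions (>= 6) of a uniform fluorescent target."""
    profile = background_stack.mean(axis=0)
    profile = profile / profile.mean()   # normalize to unit mean
    return raw / profile

def profile_ratio(profile_a: np.ndarray, profile_b: np.ndarray) -> np.ndarray:
    """Ratio of two independently derived profiles (e.g. Chroma slide vs.
    opened-probe layer); a flat, featureless result indicates both capture
    the same long-range illumination variation."""
    return (profile_a / profile_a.mean()) / (profile_b / profile_b.mean())
```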
Comment 4:
Although the spatial resolution of SIM-MFM is doubled to reach 100 nm, the polarization resolution of SIM-MFM is still diffraction limited. The authors should revise line 466 in the Discussion. Response: We thank the reviewer for raising this concern. This limitation is fundamental to pSIM-based techniques and was noted in the original pSIM paper, which was also published in Nature Communications 13 . While we did mention this point earlier in the text, we agree that it is important to continue to emphasize this point because it may be easy for readers to miss. Therefore, we have updated our discussion to explicitly note this and other fundamental disadvantages of our technique:
"… A broader limitation of pSIM-based techniques such as SIM-MFM is that super-resolution reconstruction only improves the resolution of the intensity mapnot the force orientation map.
This limitation arises because the orientation of the polarization and the illumination striping are rotated simultaneously, rather than separately. In other words, the spatio-angular cross-harmonics cannot be detected with conventional SIM microscopes 13." This modification specifies that the resolution enhancement is limited to information on force magnitude. In addition, this modification separates the two claims (turn-key MFM and super-resolution) from each other.
Comment 5:
The raw data in Figure 2 seem to have a low signal-to-noise ratio, and SIM reconstruction artifacts are obvious in Figure 2 and Figure 3. In the super-resolution image processing part of the SI (line 191), the authors claim NA = 1.4 but the system NA is 1.49. The wavelength should be the emission wavelength, not the excitation wavelength. Response: During revisions, we switched our image reconstruction from FairSIM to Nikon Elements. The Nikon Elements software affords a greater degree of control and usability, which allowed us to obtain higher-quality reconstructions with fewer reconstruction artifacts. Accordingly, we have re-written this section, and the parameters mentioned by the reviewer (NA and wavelength) are no longer listed (these parameters are saved directly into the image metadata and loaded into the Elements software automatically).
Comment 6:
The raw data should be evaluated with SIMcheck and the reconstruction parameters should be further optimized to reduce reconstruction artifacts. In Fig. 3d, the SIM super-resolution result shows an apparent illumination pattern in one single direction. I suggest the authors check the raw data with SIMcheck, to see whether the modulation depth and the distribution between the diffraction orders are within a reasonable range. Normally this should not happen if the system is well calibrated. Likewise, Fig. 3b shows strong dotted noise, which might be partially due to low SNR. And Fig. 3e3 shows a severe honeycomb structure across the entire image. I suggest the authors increase the imaging time for better SNR and better reconstruction.
Response:
We were able to connect many of these errors back to our choice of image reconstruction software. Accordingly, we have switched from the ImageJ plugin FairSIM to the Nikon Elements software, which offers greater control over reconstruction. Our updated reconstructions are much smoother, as evidenced by side-by-side views of Fig. 2f, which has fewer apparent honeycomb errors. In addition, we have checked our images with SIMcheck. Example results from individual checks, as well as summary statistics collected from several cells, are now shown in Fig. S8. While the plugin found that they exhibit "low-to-moderate" quality, we wish to re-emphasize that super-resolution imaging is not the sole (or primary) advance presented in this paper. Also, note that our images are collected using live, mechanically active cells, whereas SIM imaging is often performed on fixed cells. The limited quality is due in large part to low SNR (which is a byproduct of working with live dynamic cells). SNR can be improved with increased imaging time, as suggested by the reviewer. However, doing so increases the rate of photobleaching, which also reduces the quality of reconstruction. Furthermore, because background noise in MFM arises from the relatively bright background signal of unopened tension probes (rather than camera read noise), our background signal scales with exposure time. Our image acquisition routine was optimized to balance SNR with photobleaching, so the results presented here are close to optimal. We discuss these limitations and potential solutions in the discussion section of our updated manuscript (see response to comment 1).
Comment 7:
Why is the error in orientation measurement larger in SIM-MFM than in the fluorescence polarization microscopy of Ref. 16? The authors think it may be caused by illumination intensity fluctuation, so further image pre-processing/calibration should be performed to improve measurement accuracy. Response: We thank the reviewer for raising this concern. We believe the primary reason for the increased error is the reduced number of distinct polarization angles used in this work (3) compared to previous work (72). To better convey this point, we added the following: "…resulted in at most a 7% deviation from the ideal. These errors likely result from the small number of angles (3) relative to our previous implementation of MFM (72), and in future work it may be possible to address this issue by implementing SIM-MFM with more than 3 angles. Nonetheless, we do not expect this form of systematic error, which is small in magnitude, to meaningfully bias our measurements."
We would also like to note that, while the errors in this work are larger than in previous work, they are still small in magnitude. For example, the histogram in Fig. 4b is truncated, but if the full scale of the plot is shown, it is clear that angles are for the most part uniformly distributed, with only small biases. These small errors are not substantial enough to alter the important conclusions that we draw in this work (highlighted in Figs. 5-6), which rely on the type of ensemble-level estimates that would be insensitive to the systematic errors shown here.
Comment 8:
The calculated tilt angle seems to be around 45° and shows little difference between different samples or different regions of the same sample. The effectiveness of the method of tilt angle calculation should be further evaluated. Response: We thank the reviewer for raising this concern. In fact, we have similar concerns about the effectiveness of the measurement; in previous work 4, we performed modeling that suggested that θ can be systematically biased by 1) local heterogeneity, and/or 2) thermal fluctuations of the probe at low force magnitude. These effects, which both can cause θ to be systematically underestimated due to increased orientational heterogeneity, are separate from the photon noise-based effects that we investigated in this work. However, we neglected to address these points in our original submission. These effects are not expected to systematically bias ϕ calculations 4. For this reason, the majority of our analyses centered on our measurements of ϕ. In future work, we intend to decouple orientational heterogeneity and θ using a technique that we developed called variable incidence angle linear dichroism (VALiD) 14. However, as VALiD is not SIM-based, these future studies are outside the scope of this work.
In addition, please note that we previously found that θ varied slightly from the periphery to the centroid of platelets 4. In a later study, we found that fibroblast integrins exhibited near-vertical forces on supported lipid bilayers (rather than glass) 15. In addition, we found that T-cell receptor forces do not have a θ = 45° value. Together, these results show that the technique does not inherently produce θ = 45° in all scenarios; rather, the value may be a reflection of the biology, as the forces measured in platelets and fibroblasts are mediated by integrins. Therefore, our finding that θ is similar between fibroblasts and platelets suggests that the geometry of integrin mechanics is similar across cell types 4.
Comment 10:
Equation 6a contains 17, 77, and 137 degrees. These values are due to the angle between the illumination and the CCD detectors, making the expression apply only to this very specific case. To make it general, I suggest the authors change them to 0, pi/3, and 2pi/3. Response: We thank the reviewer for this discussion. The derivation in Supplemental Note 1 includes generalized equations as requested. We have edited our manuscript (following equations 6 & 7) to better highlight the general equations: "…applied these calculations to the entire image set. Note that Supplemental Note 1 includes calculations that are generalizable to SIM systems with initial α values other than 77°. Our initial results reproduced…"
Comment 11: Eq. (8) lacks a physical model, reference, or a well-defined explanation. Response: We thank the reviewer for bringing this deficiency to our attention. We have added an explanation of this equation in the paragraph following the equation: "…the unit step function that denotes initiation of spreading at t = t_thresh (Fig. 5b,c). We constructed this equation, which is based purely on our observation of the nature of the data, such that the first term (one minus the decaying exponential) reflects an asymptotic increase, while the second term (with the exponential in the denominator) is a sigmoid function that reflects an s-shaped decrease to baseline. We set all of these quantities as fit parameters…"
Comment 12: In Fig. 6b, why do the three data sampling spots have different intervals? Response: We apologize for the confusion. Figure 6c is simply a re-sized version of Figure 1c. Accordingly, the three points represent the 3 intensity vs. Φexcitation values from the pixel indicated in Fig. 2c. We included a copy of this plot simply to illustrate what A and c mean, to aid with interpretation of the rest of the figure.
Comment 13: In the Fig. 6 caption, <A>/(<c>+1) is very hard to follow. From Fig. 6b, A is greater than 1 and c is less than 1. So it is unclear to me how it is normalized. And the figure legend of Fig. 6c is expressed as <A>/<c+1>. Response: We apologize for the confusion. We have an expanded description of this analysis in Supplemental Note 2, which is referenced in the main text. To better highlight this discussion, we have also added a reference to Supplemental Note 2 in the figure caption:
"…background close to the cell. A more detailed description of this analysis can be found in Supplemental Note 2. While platelets exhibit…" In addition, we have changed /(<c>+1) in the caption to /<c+1>, for clarity (the two are equivalent mathematically).
Minor Comments 1-5: 1. The Figure 2 and Figure 6 legends are missing the description of the scale bar. 2. Some letters in the Figure 2 legend are capitalized, such as B, C, F, G. 3. The platelet results in Figure 6a look overexposed and the contrast should be readjusted. Figure 6a is also missing the intensity bar. 4. In lines 335-336, a cross-reference error appears. 5. In line 631, there is a format error in references 55 and 56. Response: We thank the reviewer for pointing out these oversights. We have fixed all of these issues in the revised manuscript, except for the overexposure comment in point 3. The intensity limits were chosen to be the same for both platelets and T-cells while enabling clear visualization of both. Because the platelet signal is slightly brighter than the T-cell signal, the platelet images appear saturated.
Reviewer summary: In this study the authors combine DNA-hairpin based fluorescent molecular force sensors with polarized structured illumination microscopy (SIM) to generate traction maps of platelets, 3T3 mouse fibroblasts, and mouse T-cells adhering to glass coverslips. Polarized SIM yields traction maps with approximately 100 nm spatial resolution, along with estimates of the polar and azimuthal angles of the force vector. The authors report that "platelets dynamically re-arrange the orientation of their integrin forces during activation," that the focal adhesions of fibroblasts have subregions with different levels of traction stress, and that forces in T-cells adhering to an anti-CD3 functionalized surface are not polarized. Unfortunately the paper is not up to the standards of Nature Communications and, more broadly, is not ready for publication. The technical advance reported here is potentially useful but incremental. A few aspects of the data analysis require clarification and improvement. The presentation of the work is not consistent with scholarly standards.
Comment 1:
The combination of MTFM and polarized SIM is potentially helpful but incremental: other research groups have used super-resolution techniques to improve the spatial resolution of TFM and related measurements (see below). For that reason I feel that this paper is better suited to a specialized journal, for example Biophysical Journal, Cytoskeleton, or perhaps the Journal of Microscopy.
The authors need to present a balanced analysis of prior work in the field. For example, the spatial resolution of traction force microscopy is, not surprisingly, improved when the technique is combined with super-resolution microscopy techniques: see for example Stubb, an early example. Response: We thank the reviewer for raising this concern and apologize for presenting what came across as an unbalanced view of the literature. We have presented an expanded description of existing literature in the revised manuscript (see below). We have also added the reviewer's reference to Hu et al. Cytoskeleton (2015), and a description of protein-based tension sensors (which we excluded because they have not been used to measure force orientation), to our introduction: "As stated above, enhancing the spatial resolution of force mapping is desirable due to the nanoscale size of many mechanically active structures of interest. While we recently developed a method for localization of integrin forces with a spatial resolution of ~20 nm, this method does not capture force orientation information 18. Similarly, while protein-based probes have recently been used for super-resolution force mapping, such probes have not yet been shown to be capable of reporting force orientation information [44][45][46][47][48]. Super-resolution imaging has been employed to improve the spatial resolution of TFM 49-51, but even under ideal theoretical circumstances the spatial resolution is not expected to exceed ~500 nm due to TFM's inherent need for substrate deformation 50. Accordingly, improving the spatial resolution of MFM would enable measurements of force orientation with unprecedented resolution (see Table S2 for a comparison of high resolution TFM and MTFM techniques)." We have also added a note on previous literature showing nanoscale organization within focal adhesions, including a reference to the paper mentioned by the reviewer, to our results section: "…The observation of these sub-regions within FAs, which has been borne out in previous literature [16][17][18], can also be seen in additional examples in Fig. S7." We nonetheless stand by our original description of the limitations of traction force microscopy (TFM). It is true that advances have been made to increase the spatial resolution of TFM. However, TFM's spatial resolution, even in the best theoretical circumstances, is 2-fold worse than the diffraction-limited resolution that we push past in this work. For example, Colin-York et al. Nano Letters 2016 suggests a theoretical maximum resolution of ~500 nm, while the diffraction limit in our context is ~250 nm. In practice, the discrepancy is even greater because these theoretical limits have not been reached in experimental contexts. The fundamental basis for this limitation lies in the fact that TFM has two factors limiting spatial resolution: 1) diffraction of light, and 2) the propagation of mechanical strain through the soft underlying substrate. While super-resolution approaches can address the diffraction issue, they cannot address the softness issue, which is a fundamental requirement of TFM. In contrast, MFM's spatial resolution is limited only by diffraction, so super-resolution techniques can push the spatial resolution beyond the diffraction limit, even down to tens of nanometers.
Indeed, we are currently combining our recently presented DNA-PAINT-based approach with SIM-MFM to enable force orientation mapping of single-molecule forces with spatial resolution on the order of 40 nanometers (see the preliminary Force-PAINT data below). In addition to our brief description of these factors, we have compiled a systematic analysis of the TFM literature and included it as a supplemental table (Table S2) in the updated manuscript. This analysis, which builds on analyses presented in other recent works 19,20, was focused on papers that use super-resolution microscopy to improve the spatial resolution of TFM, including those referenced by the reviewer. Although these papers generally do not explicitly quantify their spatial resolution, we carefully obtained the best possible estimates from each paper and found that the highest spatial resolution presented to date was roughly 1 µm.
We do not wish to suggest that MFM will replace TFM, as the techniques are complementary. TFM reports global traction forces and provides the crucial ability to study cells on substrates with physiologically-relevant stiffnesses. However, when it comes to resolving forces generated by nanoscale structures, MFM can perform tasks that TFM fundamentally cannot.
Preliminary data showing Force-PAINT. (a) Cells seeded on a substrate bearing DNA tension probes exert receptor forces on those probes. Molecular tension probes reorient parallel to the applied molecular forces. (b) DNA tension probes sample a hemisphere of conformations in the absence of force (gray hemisphere); however, receptor forces dictate the DNA probe orientation. When imagers are recruited to the DNA tension probe, the fluorophore attached to the imager immediately stacks with the tension probe in the manner of another base. Therefore, fluorophore orientation, and the XY projection of the fluorophore (yellow ellipse in the XY plane), are both dictated by applied receptor forces. Note that for Force-PAINT to work, a strained tPAINT probe must be used, which somewhat limits the number of localizations that are observed. (c) In Force-PAINT, the orientations of tension probes are imaged by rapidly exciting fluorophores with 3 different excitation orientations (Φexcitation). (d) The fluorescence intensity measured for a given Φexcitation depends on the orientation of the fluorophore with respect to Φexcitation, as it does in MFM. The fluorophore orientation is given by the phase of the sinusoid. (e) Preliminary Force-PAINT map of a MEF cell exerting integrin forces on cRGD-tagged Force-PAINT probes. Force orientations calculated for single-molecule localizations are given by the color of the spots. The color wheel maps color onto force orientation.
Comment 2:
Relatedly, the authors have a tendency to hype their work. The use of the term "turn-key" in the title is arguable: all of the studies cited above used microscopy technology that is readily available in commercial form (SIM, STED, or TIRF) and could therefore be termed "turn-key" by the same logic. The claim that MTFM yields a "three orders-of-magnitude improvement in force magnitude resolution" is not supported. The technique averages over all of the DNA hairpins in a pixel, just as TFM averages over the forces exerted by integrins within a given area. Further, MTFM is not comparable to TFM in important ways: because the signal from the DNA hairpins is switch-like, it does not provide accurate local measurements of local stresses. Further, there is a two-fold degeneracy in the measurement of the azimuthal (phi) angle, a limitation that is only indirectly acknowledged in the discussion. The authors do not provide a rigorous estimate of the spatial resolution of their force maps, but instead rely on the best-case resolution for SIM as a proxy. The authors' tendency to over-sell their work does a disservice both to their field and, over the longer term, to themselves. ** In existing MFM techniques, orientation measurements have a two-fold degeneracy that prevents unique force vector mapping.
Response:
2a. "Turnkey" comment: The phrase "turnkey" is meant to describe a commercially available microscope that is accessible to the broad biological community. By that definition (see also the Oxford dictionary definition of turnkey), our approach is turnkey, as the imaging does not require any hardware or software modifications. Moreover, there is strong precedent for using this adjective when describing microscopy techniques that can be implemented on a commercial system. See the following examples of recent Nature Communications papers that use the term "turnkey" or "turn-key" to describe microscopy and analysis methods: […]
2b. Digital versus analog tension sensors: The folded DNA probes are threshold "digital" sensors and, counterintuitively, this is a more precise approach to measuring molecular forces compared to analog sensors. This is because an ensemble of two-state sensors will faithfully inform on the absolute number of molecules experiencing force exceeding the threshold force. In contrast, analog sensors will not provide this information. Unless one is doing single-molecule measurements, analog sensors fail at providing quantitative information on the absolute number density of mechanical events. We address this point in the new text 14 : "…Again, such advances will likely not be possible without hardware modifications to existing SIM microscopes."
We have also added a description of this limitation to the introductory section of the manuscript: "…spends ~10% of its time unstacked from the duplex terminus (during which time the fluorophore is randomly oriented) 3 . Note that the 180° periodicity, as shown in equation (4), means that force orientation is degenerate (i.e. any orientation measurement is indistinguishable from the same orientation rotated 180° around the z-axis). This two-fold degeneracy in force orientation measurement is a fundamental limitation of MFM that could potentially be addressed in the future using inclined illumination approaches 14 ."
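For context, the 180° periodicity has a simple physical origin: the excitation rate of a fixed fluorophore scales with the squared cosine of the angle between the excitation polarization and the dipole's in-plane projection. The following sketch illustrates this (the symbols I, Φexc and φ are ours for illustration; this is not a reproduction of the manuscript's equation (4)):

```latex
% Why polarized-excitation measurements of orientation are 180-degree periodic:
\[
  I(\Phi_{\mathrm{exc}}) \;\propto\; \cos^{2}\!\left(\Phi_{\mathrm{exc}} - \varphi\right)
  \;=\; \tfrac{1}{2}\!\left[1 + \cos\!\big(2(\Phi_{\mathrm{exc}} - \varphi)\big)\right],
\]
% hence the signal for an in-plane orientation \varphi is identical to that for
% \varphi + 180^{\circ}, for every excitation orientation \Phi_{\mathrm{exc}}.
```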
2d. Spatial resolution:
We believe that this qualified language is appropriate because MTFM does in fact yield information that is more directly related to single-molecule forces. We have measured the density of hairpins on our chips at ~200 molecules/µm², and ~1-5% of hairpins are engaged and mechanically unfolded at any given time point. These calculations indicate that the MTFM force signal is produced by ~1-10 open hairpins, and in SIM-MFM this number is smaller. As a result, SIM-MFM produces averaged signal from a discrete number of probes, in contrast to TFM, which averages the forces from thousands of receptors per "element". It is possible to detect the signal from single hairpins experiencing pN forces, and this is not the case for TFM, which requires the collective activity of thousands of adhesion receptors. Nonetheless, we have removed the language referenced.
Comment 3:
The authors should provide evidence that the SIM maps presented are not subject to reconstruction artifacts. For example, the reticular structures observed in Figure 3e are reminiscent of the artifacts previously noted when using Wiener deconvolution (Fan et al. Biophys Rep. 2019) and low signal intensities. Response: See response to reviewer 2, comment 6.
Comment 4:
The authors report that Monte Carlo simulation suggests that estimates for the polar angle theta are poor at low photon counts; presumably this is why they discard low-intensity pixels in their analysis. For this reason, it is important that they quantify and report the distribution of photon counts and the cutoff applied for their analyses. Response: We appreciate this suggestion and have taken the additional step of reporting the number of photons collected on the EMCCD such that we can compare the modeling to the results. The results of this additional analysis are shown below and are now added as a new supplemental figure (Fig. S11). Briefly, we generated histograms of pixel photon counts for a single representative cell and also for n=17 cells. The data show that most accepted signal lies in the range of 300 to 1,000 photons, while the brightest 10% of signal lies in the range of 1,000 to 3,000 photons. Therefore, the MFM force orientation measurements are reliable based on the photon intensities collected.
Photon counts were estimated from arbitrary units (a.u.) by 1) subtracting the 200 a.u. baseline from raw images, 2) multiplying by the pre-amplification factor (4.9, also called "Conversion Gain #1" in the nd2 image metadata), and 3) dividing by the conversion gain (100). We then used a masking procedure described in Fig. S13 to select pixels that were included in SIM-MFM analyses. a) A histogram of photon counts following this full process is shown for an individual platelet, along with b) a cumulative density function of counts. c,d) Same as a and b, but for an aggregated dataset consisting of 17 platelets. The data show that most accepted signal lies in the range of 300 to 1,000 photons, while the brightest 10% of signal lies in the range of 1,000 to 3,000 photons. Therefore, the MFM force orientation measurements are reliable based on the photon intensities collected.
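The three-step conversion above amounts to a simple affine rescaling of each pixel. A minimal sketch (the function name and array handling are ours; the constants 200 a.u., 4.9 e-/ADU and an EM gain of 100 are the values reported above):

```python
import numpy as np

def adu_to_photons(raw_adu, baseline=200.0, preamp_gain=4.9, em_gain=100.0):
    """Estimate photon counts from raw EMCCD counts (a.u.):
    1) subtract the camera baseline,
    2) multiply by the pre-amplification factor ("Conversion Gain #1", e-/ADU),
    3) divide by the EM conversion gain."""
    return (np.asarray(raw_adu, dtype=float) - baseline) * preamp_gain / em_gain

# Example: a raw pixel value of 6,320 a.u. maps to (6320 - 200) * 4.9 / 100 = 299.88 photons.
print(adu_to_photons(6320))
```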
We have also included a reference to this new figure in the main text: "Our results at the experimentally relevant signal level of 1,000 photons (Fig. S7) displayed small systematic errors in the recovered orientation (less than half a degree across all orientations)."
Comment 5:
The polar angle for the force vector of 45 degrees measured for 3T3s seems large relative to previous measurements (Liu PNAS 2015). It seems plausible that this may reflect the tendency of the measurement used here to overestimate low theta values (Fig. S5E). Liu et al. is, incidentally, another relevant prior publication that is not cited. Response: First, we would like to note that Liu et al. reported the angle from the horizontal plane, whereas we report the angle from the vertical axis; therefore, we are actually reporting values that are larger than those of Liu et al. Secondly, MFM is more precise in reporting force orientations that are more parallel to the horizontal plane, as these force vectors produce greater amplitude in the fit sinusoids. Finally, we emphasize that there is considerable debate in the literature, and other work has reported talin tilt angles that are more consistent with our 45-degree value. For example, see the work of Weaver and Paszek in Nature Methods and also Waterman and Springer in Nature Communications 21,22 .
As a final note, the literature precedent has mostly focused on measuring the tilt of talin and other focal adhesion proteins, while our work reports the tilt of the integrin ligand itself. To the best of our knowledge, these two vectors need not align, due to the complex architecture of adhesion assemblies and biophysical factors including the plasma membrane. For example, it is not clear what rotational and tilt angles the alpha-beta integrin heterodimer can adopt within the focal adhesion. To accurately portray this controversy, we have included a citation to Liu et al. in our updated text (see response to reviewer 2, comment 18).
Comment 6:
The rationale for splitting platelets into "two groups (28 increasing alignment cells and 22 cells with non-increasing alignment)" is not clear. The authors should provide evidence that these cells are distinguishable in an independent, biologically relevant way. Otherwise, the parsimonious interpretation is that all the cells come from the same distribution, of which half happen to fall above the measurement threshold for force vector alignment. This ambiguity makes this section of the paper difficult to evaluate. Response: We apologize for the lack of clarity in our original submission. We did not mean to suggest that these two groups represent biologically distinct populations of cells. Rather, we binned the cells into two groups based on their phenotype; only the increasing-alignment cells would fit well to equation 9, so it would distort our results if we included non-increasing-alignment cells in the calculation of the average fit parameters. Notably, heterogeneity in cell phenotype, and even in platelet phenotypes, is common and is documented in platelet textbooks. In our hands, even when we study a homogeneous population of platelets from a single donor and plate these on chemically uniform surfaces, we find a variety of responses, with a subset of platelets displaying a spiky morphology (filopodial projections) while other platelets spread uniformly on the substrate. Others have reported heterogeneity in the platelet cytoskeleton. For example, prior work by Lickert et al. presented in Scientific Reports found that in human platelets spreading on fibronectin-coated glass, actin and vinculin were generally concentrated in 2, 3, or 4 lobes (65%, 19%, or 3% of platelets, respectively), while the remainder of platelets (13%) were isotropically (i.e., circularly) organized 6 . Accordingly, classifying the platelets into two groups based on the dynamics of their tension signal is consistent with past literature.
We have modified our discussion of the grouping to better communicate the purpose behind the separation: "To quantify the average behavior of the increasing-alignment population, we split these cells into two groups (28 increasing-alignment cells and 22 cells with non-increasing alignment) and compared the sets of fit parameters." Furthermore, we have expanded upon our discussion of the classification of the cell phenotype at the end of the relevant results section.
"…underpinning this phenomenon. Our analysis of increasing alignment was restricted to 56% of platelets studied, as the remainder did not appear to exhibit increasing alignment. However, a localization microscopy-based analysis of human platelets spreading on fibronectin-coated glass found that platelet actin and vinculin was generally concentrated in 2, 3, or 4 lobes (65%, 19%, or 3% of platelets, respectively), while the remainder of platelet (13%) were isotropically (e.g. circularly) organized. In this work, we expect that only the 2-lobe subset should be recognized as increasing-alignment. This limitation occurs because, if alignment is increasing internally with 3 or 4 lobes the whole-cell measurement should appear artificially low and remain largely timeinvariant. Therefore, the non-increasing alignment group may include 3-or 4-lobed platelets with alignment that is increasing internally within lobes. As such, we expect that our findings regarding the dynamics of alignment may be applicable to greater than 56% of platelets."
Comment 7:
The null result reported for T-cells (no obvious force polarization) is likewise difficult to interpret. The three possibilities listed by the authors are reasonable. However, it also seems possible that the force threshold for the sensor used may be too low or too high to measure the forces that are most relevant for T-cell activation. Another possibility is that the choice of antibody might matter: tangential force transmitted through 17A2 resulted in T-cell activation, whereas other mAbs did not (Kim et al. JBC 2009). (So far as I can tell the authors do not report what antibody they used.) All of these ambiguities make this section of the paper likewise difficult to interpret. Response: Our prior published work, and also the work of Zhu and colleagues 23,24 (Nature Immunology 2018), have shown that TCR activation using antibodies generates tension signal that unfolds the 4.7 pN probes. The 4.7 pN hairpin used here is identical in sequence to the probe used previously 25,26 . Our previous work clearly shows that T cells are activated using this probe and are fully capable of opening probes of this F1/2. Therefore, we can rule out this possibility as a potential reason for generating null signal.
Regarding the choice of antibody, we apologize for neglecting to report this important detail. We have updated our manuscript to specify the antibody. Regarding the anti-CD3 antibody, we use the clone 145-2C11. This clone binds to the epsilon domain of CD3 and is a known activating antibody. Therefore, the antibody will activate T cells regardless of the direction of the force. We have adjusted our manuscript to specify this: "We cultured primary mouse CD8+ T-cells on DNA hairpin tension probes that present antibodies to CD3ε (CD3ε is part of the TCR complex, which includes the CD3δ, γ and ζ chains, as well as the TCR α/β chains). The antibody is the clone 145-2C11, a known activating antibody that should activate T cells regardless of force orientation. As observed previously 25,26 , T-cells spread on the tension probe-functionalized surface, and the TCRs engage and mechanically unfold tension probes to generate bright tension fluorescence signal."
Comment 8:
The method used to derive confidence bands in Figure 5h is ad hoc, and needs to be replaced with a more rigorous approach. Response: We have replaced the curves' confidence intervals with a bootstrapping-based method.
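The revised manuscript describes the exact procedure; as a generic illustration only, a percentile bootstrap that resamples cells with replacement and refits the curve on each resample is one standard way to obtain such bands (all names, the resampling unit, and the percentile choice below are our assumptions, not necessarily the authors' implementation):

```python
import numpy as np

def bootstrap_band(traces, fit_curve, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence band for a fitted curve.
    `traces` is a list of per-cell arrays on a common time grid;
    `fit_curve` maps a list of traces to a fitted curve (1-D array)."""
    rng = np.random.default_rng(seed)
    n = len(traces)
    # Resample cells with replacement and refit the curve each time.
    fits = np.array([fit_curve([traces[i] for i in rng.integers(0, n, n)])
                     for _ in range(n_boot)])
    lower = np.percentile(fits, 100 * alpha / 2, axis=0)
    upper = np.percentile(fits, 100 * (1 - alpha / 2), axis=0)
    return lower, upper
```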
"Engineering",
"Physics",
"Biology"
] |
Teaching geometry in schools: an investigative rather than instructive process
Research has documented the prevalence of lessons characterised by a homework check, followed by teacher lecture and demonstration, followed in turn by learner practice, as the sequence of classroom instructional activities in our classrooms. This sequence of classroom activities does not allow for the development of sound mathematics practices and mathematical proficiency. Meanwhile, curriculum reforms in South Africa, as well as in other parts of the world, recommend classroom activities where teachers create opportunities for, listen to, and extend learners. This paper presents a sequence of activities to be used in the teaching of geometry and surface areas of solid shapes in a grade 8 classroom. The sequence portrays the teaching of these concepts as an investigative rather than instructive process.
Introduction
Outcomes-based education (OBE) is the foundation for Curriculum 2005 (C2005) in South Africa, and guides curriculum development and learning outcomes based on critical and developmental outcomes. According to the Department of Education policy documents, critical outcomes require learners to be able to (1) identify and solve problems and make decisions using critical and creative thinking; (2) work effectively with others as members of a team, group, organisation and community; (3) organise and manage themselves and their activities responsibly and effectively; (4) collect, analyse, organise and critically evaluate information; (5) communicate effectively using visual, symbolic and/or language skills in various modes; (6) use science and technology effectively and critically, showing responsibility towards the environment and the health of others; and (7) demonstrate an understanding of the world as a set of related systems by recognising that problem-solving contexts do not exist in isolation (Department of Education, 2002a). To achieve these, the Revised National Curriculum Statement emphasises a learner-centred, activity-based approach to the teaching of mathematics. This is clearly the tone in the set learning outcomes and assessment standards at various phases of the different grade levels of the educational system. It identifies the following learning areas, which include interrelated knowledge and skills, on the basis of which learning outcomes and subsequent assessment standards are set. The knowledge areas include (1) numbers, operations and relationships; (2) patterns, functions and algebra; (3) space and shape (geometry); (4) measurement; and (5) data handling. Skills include (1) representation and interpretation; (2) estimation and calculation; (3) reasoning and communication; (4) problem-solving and investigation; and (5) describing and analysing (Department of Education, 2002b). A critical look at these will reveal some positive relationships between the curriculum and either the RAND Mathematics Study Panel's (2002) mathematics practices or Kilpatrick, Swafford and Findell's (2001) strands of mathematical proficiency. It then suggests that the provisions of the curriculum are strong enough to develop doers of mathematics who would exhibit mathematics practices and proficiency in mathematics, within and outside of the classroom.
However, the sequence of mathematics classroom practices which research has documented to be prevalent in our classrooms entails checking homework, followed by teacher lecture and demonstration, followed in turn by learner practice. This is unfortunately neither that which the curriculum proposes, nor that which allows for the attainment of the laudable outcomes. This explains why learners would tell almost spontaneously that the area of a rectangle 20,000 metres by 20 metres is 400,000 square metres, but could not find out how much land has to be taken from a cocoa plantation in order to build a 20-km-long 2-lane highway within the plantation; or a learner can successfully find the surface area of a cuboid 10 × 8 × 10 cm in class, but cannot help her mother estimate the number of one-metre-square tiles required to cover the walls and floor of her (mother's) shop, which is 10 m long, 8 m wide and 10 m high. This issue of discontinuity between school learning and cognition out of school is observed by Resnick (1987, in Engestrom, 1996) thus: the process of schooling seems to encourage the idea that the "game of school" is to learn symbolic rules of various kinds, and that there is not supposed to […]. Nor are the Panel's (2002) representation, justification and generalisation skills focused on in traditional teaching. The provisions and demands of the new South African curriculum suggest a drastic change from traditional modes of teaching to reform (learner-centred) teaching, which focuses on the nurturing of proficiency and the development of mathematical practices. It is therefore imperative that ways are found not only to suggest changes in teachers' practices, but also to provide the necessary support and assistance for such desired changes to manifest in the classroom. One such mode of support is through the presentation of lessons that support the nurturing of proficiency and development of mathematical practices in learners, which is the focus of this paper.
Investigative teaching of geometry
Research findings have shown that learning is enhanced by learners' engagement with relevant materials, even if learners have certain learning difficulties (Bransford, Brown & Cocking, 1999; Mastropieri & Scruggs, 1992). Hence, the situated theorist's position is that everybody will learn if given an enabling environment to participate in a community of practice. This is similar to the calls by Brown, Collins and Duguid (1989), Hanks (1991) and Lave (1996), that learning is enhanced through the participation of individual learners, a process referred to as enculturation and apprenticeship. All these point to the importance of learners' active participation in the instructional process. Abraham's (1997) learning cycle is an inquiry-based teaching approach, developed for science teaching, which I have found useful in the investigative teaching of concepts in geometry and indeed in other areas of mathematics. The approach, which derives from the cognitivist and constructivist position on teaching and knowledge-seeking, divides instruction into four progressive stages. Step one is engage, where the teacher creates an enabling environment to engage learners in activities that generate curiosity and interest in the planned topic of the day. Usually, an inquiry question is presented to the learners at this stage.
The second stage is explore, where learners explore the question(s) raised at the engage stage and generate answers. At this stage, the learners are placed in groups, and the teacher acts as a facilitator, usually asking further questions to guide learners' explorations and providing hints about how to proceed, without showing learners "exactly how to go about solving the problem" (Stein, Smith, Henningsen & Silver, 2000). Usually, this stage is characterised by a series of questions and introductory activities that are similar to the topic presented in the worksheets (see Appendices 1 and 2).
Then comes the third stage, explain, where opportunities are provided for learner groups to present solutions or answers to the inquiry question(s), giving justifications and explanations for their claims. And then comes the last stage, extend, where learners extend their concepts and skills to other situations by applying what they have learned in the explain stage. Usually, and particularly so in mathematics, further tasks in which these skills can be exhibited are provided for the learners. These could be done in groups, pairs, or individually. At each of these stages, and indeed at the end, evaluation of the process goes on simultaneously.
It is important to note that activities at all of these stages are interlinked, and will bring about learners' active participation. This emphasis represents an important merger between mathematics as an investigation and mathematics as a body of knowledge, where learners acquire "knowledge through investigation and experimentation in order to facilitate verbalising, understanding and applying principles in the real world" (Luera, Killu & O'Hagan, 2003: 195). According to Freudenthal (1991), mathematics must be connected to reality. It is through this approach that learners can develop and apply mathematics to problems that make sense to them (Van Den Heuvel-Panhuizen, 2003; Wigley, 1994; Freudenthal, 1991).
Below, I provide an example of a learner-centred, inquiry-based investigative lesson in a unit of mathematics in the space and shape (geometry) learning area of C2005. This unit is chosen because of its applicability to all learners, especially from grade level 7 through to 12, and because elements of it are discernible from the assessment standards at these levels. More importantly, it has been used because learners find it difficult to understand descriptions of surface areas, even though this is one of the most physical ways of describing an object.
Skills required
• Length measurement.
• Addition and multiplication of numbers.
Lesson objectives
By sustained investigation, inquiry-learning activities, the learners will be able to: • Describe the concept of surface area.
• Explain the features of a shape that influence its surface area.
• Measure different edges of a prism.
• Decouple regular prisms and identify the different planes in it.
• Find the area of the composite surfaces.
• By addition, compute the surface area of solids.
• Generate and use mathematical equations to compute surface areas of plane shapes and other possible variables in them.

Materials needed

• Three cardboard boxes of 3 different sizes (A: 5 × 4 × 6 cm; B: 2 × 3 × 5 cm; and C: 4 × 4 × 7 cm) and their nets, for different learner groups.
Lesson presentation
Engage

This stage requires the teacher's initiative in presenting a relevant scenario that would arouse learners' curiosity and stimulate their interest in getting involved. I consider the one below for the lesson: Christmas is approaching and Mr White and his family have decided to make their living room wear a new look. John, a member of the family, came up with an idea: "tiling the walls and floor would be good." "That's nice, sky blue walls and a light brown glittering floor," says his sister Flo. "How many tiles will we need?" asked Mr White.
The teacher then poses the following question to the class: How many wall tiles and floor tiles would be needed to make the living room, 8½ metres long, 7 metres wide and 3 metres high, wear a new look? Assume that they are using tiles that are 0.5 by 0.5 metres, and packaged in packs of 10 tiles. If a pack of floor tiles sells for R135 and a pack of wall tiles sells for R110, estimate the cost of the required tiles.
The inquiry question here is to determine the number of tiles required and then the cost.The learners are supposed to work in groups of 4 and the teacher circulates and provides guidance but never tells them the way to go about it.
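For the teacher's reference, one possible worked solution (our computation, ignoring the door and windows and assuming whole packs must be purchased; each 0.5 m × 0.5 m tile covers 0.25 m²):

```latex
\[
  A_{\text{walls}} = 2(8.5 + 7) \times 3 = 93\ \text{m}^2, \qquad
  A_{\text{floor}} = 8.5 \times 7 = 59.5\ \text{m}^2.
\]
\[
  \text{wall tiles} = 93 / 0.25 = 372 \;\Rightarrow\; 38\ \text{packs}, \qquad
  \text{floor tiles} = 59.5 / 0.25 = 238 \;\Rightarrow\; 24\ \text{packs}.
\]
\[
  \text{Estimated cost} \approx 38 \times \text{R}110 + 24 \times \text{R}135
  = \text{R}4\,180 + \text{R}3\,240 = \text{R}7\,420.
\]
```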
Explore 1
At this stage, learners in their various groups discuss ideas that emerge within the groups as they attempt to work through worksheet 1 (see Appendix 1). They engage in different forms of activities using the materials provided. These activities are guided by the teacher, especially through worksheet 1. Once learners have worked through the worksheet successfully, the class is ready to move on to the next stage.
Explain 1
This is the stage where learners discuss their solutions to the problem in worksheet 1. This should be done by a presenter from each group, and detailed explanation and justification should be demanded. The teacher should ensure this by pressing other learners to ask questions and specifically demand explanations. Before the end of this plenary session, the teacher should ensure that an interaction evolves in which learners realise the importance of nets of shapes in mathematical explorations. This leads to the main problem in Explore 2.
Explore 2
Exploration continues in this stage. As in Explore 1, activities are guided by the teacher, especially with worksheet 2 (Appendix 2). The inquiry question in this exploration is to determine the number of tiles that the Whites will need, and the cost. An important question regarding the door and windows in the living room will possibly arise as learners work through this worksheet. Depending on the learners' level of competence, the teacher may find it necessary to advise them to ignore this detail in working on the number of tiles required for the wall. Learners should be given a free hand to approach this exercise by counting squares on the square paper or by fixing square stickers on the net. More competent learners may even recognise the pattern and choose to compute the areas. They might work in their original groups or in pairs.
Explain 2
This is the stage where learners discuss their solutions to the original problem in a plenary session. Presentation should be done by a different presenter from each group, and detailed explanation should be demanded. As in Explain 1, the teacher needs to ensure that logical explanations or justifications are given for the positions taken by different groups. Pressing them and specifically demanding explanation will do this. One or two groups might notice the linkages to a pattern relating to the area of rectangles.
During the learners' presentations, the teacher should guide the learners in understanding that the number of 1-by-1 squares that a plane surface can take is referred to as the 'surface area of the plane surface'. Furthermore, where two or more plane surfaces combine to form a shape, the surface area of that shape is obtained by summing the surface areas of all the plane surfaces involved. The latter is what is referred to as the 'total surface area'.
Extend
Here, an extension of concepts and generalisations of ideas is necessary. Learners infer from the group presentations that the surface area of a rectangular prism is a function of its length, breadth and height, or its base area and height. Using the two activities, the teacher should now show how the work relates to (i) length, breadth and height, and (ii) base area and height, as in the sketch below. This can then be used to generalise an equation for the surface area of a rectangular prism, and perhaps other prisms (triangular base, cylinder, etc.), depending on the learners' level of competence.
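A sketch of the two equivalent generalisations (standard formulas; l, b, h denote length, breadth and height, B the base area and P the base perimeter):

```latex
\[
  \text{(i)}\ \ S = 2(lb + lh + bh), \qquad
  \text{(ii)}\ \ S = 2B + Ph, \quad B = lb,\ P = 2(l + b).
\]
% Both expand to 2lb + 2lh + 2bh, so the two viewpoints agree.
```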
Learners might now be asked to find the number of tiles that will be required: (i) to cover the walls of a company's warehouse that is 25 metres long, 15 metres wide and 12 metres high; (ii) to cover both the inside and outside of a regular hexagonal reservoir with an outer measurement of 10 metres long and 20 metres high, assuming the thickness of the wall is one metre. This could be done in pairs. To further extend knowledge in this area, learners could do a project using buildings within the school, or beyond, where the teacher has identified shapes that are not rectangular but some other geometrical shape like a trapezium, or that have other fixtures like fireplaces or alcoves.
Explain 3
This phase allows the learners to approach surface area problems without having to engage in practical counting of squares. It should be noted that some of the pairs might still approach task (i) by disjointedly computing the areas of the walls and summing up. This is fine, but the teacher needs to help the learners link the two tasks in a logical manner, especially if some of the class have already approached the solution holistically. An attempt at task (ii) above, and the project that follows, should actually be commended and appreciated. In fact, this could generate lengthy discussion.

Evaluation

Although there ought to have been some form of evaluation at each of the different levels above, it is imperative that learners be asked questions or given tasks that allow a further demonstration of knowledge, like tasks (i) and (ii) and the suggested project above. These could be in another worksheet or textbook, or tasks earlier identified and selected by the teacher for that purpose. It is important that such tasks cover a variety of possible scenarios that learners might face in real-life situations, within or outside of the school, and even in different units of measurement. These tasks should be handed in, graded, returned and discussed in class because, according to Piaget (1964) and other cognitivists, the discussion that learners' responses generate is even more important than the questions asked by the teacher or the tasks set up and implemented in the classroom.
Conclusion
The foregoing scenario is an illustration of the application of Abraham's (1997) learning cycle in the planning and implementation of mathematics lessons. Although it is widely used in science teaching, it is equally useful in teaching mathematics. The approach explicates the teaching of geometry as an investigative rather than instructive process. An investigative approach to teaching ensures adequate teacher preparation for the lesson, offers opportunities for learners to recognise previous knowledge, and accommodates learners' alternative conceptions. Hence, it provides learning experiences that help them to revise alternative notions and develop new concepts, and ensures adequate involvement in the lesson. Also, according to Lorsbach (2002), it naturally leads to other investigations that promote further exploration of other mathematical concepts. It therefore appears that one way for mathematics teachers to exemplify the current reform in mathematics teaching is for them to teach the subject via the learning cycle. This emphasises an investigative rather than instructive mode of instruction that will enhance the nurturing of all five strands of mathematical proficiency and develop mathematical practices.
"Mathematics"
] |
Generalized Bayes Estimation Based on a Joint Type-II Censored Sample from K-Exponential Populations
: Generalized Bayes is a Bayesian study based on a learning rate parameter. This paper considers generalized Bayes estimation in order to study the effect of the learning rate parameter on the estimation results, based on a joint type-II censored sample from k exponential populations. Squared error, Linex, and general entropy loss functions are used in the Bayesian approach. Monte Carlo simulations were performed to assess how well the different approaches perform. The simulation study compares the Bayesian estimators for different values of the learning rate parameter and different loss functions.
Introduction
Generalized Bayes is a Bayesian study based on a learning rate parameter (0 < η < 1), used as a fractional power on the likelihood function L ≡ L(θ; data) for the parameter θ ∈ Θ; the traditional Bayesian framework is obtained for η = 1. In this paper, we will show the effect of the learning rate parameter on the estimation results. That is, if the prior distribution of the parameter θ is π(θ), then the generalized Bayes posterior distribution for θ is:

π*(θ | data) ∝ L^η π(θ), θ ∈ Θ, 0 < η < 1. (1)

For more details on the generalized Bayes method and the choice of the value of the rate parameter, see, for example, [1][2][3][4][5][6][7][8][9][10]. In addition, we refer readers to [11,12] for recent work on Bayesian inversion. An exact inference method based on maximum likelihood estimates (MLEs) was developed by [13], who compared its performance with approximate, Bayesian, and bootstrap methods. A joint progressive censoring scheme of type II, and the expected values of the number of failures for two populations under joint progressive type-II censoring, were introduced and studied by [14]. Exact likelihood inference for two exponential populations under joint progressive censoring of type II was studied by [15]. A precise result based on maximum likelihood estimates was developed by [16]. A study of Bayesian estimation and prediction based on a joint type-II censored sample from two exponential populations was presented by [17]. Exact likelihood inference for two populations of two-parameter exponential distributions under joint censoring of type II was studied by [18].
Suppose that products from k different lines are manufactured in the same factory and that k independent samples of sizes n_j, 1 ≤ j ≤ k, are selected from these k lines and simultaneously subjected to a lifetime test. To reduce the cost of the experiment and shorten its duration, the experimenter can terminate the lifetime test once a certain number (say r) of failures occur. In this situation, one is interested in either a point or an interval estimate of the mean lifetime of the units produced by these k lines.
Suppose that {X_j1, X_j2, . . . , X_jn_j}, for j = 1, . . . , k, are k samples, where X_j1, . . . , X_jn_j are the lifetimes of n_j copies of product line A_j, assumed to be independent and identically distributed (iid) random variables from a population with cumulative distribution function (cdf) F_j(x) and probability density function (pdf) f_j(x).
Furthermore, let N = ∑_{j=1}^k n_j be the total sample size and r be the total number of observed failures. Let W_1 ≤ . . . ≤ W_N denote the order statistics of the N pooled random variables. Under the joint type-II censoring scheme for the k samples, the observable data consist of (δ, W), where W = (W_1, . . . , W_r), r is a predefined integer, and δ = (δ_ij) is defined by δ_ij = 1 if the ith observed failure W_i comes from population j, and δ_ij = 0 otherwise. If r_j = ∑_{i=1}^r δ_ij denotes the number of X_j-failures in W and r = ∑_{j=1}^k r_j, then the joint density function of (δ, W) is given by (3). The main goal of this paper is to consider the Bayesian estimation of the parameters, based on the learning rate parameter, under a joint censoring scheme of type II for exponential populations when censoring is applied to the samples in a combined manner. Section 2 presents the maximum likelihood and generalized Bayes estimators, using squared error, Linex, and general entropy loss functions in the Bayesian approach to estimate the population parameters. A numerical investigation of the results from Section 2 is presented in Section 3. Finally, we conclude the paper in Section 4.
Estimation of the Parameters
Suppose that for 1 ≤ j ≤ k, the k populations are exponential with the following pdf and cdf:

f_j(x) = (1/θ_j) exp(−x/θ_j), F_j(x) = 1 − exp(−x/θ_j), x > 0, θ_j > 0. (4)

Then, the likelihood function in (3) becomes

L(Θ) ∝ ∏_{j=1}^k θ_j^{−r_j} exp(−u_j/θ_j), (5)

where Θ = (θ_1, . . . , θ_k) and u_j = ∑_{i=1}^r w_i δ_ij + w_r (n_j − r_j).
Maximum Likelihood Estimation
From (5), the MLE of θ_j, for 1 ≤ j ≤ k, is given by:

θ̂_jM = u_j / r_j. (6)

Remark 1. MLEs of θ_j exist if we have at least k failures (r ≥ k), which means at least one failure from each sample, i.e., 1 ≤ r_j ≤ r − k + 1 and r_j ≤ n_j.
We determined the MLEs to compare their results with those of the Bayesian estimation, which uses the three types of loss functions for different values of the learning rate parameter, as described in Section 3.
Generalized Bayes Estimation
Since the parameters Θ are assumed to be unknown, we can consider conjugate prior distributions for Θ in the form of independent gamma-type priors, i.e., θ_j ∼ Gam(a_j, b_j). Therefore, the joint prior distribution of Θ is given by (7), where Γ(·) denotes the complete gamma function. Combining (5) and (7), after raising (5) to the fractional power η, the posterior joint density function of Θ is then given by (9). Since π_j is a conjugate prior, we see that if θ_j ∼ Gam(a_j, b_j), then θ_j has the posterior density θ_j | data ∼ Gam(r_j η + a_j, u_j η + b_j).
In generalized Bayes estimation, we consider three types of loss functions: (i) the squared error (SE) loss function, which is classified as a symmetric function and gives equal importance to losses for overestimates and underestimates of the same magnitude; (ii) the Linex loss function, which is asymmetric; and (iii) the general entropy (GE) loss function.
Using (9), the Bayesian estimators of θ_j under the squared error (SE) loss function are given by (10); under the Linex loss function, the Bayesian estimators of θ_j are given by (11); and under the GE loss function, the Bayesian estimators of θ_j are given by (12). Remark 2. Obviously, θ̂_j for 1 ≤ j ≤ k in the above three cases are the unique Bayes estimators of θ_j and thus admissible. The estimators θ̂_jJ are Bayes estimators of θ_j using the noninformative Jeffreys prior π_J ∝ ∏_{j=1}^k 1/θ_j, obtained directly by substituting a_j = b_j = 0 into (9), so that (10) leads to the MLEs θ̂_jM.
Remark 3.
For c = 1, −1, −2, the Bayes estimates θ̂_jE agree with the Bayes estimates under the following losses: the weighted squared error loss function, the squared error loss function, and the precautionary loss function.
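Since the displays (10)-(12) did not survive extraction, we sketch the form the SE- and GE-loss estimators must take, under our reading that Gam(a, b) here denotes the inverted-gamma density π(θ) ∝ θ^(−(a+1)) e^(−b/θ) (which the conjugacy in (9) requires); this is an interpretation, not a reproduction of the source equations:

```latex
% Posterior: \theta_j \mid \text{data} \sim \mathrm{Gam}(\alpha_j, \beta_j) with
% \alpha_j = \eta r_j + a_j and \beta_j = \eta u_j + b_j (inverted-gamma convention).
\[
  \hat{\theta}_{jS} = \mathbb{E}[\theta_j \mid \text{data}]
  = \frac{\eta u_j + b_j}{\eta r_j + a_j - 1}, \qquad \eta r_j + a_j > 1
  \quad \text{(SE loss)},
\]
\[
  \hat{\theta}_{jE} = \big(\mathbb{E}[\theta_j^{-c} \mid \text{data}]\big)^{-1/c}
  = (\eta u_j + b_j)\left[\frac{\Gamma(\eta r_j + a_j + c)}{\Gamma(\eta r_j + a_j)}\right]^{-1/c}
  \quad \text{(GE loss)}.
\]
```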
Numerical Study
This section examines the results of a Monte Carlo simulation study to evaluate the performance of the inference procedures derived in the previous section. An example is then presented to illustrate the inference methods discussed here.
Simulation Study
We have considered different choices for the three population sample sizes (n_1, n_2, n_3) and also for r. We choose the exponential parameters (θ_1, θ_2, θ_3) as (1, 2, 3), and for the Monte Carlo simulations we use 10,000 replicates. Using (6), we obtain the MLEs of θ_1, θ_2, θ_3 and their estimated risks, which are shown in Table 1 (average values of (r_1, r_2, r_3) and the average value and estimated risk (ER) of the MLEs θ̂_1M, θ̂_2M, θ̂_3M for different choices of n_1, n_2, n_3, r). In the simulation study, it should be noted that some of the simulated samples do not meet the condition in Remark 1 and, therefore, must be discarded. Thus, the average values of the observed failures (r_1, r_2, r_3) are calculated and reported in Table 1. For the Bayesian study, the hyperparameters are represented by ∆ = (a_1, b_1, a_2, b_2, a_3, b_3), where ∆ = ∆_1 = (1, 1, 2, 1, 3, 1).
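A minimal sketch of one replicate of this simulation (variable names are ours; the estimator θ̂_j = u_j/r_j and the statistic u_j = ∑ w_i δ_ij + w_r(n_j − r_j) follow Section 2):

```python
import numpy as np

def joint_type2_mles(theta, n, r, rng):
    """Simulate k exponential samples under joint type-II censoring and
    return the MLEs theta_hat_j = u_j / r_j; returns None when some
    r_j = 0, in which case the sample is discarded (cf. Remark 1)."""
    k = len(theta)
    x = np.concatenate([rng.exponential(theta[j], n[j]) for j in range(k)])
    labels = np.concatenate([np.full(n[j], j) for j in range(k)])
    order = np.argsort(x)
    w, lab = x[order][:r], labels[order][:r]   # first r pooled failures
    w_r = w[-1]                                # termination (censoring) time
    mles = np.empty(k)
    for j in range(k):
        r_j = int(np.sum(lab == j))
        if r_j == 0:
            return None
        u_j = w[lab == j].sum() + w_r * (n[j] - r_j)   # total time on test
        mles[j] = u_j / r_j
    return mles

rng = np.random.default_rng(2023)
theta, n, r = (1.0, 2.0, 3.0), (20, 20, 20), 40
reps = [m for m in (joint_type2_mles(theta, n, r, rng) for _ in range(10_000))
        if m is not None]
print(np.mean(reps, axis=0))   # average MLEs over the accepted replicates
```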
Conclusions
In this work, we considered a joint type-II censoring scheme when the lifetimes of three populations have exponential distributions. We obtained the MLEs and the Bayesian estimates of the parameters, using different values of the learning rate parameter η and the loss functions SE, GE, and Linex, in a simulation study and an illustrative example. In both methods, the MLEs and the generalized Bayes estimates θ̂_1, θ̂_2, θ̂_3 become better with larger n_i, i = 1, 2, 3, for different values of r; of course, the estimates become better, and the Bayes estimators are better than the MLEs. From Tables 2 to 7, it can be seen that the results improve as c increases. Generally, the best result is obtained for the generalized Bayes estimators with η = 0.1, i.e., the results improve as η becomes smaller. Studying this work with a different type of censoring might be interesting.
"Mathematics"
] |
Error bounds and a condition number for the absolute value equations
Absolute value equations, due to their relation to the linear complementarity problem, have been intensively studied recently. In this paper, we present error bounds for absolute value equations. Along with the error bounds, we introduce an appropriate condition number. We consider general scaled matrix p-norms, as well as particular p-norms. We discuss basic properties of the condition number, its computational complexity, its bounds and also exact values for special classes of matrices. We consider also matrices that appear based on the transformation from the linear complementarity problem.
1. Introduction. We consider the absolute value equation problem of finding an x ∈ R n such that where A ∈ R n×n , b ∈ R n and | · | denotes absolute value. A slightly more generalized form of (AVE) was introduced by Rohn [36], which is written as where B ∈ R n×n , but we will deal merely with (AVE). Many methods, including Newton-like methods [12,27,45] or concave optimization methods [29,30], have been developed for (AVE). The important point concerning numerical methods is the precision of the computed solution. To the best knowledge of the authors, there exist only few papers which are devoted to this subject for (AVE); for instance see [1,42,43]. Wang et al. [42,43] use interval methods for numerical validation. In addition, some general bounds for the solution set were presented in [20]. One effective method for numerical validation is error bound method [33].
Error bounds play a crucial role in theoretical and numerical analysis of linear algebraic and optimization problems [11,13,14,18,33]. In this paper, we study error bounds for (AVE). Indeed, under the assumption guaranteeing unique solvability for each b ∈ R n , we compute upper bounds for x − x ⋆ , the distance to the solution x ⋆ , in terms of a computable residual function. We discuss various kinds of norms and investigate special classes of matrices.
It is well-known that a linear complementarity problem can be formulated as an absolute value equation [28]. In fact, it is one of the main applications of absolute value equations. In Section 3, we study error bounds for absolute value equations obtained by the reformulation of linear complementarity problems. In addition, thanks to the given results, we provide a new error bound for linear complementarity problems.
The paper is organized as follows. After reviewing terminologies and notations, we investigate error bounds for absolute value equations in Section 2. Section 3 is devoted to linear complementarity problems. We study relative condition number of AVE in Section 4.
1.1. Notation. The n-dimensional Euclidean space is denoted by R n . Vectors are considered to be column vectors and the superscript T represents the transpose operation. We use e and I to denote the vector of ones and the identity matrix, respectively. We denote an arbitrary scaling p-norm on R n by · , that is, x = Dx p for a positive diagonal matrix D and a p-norm. In particular, · 1 , · 2 and · ∞ stand for 1-norm, 2-norm and ∞-norm, respectively. We use sgn(x) to denote the sign of x.
Let A and B be n × n matrices. We denote the smallest singular value and the spectral radius of A by σ min (A) and ρ(A), respectively. We use λ(A) to denote the vector of eigenvalues of a symmetric matrix A, and λ min (A) and λ max (A) stand for the smallest and the largest eigenvalue, respectively. For a given norm · on R n , A denotes the matrix norm induced by · , which is defined as The matrix inequality A ≥ B, |A| and max(A, B) are understood entrywise. For d ∈ R n , diag(d) stands for the diagonal matrix whose entries on the diagonal are the components of d. In contrast, Diag(A) denotes the vector of diagonal elements of A. The ith row and ith column of A are denoted by A i * and A * i , receptively. We denote the comparison matrix A by A , which is defined as We recall the following definitions for an n × n real matrix A: • A is a P-matrix if each principal minor of A is positive.
• A is an H-matrix if its comparison matrix is an M-matrix. We will exploit some results from interval linear algebra, so we recall some results from this discipline. For two n × n matrices A and A, A ≤ A, the interval matrix A = [A, A] is defined as A = {A : A ≤ A ≤ A}. An interval matrix A is called regular if each A ∈ A is nonsingular. Furthermore, we denote and define the inverse of a regular interval matrix A as A −1 := {A −1 : A ∈ A}. Note that the inverse of an interval matrix is not necessarily an interval matrix.
In this paper, generalized Jacobian matrices [9] are used in the presence of nonsmooth functions. Let f : R n → R m be a locally Lipschitz function. The generalized gradient of f atx, denoted by ∂f (x), is defined as where X f is the set of points at which f is not differentiable and co(S) denotes the convex hull of a set S.
2. Error bounds for the absolute value equations. Consider an absolute value equation (AVE). It is known that (AVE) has a unique solution for each b ∈ R n if and only if the interval matrix [A−I, A+I] is regular; see Theorem 3.3 in [44]. That is why in many statements below, we make an assumption that the interval matrix [A − I, A + I] is regular. In this case, we denote the unique solution set of (AVE) by x ⋆ .
Proof. Note that due to regularity of [A − I, A + I] the right side of the above inequality is finite. Define the residual function φ : By the mean value theorem, see Theorem 8 in [19], By multiplying −1 on the both sides and using induced norms property, we obtain which completes the proof.
To take advantage of this formulation, we need to compute the optimal value of the following optimization problem, We call the optimal value of (2.2) the condition number of the absolute value equation (AVE) with respect to the norm · . In addition, we denote the condition number with respect to the 1-norm, 2-norm and ∞-norm by c 1 (A), c 2 (A) and c ∞ (A), respectively. By properties of matrix norms, we have the following results.
Proof. Part i) and ii) are straightforward. Part iii) follows form the fact that In the next proposition, we show that optimization problem (2.2) attains its minimum at some vertices of the box {d : Proof. We will show that problem (2.2) has a solution whose components are either one or minus one. As the feasible set is compact, problem (2.2) attains its maximum. Letd be an optimal solution. Ifd is a vertex of {d : d ∞ ≤ 1}, the proof will be complete. Otherwise, without loss of generality, suppose that |d 1 | < 1. Let f : [−1, 1] → R given by f (t) = (A − diag((t,ď))) −1 , whereď is obtained by removing the first component ofd. By Sherman-Morrison formula [21], is well-defined for t ∈ [−1, 1]. Consider the optimization problem max t∈[−1,1] f (t). Since A + tE as a function of t is convex and is strictly monotone on [−1, 1], f is convex on its domain [4], and consequently max t∈[−1,1] f (t) = max{f (−1), f (1)}. Hence, due to optimality ofd, we get a new pointd which is optimal to (2.2) and all components instead of first one are equal tô d and its first component is either one or minus one. In the same line, one can obtain a solutiond with |d| = e, which completes the proof.
By Proposition 2.3, to handle problem (2.2), one needs to check solely all vertices of {d : d ∞ ≤ 1}. As the number of vertices is 2 n , this method may not be effective for large n. Indeed, problem (2.2) is NP-hard in general. It is known that for any rational p ∈ [1, ∞), except for p = 1, 2, computation of the matrix p-norm of a given matrix is NP-hard [17]. Consequently, problem (2.2) is NP-hard for any rational p ∈ [1, ∞) except p = 1, 2. We prove intractability for 1-norm, so it is NP-hard for ∞-norm, too. We conjecture it is also NP-hard for 2-norm.
Proof. By triangle inequality u = 1 , from which the statement follows. Proof. By [35], solving the problem max e T |x| subject to |Ax| ≤ e (2.3) is NP-hard. Even more, it is intractable even with accuracy less than 1 2 when A −1 is a so called MC-matrix [35]. Recall that M ∈ R n×n is an MC matrix if it is symmetric, 2n−1 . Therefore λ min (A) ≥ 1 2n−1 and we can achieve λ min (A) > 1 by a suitable scaling. As a consequence, [A − I, A + I] is regular.
Feasible solutions to the above optimization problem can be equivalently characterized as Introducing an auxiliary variable z = 1, we get Rewrite the system as Let α > 0 be sufficiently large. The system equivalently reads Now, we relax the system by introducing intervals on the remaining diagonal entries That is why we analytically express the inverse matrix (notice that it exists due to regularity of [αA − I, αA is attained for the last column, the claim is resulted. Otherwise, since α > 0 is arbitrarily large, the 1-norm is attained for no column of the middle part. Suppose that the norm is attained for ith column of the first column block. We compare the norms of this column and the last column of M (D, We compare separately their three blocks. Obviously, for the last entry the latter is larger. Since C → 0 as α → ∞, the first block of entries of the former vector is arbitrarily small and neglectable. Thus we focus on the second block. The former vector has entries αC * i . In view of Lemma 2.4, one can choose a suitableD such that |D| = I and αC * i 1 ≤ αCDe 1 = αC * i + α j =i C * jdjj 1 . Furthermore, one can select a matrixD ′ with |D ′ | = I and e + D ′ CDe 1 = e +D ′ CDe 1 . Because c 1 (M (0, 0, 0)) = M (D, D ′ , −1) −1 1 , the given matricesD andD ′ fulfill the claim. Claim B. The 1-norm of the last column is arbitrarily close to 1 + n + e T |A −1 De|. Proof of the Claim B. The last entry of the column is 1. Since C → 0 as α → ∞, the first block tends to e as α → ∞. The second block reads By Claim B, the 1-norm of the last column is by 1 + n larger than the objective value of (2.3). So by maximizing 1-norm of M (D, D ′ , d) −1 we can deduce the maximum of (2.3) with arbitrary precision. Notice that e T |A −1 |e is an upper bound on (2.3) and it has polynomial size, so we can find α of polynomial size, too by the standard means (c.f. [39]).
In general, the computation of c(A) is not easy. However, computation of the condition number with respect to some norms or for some classes of matrices is not difficult. In the rest of the section, we study the given condition number from this aspect.
For the following we say that a matrix norm is monotone if |A| ≤ B implies A ≤ B . For instance, the scaled matrix p-norms are monotone.
Proof. By Proposition 2.3, we need to check the vertices of {d : d ∞ ≤ 1}. Let d be such that |d| = e and denote D := diag(d). Then By monotonicity of the matrix norm Proof. Note that the assumption implies that (AVE) has a unique solution, see Theorem 4 in [38], and [A − I, A + I] is regular. Due to the continuity of eigenvalues with respect to the matrix elements, there exists matrix B with |A −1 | < B and ρ(B) = γ. By Perron-Frobenius theorem, there exists v > 0 such that Bv = ρ(B)v.
By Neumann series theorem [21], (I − |A −1 |) −1 and (I − B) −1 exist and are nonnegative. Hence, Moreover, for d with d ∞ ≤ 1, One may wonder why we do not use the well-known result which states the existence of a matrix norm . with A < ρ, see Lemma 5.6.10 in [21], to prove the above theorem. The underlying reason is that the given matrix norm by this result is not necessarily a scaled matrix p-norm. It is worth mentioning that, under the assumption of Theorem 2.7, when |A −1 | > 0, one obtains for some scaling 1-norm. Note that a sufficient condition for having ρ(|A −1 |) < 1 is the existence of a diagonal matrix S with |S| = I such that A −1 S ≥ 0 and (A−S) −1 S ≥ 0. In fact, Theorem 5.2 in Chapter 7 of [3] implies that ρ(A −1 S) < 1 under this condition, which is equivalent to ρ(|A −1 |) < 1.
Error bound can be utilized as a tool in stability analysis [10,14]. As mentioned earlier, (AVE) has a unique solution for each b ∈ R n if and only if [A − I, A + I] is regular. We denote the set of matrices which satisfy this property by A. It is easily seen that A is an open set. Let function X(A, b) : A × R n → R n return the solution of (AVE). In the following proposition, we list some properties of function X.
ii) Function X is locally Lipschitz with modulus c(A).
Proof. First, we show the first part. Suppose that X(A, b 1 ) = x 1 and X(A, b 2 ) = x 2 . Thus, There exists a matrix D ∈ [−I, I] such that |x 2 | − |x 1 | = D(x 2 − x 1 ). So the above equality can be written as Now, we prove the second part. Consider the locally Lipschitz function φ : A × R n ×R n → R n given by φ As [A − I, A + I] is regular, the implicit function theorem (see Chapter 7 in [9]) implies that there exists a locally Lipschitz function X(A, b) : A × R n → R n with φ(A, b, X(A, b)) = 0. In addition, where A 1 , A 2 and b 1 , b 2 are in some neighborhoods of A and b, respectively.
As mentioned earlier, one class of effective approaches to handle (AVE) is concave optimization methods. Mangasarian [29] proposed the following concave optimization problem, He showed that (AVE) has a solution if and only if the optimal value of (2.6) is zero. Now, we show that (2.6) has weak sharp minima property. Consider an optimization problem min x∈X f (x) with the optimal solution set S. The set S is called a weak sharp minima if there is an α > 0 such that where dist S (x) := min{ x − s 2 : s ∈ S}. Weak sharp minima notion has wide applications in the convergence analysis of iterative methods and error bounds [5,6]. Proposition 2.9. Let A ∈ A. Then the optimal solution of (2.6) is a weak sharp minimum.
Proof. Let X and x ⋆ denote the feasible set and the unique solution of (2.6), respectively. By Theorem 2.1, c 2 (A) ∈ R + and 1 c 2 (A) x which shows that x ⋆ is a weak sharp minimum.
Condition number of AVE for 2-norm. Since
, c 2 (A) can be computed as the optimal value of the following optimization problem, In general, the function σ min (·) is neither convex nor concave; see Remark 5.2 in [34]. Here, σ min (·) is a function of diagonals. Nonetheless, σ min (·) is also neither convex nor concave in this case; the following example clarifies this point. From this perspective, Proposition 2.3 mentioned above is by far not obvious.
In the next proposition, we give a formula for symmetric matrices. Before we get to the proposition, we present a lemma. The "only if" part follows from the Bauer-Fike theorem [2].
In the following example, we show that the bound (2.9) can be arbitrarily large while the error bound with respect to the 2-norm remains bounded.
For the matrix A, let T be given by […]. It is easily seen that T is diagonally dominant with a nonnegative diagonal, so it is positive semi-definite. Consequently, the desired equality follows.
Note that under the assumptions of Proposition 2.15, we also have the following bound […]. As […] for a permutation matrix P,
Condition number of AVE for ∞-norm.
Some upper bounds have been proposed for ‖A^{−1}‖_∞ and ‖A^{−1}‖_1; see [23,25,31,40]. As Theorem 2.1 holds for any scaled p-norm, it would be advantageous to use these norms.
Proof. By virtue of Theorem 3.6.3 in [32], A − I is an M-matrix. In addition, as M-matrices are preserved by the addition of positive diagonal matrices [3], A + I is also an M-matrix. Hence, by Kuttler's theorem [24], [A − I, A + I] is inverse nonnegative, and we proceed as in the proof of Proposition 2.17.
where the first inequality follows from the fact that, for an H-matrix A, ‖A^{−1}‖_∞ ≤ ‖⟨A⟩^{−1}‖_∞; see Theorem 1 in [41].
Proof. First, we show that for a given d with ‖d‖_∞ ≤ 1, we have the following inequality. Consequently, the interval matrix [A − I, A + I] is regular. Similarly to the proof of Proposition 2.15, one can show the stated equality. The above equality and (2.13) imply c_·(A) ≤ 1/α, and the proof is complete.
Error bounds and a condition number of AVE related to linear complementarity problems. The study of AVE is inspired by the well-known linear complementarity problem (LCP) [28]. LCP provides a unified framework for many mathematical programs [10]. In this section, we study error bounds for AVEs obtained by transforming LCPs. Consider a general linear complementarity problem LCP(M, q): find z ≥ 0 with Mz + q ≥ 0 and z^T(Mz + q) = 0, where M ∈ R^{n×n} and q ∈ R^n. Throughout this section, without loss of generality, we assume that 1 is not an eigenvalue of M, so the matrix M − I is non-singular. This assumption is not restrictive, as one can rescale M and q in (LCP). Problem (LCP) can be formulated as the following AVE, see [26]: (M + I)(M − I)^{−1}x − |x| = (M − I)^{−1}q, where the LCP solution is recovered as z = |x| − x. The following proposition states the relationship between M and (M + I)(M − I)^{−1}; see Theorem 2 in [37]. In addition to the error bounds introduced for some classes of matrices in the former section, in the following results we propose error bounds for this absolute value equation. It is worth noting that the assumption Diag(M) ≤ e is not restrictive, since LCP(M, q) is equivalent to LCP(λM, λq) for λ > 0. In the following, we investigate the case that M is an H-matrix. Before we get to the theorem, which gives a bound in this case, we first need a lemma. By applying the Neumann series and the obtained results, we get a bound in which the last equality is obtained by using the stated relations. Therefore ‖Â^{−1}‖ ≤ (1/2)‖M^{−1} − I‖, and the proof is complete. In the rest of this section, using the obtained result, we present new error bounds for linear complementarity problems. Many papers have been devoted to error bounds for LCP(M, q); see [7,8,10,16,33]. It is easily seen that x̄ is a solution of (LCP) if and only if x̄ solves θ(x) = 0, where θ(x) := min(x, Mx + q), taken componentwise, is called the natural residual of (LCP). As mentioned earlier, (LCP) has a unique solution for each q if and only if M is a P-matrix. For M a P-matrix, Chen and Xiang [7] proposed an error bound involving a maximization over diagonal matrices; therefore, the results given in this paper can be exploited to provide an upper bound for this maximization. For instance, Chen and Xiang (see Theorem 2.2 in [7]) proved an explicit bound when M is an M-matrix. As seen, f is a piecewise linear convex function; however, the maximization of a convex function is, in general, an intractable problem. In this case, one needs to solve n linear programs.
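To make the reduction concrete, here is a minimal sketch (my own illustration of the standard transformation recalled above, with hypothetical data): it builds A and b from M and q, after which any AVE solver, e.g., the generalized Newton sketch earlier, yields x, and z = |x| − x solves the LCP.

```python
import numpy as np

def lcp_to_ave(M, q):
    """Rewrite LCP(M, q) as the AVE  A x - |x| = b, assuming 1 is not
    an eigenvalue of M.  With w = Mz + q, the change of variables
    x = (w - z)/2 gives z = |x| - x and w = |x| + x."""
    n = M.shape[0]
    Minv = np.linalg.inv(M - np.eye(n))
    return (M + np.eye(n)) @ Minv, Minv @ q

M = np.array([[3.0, 1.0], [0.0, 2.0]])   # P-matrix; 1 is not an eigenvalue
q = np.array([-1.0, 1.0])
A, b = lcp_to_ave(M, q)
# Solve A x - |x| = b with any AVE method; then z = np.abs(x) - x.
```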
In the next proposition, we give an upper bound on the optimal value for the ∞-norm.
On the other hand, suppose that ‖B‖_∞ = ‖B_{i*}‖_∞. There exists B̄ in the set {(I − M)^{−1}X : ‖X‖ ≤ 1} for which the desired inequality holds; this is a well-known bound, see Theorem 2.1 in [7]. Here, we obtain inequality (3.3) with a different method, as a by-product of our analysis.
4. Relative condition number of AVE. We introduce a relative condition number c*(A) as follows, which is equal to c(A) · max_{‖d‖_∞ ≤ 1} ‖A − diag(d)‖. The meaning of the relative condition number follows from the bounds presented in the proposition below; they extend the bounds known for the error of standard linear systems of equations [18].
Proof. Since b ≠ 0, we have x⋆ ≠ 0. First, we show the upper bound, which follows from the displayed estimate. Now, we establish the lower bound. From the proof of Theorem 2.1 we know that there exists some Â ∈ [A − I, A + I] such that Ax − b − |x| = Â(x − x⋆). Hence the statement follows.
In order to compute c*(A) we have to determine c(A) and max_{‖d‖_∞ ≤ 1} ‖A − diag(d)‖. The former was discussed in detail in the previous sections, so we now focus on the latter. Recall that a norm is absolute if ‖A‖ = ‖|A|‖, and it is monotone if |A| ≤ |B| implies ‖A‖ ≤ ‖B‖. For example, the 1-norm, ∞-norm, Frobenius norm and max norm are both absolute and monotone. For the 2-norm one has ‖A − diag(d)‖_2 ≤ ‖A‖_2 + 1.
Moreover, the bound holds with equality when A is symmetric.
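For absolute, monotone norms the maximizing d can be written down directly, since only the diagonal is perturbed and each |a_ii − d_i| is maximized at d_i = −sign(a_ii). The sketch below (an illustration, not from the paper) computes this factor of c*(A) for such norms.

```python
import numpy as np

def max_diag_shift_norm(A, ord=np.inf):
    """max over ||d||_inf <= 1 of ||A - diag(d)|| for an absolute,
    monotone norm (ord in {1, np.inf, 'fro'}): choosing
    d_i = -sign(a_ii) lifts every diagonal entry to |a_ii| + 1, which
    matches the entrywise upper bound (|A| with |a_ii| + 1 diagonal)."""
    d = -np.sign(np.diag(A))
    d[d == 0] = -1.0          # any sign is optimal when a_ii = 0
    return np.linalg.norm(A - np.diag(d), ord)
```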
Conclusion. In this paper, we studied error bounds for absolute value equations. We suggested formulas for the computation of error bounds for some classes of matrices; the investigation of other classes of matrices may be of interest for further research. The proposed formulas can be employed not only for absolute value equations obtained by transforming linear complementarity problems, but also for linear complementarity problems themselves. In addition, we showed that, in general, the computation of error bounds for a general matrix is an NP-hard problem for every norm except possibly the 2-norm, for which the complexity remains an open problem. | 5,423.6 | 2019-12-30T00:00:00.000 | [
"Mathematics",
"Computer Science"
] |
Virtual Sensors
Introduction
Information systems increasingly link to the physical world. Technological advancements and declining unit costs of sensor technology combined with increased connectivity drive the spread and complexity of the Internet of Things (IoT) (Wortmann and Flüchter 2015) or so-called cyber-physical systems (Lasi et al. 2014; Lin et al. 2017). Today, billions of sensors feed information systems (IS) with data describing physical phenomena - such as temperature, pressure, humidity, velocity, chemical components, or material composition - across many areas ranging from industrial applications (e.g., smart factories) to consumer applications (e.g., smart watches). They form a key foundation for AI-based information systems that apply machine learning and generate analytics-based solutions. In particular, sensor data represents an essential building block of digital twins as an important phenomenon of interest for the BISE community (van der Aalst et al. 2018). As digital duplicates of real assets in the physical world, they rely on sensor technology for continuous data acquisition: As an example, the digital representation of a production plant (captured via physical or virtual sensors) may be used to optimize the production process by means of simulation or to develop predictive maintenance services (Tao et al. 2019). The increasing importance of sensors and IoT-based data for IS is also evident from the rapidly growing number of articles in academic IS journals dealing with 'sensors', which has increased more than tenfold within the last two decades. This development of cyber-physical systems is drawing attention to the question of how data can be captured from the physical world and be fed into a connected IS: the condition of the physical world can either be "directly" observed (by a physical sensor) or indirectly derived by fusing data from one or more physical sensors, i.e., applying virtual sensors.
Typically, embedding physical sensor output into IS is subject to a number of limitations: equipping assets with sensors is cost-intensive, sensor signals are noisy or may interfere with each other, sensors may lose accuracy over time, or their use is even technically not feasible due to spatial or environmental conditions. However, software-based virtual sensors offer an additional abstraction layer built on digital representations of sensor hardware. They issue signals that aggregate input from physical sensors; thus, they may overcome the limitations mentioned above, offering lower operating cost or increased reliability, agility, or even indirect measurement of physically non-measurable properties. In addition, virtual sensors can make low-level physical sensor information more broadly available for application in cyber-physical systems: they foster collaboration on the level of sensors (e.g., improving accuracy of individual sensors), on the level of assets (e.g., replacing or substituting individual sensors) and even on the level of organizations (e.g., enabling different service providers to offer services based on the same sensor hardware). Thus, while physical sensors typically feed specific, isolated applications only, virtual sensors become the primary source of physical world data for generalized and connected cyber-physical systems.
While the basic concept of virtual sensors dates back to Muir (1990), still today a number of unresolved challenges (like data access and availability, standardization, platform deployment) limit its application within IS -and stand in the way of effective and efficient cyber-physical systems and IoT-based solutions. In this article, we will clarify the terminology around virtual sensors and describe their advantages (Sect. 2), describe the virtual sensor concept and differentiate four levels of application -from pure sensor virtualization to dynamic-cooperative sensing (Sect. 3). We emphasize the importance of systems thinking for the effective application of virtual sensors (Sect. 4) and outline research challenges for the BISE community (Sect. 5).
Physical Versus Virtual Sensors
In general, sensors are technical devices that monitor their environment and continuously produce signals at a regular frequency (either in analog form, like electric impulses, or digitally, like measurement data). A physical sensor is a sensor that reacts to a physical stimulus (e.g., temperature, light, pressure, magnetism, or a particular motion) and transmits a resulting impulse -typically through electrical signals that can be captured and stored in digital form (Fraden 2016;Merriam-Webster n.d.). In contrast to physical sensors, a so-called virtual sensor is a pure software sensor which autonomously produces signals by combining and aggregating signals that it receives (synchronously or asynchronously) from physical or other virtual sensors (Kabadayi et al. 2006): Fig. 1 illustrates various constellations of virtual sensors (VS): (a) a virtual sensor based on physical sensors (PS) only, (b) a virtual sensor based on another virtual sensor only, (c) a virtual sensor based on both physical and virtual sensors. Thus, virtual sensors only process data originally gathered by physical sensors. The data they deliver is then typically embedded into more complex functions or software applications that merge this input with data from other sources and execute analytics algorithms on the combined set of data.
By fusing and processing multiple physical sensor inputs, virtual sensors are able to measure abstract conditions or process variables that may not be physically measurable themselves (Albertos and Goodwin 2002;Kabadayi et al. 2006) -as, for instance, a type of sealing defect indicated by a function of several process signals (Martin and Kühl 2019): this condition could not be detected by any physical sensor built into the sealing itself. In existing literature, however, the distinction between physical and virtual sensors is fuzzy as most physical sensors are typically described as not capturing a measurand in a direct way. In fact, most physical sensors measure the phenomena of interest (e.g., pressure or force) by using physical correlations (e.g., the piezoelectric effect) to translate the variable to be measured into a processable electric signal. Thus, most real-world sensors already include additional hardware and software components for signal processing (Fraden 2016) -and in a strict sense would in fact be virtual sensors.
In literature, the general idea of combining several (homogeneous or heterogeneous) sensors has already been discussed for decades using different terms: A sensor network is comprised of a number of "sensor devices that are deployed in an ad hoc fashion [to] cooperate on sensing a physical phenomenon" (Tilak et al. 2002, p. 28). Nodes in sensor networks usually have no or limited computing power and, thus, transmit the sensed data to a central location where it can be processed further (Yick et al. 2008). While the concept of sensor networks focusses on connecting sensors at the physical (i.e., hardware and connectivity) level, sensor fusion describes a merge of different sensors at a data and information level. The concept of sensor fusion denotes strategies that serve to overcome issues of individual physical sensors (such as limited spatial and temporal coverage, uncertainty, or limited robustness). It describes the combination of "information from multiple sensors and sensor types to increase the accuracy and to resolve ambiguities in the knowledge about the environment" (Chiu et al. 1986, p. 1629). In other words, fusion enables both more precise measurements of one specific phenomenon (e.g., temperature at a specific location within a system) as well as abstract representations of diverse signals (e.g., a defect within the system).
Based on these concepts, the term virtual sensor (sometimes also referred to as soft sensor) has evolved as the implementation of a sensor fusion based on a sensor network. However, the term is still not unanimously defined, and we observe also other, slightly different meanings. Some authors emphasize architectural aspects and describe virtual sensors as a pure software abstraction layer without further specifying data processing aspects (Madria et al. 2014;Bose et al. 2019). Other authors only address certain aspects of virtual sensors, such as the ability to leverage different data sources in order to measure an unobservable target without considering aspects like the pure virtualization of a single physical sensor (Kabadayi et al. 2006;Tegen et al. 2019). To address this discord, this article aims to consolidate different definitions and to propose a coherent conceptualization.
Virtual sensors serve to overcome a number of weaknesses of purely physical sensors. First, there is the obvious advantage of significantly lower costs of software compared to hardware, applying to both initial investment and ongoing maintenance (Tegen et al. 2019). Second, virtual sensors provide an interesting alternative when a physical sensor cannot be placed in the preferred position due to spatial conditions (e.g., lack of space for a sensor) or a hostile environment (e.g., exposure to acids or extreme temperatures). The resulting delay or inaccuracy of the measurement, when installing the sensor in a less suitable spot, may be compensated by virtual sensors (Tegen et al. 2019). Third, virtual sensor technology can reduce signal noise and, thus, increase confidence in the signals, when a sensor's output is confirmed by other sensors measuring the same phenomenon (Albertos and Goodwin 2002). Fourth, so-called drifts of physical sensors are a well-known phenomenon rendering a sensor inaccurate over time due to, e.g., wear or pollution (Baier et al. 2019). These drifts can be recognized or compensated by virtual sensors. Finally, virtual sensors are extremely flexible and can be redesigned as required, while physical sensors, once installed, often can only be repositioned by mechanical intervention (Neidhardt et al. 2008;Tegen et al. 2019).
In addition to this functionality of ''replacing'' physical sensors, virtual sensors are used to deliver a ''higher level'' output as a function of various, heterogeneous sensor signals (as stated above). For instance, they may transform various sensor data into information about the condition of an asset (e.g., the wear and tear level of an industrial robot) forming a small-scale information system themselves. Based on this output, better decisions could be made (e.g., the scheduling of maintenance).
Key Characteristics of Virtual Sensors
Virtual sensors represent a software layer that provides indirect measurements of a process variable or an abstract condition based on data gathered by physical (or other virtual) sensors leveraging a fusion function. In order to clearly describe the concept of a virtual sensor and also to identify key properties, Fig. 2 graphically illustrates its building blocks and their relationships. In the following, we first elaborate on a conceptual framework of a virtual sensor and its inherent assumptions. In a second step, we focus on describing different application levels of the virtual sensor concept.
An asset describes an object, subject, or system which, as a whole or in parts, is to be monitored or observed in any form. It is a delimitable, natural or artificial ''thing'' consisting of various components that can be regarded as a common whole due to certain relationships between them. Examples include technical systems such as machines, cars, or airplanes, but also social or sociotechnical systems such as patients to be monitored or a work environment.
Data sources provide data streams about the asset generated by physical or other virtual sensors at a regular frequency. This sensor data may originate from the same asset or other assets in cyber-physical systems. The data can be of any type (e.g., numeric, categorial, etc.) and is typically made available in a continuous fashion. Nevertheless, interruptions of the data streams, time delays and batchwise provision of the data are also conceivable. Moreover, the number of sources or its format may dynamically change over time.
A data fusion function describes a transformation procedure of any complexity which converts source data into a desired output variable or information. The simplest fusion function would reproduce the input signal without any modification. However, more complex, but still simple fusion functions apply methods such as scaling, filtering, linearization, aggregation, smoothing, extrapolation and others to the source data in order to provide a final measurement result (Albertos and Goodwin 2002). These functions depend on the characteristics of the sensor and the sensing environment. Moreover, machine learning-based functions are applicable, which are able to infer a target of interest from data sources of different resolution, availability, type and form (Meng et al. 2020).
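As a toy illustration of these building blocks (my own sketch, not from the article; all names are hypothetical), a virtual sensor can be modeled as a thin software layer that polls its sources and applies a fusion function:

```python
from typing import Callable, Sequence

class VirtualSensor:
    """Minimal virtual-sensor sketch: sources are callables returning
    the latest reading of a physical or virtual sensor; fuse maps the
    collected readings to the derived output value."""

    def __init__(self, sources: Sequence[Callable[[], float]],
                 fuse: Callable[[Sequence[float]], float]):
        self.sources = sources
        self.fuse = fuse

    def read(self) -> float:
        return self.fuse([source() for source in self.sources])

# Example fusion: smooth one noisy source by averaging five polls.
# `thermometer` is a hypothetical callable returning a reading.
def smoothed(thermometer: Callable[[], float]) -> VirtualSensor:
    return VirtualSensor([thermometer] * 5,
                         fuse=lambda xs: sum(xs) / len(xs))
```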
The derived measurements produced by a data fusion function represent the virtual sensor data. This time series data can be of any type and form but should be directly attributable to the asset to be observed (e.g., a part of the system).
In order to persist this data, a digital twin is required. According to Dietz and Pernul (2020), a digital twin represents an asset's virtual counterpart that can be leveraged to digitally mirror and constantly manage it. It combines and integrates an asset's data sources and controls its availability and validity. This includes providing metadata, semantics and context information that refines data into information, e.g., the interpretation of a transmitted floating-point number as the measure of electric current in a particular module. Additionally, the digital twin provides necessary interfaces between the virtual and the real world, and enables bi-directional data sharing as well as synchronization (Alam and El Saddik 2017). Thus, virtual sensors can serve both as data sources for digital twins as well as their integrators, since a digital twin is also an integral part of the virtual sensor concept.
Based on these generic building blocks, different degrees of complexity, expansion stages or facets of virtual sensors can be observed in applications or conceptual descriptions that appear in literature. The typology illustrated in Fig. 3 schematically describes different levels of application on the interaction and data level. The degree of complexity with regard to data integration and fusion increases from left to right.
Sensor Virtualization
The simplest form of a virtual sensor obtains data from exactly one physical sensor and mirrors it either completely unchanged (Madria et al. 2014;Ko et al. 2015), in aggregated (Corsini et al. 2006), cleaned, or otherwise modified form (Albertos and Goodwin 2002). This kind of virtual sensor is very common in practice, as advances in communication technologies and increased bandwidth allow measurement data from many physical sensors to be made digitally available via cloud infrastructures (Fraden 2016;Matt 2018). A typical example of a virtualized sensor based on a single input signal is the pedometer in smartphones: Simple algorithms transform the output signal of an accelerometer into the number of steps taken over time (Abadleh et al. 2017). An accelerometer in turn is a force sensor with a seismic mass attached, which leverages the piezoelectric effect to translate a force into a proportional measurable electric signal (Gautschi 2002). In turn, an acceleration sensor can also be leveraged, for instance, to detect abnormal behavior in mechanical components such as pumps or bearings through defined threshold values in order to initiate maintenance actions (Donelson and Dicus 2002).
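A hedged sketch of this pedometer example (illustrative only; real devices add filtering and adaptive thresholds): counting upward threshold crossings of the acceleration magnitude turns one physical signal into a step count.

```python
import numpy as np

def count_steps(accel, threshold=1.2):
    """Toy single-signal virtual sensor: accel is an (n, 3) array of
    accelerometer samples in units of g; a step is registered at each
    upward crossing of the magnitude through the threshold."""
    magnitude = np.linalg.norm(accel, axis=1)
    above = magnitude > threshold
    return int(np.count_nonzero(above[1:] & ~above[:-1]))
```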
Competitive Sensing
Sensor configurations where each sensor provides independent measurements of the same property are called competitive or redundant. If several sensors -possibly with different accuracies -perceive the same features in the environment, overall accuracy may increase, and at the same time uncertainty as well as transmission volume is reduced, as less data needs to be transmitted (Luo and Kay 1989;Tegen et al. 2019). Multiple sensors providing redundant information can also increase reliability in the event of a sensor failure or malfunction (Luo and Kay 1989). Furthermore, the influence of drifts caused by decreasing sensor accuracy can be detected and optionally corrected (Dornfeld and DeVries 1990;Baier et al. 2019). Guérin et al. (2003) present an exemplary implementation of competitive sensing, in which the signals of two microphones are leveraged to improve the audio quality for hands-free car kits.
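A standard way to fuse such redundant readings (one common choice, not necessarily the one used in the cited works) is inverse-variance weighting, whose fused variance never exceeds that of the best single sensor:

```python
import numpy as np

def fuse_competitive(readings, variances):
    """Inverse-variance weighted fusion of redundant sensors that all
    measure the same quantity; returns the fused value and its
    (smaller) fused variance."""
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = float(np.sum(w * np.asarray(readings)) / np.sum(w))
    return fused, float(1.0 / np.sum(w))

value, var = fuse_competitive([20.1, 19.8, 20.4], [0.04, 0.09, 0.25])
```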
Static Cooperative Sensing
Cooperative sensing leverages data provided by several independent sensors to derive information that would not be available from an isolated view. However, one problem is the increased sensitivity to inaccuracies of individual sensors involved (Brooks and Iyengar 1998). For the same reason, suitable fusion functions for cooperative sensing usually show higher complexity compared to competitive sensing due to different types of involved sensors. An example is a neural network predicting NOx at cylinder level based on individual cylinder pressures and a downstream cylinder-aggregated NOx sensor (Henningsson et al. 2012). These cylinder-specific measurements can support the design of improved engines that meet customer demands for low fuel consumption as well as comply with legal regulations. In the static case, incorporated sensors are available at any time, so that the fusion function may permanently access a constant set of features.
Dynamic Cooperative Sensing
When the permanent availability of physical sensors is not guaranteed, dynamic fusion functions care for flexible adaptations to systemic changes (Tegen et al. 2019). Reasons can be dynamic changes in the system itself, such as the omission or addition of a system component equipped with sensors, as well as the limited availability of physical sensors for technical or economic reasons. An example would be the observation of the motion profile of a person that, depending on the time of day, is carrying either a smartphone or a fitness tracker with different built-in sensors. Another example is the pedestrian recognition function of autonomous vehicles, which can rely on camera signals in good weather conditions, but not in fog or at night. Dynamic cooperative sensing, thus, requires a complex fusion function being able to handle dynamic feature availability to adequately accommodate the context as well as the accuracy of a measurement (Mihailescu et al. 2017).
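The following sketch (hypothetical; sensor ids and estimators are made up) captures the essential control flow of dynamic cooperative sensing: among a preference-ordered list of estimators, pick the first one whose required inputs are currently available.

```python
def fuse_dynamic(readings, estimators, fallback=None):
    """readings: dict mapping sensor ids to current values (missing
    or None when a source is offline).  estimators: ordered list of
    (required_ids, fuse_fn) pairs, most preferred first."""
    for required, fuse_fn in estimators:
        if all(readings.get(k) is not None for k in required):
            return fuse_fn({k: readings[k] for k in required})
    return fallback

# E.g., prefer camera-based pedestrian detection; fall back to a
# radar-only estimate at night or in fog when the camera drops out.
```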
Towards a Systems Thinking Mindset
As described above, there are different application levels of virtual sensors. Higher levels allow for increasing accuracy, reliability and informational value of a virtual sensor, but hinge on the use of a richer set of data. This in turn calls for inclusion of a broader set of data sources across different assets (as in Sect. 3) or even across different organizational entities, and, thus, the extension of the system boundary along these dimensions (cf. Fig. 4). Thus, higher performance of a virtual sensor is linked to the inclusion of additional resources from the enlarged system -as generally postulated for service system engineering in IS (Böhmann et al. 2014) and evident from the advancement of cross-industry platforms (Beverungen et al. 2020).
With respect to the different application levels of virtual sensors described in Sect. 3, access to a broader set of data may improve sensor performance within any particular level as well as allow to progress to the next level: In sensor virtualization, where only one sensor signal is used, there may be different options for sensor positioning affecting the measurable correlation to the actual target variable. The more options for picking a sensor signal from own assets or even from those of other organizations, the more accurately the target variable may be measured. However, this positioning may entail the permission or support of other entities: A humidity or temperature sensor at a public weather station may be a good (isolated) data source for estimating the weather conditions for a particular target location nearby (Fig. 4, scenario I). Access to and a switch to a similar sensor at other self-owned weather stations may yield even better predictions (Fig. 4, scenario II) (Maniscalco and Rizzo 2017), while additional access to private weather stations would offer even more options to identify the best suited sensor (Fig. 4, scenario III). Therefore, an application-specific assessment of the benefit (increase of sensor performance) against the potential costs for the extension of the system boundaries (price of integration) is required.
In competitive sensing, more fusion options become available when different data sources can be joined: in the example above, simultaneous access to all available sensors of the different weather stations and the triangulation of their (''competitive'') signals might improve sensor performance.
For static and dynamic cooperative sensing, sensors tapping additional data sources for different signals may be the key to adequate performance: In an industrial context, the condition of a sealing cannot be monitored on the basis of an individual sensor or type of signal. Only a higher-level cooperative sensing solution achieves this, when sensor data across different assets in the operational process are combined (Martin and Kühl 2019). This, however, calls for extending the system boundary around several industrial assets provided by different manufacturers (Fig. 4, scenario III) - requiring interoperability, connectivity and a common platform.
Thus, in designing effective virtual sensor solutions -as part of larger information systems -we need to strive for exploiting data sources across assets and organizations. Joining physical sensor data will allow the creation of virtual sensors that can exploit connections and correlations among individual system components (e.g., assets or organizations). Thus, completely new avenues for the design of information systems and value co-creation will emerge: The BISE community is to contribute design knowledge and concrete methods to systematically develop virtual sensor concepts across assets and organizations. This also encompasses the economic evaluation of the trade-offs between benefits of higher precision and the costs of extending the system's boundaries.
Today most decisions on necessity, type and position of physical sensors in different assets are made by companies reflecting their own individual needs (Ji and Zha 2004) limiting data availability. In addition, even data already available within a system is not sufficiently shared with other actors (e.g., customers, suppliers) (Chanson et al. 2019). On the one hand, this is caused by a lack of technical solutions, as the exposure of data to other actors is limited by a lack of data standardization, insufficient exchange platforms and still low communication bandwidths (Matt 2018;Chanson et al. 2019;Martin et al. 2020). On the other hand, data is perceived as a valuable resource that needs to be protected and should not be shared at all (Zhang et al. 2008;Spagnoletti et al. 2015;Chanson et al. 2019). Accordingly, suitable approaches need to be developed to commercialize data as resources in order to create mutual benefits.
An example would be a scenario in which an OEM of a vehicle fleet obtains data of rain sensors below the windshields (which are currently only used to activate the wipers and the headlights). If this data could be exposed on a suitable platform, it might be integrated into advanced local weather forecast models. Such an IoT platform would have to provide interfaces to receive and to manage a huge variety of data from diverse actors, ensure enterprise-grade security, as well as to manage access to other participating actors. Furthermore, such a platform would have to enable the actors to filter the potentially most suitable data sources from the multitude of options for any particular purpose. Not only meteorological institutes could benefit from this, but also other drivers in traffic, who might be provided with an individualized alert by their vehicle or an external navigation app. A multitude of potential applications of these and similar scenarios are conceivable -once not only technical exchange of data is feasible, but also adequate incentives and remunerations for data providers are in place.
Challenges for Information Systems Research
The previous paragraph already pointed to the challenges that the application of virtual sensors poses to information systems research and to which the BISE community could contribute: First, the use of physical sensors in the design and construction of assets has to be informed by potential uses of the produced data in "downstream" virtual sensors. Sensorization of assets has to be purposefully planned to enable particular data-based, digital services that are to be built and run on the generated data. This requires a customer-oriented mindset that, in design thinking manner, tries to anticipate user information needs and allows to equip assets with the appropriate sensor technology. The sensor configuration process can either be realized while designing an asset (proactive sensorization) or can even be retrofit to quickly respond to needs that were not known during the initial design phase (reactive sensorization). Especially the ability to retrofit sensor technology by means of virtual sensors adds ample possibilities to satisfy needs that have arisen "post-design", even without additional hardware. In both options, however, methods are needed that help to "reverse engineer" products with regard to sensors: if the customer or operator of a milling machine needs certain data on the asset's usage to run effective predictive maintenance, the manufacturer may be able to generate additional value by installing sensors to provide this data. He will only be able to do so, though, once he has the awareness, methods and tools to elicit the customer's need for information.
Second, in order to use the potential of joining more data sources in particular for cooperative sensing of virtual sensors, easy, intuitive, and secure exchange of data needs to be enabled. Interoperability standards and information exchange platforms for IoT-based, cyber-physical systems have to be developed. In the concrete sealing sensor example above, a whole range of data from different actors within the value network is required in order to draw conclusions about the condition of the seal via cooperative sensing. Although this data is already collected for isolated use, it has not yet been shared. With data being kept in nonstandard, proprietary or poorly documented formats, manual pre-processing could prove the added value of the virtual sensor, but it has so far not been possible to implement it for efficient productive use (Martin and Kühl 2019). Therefore, there is an urgent need to develop uniform communication standards for sensor data, which provide detailed meta-information, semantics and context in addition to the actual measured values, such as, e.g., unit of measurement, measuring range, time of measurement or update frequency. This would enable companies to provide sensor data for other actors in a simple manner on dedicated exchange platforms. The quest for these platforms has already begun: The International Data Spaces (IDS) initiative aims to design and develop a platform for trusted and secure data exchange -even beyond sensor data (Otto and Jarke 2019; International Data Spaces Association 2020). This endeavor also reveals that in particular data sovereignty seems to be a limiting factor for inter-organizational data exchange. Although the initiative shows that the questions and challenges identified in the context of virtual sensors also emerge in a broader context, simple solutions are not yet in sight. Moreover, the allocation of ownership of data originating from a multi-layer setting is still under debate (Hirt and Kühl 2018).
Third, enabling the technical exchange of data will not suffice. Only if (data-based) business models are developed that incentivize data providers to expose physical and virtual sensor data, data sharing for building virtual sensors will actually happen. This will require to explore and size the value of sensor-provided data, to develop appropriate data-based services and revenue models (Legner et al. 2017) or even to analyze the benefits of open data provision (Enders et al. 2020).
Fourth, cooperative virtual sensors may provide information on a higher abstraction level that is not directly measurable by individual physical sensors, as, e.g., the condition of an industrial asset. While this is a key benefit of virtual sensors, it may aggravate the ''downstream'' analysis of the data in machine learning applications, e.g., predictive maintenance forecasts: When the condition is used as a target in AI-based fusion functions, a known subset of the true conditions as the ''ground truth'' is required to enable training of machine learning models. For situations where this is tedious or excessively costly, methods are needed to deal with insufficient or sparse labelling. Techniques from the fields of semi-supervised learning or domain adaptation, for example, could serve to address these issues. However, this requires research into the suitability of these methods for sensor-specific applications.
Fifth, there is an economic tradeoff between the benefit of a virtual sensor performance level and the cost of incorporating additional data sources to reach this: A single reverse vending machine sensor may help to predict the filling level with certain accuracy (Walk et al. 2020). A (costly) ample set of sensors, though, may significantly improve the prediction quality. Economic information value models are needed to manage the tradeoff between approximate, cheap virtual sensor prediction and more precise, but costly ''brute force'' physical sensor detection.
For quite some time, virtual sensors have been offering promising and cost-effective options to augment or even replace physical sensors. With the explosion of data generated by IoT-based assets in cyber-physical systems, their understanding and competent use will be key for rendering competitive products and data-based services. Information systems research can and should contribute to the closure of existing research gaps and to exploiting this business potential.
| 7,225.4 | 2021-03-09T00:00:00.000 | [
"Computer Science"
] |
Singular Limits of Relative Entropy in Two Dimensional Massive Free Fermion Theory
We show that certain singular limits of relative entropy of vacuum states in two dimensional massive free fermion theory exist and compute these limits. As an application we show that the c function computed by Casini and Huerta based on heuristic arguments arise naturally as a consequence of our results.
Introduction
von Neumann entropy is a basic concept in quantum information and extends Shannon's classical information entropy to the non-commutative setting. The role of entropy in Quantum Field Theory is more recent and increasingly important, appearing in relation to several primary research topics in theoretical physics such as area theorems, c-theorems, the quantum null energy inequality, etc. (see for instance [2,3,12,13,28] and refs. therein).
The singular limits of certain relative entropies contain rich information about the underlying QFT. See Section 4 of [15] for an example where the global index of the underlying conformal net appears. Another such example is the paper [3], in which the celebrated c-theorem in 2d QFT was derived from these singular limits (assuming they exist) by using monotonicity and covariance. The paper [3] contains physical arguments for the c-theorem, and at the beginning of Sect. 2 we give a brief discussion; we refer interested readers to [3] for more details. It is clear that these arguments used certain unproved assumptions about singular limits of relative entropy. These singular limits are described precisely in Section 2 and are the main focus of this paper. To describe our main results, let us first recall some work in the physics literature. In [5,7] Casini and Huerta computed the c function in the case of 2d free massive fermions from singular limits of relative entropy and obtained the remarkable result that this function can be expressed in terms of functions related to solutions of Painlevé equations of type V (cf. Eq. 31). However, the starting point of [5,7] is the replica trick, which uses formal manipulations of a density matrix that does not exist. There are identities in [4] where both sides are infinite, and in deriving an integral formula for the c-function the authors used integration by parts on these infinite functions, assuming certain vanishing properties on the boundary of the integral (cf. Remark 6.12). In fact it is not even known that the function that appears in formula (148) of [5] is integrable. This makes it very challenging to justify the formulas in [5]. See the second paragraph in the introduction of [14] for recent comments.
In this paper we take the point of view that relative entropy is fundamental in QFT, and that by investigating the properties of their singular limits we should recover the above-mentioned formulas in [5]. Our main results (cf. Th. 6.8, Th. 6.11 and Cor. 6.13) give an explicit formula for the singular limits in terms of the τ function, and as a consequence (cf. Cor. 6.14) the formula for the c-function in [5] follows. For the reader's convenience, here is an explicit formula for the singular limits of relative entropy as described in Cor. 6.13: the singular limits of relative entropy as defined in Eq. (6), where the intervals are as in Fig. 6, are given by the following formula, where τ_0(t, β) is as defined after Eq. (32).
To prove these results, one of our key observations is that the deep results of [22,23] provide the right mathematical framework to investigate the properties of the singular limits; indeed, our theorems are proved in that framework, and our earlier results in [26] and [29] also play a crucial role. Even though the starting point of our computation is different from that of [5], some of the ideas are similar, since the computations of Green's functions in [5] are special cases of the general theory of [22,23]. Here is a more detailed account of our results: (1) in Sect. 3 we show that the singular limits exist, based on a result in [25]; (2) we show that a suitably modified two point "wave function" of [22,23] gives the resolvent of an operator, and this gives a formula for computing the singular limits; (3) using the results of [22] and [23] we reduce the computation of the trace of the kernel to a local computation around the ends of the interval, allowing us to express the singular limits in terms of the τ function. Along the way we also find some seemingly new properties of the τ function (cf. Remark 6.10).
The rest of this paper is organized as follows: In Sect. 2 we define the singular limits we want to compute. In Sect. 3 we show that the singular limits defined in Sect. 2 exist and analyze their properties. In Sect. 4 we collect the results from [22] and [23] that we will use in the paper, and also to set up notations. In Sect. 5 we prove a resolvent formula which will enable us to compute the singular limits. In Sect. 6 we use the results from the previous sections to give explicit formula for these singular limits. As a consequence in the last Sect. 6.3 we derive the formula of Casini-Huerta for c-function.
There are many interesting questions left open by this work. It would be interesting to extend our computation of singular limits to more general, non-equal-time slice configurations; we expect that the deep results of [23] will be useful. In the case of free bosons, since the entropy formula involves unbounded operators, our methods do not apply immediately. Finally, it seems to be a challenging question to have well-controlled singular limits in more than two-dimensional spacetime, since the singularities are no longer of logarithmic type and their dependence on the boundary is more complicated. We expect our results and ideas may be useful in addressing these questions.
We'd like to thank E. Witten for stimulating discussions.
The Singular Limits of Relative Entropy
In this section we describe the question that will be addressed in this paper. First we make some general comments; see [11] for more details on the algebraic formulation of QFT. F(O_1, O_2) is Araki's relative entropy of the two states (cf. Section 2.1 of [15]). It is expected (cf. [19]) that F(O_1, O_2) is finite and goes to infinity when O_1 and O_2 get closer to each other. We focus on the two-dimensional case in this paper, with the double cones lying on the time-equal-to-0 slice; in that case we can represent a double cone by its time-equal-to-0 slice (cf. Remark 5.11 for precise statements in the massive free fermion case). The singular limits in this case are already interesting (cf. (2) of Th. 4.2 in [15]). As already observed in [3] (also cf. Section 4.2 of [15]), with intervals as in Fig. 1, when c_2 → c+ heuristically the logarithmic divergence will cancel at the end point c and we may obtain a well-defined function, denoted by F(b_1, b, c, c_1). Casini and Huerta argued in [3] that such a function can be written in terms of G(t), t > 0, a "renormalized" entropy. Let t = e^s, and define the c function to be c(s) = dG(e^s)/ds. Casini and Huerta argued that c(s) ≥ 0 and c'(s) ≤ 0 based on covariance and monotonicity of relative entropy. In Section 4.2 of [15] a large class of such functions in the case of conformal field theories is determined, which have c function equal to a constant proportional to the central charge.
Our goal in this paper is to show that lim_{c_2→c+} F(b_1, b, c; c_2, c_1) exists and to compute the limiting function F(b_1, b, c, c_1) in the case of massive free fermions. We refer the reader to the introduction of [8], where general free fermion theory is described in the algebraic QFT framework. We will only be interested in the case of massive free fermions in two-dimensional Minkowski spacetime. In the following we introduce the formula for the relative entropy F(b_1, b, c; c_2, c_1) as discussed on Page 1475 of [29]. First we do some preparations. Specializing the formulas in the introduction of [8] to the case of massive free fermions in two-dimensional Minkowski spacetime, our problem can be formulated as follows (also cf. [6]). Let the kernels be given as below, where K_0, K_1 are modified Bessel functions of the second kind, and where I_2 is the two-by-two identity matrix.
Denote by C the operator on L^2(R, C^2) given below. Here we have chosen the somewhat unconventional notation of integrating with respect to the first variable; the reason will be explained in Sect. 5.4. We write C as below. By the paragraph after Eq. (14), C is a projection. To compare with the notation in [8], this is the projection denoted by P in equation (10) of [8], which defines the Fock state that is the ground state for 2-dimensional Minkowski spacetime. The reader can also find a similar formula in equation (64) of [6].
If I is a closed interval, we denote by P_I the projection on L^2(R, C^2) given by multiplication by the characteristic function of I. In Fig. 1 we have used 1, 2, 3 to denote the intervals. We introduce the following: Definition 2.1. Ĉ_I := P_I C(1 − P_I)C P_I. When x is in the resolvent set of the operators T, Ĉ_12, Ĉ_23, Ĉ_2, Ĉ_123, we define the corresponding operators as below. When it is necessary to indicate the dependence of Ĉ(1 ∪ 2, 2 ∪ 3, x) on the intervals, we write Ĉ(b_1, b, c; c_2, c_1, x). Let 0 ≤ E ≤ 1 be an operator, and let R_E(β) = (E − 1/2 + β)^{−1}. Note that R_E(β) is holomorphic in β ∈ C − [−1/2, 1/2] in the operator norm topology. It is now useful to recall the following formula (cf. [5]). By Lemma 3.12 of [26] the above formula is more conveniently written in resolvent form; this follows from a direct computation as in Lemma 3.12 of [26]. By the first paragraph on Page 1475 of [29], the relative entropy is given by the trace of an operator constructed from the covariance operator as in definition (2). Following the definitions in a straightforward way, we obtain the formula for F(1 ∪ 2, 2 ∪ 3). The main question in this paper is to determine lim_{c_2→c+} F(1 ∪ 2, 2 ∪ 3) and to study its properties.
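As a heuristic anchor for these formulas (not part of the paper's type-III setting, where density matrices do not exist), the finite-dimensional analogue is straightforward: for a fermionic Gaussian state with restricted covariance E, the entropy is −tr[E ln E + (1 − E) ln(1 − E)], computable from the eigenvalues of E.

```python
import numpy as np

def fermionic_entropy(E, eps=1e-12):
    """Finite-dimensional analogue: E is a Hermitian matrix with
    spectrum in [0, 1] (a restricted covariance); returns
    -tr[E ln E + (1 - E) ln(1 - E)]."""
    lam = np.clip(np.linalg.eigvalsh(E), eps, 1.0 - eps)
    return float(-np.sum(lam * np.log(lam) + (1 - lam) * np.log(1 - lam)))
```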
In Sect. 6 we will frequently encounter functions H(a, b, w) or operators, where a, b are the end points of the interval [a, b] and w denotes the other variables on which H may depend. We make the following convention: for the intervals as in Fig. 1, H(12, 23, w) is defined accordingly, and when it is necessary to specify the dependence of H(12, 23, w) on the end points of the intervals, we write it out explicitly.
Schatten-von Neumann ideals.
This paper relies on results for general quasinormed ideals of compact operators. Here we limit our attention to the case of Schatten–von Neumann operator ideals S_q, q > 0. Detailed information on these ideals can be found e.g. in [20] and [24]; we shall point out only some basic facts. For a compact operator A on a separable Hilbert space H, denote by s_n(A), n = 1, 2, ..., its n-th singular value, that is, the n-th eigenvalue of the operator |A| := √(A*A). If R_1, R_2 are bounded operators, then (cf. [24]) s_n(R_1 A R_2) ≤ ‖R_1‖ ‖R_2‖ s_n(A), where ‖A‖ denotes the norm of an operator A. We denote the identity operator on H by 1. The Schatten–von Neumann ideal S_q, q > 0, consists of all compact operators A with ‖A‖_{S_q} := (∑_n s_n(A)^q)^{1/q} < ∞. If q ≥ 1, this functional defines a norm; if 0 < q < 1, it is a so-called quasi-norm. There is nevertheless a convenient analogue of the triangle inequality, the q-triangle inequality: ‖A + B‖_{S_q}^q ≤ ‖A‖_{S_q}^q + ‖B‖_{S_q}^q. We also have the Hölder inequality ‖AB‖_{S_r} ≤ ‖A‖_{S_p} ‖B‖_{S_q}, 1/r = 1/p + 1/q; see [17] and also [2]. In what follows we focus on the case q ∈ (0, 1]. We use ‖A‖ to denote the norm of an operator, and ‖A‖_1 the trace of |A|. Following [25], let p(x, y, ξ) be a smooth function on R^3, and define a class of operators Op(p) on L^2(R^1) with symbol p(x, y, ξ) as in (10), where u ∈ L^2(R^1). Let r > 0. Denote by L^r_loc(R^1) the set of measurable functions h such that |h|^r is integrable on bounded measurable sets. Let 0 < δ ≤ ∞, and let C_n be the unit interval centered at n. The lattice square norm is then defined in terms of the intervals C_n.
where C(q, m, ℓ_0) is a constant which depends only on q, m, ℓ_0.
Proof. Ad (1): First note that a_0(ξ) has no singularity for finite ξ. When ξ > m, we have a convergent series expansion for a_0(ξ). It follows that when ξ is sufficiently large, ∂^k a_0(ξ) obeys the required bound; similarly, the same bound holds when −ξ is sufficiently large. When q(k + 1) > 1, the series ∑_{|n|>0} 1/|n|^{q(k+1)} < ∞, and (1) is proved. (2) is proved similarly. Corollary 3.3. Assume that h_i = χ_{E_i}, i = 1, 2, are characteristic functions of measurable sets E_i, and that the distance between E_1 and E_2 is greater than or equal to ℓ_0 > 0. Let a_0(ξ), a_1(ξ) be as in Lemma 3.2. Then for any q ∈ (0, 1] and i = 1, 2, we have the corresponding bound, where the ṽ_l are defined as in Eq. (19).
where I_2 is the two-by-two identity matrix. Note that the Fourier transform of ṽ_0(x) is (π/2) a_0(ξ), and the Fourier transform of ṽ_1(x) is −i(π/2) a_1(ξ) (cf. Section 6.5 of [9]), with a_0, a_1 as in Lemma 3.2. In momentum space, i.e., after Fourier transform (note that the Fourier transform maps convolution into products), the operator D_m is simply a multiplication operator. Using a_1(ξ) = ξ/√(ξ² + m²) and a_1² + m² a_0² = 1, we see that D_m is self-adjoint and D_m² = 1/4. Denote by C the operator on L^2(R, C^2) whose kernel is given by C(x, y) in Eq. (2). Using the fact that D_m is self-adjoint and D_m² = 1/4, we see that C is a projection. Let h_1, h_2 be characteristic functions of two measurable sets whose distance is greater than or equal to ℓ_0. By inequality (8), and since up to constants each C_ij is Op(a_0) or Op(a_1), Cor. 3.3 yields the following proposition.
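A quick numerical sanity check of the symbol identity used here (a toy verification; a_0(ξ) = 1/√(ξ² + m²) is read off from the two relations above):

```python
import numpy as np

m, xi = 1.5, np.linspace(-10.0, 10.0, 201)
a0 = 1.0 / np.sqrt(xi**2 + m**2)
a1 = xi / np.sqrt(xi**2 + m**2)
# a_1^2 + m^2 a_0^2 = 1 is the identity behind D_m^2 = 1/4.
assert np.allclose(a1**2 + m**2 * a0**2, 1.0)
```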
Split of intervals.
The intervals are as in Fig. 1. We assume that the lengths of intervals 1 and 2 are bounded from below by ℓ_0; note that we allow b_1 (resp. c_1) to be −∞ (resp. ∞). Fix 1/2 < σ < 1. Given two operators T_1, T_2, we write T_1 ∼ T_2 if ‖T_1 − T_2‖_{S_σ} ≤ C(ℓ_0) for some constant C(ℓ_0) which depends only on ℓ_0. Note that inequality (8) implies that if T_1 ∼ T_2 and T_2 ∼ T_3, then T_1 ∼ T_3; similarly, if T_1 ∼ T_2 and T_3 ∼ T_4, then c_1 T_1 + c_2 T_3 ∼ c_1 T_2 + c_2 T_4 for any bounded operators c_1, c_2, by Eq. (9). Also note that T_1 ∼ T_2 iff T_1* ∼ T_2*. For two subsets E, F of the real line we use d(E, F) to denote the distance between them.
is a constant which depends only on ℓ_0, and z* > 0 is defined as in Definition 3.5.
Proof. The theorem is proved by a series of reductions; in each step we throw away terms T whose norms are controlled. By Lemmas 3.6 and 3.7, after throwing away such a term, and by Prop. 3.9, Lemmas 3.6 and 3.7, we can replace g_0(P_23 C(1 − P_123)C P_23, z) by g_0(P_6 C(P_0 + P_4)C P_6, z). Similarly, by Prop. 3.9, Lemmas 3.6 and 3.7, we can replace Ĉ_23 by P_5 C P_1 C P_5 + P_6 C(P_0 + P_4)C P_6. Note that the resulting term is independent of interval 3. Hence, by repeating the above reductions while taking 3 to be the empty set, and recalling that Ĉ(1 ∪ 2, 2 ∪ 3, z) is as defined in Definition 2.1, we have proved the theorem. For the configuration of Fig. 2, dropping −∞ we write the corresponding operator as Ĉ(12, 23, x); (1) follows by Lemma 3.10 of [15], and (2) follows from (1) and the definitions.
Proposition 3.13. Assume that z is a complex number with z ∉ [−1/4, 0].
Proof. All equations are proved in the same way; let us prove the first one. It is enough to examine the "thrown away terms" in the proof of Th. 3.10. By Lemma 3.6, all such terms are as in (1) of Lemma 3.7. A typical such term has the following form, where either P_i or P_j depends on y = c_2 − c, and this projection, denoted P(y), decreases or increases to a projection P strongly as y → 0+. For definiteness assume that P_j = P(y). We can replace P(y) by P(y)P(η) when y < η and P(y) is decreasing, or P(y) by P(y)P when P(y) is increasing. Apply Lemma 3.12 to g_1(y)P_i C P(η)P(y)g_3(y) with A = P_i C P(η), or to g_1(y)P_i C P P(y)g_3(y) with A = P_i C P; note that in both cases, by Cor. 3.4, A is trace class. We conclude that as y → 0, the thrown away terms in Ĉ(b_1, b, c; c_2, c_1, z) converge in trace norm to the corresponding thrown away terms in Ĉ(b_1, b, c, c_1, z), and similarly for the rest of the equations.
Recall that 1/2 < σ < 1. Since the relevant function of β is integrable, the theorem now follows from the Dominated Convergence Theorem and Prop. 3.13.
Proof. As in the proof of Prop. 3.13, it is enough to show that the trace of each "thrown away term" in the proof of Th. 3.10 is holomorphic in z. Such a term has the form f_1(z) A f_2(z), where A is a fixed trace class operator and f_1(z), f_2(z) are holomorphic in z with respect to the norm topology, since z ∈ C − [−1/4, 0]. By Lemma 3.12 the proposition is proved.
By Eq. (6) we have the following formula We will compute this function in Sect. 6.
Some Results from [22,23]
In this section we review some of the results of [22] and [23] that will be used in this paper; the reader is encouraged to consult [22] and [23] for more details, and [21] for a survey. The functions introduced at the end of this section play a crucial role in Sects. 5 and 6. Unfortunately, there is no clear rigorous shortcut to explain why these functions are useful for our purpose; see [18] for a different perspective which may be helpful. Consider the relevant field equations with positive mass m > 0. We introduce a series of multi-valued special solutions of these equations. For l ∈ C let I_l(x) and K_l(x) denote the modified Bessel functions of the first and second kind, respectively. Set the functions as below, where z = (1/2)re^{iθ}, z̄ = (1/2)re^{−iθ}, r ≥ 0, θ ∈ R. These functions are multi-valued solutions (outside the origin) of Eq. (18), having the following local behavior as |z| → 0, where l! = Γ(l + 1) and ... denotes higher order terms. We also have the expansion below, where γ_0 is Euler's constant. These functions satisfy recursion relations, where ∂ := ∂_z, ∂̄ := ∂_z̄. In this paper we make use of the two point "wave functions" defined in Section 3.2 of [22]. Let (a_i, ā_i), i = 1, 2, be two distinct points of R². Denote by X̃ the universal covering manifold of X (the plane punctured at the two points), with covering projection π : X̃ → X. We fix base points x̃_0 ∈ X̃, x_0 ∈ X so that π(x̃_0) = x_0, and denote by π_1(X; x_0) the fundamental group of X. An element γ ∈ π_1(X; x_0) is identified with the covering transformation it induces on X̃; for a function u on X̃, γ acts by composition with this covering transformation. W^{l_1,l_2}_{a_1,a_2} will denote the space consisting of complex-valued real analytic functions v on X̃ with the following properties: the local expansions hold with k = ±l_ν + j, the constants c_s independent of z, and the sum absolutely convergent on any compact subset of |z − a_ν| < η. We note that our W^{l_1,l_2}_{a_1,a_2} is W^{l_1,l_2,strict}_{B,a_1,a_2} on page 589 of [22]. Denote by L the matrix diag(l_1, l_2) and, slightly abusing notation, by 1 − L the matrix diag(1 − l_1, 1 − l_2). For each such fixed L we make use of two canonical elements v_1(L), v_2(L) in W^{l_1,l_2}_{a_1,a_2} (cf. Page 592 of [22]), where ... are higher order terms. Here we have suppressed the dependence of v_μ(L) on z, z̄, a_μ, ā_μ, μ = 1, 2, and the dependence of α_μν(L), β_μν(L) on a_μ, ā_μ, μ = 1, 2, in the notation when no confusion arises.
Only when it is necessary to indicate the dependence of α_{μν}(L), β_{μν}(L) on a_ν, ā_ν will we write them as α_{μν}(a_1, a_2, L), β_{μν}(a_1, a_2, L). We shall make use of the next two lemmas in Sect. 5.
Proof. Let us first consider the case when 1/2 > l_μ > 0, μ = 1, 2. By Prop. 3.1.3 of [22] we have the following expansion of v around a_μ: … In both cases we see that v ∈ W^{l_1,l_2}.
Proof. By definition, around a_1 the function v has the expansion … where "…" denotes higher order terms. Note that when 0 … where in the first inequality δ = −l_μ mod ℤ, and in the second inequality δ = l_μ mod ℤ; here l_μ, μ = 1, 2 are not integers, and C is a constant independent of r.
Proof. The proof is implicitly contained in the proof of Prop. 3.1.3 of [22]. We will prove the first inequality, since the second one is proved in a similar way. As in the proof of Prop. 3.1.3 of [22], let z − a_μ = (1/2) r e^{iθ}. We have … The second inequality is proved similarly.
Recall that L is the matrix diag(l_1, l_2). We use the following notation: … The next proposition follows from Prop. 3.2.5 in [22]. The entries of α(L), β(L) satisfy a remarkable set of equations. These equations are derived by considering the deformation of the equations satisfied by v_μ(L) in [22], and more generally by using the product formula in [23]. Let us now describe these equations as on page 935 of [23] (also cf. page 623 of [22]). Let us first define an invertible matrix G whose inverse is given by β(L) sin(πL) = −G^{−1}. Let … Then G takes the following form … where … Then β(L) sin(πL) = −G^{−1}, and G^{−1} takes the following form … Here k := k_l(t), ψ := ψ_l(t) depend only on l, t, where k_l(t), ψ_l(t) are as in Eq. (29); k, ψ satisfy the following: … The above equation for ψ can be converted to the following Painlevé equation of the fifth kind by the substitution s = t², σ = tanh²(ψ):
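For reference, the general fifth Painlevé equation is recorded below; the specific parameter values (α, β, γ, δ) produced by the substitution s = t², σ = tanh²(ψ) are those given in [23] and are not asserted here.

```latex
\frac{d^{2}w}{ds^{2}}
= \left(\frac{1}{2w}+\frac{1}{w-1}\right)\left(\frac{dw}{ds}\right)^{2}
  - \frac{1}{s}\,\frac{dw}{ds}
  + \frac{(w-1)^{2}}{s^{2}}\left(\alpha w+\frac{\beta}{w}\right)
  + \frac{\gamma w}{s}
  + \frac{\delta\, w(w+1)}{w-1}.
```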
τ Function and related functions.
Let us also recall some basic facts about the τ function. We will only consider the case when l_1 = −l_2, l = 2l_1. The τ function is analytic in l when |l| is small (cf. Lemma 4.11), and real analytic in a_i, ā_i, i = 1, 2. The defining equation is (cf. the Remark on page 628 of [22] or page 936 of [23]): … Equation (32) will play an important role in Section 6.2. Note that τ_0 := ln τ depends only on t, as in Eq. (27), and on l; we will write such a function as τ_0(t, l), and suppress its dependence on t, l in the following for ease of notation. Let τ_1 = −τ_0. Then by equation (4) in [27], τ_1 satisfies the following equation: … Here ψ satisfies Eq. (30) with boundary condition … as t → ∞. We note that when l_1 ∈ ℝ, 4i sin(πl_1) K_{2l_1}(t) ∈ iℝ, and when l_1 ∈ iℝ, 4i sin(πl_1) K_{2l_1}(t) ∈ ℝ. We have (cf. the Appendix of [7]) ψ ∈ iℝ when l_1 ∈ ℝ, and ψ ∈ ℝ when l_1 ∈ iℝ.
Remark 4.7. In Sect. 6 we will sometimes consider a real analytic function f(z, …) restricted to z ∈ ℝ, where "…" stands for possible other variables, and consider the partial derivative of f with respect to the real variable z. To avoid possible confusion with ∂_z, which is defined to be the partial derivative of f with respect to the complex variable z, we will use df/dz to denote the partial derivative of f with respect to the real variable z. We also have the following facts from [22]: (1) … where c, … are as in Eq. (27); (2) when l_1 = −l_2 ∈ ℝ, we have β̄_{21}(L) β_{21}(1 − L) = sinh² ψ; … (4) when a_1, a_2 are real and t = 2m(a_2 − a_1) > 0, we have … Here we have used dα_{11}/da_i to denote the partial derivative of α_{11} with respect to the real variable a_i, as in Remark 4.7.
When |l| = 2|l_1| is sufficiently small, we have the following convergent power series expansion for τ_0 = ln τ (cf. page 933 of [23]): … where u_k^{±l} stands for u_k^{−l} for even k and u_k^{l} for odd k, and λ = i sin(πl_1).
Here K̄ is the transpose of K. When λ_1 is sufficiently small we have … So we see that τ_0 = −τ_1 = −τ_2. For an operator T we use ‖T‖ and ‖T‖_2 to denote its norm and Hilbert-Schmidt norm respectively. Lemma 4.9. Assume that t ≥ η > 0, |l| < 1. Then … Proof. Fix u > 0 and let us evaluate the maximum of ∫_0^∞ |K(u, v)| dv. Denote by r the real part of l. Since |r| < 1, we have … To find the maximum of f_r(u), u > 0, it is sufficient to assume that 1 > r ≥ 0. Since …, one easily checks that the maximum of f_1(u) is a function of η, and goes to 0 as η → ∞.
A similar estimate holds when we integrate with respect to u and fix v in the previous paragraph. By Th. 6.18 of [10] we have proved ‖K‖ ≤ C_0(η), where C_0(η) → 0 as η → ∞. We note that ‖K‖_2 is a decreasing function of t and is dominated by an integrable function independent of t for any t ≥ η. Since |K(u, v) K(v, u)| goes to 0 as t → ∞, Lebesgue's dominated convergence theorem implies that ∫_0^∞ ∫_0^∞ |K(u, v) K(v, u)| dv du goes to 0 as t → ∞. The proof for K̄ is similar.
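The norm bound cited as Th. 6.18 of [10] is presumably a Schur-type test for integral operators; in its standard form on L²(0, ∞) it reads:

```latex
\|K\| \le \left(\sup_{u>0}\int_{0}^{\infty}|K(u,v)|\,dv\right)^{1/2}
          \left(\sup_{v>0}\int_{0}^{\infty}|K(u,v)|\,du\right)^{1/2},
```

which explains why it suffices to bound the two iterated suprema in the argument above.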
Given two series Σ_{n≥0} f_n and Σ_{n≥0} g_n, we say Σ_{n≥0} f_n is dominated by Σ_{n≥0} g_n if |f_n| ≤ |g_n| for all n ≥ 0.
The following Lemma follows from Cauchy's formula, and the proof can be found in […]. Assume f(z) = Σ_{n≥0} f_n(z), and that Σ_{n≥0} f_n(z) is dominated by a series Σ_{n≥0} C_n with C_n independent of z and Σ_{n≥0} |C_n| < ∞. Then f′(z) = Σ_n f_n′(z), and the series converges absolutely and uniformly on any compact subset of U.
Proof. Ad (1): Note that …, so the series is dominated by a convergent geometric series. It follows immediately that τ_0 → 0 as t → ∞ or as l → 0, and that τ_0 is a holomorphic function of l_1 for |l| < C_0(η).
By (1), … where by Lemma 4.10 the series on the right-hand side converges absolutely and uniformly on any compact subset of |l_1| < C_0(η). To prove the remaining cases in (2) and (3), it is sufficient to check that each term ∂_{l_1}((2i sin(πl_1))^{2k} e_l^{(2k)}(t)) has the properties described in (2) and (3). For example, since ∂_{l_1}(2i sin(πl_1))^{2k} → 0 as l_1 → 0, and (2i sin(πl_1))^{2k} → 0 as l_1 → 0, we have ∂_{l_1} τ_0 → 0 as l → 0. The rest of the cases are proved in a similar way. Here c_k is independent of r and is holomorphic in l_1, l_2. As on page 587 of [22] we have … The term K_0(mr) can be estimated directly as K_0(mr) = O((mr)^{−1/2} e^{−mr}), and (2) is proved.
The Resolvent
Let I := [a_1, a_2] ⊂ ℝ be a closed interval. Throughout this section we shall assume that the following conditions are satisfied: … where C_0(η), C_2(η) are the constants in Lemmas 4.11 and 4.12 respectively.
Condition H.
We first recall some results from Chapter 2 of [17].
The proof of the following formula, also known as the Sokhotski-Plemelj formula, can be found on page 49 of [17]. Let … = (a_1, a_2). Then if x ∈ (a_1, a_2):
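For reference, the Sokhotski-Plemelj formulas in their standard textbook form (the normalization here follows the Cauchy integral below and may differ from the conventions of [17]):

```latex
% Boundary values of the Cauchy integral
% \Phi(z) = \frac{1}{2\pi i}\int_{a_1}^{a_2}\frac{\varphi(t)}{t-z}\,dt
% from the upper (+) and lower (-) sides of the cut (a_1, a_2):
\Phi^{\pm}(x) = \pm\tfrac{1}{2}\,\varphi(x)
  + \frac{1}{2\pi i}\,\mathrm{P.V.}\!\int_{a_1}^{a_2}\frac{\varphi(t)}{t-x}\,dt,
  \qquad x \in (a_1, a_2).
```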
Lemma 5.2. Assume that f(x) verifies condition H on I.
We will also need the property of singular integrals with Cauchy kernels at the ends of the interval I (cf. page 74 of [17] for more precise estimates): … where A does not depend on x_1, x_2 but may depend on [a_1, a_2], and 0 < μ < 1.
(2) Assume that near the ends c = a_1 or c = a_2 the function φ(x) is of the form … where φ*(x) satisfies condition H near and at c.
Consider Φ(z) = ∫_I φ(x) dx/(x − z). First assume that z is near c but not on I. Then: (3) If …, then Φ(z) = ± φ*(c)/(2πi) · ln(1/(z − c)) + Φ_0(z), where the upper sign is taken for c = a_1, the lower for c = a_2; ln(1/(z − c)) is to be understood as any branch, single valued in the plane cut along I, and Φ_0(z) is a bounded function. (4) If …, then Φ(z) = ± φ*(c)/(2i sin γπ) · (z − c)^{−γ} + Φ_0(z), where the signs are as in (3); (z − c)^γ is any branch, single valued near c in the plane cut along I and taking the value (x − c)^γ on the upper side of I, and |Φ_0(z)| < C/|z − c|^{α_0}, where C, α_0 are real constants with α_0 < α. For the point z = x_0 lying on I, the following results hold: (5) Φ(x_0) = ± φ*(c)/(2πi) · ln(1/(x_0 − c)) + Φ_0(x_0)*, where Φ_0(x_0)* satisfies condition H near and at c, and the signs are again as in (3); (6) …, where the signs are as in (3), and Φ_0(x_0) = Φ**(x_0)/|x_0 − c|^{α_0}, where Φ**(x_0) satisfies condition H near and at c and α_0 < α.
We will also make use of the following lemma; recall μ < 1/4. By a result on page 17 of [17], for fixed x ∈ I, |ψ(…)| …; it follows that ψ(x_1, x), |x − x_1|^μ and |x − x_2|^μ are bounded functions on I, and we will denote by M a constant which is greater than the upper bounds of these three functions.
We have … where M is a constant independent of x. By using the Hölder inequality we have … Note that by the Hölder inequality again we have ∫_I |x_1 − x|^{−2μ} |x_2 − x|^{−2μ} dx ≤ (∫_I |x_1 − x|^{−4μ} dx)^{1/2} (∫_I |x_2 − x|^{−4μ} dx)^{1/2} < M′, where M′ is a constant independent of x_1, x_2; the last integrals converge since μ < 1/4 (so that the exponent −4μ > −1), and the Lemma is proved.
Solving the linear equation. Recall from definition (1):
where I_2 is the two-by-two identity matrix. We denote by D_0(y, x) the massless limit of D_m(y, x); it is given by Eq. (48). A key idea for finding the resolvent of the operator defined in definition (1) is to extend x from I to the complex plane. This is well defined because ṽ_i(y − z), i = 0, 1, are real analytic functions of z, z̄ when z ≠ y. More precisely we define … (50). Then for all x ∈ (a_1, a_2): … Proof. Fix x ∈ (a_1, a_2). We have D_m(y, z) = D_0(y, z) + …, where … is a smooth function and E_3(y, z) is ln|z − y| multiplied by a smooth function F_3(y, z). We have … Notice that when y ≠ x, the integrand converges pointwise to 0 as ε → 0⁺. For fixed x, the integral over y can be divided into two parts: the first part is over those y ∈ I with |y − x| ≥ 1/2, and the second part is over y ∈ I with |y − x| ≤ 1/2. On the first part the integrand can be dominated by a constant, and on the second part, when ε is smaller than 1/2, |x − y|² + ε² < 1, and hence the integrand is dominated by a constant multiplied by |ln|y − x| · g(y)|, which is an integrable function on I. By the Dominated Convergence Theorem we have lim_{ε→0} … It is enough to prove the first equation in the Lemma for D_0(y, x), and this follows immediately from Lemma 5.2. The second equation also follows in a similar way. Notice that the last equation follows from the first two by solving …
Resolvent in the case of m = 0. This is a well-known case of the Riemann-Hilbert problem, and a more general case is discussed, for example, on page 130 of [16]. We will give a sketch of the proof in our case.
It is sufficient to check the equation on smooth functions. Let g(x) be a smooth function on I. We will solve the following linear equation: … Since g is smooth, f is smooth on (a_1, a_2). Recall that h(z) = (z − a_1)^{−l} (a_2 − z)^{l}; we have chosen the branch cut to be [a_1, a_2] and defined h(z) so that h(…) … for x ∈ (a_1, a_2), and G_l(y, x − i0) = e^{−2πil} G_l(y, x) for all x ≠ y ∈ (a_1, a_2).
We note that G_l(y, z) satisfies many of the properties of v_0 in Eq. (40). For example, ∂_{a_1} G_l(y, z) = l (z − a_1)^{−l−1} (a_2 − z)^{l} (y − a_1)^{l−1} (a_2 − y)^{−l} is non-singular at z = y, and factorizes as the product of a function of y and a function of z, similar to Eq. (43). This strongly suggests that the resolvent for the m > 0 case can be obtained from suitable functions of v_0, and we will prove in the next section that this is indeed the case.
The general case.
Recall the function v_0(z*, z, L) defined at the end of Sect. 4. Consider the following function: … Note that by definition R_m(z*, z) also depends on L, a_1, a_2, and we have suppressed this dependence in the following for ease of notation. In fact, from now until the end of this paper, we make the following declaration: starting from Sect. 5.4 until the end of this paper, whenever a function is defined and it is clear from the definition that such a function depends on L (or equivalently l), a_1, a_2, then, unless otherwise stated, we will suppress its dependence on any of L (or equivalently l), a_1, a_2 for ease of notation, following the convention in [22].
Fig. 4. A second cut
We arrive at R_m(z*, z) by demanding that R_m(z*, z) share the following properties with D_m(z*, z) as defined in Eq. (49): … the second row is obtained from the first row by applying i m^{−1} ∂_z̄. Moreover, the entries of R_m(z*, z) as a function of z satisfy (1), (2) and (4) of definition 4.1.
We will now give a formula for the resolvent. We choose the cut of the complex plane to be I = [a_1, a_2]. We choose a branch of R_m(z*, z) on ℂ − [a_1, a_2], denoted by R_m(z*, z), as follows. For y ≠ x ∈ I, we define R_m(y, x) := R_m(y, x + i0) = lim_{ε→0⁺} R_m(y, x + iε), i.e., the value of R_m on the upper side of the cut I. Define R_m(y, x − i0) = lim_{ε→0⁺} R_m(y, x − iε), i.e., the value of R_m on the lower side of the cut I. Note that from Eq. (53) and the definition of v_0(z*, z, L + 1) before Eq. (39), we have R_m(y, x − i0) = e^{−2πil} R_m(y, x + i0). By Eq. (39) we can choose R_m(y, x) such that when x is close to y, with y ≠ x, R_m(y, x) = D_m(y, x) + non-singular terms, where both x, y are in (a_1, a_2). Since l_1 = −l_2, R_m(z*, z) is a well defined single valued function for z ≠ z*. Let R_m be the integral operator on L²(I, ℂ²) given by the kernel R_m(y, x), i.e., … Note that the notation R_m follows the declaration after Eq. (53). Here we have chosen a somewhat unconventional notation by integrating with respect to the first variable in R_m: this is in tune with the notation of v_0(z*, z, L), where z* corresponds to the integration variable. The most singular part of R_m(y, x) as x − y → 0 is (up to multiplication by constants) 1/(x − y), and the integral is understood as a Cauchy principal value. When x ≠ y, with both in (a_1, a_2), R_m(y, x) is a smooth function of y.
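For definiteness, the principal-value prescription used for the 1/(x − y) singularity is the standard one:

```latex
\mathrm{P.V.}\!\int_{a_1}^{a_2}\frac{f(y)}{x-y}\,dy
:= \lim_{\varepsilon\to 0^{+}}
   \int_{\{y\in(a_1,a_2)\,:\,|y-x|>\varepsilon\}}\frac{f(y)}{x-y}\,dy.
```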
For fixed x ∈ (a_1, a_2), by Eq. (40), near a_i the kernel R_m(y, x) has the worst singularity, of the form (x − a_i)^{−l} when l > 0, i = 1, 2.
Let f(y) be a smooth function on I. Let Ĝ(z) := (β + 1/2)^{−1} ∫_I R_m(y, z) f(y) dy. Note that the notation Ĝ(z) follows the declaration after Eq. (53). Ĝ(z) verifies (1), (2) and (4) of definition 4.1 on the plane minus the cut I, since R_m(y, z) as a function of z does when z ≠ y. The following gives the properties of Ĝ(z) near and on the cut I. Lemma 5.7. Let x ∈ (a_1, a_2). Then …
Proof. Let us choose a second cut on the complex plane, given by the three line segments below I joining the ends of I in Fig. 4. Let S_1(z*, z) be the branch of R_m(z*, z) from Eq. (53) defined with respect to this second cut, such that S_1(y, z) = R_m(y, z) when Im z > 0, y ∈ I. When we move z from above [a_1, a_2] to the region between the two cuts (the interior of the square in Fig. 4), we have crossed the cut for R_m(y, z), and by definition, when z is in the region between the two cuts, R_m(y, z) differs from S_1(y, z) by the monodromy factor e^{2πil}.
It follows that
Note that when y is close to x, by Eq. (39) (which gives the asymptotics for y close to x) and the definition of D_m, we have S_1(y, …) … The first equation now follows as in the proof of Lemma 5.5. For the second equation, choose η > 0 small and let I_η := (x − η, x + η). When ε → 0⁺, we have … where we have used e^{2πil} = (β − 1/2)/(β + 1/2). On I_η, up to terms which go to 0 as η → 0, we have lim … where in the last equality we have used Lemma 5.2, and o(1) denotes a term that goes to 0 as η → 0. The proof is now complete.
where I_2 is the two-by-two identity matrix.
Proof. We will check the resolvent formula on a dense subspace of L²(I).
Let f(x) be a smooth function on I with the support of f contained in (a_1, a_2). We will solve the following linear equation: … First, since β ∈ iℝ and β ≠ 0, β is in the resolvent set of the self-adjoint operator D_m, and the above linear equation has a unique solution g ∈ L²(I). Let us first show that g satisfies condition H on (a_1, a_2). We re-write the above equation as … Since the entries of D_m(y, x) − D_0(y, x) are, up to addition of a smooth function of (x, y), equal to ln|x − y| multiplied by a smooth function of (x, y), by Lemma 5.4 the function g_1(x) := ∫_I (D_m(y, x) − D_0(y, x)) g(y) dy satisfies condition H on I. Applying the resolvent formula in Th. 5.6 we have … (f(y) − g_1(y)) dy.
Since f(x) − g_1(x) satisfies condition H on I, by the result on page 49 of [17] we see that g(x) satisfies condition H on (a_1, a_2). By applying items (5) and (6) …, (1) and (4) in Definition 4.1 hold on the plane with cut I. Applying Lemma 5.5 to G_2 and Lemma 5.7 to G_1, for x ∈ (a_1, a_2) we have … Now we examine the growth properties of G_1(z), G_2(z) near the ends a_i. By the second paragraph after Eq. (53) we can write the components of G_1(z) …; since the support of f is contained in (a_1, a_2), when z is close to a_i we only need to consider the property of R_m(y, z) when |z − y| ≥ η_0 > 0, where η_0 is a fixed small constant. By Eq. (40), it follows that … As for G_2(z), note that D_m(y, z) − D_0(y, z) is, up to addition of smooth functions, ln|z − y| multiplied by a smooth function. By Lemma 5.4 the integrals involving ln|z − y| are bounded around a_i. So, up to bounded functions around a_i, we can replace D_m(y, z) by D_0(y, z) in the definition of G_2(z).
Applying (3) and (4), … Applying Lemma 5.5 to G_2, we have … Using G_1 = G_2, we have proved the theorem.
Some properties of the resolvent.
Recall the definition of R_m(z*, z, L) below Eq. (53), where we have put in the extra L dependence. By definition R_m(z*, z, L) also depends on a_1, a_2, and we have followed the declaration after Eq. (53). We will use R_m(L) to denote the corresponding integral operator. Recall condition (45). … Proof. By Th. 5.8, … Since β ∈ iℝ and D_m is self-adjoint, we have … where we have used e^{2πil} = (β + 1/2)/(β − 1/2). It follows that R_m(L)^H = R_m(−L), where R_m(L)^H is the adjoint of the integral operator R_m(L), and the Lemma is proved.
It is convenient to introduce, for fixed L, … Note that by definition R(z*, z) also depends on a_1, a_2, L, and we have followed the declaration after Eq. (53). By equation (3.2.23) of [22], the complex conjugate of (i/π) ∂_z v_0(z, z*, L + 1) is (i/π) ∂_z v_0(z, z*, L). Note that by Eq. (39) the singularities of v_0(z*, z, L) and v_0(z*, z, −L) cancel out at z = z* ∈ (a_1, a_2). It follows that R_m(z, z) is smooth for z ∈ (a_1, a_2), and … where tr_2 here means the sum of the diagonal entries of the two-by-two matrix. Using Eqs. (43) and (24) we have … Similarly … When a_μ = x_μ + i y_μ with x_μ, y_μ real, we will be interested in computing … Note that when a_μ is real this is the same as m^{−1} dR(z, z)/da_μ. Following the convention in Remark 4.7, from Eqs. (57), (58) we have … When a_μ is real and z ∈ (a_1, a_2), from Lemma 5.9 we have R(z, z) ∈ iℝ. It follows that m^{−1} dR(z, z)/da_μ ∈ iℝ, and we conclude that v_μ(z, 1 − L) v_μ(z, 1 + L) + v̄_μ(z*, L) v_μ(z, −L) is real. So when a_μ, μ = 1, 2 are real we have the following equation: … By definition R(z, z) also depends on a_1, a_2, L, and we have followed the declaration after Eq. (53). We note that R(x, x) is smooth in x, a_1, a_2 when x ∈ (a_1, a_2). The following Proposition gives information about the behavior of R(x, x) near the ends a_1, a_2.
Let us examine the expansion of πi sin(πl_1) v_2(1 − L) v_2(1 + L) around a_1: by Eq. (23) we find that the first few leading order terms are, up to multiplication by constants which only depend on l_1, α_{21} … The same argument works for the expansion around a_2, with l_1 replaced by l_2 = −l_1; (1) is proved. To prove (3), by Eq. (60) we have … We should examine the expansion of v_μ(z, 1 − L) v_μ(z, 1 + L) + v_μ(z, L) v_μ(z, −L) and its complex conjugate around a_1, a_2. The proof is now the same as in the proof of (1).
Remark 5.11. From Prop. 5.10, if one integrates R(x, x) over [a_1, a_2], one gets a logarithmic divergence from each of the two ends a_1, a_2. Moreover, these singularities depend on the ends a_1, a_2 only through l_1, l_2.
The constant term.
We remind the reader that we will follow the declaration after Eq. (53) for the rest of this paper.
Proposition 5.12. The constant term C_2(μ) in (2) of Prop. 5.10 is given by … Proof. It is sufficient to compute m dC_2(μ)/da_j, j = 1, 2, where we have followed the notation in Remark 4.7. We will check the case μ = 1; the case μ = 2 is similar.
From Prop. 5.10, near a_1 we have the following expansion of R(z, z): … By Eq. (44), m^{−1} ∂_z R(z, z) = −m^{−1}(∂_{a_1} + ∂_{a_2}) R(z, z); expanding both sides around a_1, we see that the constant term on the LHS is C_3, while the constant term on the RHS is …, where C_1 is the constant term of m^{−1} ∂_{a_2} R(z, z) in its expansion around a_1. It follows that m^{−1} ∂_{a_1} C_2(1) = −C_1. On the other hand, m^{−1} ∂_{a_2} C_2(1) is the constant term of m^{−1} ∂_{a_2} R(z, z) in its expansion around a_1; it follows that m^{−1} ∂_{a_1} C_2(1) = −C_1 = −m^{−1} ∂_{a_2} C_2(1). From Eq. (57), … Using Eq. (23) for the expansion near a_1, we find that the constant term C_1 of m^{−1} ∂_{a_2} R(z, z) near a_1 is given by … It follows that the constant term around a_1 is given by … Using β̄(−L)β(L) = sinh²(ψ) from (2) of Prop. 4.8, we find … We conclude from (4) of Prop. 4.8 that … and the Proposition is proved.
Fig. 5. Contour J_{ε,ε_1}: the two circles are oriented clockwise with radius ε and centers a_1, a_2 respectively; the two line segments lie above and below the cut [a_1, a_2], and ε_1 is the distance between them.
The Computation of Singular Limits
Throughout this section, except in the last Sect. 6.3, we shall assume that the conditions in (45) are satisfied.
Let us first give a heuristic reason for our approach. We would like to compute the singular limit F(12, 23) from (16), where 1, 2, 3 are intervals as in Fig. 6. Ignoring the singularities for the moment, we need to compute ∫_{a_1}^{a_2} R(x, x) dx, where a_1, a_2 are the end points of intervals in Fig. 6. Formally differentiating with respect to a_1, for example, we get … By Eq. (60) we need to compute ∫ … dx, its complex conjugate, and R(a_1, a_1). It turns out that with suitable linear combinations, as in Sect. 6.2, the singularities cancel out in the integrals, and the contribution from R(a_1, a_1) is the constant term determined in Prop. 5.12. By Eq. (60) we should turn to the computation of …
A reduction of a line integral to a local computation.
We will use I_ε^+, I_ε^− to denote the upper and lower sides of the cut [a_1 + ε, a_2 − ε], respectively, with the usual orientation, namely from left to right. −J will denote the interval J with the opposite orientation. The values of the functions on the left-hand side of the integral in the next Lemma are, by definition, the values of such functions from the upper side of the cut [a_1 + ε, a_2 − ε]. Lemma 6.1.
As in the proof of Prop. 5.10, the leading term in the expansion of v_1(1 − L) v_1(1 + L) around a_1 is given by −(l_1 sin(πl_1)/π)(mz − ma_1)^{−2}. This term gives the first term on the right-hand side of the equation in (1). The next term is (mz − ma_1)^{2l_1} or (mz − ma_1)^{−2l_1}. However, when integrating such a term multiplied by e^{iθ} over |z − a_1| = ε, up to a constant it is bounded by … Proof. When μ = 1 this follows from (2) of Lemma 6.3 and Eq. (60), together with the fact that both sides of (2) of Lemma 6.3 are real. The case μ = 2 is proved in the same way as (3) of Lemma 6.3.
For an interval I (our intervals are connected), we will simplify our notation further by writing R(I, x) for the (1, 1) entry of the resolvent R_m(x, x, L) − R_m(x, x, −L) (cf. Eq. (54)) on the interval I when x ∈ I, and 0 when x is not in I. Recall from definition 2.2 … 2 ∪ 3, x), where the intervals 1, 2, 3 are as in Fig. 6. Define R_1(b_1, b, c, c_1) := …, where we have followed the convention as in definition 3.11. As usual, by definition R_1(b_1, b, c, c_1) also depends on l, and we have followed the declaration after Eq. (53). Note that the dependence of R_1(b_1, b, c, c_1) on its variables is very different from that of R_m, and no confusion should arise. By Prop. 5.10, R(12, 23, x) is continuous in the interior of the intervals 1, 2, 3 and bounded near the ends of the intervals 1, 2, 3.
From Eq. (16), … Proof. We will prove the first equation; the second equation is proved in a similar way.
Let us compute … By definition, … Note that by (2) the terms of …(b_1, c_1, x) at b_1 + … cancel out.
| 13,287.4 | 2023-03-22T00:00:00.000 | ["Physics", "Mathematics"] |
A Review-based Context-Aware Recommender System: Using Custom NER and Factorization Machines
Abstract—Recommender Systems depend fundamentally on user feedback to provide recommendations. Classical Recommenders are based only on historical data and also suffer from several problems linked to the lack of data, such as sparsity. Users' reviews represent a massive amount of valuable and rich knowledge information, but they are still ignored by most current recommender systems. Information such as users' preferences and contextual data could be extracted from reviews and integrated into Recommender Systems to provide more accurate recommendations. In this paper, we present a Context-Aware Recommender System model based on a Bidirectional Encoder Representations from Transformers (BERT) pretrained model to customize Named Entity Recognition (NER). The model automatically extracts contextual information from reviews, then inserts the extracted data into a Contextual Factorization Machine to compute and predict ratings. Empirical results show that our model improves the quality of recommendation and outperforms existing Recommender Systems.
I. INTRODUCTION
In the last few years, the number of services and items offered and produced by businesses and websites has increased quickly, which makes choosing products and services that meet customers' needs more difficult.
Recommender systems (RS) tackle this problem by helping users to find suitable resources, based on their past behaviors and preferences. Today, companies use RS in several domains to assist users, enhance customer experience and make it easier for them to satisfy their needs by speeding up searches.
Traditional Recommender Systems, such as Collaborative Filtering techniques [1], are based on two dimensions (User × Item) and on numeric ratings (e.g., 5-star ratings) to compute similarities between users and items and produce recommendations. Context-Aware Recommender Systems [2] use other dimensions besides the two classical ones, namely contextual information dimensions (User × Item × Context), to enhance accuracy. The contextual information represents the environmental factors that influence the user's decision. In general, a numeric rating expresses whether a user likes or dislikes an item; however, it does not allow us to understand why, when or where the user makes this choice, or the reasons behind it.
While sparsity and the lack of information represent big challenges for Recommender Systems, customers' reviews could be a good resource to solve these problems and help companies better understand the decisions made by users. In fact, many models have been proposed to extract valuable information from reviews, such as sentiment analysis and rating extraction, but only a few works have been presented to extract contextual information from users' reviews.
In this article, we present a new model for contextual information extraction based on a custom Named Entity Recognition and BERT, a pre-trained language representation method trained on large amounts of data such as Wikipedia. The extracted data is used by a Contextual Factorization Machine algorithm to predict the user's interest.
The remainder of this paper is organized as follows. In Section 2, we give a review of related works. In Section 3, we define the context dimensions and present Named Entity Recognition and BERT. We introduce the Factorization Machine algorithm and its use in Context-Aware Recommender Systems in Section 4. In Section 5, we present the proposed work in detail. In Section 6, we discuss the obtained results. Finally, a conclusion of the work is presented in the last section.
II. RELATED WORK
Recently, many works have been proposed for extracting precious data from reviews and integrating it into the recommendation process. This section presents recent applications of review-based Recommender Systems.
Zheng et al. [3] presented a Deep Cooperative Neural Networks (DeepCoNN) model based on word embedding techniques and two convolutional neural networks (CNNs). The first network extracts user behaviors from users' reviews, and the second extracts item properties from reviews written about items. The model merges the network outputs and transmits them to a factorization machine algorithm for prediction.
Similarly to DeepCoNN [3], R. Catherine and W. Cohen [4] proposed a model called Learning to Transform for Recommendation (TransNets), based on two parallel CNNs, one to process the target review and the other to process the texts of the user and item pair, with a Factorization Machine to predict ratings. The difference between the two models is that the TransNets model integrates an additional Transform layer to represent the target user-target item pair.
McAuley and Leskovec [5] introduced a Hidden Factors and Hidden Topics (HFT) model that merges reviews written by users with ratings to provide recommendations. The model uses Latent-Factor Recommender Systems to predict ratings and Latent Dirichlet Allocation to discover hidden dimensions in review text.
Tan et al. [6] introduced a Rating-Boosted Latent Topics (RBLT) framework which models item features and user preferences by combining textual information extracted from reviews with numeric ratings. The RBLT model represents users and items as latent rating factor distributions, and repeats a review with rating n n times so that topics from highly rated reviews dominate. To perform predictions, the outputs are fed into a Latent Factorization Machine (LFM).
Zhang et al. [7] proposed Explicit Factor Models (EFM) to produce explainable recommendations. EFM extracts user sentiments and explicit item features from reviews, then recommends (or does not recommend) items based on the hidden features learned, the item features and the users' interest. All previously cited works have exploited reviews to boost recommender systems, but they ignore contextual information, which could significantly improve recommendations.
Other researchers have succeeded in integrating context into recommendation tasks. Aciar [8] proposed a Mining Context Information method based on classification rules and text mining techniques to automatically identify users' preferences and contextual information inside reviews, extract them and integrate them into recommendation. However, this method identifies sentences containing context but cannot extract the contextual information from these sentences.
Hariri et al. [9] proposed a Context-Aware Recommender System that models user reviews to obtain contextual data and combines it with rating history to compute the utility function and suggest items to users. The model handles context as a supervised topic modeling problem and builds the context classifier using labeled-LDA. The system uses conventional recommendation algorithms to predict ratings. However, this work predicts the utility function, not the rating.
Levi et al. [10] introduced a Cold Start Context-Based Hotel Recommender System based on context groups extracted from reviews. This approach uses many elements, including a weighted text mining algorithm, sentiment analysis of hotel features, and clustering to build a hotel vocabulary and nationality groups. Although this study tackles the cold-start issue, it does not show how to adequately integrate the extracted context into recommendation.
Compos et al. [11] introduced an approach to extract contextual information from user reviews using a large-scale, generic context taxonomy based on semantic entities obtained from DBpedia. In this approach, a software tool builds the taxonomy by exploring DBpedia automatically, and also allows manual adjustments of the taxonomy. Although this work presents a semi-automatic method to extract context from reviews, it does not explain how to use the extracted data to predict ratings.
Lahlou et al. [12] proposed a review-aware Recommender System based on users' reviews to build contextual recommendations. The proposed architecture automatically exploits contextual information from reviews to build recommendations. They also presented a Textual Context Aware Factorization Machines (TCAFM) model, which is tailored to context. This work shows good performance in terms of accuracy, but it considers the whole review as context instead of extracting contextual data, and in real-world datasets only a few reviews contain this kind of data.
III. CONTEXT EXTRACTION
To extract contextual information from reviews, we should first define context dimensions (a.k.a. categories of context). In the literature, many context modeling approaches have been introduced, but the most commonly used context representation is that of [13]; the majority of these approaches use ontologies to build a context taxonomy. For instance, Castelli et al. [14] use the W4 model (a.k.a. Who, When, Where, What) as components of context: "Who" is linked to the Person, "When" is associated with the Time, "Where" refers to the Location and "What" refers to the Fact. Similarly, Kim et al. [15] instantiate the 5W1H model (a.k.a. Who, Why, Where, What, When, How) with contextual components associated respectively with Status, Goal, Location, Role, Time and Action. Chaari et al. [16] proposed a basic context descriptor that describes contextual components as Service, User, Activity, Location, Device, Resource and Network. Table I summarizes some principal context modeling techniques. After reviewing and analysing the proposed works, we chose to use four contextual dimensions: Time, Location, Companion and Environment.
A. Named Entity Recognition
Named Entity Recognition is treated as a sequence labeling task, where a given sentence is presented as a token sequence w = (w_1, w_2, w_3, ..., w_n) and transformed into a sequence of token labels y = (y_1, y_2, y_3, ..., y_n) [18]. The neural model is generally composed of three elements: a word embedding layer, a context encoder layer and a decoder layer [19]. Bidirectional long short-term memory networks (Bi-LSTM) [20][21] are widely applied in Natural Language Processing tasks and adopted by most NER models, due to their sequential characteristics and their capacity to learn contextual word representations. Although NER has been employed in several application domains, many application fields remain unexplored, such as Context-Aware Recommender Systems.
B. BERT
BERT (Bidirectional Encoder Representations from Transformers), as its name indicates, is a language representation model that relies on a module called the "Transformer". A transformer is a component that relies on attention methods and is built from an encoder and a decoder. In contrast to directional and shallowly bidirectional models (OpenAI GPT [22], ELMo [23]), BERT pre-trains deep bidirectional representations from unstructured text, conditioning on both left and right context in all layers [24]. It has been pre-trained on large corpora such as the entire BookCorpus and Wikipedia. Language Modeling is a usual NLP task of predicting the next word given the start of the sentence. The Masked Language Model (MLM) objective allows BERT to learn in an unsupervised way: the input is sufficient on its own, and there is no need to label anything. The principle of Masked Language Modeling is to predict "masked" tokens from the other tokens in the sequence. In the first step of BERT's pre-training, 15% of the tokens of each sequence are masked at random. This step is essential because BERT gets its deep bidirectionality from it.
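As an illustration of the masking step, the sketch below prepares MLM inputs the way the original BERT recipe does: 15% of tokens are selected, and of those, 80% are replaced with [MASK], 10% with a random token, and 10% kept unchanged. The 80/10/10 split and the token IDs (for bert-base-uncased) come from the BERT paper, not from the text above; the -100 label convention is a common implementation choice.

```python
import random

MASK_ID = 103        # [MASK] token id in the bert-base-uncased vocabulary
VOCAB_SIZE = 30522   # bert-base-uncased vocabulary size

def mask_tokens(token_ids, mask_prob=0.15):
    """Return (masked_input, labels); labels are -100 where no prediction is needed."""
    masked, labels = [], []
    for tid in token_ids:
        if random.random() < mask_prob:
            labels.append(tid)  # the model must recover the original token here
            r = random.random()
            if r < 0.8:
                masked.append(MASK_ID)                        # 80%: [MASK]
            elif r < 0.9:
                masked.append(random.randrange(VOCAB_SIZE))   # 10%: random token
            else:
                masked.append(tid)                            # 10%: unchanged
        else:
            masked.append(tid)
            labels.append(-100)  # ignored by the MLM loss
    return masked, labels
```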
IV. MACHINE FACTORIZATION FOR CONTEXT AWARE RECOMMENDER SYSTEMS
Factorization Machines (FM), proposed by Rendle [25], are a general-purpose supervised learning algorithm that can be used for regression and classification tasks. FM rapidly became one of the most popular algorithms for recommendation and prediction. It is a generalization of the linear model that is able to capture interactions between variables, and it can significantly reduce the polynomial complexity to linear computation time. FM is very efficient, especially on high-dimensional sparse datasets.
Let x ∈ R^d be the feature vector and y be the corresponding label. The model equation for a factorization machine is defined as:

ŷ(x) = w_0 + Σ_{i=1}^{d} w_i x_i + Σ_{i=1}^{d} Σ_{j=i+1}^{d} ⟨v_i, v_j⟩ x_i x_j

where w_0 ∈ R is the bias term, w ∈ R^d are the weights corresponding to each feature, V ∈ R^{d×k} is the interaction matrix, v_i is the i-th row of the V matrix, and ⟨v_i, v_j⟩ models the interaction between the i-th and j-th variables. It is important to point out that this factorization has the ability to compute all pairwise interactions, even hidden feature interactions, which can significantly reduce engineering efforts.
Context-Aware Factorization Machines are an application of the original FM algorithm without any tuning. In effect, it is easy for the algorithm to incorporate the additional dimensions without making any changes, since it uses a sparse vector representation. Fig. 2 represents how to transform the contextual dimensions into a prediction problem over real-valued features using a Sparse Feature Vector Representation.
We used FM for two main reasons. The first reason is that the algorithm is designed to support sparse data, and the extracted contextual information will make the matrix even sparser. The second reason is that the computation cost of FM is linear in time complexity (O(kd)), even with additional contextual dimensions.
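A minimal NumPy sketch of the FM prediction, using Rendle's O(kd) reformulation of the pairwise term; the parameters w0, w, V are assumed to be already learned, and the example feature vector mimics the sparse one-hot layout of Fig. 2 (the field sizes below are made up for illustration).

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """x: (d,) feature vector, w0: bias, w: (d,) weights, V: (d, k) factor matrix."""
    linear = w0 + w @ x
    # Pairwise term sum_{i<j} <v_i, v_j> x_i x_j computed in O(kd) as
    # 0.5 * sum_f [ (sum_i V_if x_i)^2 - sum_i V_if^2 x_i^2 ]
    s = V.T @ x                     # (k,)
    s2 = (V ** 2).T @ (x ** 2)      # (k,)
    return linear + 0.5 * np.sum(s ** 2 - s2)

# Example: one-hot user (index 2 of 5), item (index 1 of 4), context (index 1 of 2)
d, k = 5 + 4 + 2, 8
x = np.zeros(d)
x[2] = 1.0; x[5 + 1] = 1.0; x[9 + 1] = 1.0
rng = np.random.default_rng(0)
print(fm_predict(x, 0.1, rng.normal(size=d), rng.normal(size=(d, k))))
```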
V. METHODOLOGY
The implementation of our model is a two-step process. The first step is context extraction from reviews using a custom NER and a BERT model. As shown in Fig. 3, in this step we aim to switch from the two-dimensional mode used by classic Recommender Systems to the multi-dimensional mode used by Context-Aware Recommenders. In the second step, a Contextual Factorization Machine is applied to predict ratings and generate recommendations based on the outputs of the previous step.
A. Context Extraction Step
In this step, Named Entity Recognition is treated as a sequence labeling problem. Our model consists of three layers, as shown in Fig. 4: a word embedding layer, a Bi-LSTM layer and a CRF layer. In the first layer, the BERT pretrained model takes a sequence of n words (w1, w2, ..., wn), then outputs a contextual embedding vector representation of each word. In contrast to context-independent word embedding techniques such as Word2Vec [26], BERT is a powerful, highly bidirectional model that utilizes contextual information to learn a word's context. BERT has two variants, BERT Base and BERT Large; in this work we use the BERT Base model [27].
In the second layer, the Bidirectional Long Short-Term Memory (Bi-LSTM) takes part. Bi-LSTM is an extension of LSTM proposed by [28] that uses forward and backward networks to process sequences. It is designed to avoid vanishing and exploding gradients and to escape the problem of long-term dependency.
The output from the embedding layer is sent to the Bidirectional Long Short-Term Memory (Bi-LSTM) to extract feature vectors from words. Bi-LSTM concatenates the forward and backward networks as a final result [H_l, H_r]. In the last layer, Conditional Random Fields (CRF) [28] output the most probable tag sequence. CRF is a probabilistic discriminative model used to label sequences. The use of CRF helps the model learn labels and constraints that ensure the validity of the sequence. For example, the BIO format (Beginning, Inside, Outside) is a common tagging format for tokens; the first word label must begin with "B" or "O", not with "I", and this constraint is learned automatically by the CRF.
Let X be the input sequence and y the corresponding tag sequence, P the matrix obtained from the previous layer and T the transition matrix, which represents the probability of transitioning from label y_i to label y_{i+1}. The score of a tag sequence is computed as follows:

s(X, y) = Σ_i T_{y_i, y_{i+1}} + Σ_i P_{i, y_i} (2)
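A small sketch of Eq. (2): given the emission matrix P from the Bi-LSTM and the transition matrix T, the unnormalized score of a tag sequence is the sum of its emission and transition scores. Start/stop transitions, which a full CRF implementation would add, are omitted here for brevity.

```python
import numpy as np

def sequence_score(P, T, y):
    """P: (n, num_tags) emission scores, T: (num_tags, num_tags) transition
    scores, y: list of n tag indices. Returns the unnormalized score s(X, y)."""
    emission = sum(P[i, y[i]] for i in range(len(y)))
    transition = sum(T[y[i], y[i + 1]] for i in range(len(y) - 1))
    return emission + transition

# Toy example with 3 tokens and 3 tags (e.g., B, I, O)
P = np.array([[2.0, 0.1, 0.3], [0.2, 1.5, 0.1], [0.1, 0.2, 1.8]])
T = np.array([[0.1, 1.0, 0.2], [0.1, 0.8, 0.5], [0.9, 0.0, 0.3]])
print(sequence_score(P, T, [0, 1, 2]))  # score of the tag path B -> I -> O
```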
B. Rating Prediction Step
In this step, a Contextual Factorization Machine (CFM) takes the output from the previous step and predicts the rating. CFM is similar to FM, except that a matrix of weights is added to capture the importance of the contextual dimensions. The CFM equation is given as follows:

ŷ(x) = w_0 + Σ_{i=1}^{d} w_i b_i x_i + Σ_{i=1}^{d} Σ_{j=i+1}^{d} ⟨v_i, v_j⟩ (b_i x_i)(b_j x_j)

where w_0 ∈ R is the global bias, w ∈ R^d are the weights corresponding to each feature, V ∈ R^{d×k} is the interaction matrix, v_i is the i-th row of the V matrix, ⟨v_i, v_j⟩ models the interaction between the i-th and j-th variables, and B ∈ R^p is the matrix of importance weights. The parameter b_i is equal to 1 for the item and user dimensions, so that the feature value becomes b_i x_i only for the other (contextual) dimensions.
VI. EXPERIMENTS
A. Corpora and Dataset
We use three corpora to pre-train our custom NER:
-Corpus 1 is the CoNLL-2003 [29] NER dataset, which consists of 18,453 sentences, 254,983 tokens and four entity types, namely persons, locations, organizations and miscellaneous.
-Corpus 2 is the Groningen Meaning Bank (GMB) [30] for named entity classification, developed at the University of Groningen. It comprises 63,256 sentences, 1,388,847 tokens and eight entity types (Geographical Entity, Organization, Person, Geopolitical Entity, Time, Artifact, Event, Natural Phenomenon).
-Corpus 3 is a custom corpus that we built to address some gaps in the two aforementioned corpora. In fact, after training our custom NER, it was still not able to extract some categories such as the Companion and Environmental contexts; e.g., in "I watched the movie with my friend at the cinema", "friend" and "cinema" should be annotated as companion and location, but this was not the case. The new corpus allows us to fine-tune our custom NER and tackle this problem. It is created in the BIO (Beginning, Inside, Outside) format and consists of 3,500 sentences, 43,565 tokens and three entity types.
To evaluate our model, we selected the Amazon Customer Reviews dataset [31] and the Yelp dataset [32]. The Amazon dataset consists of customer reviews, ratings and product metadata (price, brand, descriptions, etc.). It includes more than 233.1M reviews collected between 1996 and 2018 across 21 categories of products, and is considered the largest public dataset for ratings. The Yelp dataset is free to use for academic and personal purposes; it contains more than 8.5M reviews and more than 160K businesses. We adopted three metrics to evaluate our custom NER, namely Precision, Recall and F1:

Precision(P) = TP / (TP + FP).
Recall(R) = TP / (TP + FN), with F1 the harmonic mean of the two: F1 = 2PR / (P + R).
We use the Mean Square Error (MSE) to evaluate the performance of the CFM algorithm:

MSE = (1/N) Σ_{i=1}^{N} (r_i − r̂_i)²

where r_i is the observed rating and r̂_i the predicted one.
B. Experimental Setting
As previously mentioned, the proposed approach consists of two steps: the context extraction step and the rating prediction step. In the first step, the custom NER is trained using the FLAIR library, an open-source NLP framework for state-of-the-art text classification and sequence labeling. We use Google Colab to train the model (12 GB GPU). In order not to exceed the available GPU memory, we fix the mini-batch size to 32 and the maximum sequence length to 512. We use a single-layer Bi-LSTM with a hidden size of 256 to process input sequences, and a learning rate of 0.1.
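A hedged sketch of this training setup using the Flair API (v0.x-style calls); the data folder, file names and column layout are assumptions, since the paper does not specify them.

```python
from flair.datasets import ColumnCorpus
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# BIO-tagged data, one token per line: "<token> <tag>" (assumed layout)
columns = {0: "text", 1: "ner"}
corpus = ColumnCorpus("data/custom_ner", columns, train_file="train.txt")

tag_dict = corpus.make_tag_dictionary(tag_type="ner")
embeddings = TransformerWordEmbeddings("bert-base-uncased")  # BERT Base layer

tagger = SequenceTagger(
    hidden_size=256,          # single Bi-LSTM layer with hidden size 256
    embeddings=embeddings,
    tag_dictionary=tag_dict,
    tag_type="ner",
    use_crf=True,             # CRF decoding layer on top
)

trainer = ModelTrainer(tagger, corpus)
trainer.train("resources/custom-ner", learning_rate=0.1, mini_batch_size=32)
```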
In the second step, we build our CFM using the TensorFlow framework [33]; our implementation is inspired by [34], [35]. We split the data into 80% for training and 20% for testing. As we are dealing with a regression problem, the model parameters are learned by minimizing the loss function, and we prevent overfitting by adding an L2 regularization term. Since it is efficient with sparse data, we use a gradient-based optimizer. We fix the number of iterations to 1000, since the CFM algorithm needs more time to converge.
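A hedged TensorFlow 2 sketch of the training loop described above: squared-error loss, an L2 penalty, and a gradient-based optimizer. Since the exact CFM form is not displayed, the plain FM predictor is used, with the contextual importance weights assumed to be folded into the feature values beforehand; dimensions and hyperparameters are illustrative.

```python
import tensorflow as tf

d, k = 100, 8                                   # feature dim, factor size (assumed)
w0 = tf.Variable(0.0)
w = tf.Variable(tf.zeros([d]))
V = tf.Variable(tf.random.normal([d, k], stddev=0.01))

def predict(X):
    """X: (batch, d) dense batch of (possibly sparse) feature rows."""
    linear = w0 + tf.linalg.matvec(X, w)
    s = tf.matmul(X, V)                         # (batch, k)
    s2 = tf.matmul(tf.square(X), tf.square(V))  # (batch, k)
    pairwise = 0.5 * tf.reduce_sum(tf.square(s) - s2, axis=1)
    return linear + pairwise

optimizer = tf.keras.optimizers.Adagrad(learning_rate=0.1)

def train_step(X, y, l2=1e-3):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(y - predict(X)))    # MSE loss
        loss += l2 * (tf.nn.l2_loss(w) + tf.nn.l2_loss(V))  # L2 regularization
    grads = tape.gradient(loss, [w0, w, V])
    optimizer.apply_gradients(zip(grads, [w0, w, V]))
    return loss
```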
C. Results and Discussion
1) Custom NER: Results of the custom NER on the different corpora are presented in Fig. 5. As we can see, the custom NER achieves its best F1 score of 91.59% on Corpus 1, 89.92% on Corpus 3, and its worst F1 score of 80.17% on Corpus 2. The disparity in results can be explained by the fact that data quality varies from one corpus to another. The GMB corpus yields the worst performance because it is not perfect: it is not completely human-annotated, being built from existing annotations that must be corrected manually by humans.
2) Rating Prediction: All the works already mentioned incorporate reviews, except FM and PMF. Fig. 6 shows the comparison results. The difference in results is mainly related to the sparsity of the two datasets, and it is worth noting that on datasets less sparse than the two used in these experiments our model could perform even better. Table III presents all the obtained results.
VII. CONCLUSION
This article presented an automatic method to extract contextual information from users' reviews and use it to improve recommendation quality with less time spent on feature engineering. The work is divided into two main steps. The first is context extraction using a custom NER; the model used in this step consists of three layers, namely the word embedding layer, which takes a sequence of words and outputs a contextual embedding vector representation of each word using the BERT model; the Bi-LSTM layer, which extracts feature vectors from the words generated by the previous layer; and the CRF layer, which helps to automatically learn labels and constraints and guarantees the validity of the sequence. The second step is rating prediction: the CFM algorithm takes the output from the first step and computes the ratings. In contrast to the generic FM, the CFM is able to capture the importance of the contextual dimensions and incorporate them into the recommendation process.
To evaluate its performance, the proposed model was compared with five models, and the obtained results show that it performs well. For future work, the proposed model will be improved in both steps, namely in the data extraction step and the techniques utilized for it, and also in rating prediction.
| 4,824.4 | 2022-01-01T00:00:00.000 | ["Computer Science"] |
Sleeping Beauty Transposon Mutagenesis as a Tool for Gene Discovery in the NOD Mouse Model of Type 1 Diabetes
A number of different strategies have been used to identify genes for which genetic variation contributes to type 1 diabetes (T1D) pathogenesis. Genetic studies in humans have identified >40 loci that affect the risk for developing T1D, but the underlying causative alleles are often difficult to pinpoint or have subtle biological effects. A complementary strategy to identifying “natural” alleles in the human population is to engineer “artificial” alleles within inbred mouse strains and determine their effect on T1D incidence. We describe the use of the Sleeping Beauty (SB) transposon mutagenesis system in the nonobese diabetic (NOD) mouse strain, which harbors a genetic background predisposed to developing T1D. Mutagenesis in this system is random, but a green fluorescent protein (GFP)-polyA gene trap within the SB transposon enables early detection of mice harboring transposon-disrupted genes. The SB transposon also acts as a molecular tag to, without additional breeding, efficiently identify mutated genes and prioritize mutant mice for further characterization. We show here that the SB transposon is functional in NOD mice and can produce a null allele in a novel candidate gene that increases diabetes incidence. We propose that SB transposon mutagenesis could be used as a complementary strategy to traditional methods to help identify genes that, when disrupted, affect T1D pathogenesis.
Keywords: forward genetics; reverse genetics; Slc16a10; Serinc1; amino acid transporter

Type 1 diabetes (T1D) is an autoimmune disease in which lymphocytes mediate the specific destruction of insulin-producing pancreatic β cells (Bluestone et al. 2010). Genetic studies in human populations have detected >40 genomic intervals that harbor T1D-associated alleles (Polychronakos and Li 2011; Pociot et al. 2010). However, identification of the underlying genes for these T1D loci and the biological effects of putative causative alleles is often difficult due to genetic heterogeneity and limited tissue availability (Polychronakos and Li 2011; Pociot et al. 2010). Instead, it has proven useful to complement human genetic studies with strategies that not only aim to discover "naturally" occurring alleles but also engineer "artificial" null alleles in putative and novel candidate genes to determine their effect on disease pathogenesis in inbred animal models (Ermann and Glimcher 2012). The nonobese diabetic (NOD) mouse strain, in particular, has been widely used to investigate T1D pathogenesis (Driver et al. 2011; Jayasimhan et al. 2014). NOD mice spontaneously develop T1D, and genetic studies have identified >40 murine T1D susceptibility loci [termed insulin-dependent diabetes (Idd) loci], several of which overlap human T1D susceptibility loci (Burren et al. 2011). Although congenic NOD mouse strains have confirmed the majority of these Idd loci, relatively few of the underlying genes and their causative alleles have been definitively identified (Araki et al. 2009; Hamilton-Williams et al.). Other mouse strains can even harbor a more diabetogenic allele than NOD mice for a given Idd locus (Brodnicki et al. 2003; Wang et al. 2014; Ghosh et al. 1993; McAleer et al. 1995). This complex genetic architecture for T1D susceptibility in the Mus species is similar to that described in humans and further complicates the identification of "natural" causative alleles within genes underlying Idd loci using traditional outcross and congenic mouse studies (Driver et al. 2011; Ridgway et al. 2008).
Here, we propose an alternative approach for disease gene discovery using the Sleeping Beauty (SB) transposon mutagenesis system to generate "artificial" alleles on the NOD genetic background. The SB transposon can insert within genes and disrupt transcript expression (Horie et al. 2003;Carlson et al. 2003;Ivics et al. 1997). Its unique sequence also serves as a molecular tag to rapidly identify the site of insertion without additional breeding. Transposition (i.e., transposon "jumping") is catalyzed by the SB transposase, which can be expressed in trans and controlled by tissue-specific promoters to restrict transposition and subsequent gene mutation to germline or somatic cells. Once activated, transposition is relatively random, requiring only a target TA dinucleotide integration site and exhibiting some bias toward "jumping" in cis, i.e., within the same chromosome (Keng et al. 2005;Carlson et al. 2003;Horie et al. 2003). A major advantage of the SB transposon is its ability to carry gene-trap elements and reporter genes, which increase gene disruption efficiency and accelerate identification of mice with a disrupted gene (Izsvak and Ivics 2004;Horie et al. 2003). Due to these unique characteristics, this system has been successfully used to mutate and characterize both putative and novel genes in different mouse models of cancer (Moriarity and Largaespada 2015;Dupuy et al. 2009). We show here that a relatively small-scale mutagenesis screen using the SB transposon, combined with disease-specific prioritization criteria, is able to identify a novel candidate gene that contributes to T1D susceptibility in NOD mice.
MATERIALS AND METHODS
Constructs and production of transgenic and SB transposon mutant mice
The transposase construct pRP1345 (Figure 1A) was obtained from Prof. R. Plasterk (Hubrecht Laboratory, The Netherlands) and has been previously described (Fischer et al. 2001). The transposon construct, pTrans-SA-IRESLacZ-CAG-GFP_SD:Neo (Figure 1A), was obtained from Prof. J. Takeda (Osaka University, Japan) and has been previously described (Horie et al. 2003). SB transposon and SB transposase transgenic mice were produced by pronuclear injection of linearized constructs into fertilized NOD/Lt (NOD) oocytes at The Walter and Eliza Hall Institute Central Microinjection Service using a previously described protocol (Marshall et al. 2004). The transposon transgenic mouse lines are NOD-TgTn(sb-Trans-SA-IRESLacZ-CAG-GFP-SD:Neo)1Tcb and NOD-TgTn(sb-Trans-SA-IRESLacZ-CAG-GFP-SD:Neo)2Tcb, termed NOD-SBtson L1 and L2 in the text. The transposase transgenic mouse lines are NOD-Tg(Prm-sb10)1Tcb and NOD-Tg(Prm-sb10)2Tcb, termed NOD-PrmSB L1 and L2 in the text. NOD-SBtson mice were mated to NOD-PrmSB mice, and double-positive hemizygous male offspring (NOD-SBtson+/PrmSB+) were identified by PCR genotyping. NOD-SBtson+/PrmSB+ males were backcrossed to wild-type NOD females to produce G1 mice carrying potential transposon insertions. Mice carrying transposon insertions that activated the polyA trap were noninvasively identified by visualizing GFP expression in newborn mice under UV light, with confirmation by the presence of GFP fluorescence in ear biopsies using a fluorescent stereomicroscope equipped with a UV filter. Experiments involving mice were approved by the St. Vincent's Institute Animal Ethics Committee.
Transposon copy number analysis
Southern blots were performed using standard protocols. An 898-bp probe specific for GFP, labeled with α-32P-dCTP using the DECAprime II Random Priming DNA Labeling kit (Ambion), was used to detect the presence of the SB transposon. Copy number was calculated by comparison with standards of known copy number.
Transposon insertion site identification
Ligation-mediated PCR (LM-PCR) was performed based on previously described protocols (Largaespada and Collier 2008; Takeda et al. 2008; Horie et al. 2003; Devon et al. 1995). Briefly, 1 mg genomic DNA was digested with HaeIII, AluI, BfaI, or NlaIII (New England Biolabs), and splinkerettes (Table 1) compatible with the appropriate blunt or cohesive ends were ligated to the digested genomic DNA fragments using T4 DNA ligase (New England Biolabs). The two oligonucleotides to produce the double-stranded splinkerette were annealed at a concentration of 50 mM in the presence of 100 mM NaCl by incubating at 95° for 5 min and then allowing the mixture to cool to room temperature in a heat block before ligation. A second digest, with XhoI or KpnI, was used to remove ligated splinkerettes from transposon concatemer fragments. DNA was purified after each step using the QiaQuick PCR Purification kit (Qiagen). Two rounds of PCR were performed using nested primers within the linker and transposon (Table 1), and the subsequent PCR product was sequenced. Restriction enzymes, splinkerettes, and primers were used in the following six combinations (first digest; splinkerette; second digest; primer set 1; primer set 2): (i) and (ii) AluI or HaeIII; SplB-BLT; XhoI; T/JBA · Spl-P1; TJBI · Spl-P2; (iii) and (iv) AluI or HaeIII; SplB-BLT; KpnI; TDR2 · Spl-P1; T/BAL · Spl-P2; (v) BfaI; Bfa linker; KpnI; LongIRDR(L2) · Linker primer; NewL1 · linker primer nested; and (vi) NlaIII; Nla linker; XhoI; LongIRDR(R) · Linker primer; KJC1 · Linker primer nested. The resulting sequence was aligned to mouse genome build GRCm38 to identify the transposon-flanking genomic sequence (i.e., insertion site) and determine which gene was disrupted, based on the current annotation for build GRCm38. The different combinations give a number of chances to identify genomic DNA adjacent to both the 3′ and 5′ ends of the transposon after transposition. Additional primers were designed for genotyping the transposon insertion site in established mutant mouse lines (Table 2).
Figure 1. Sleeping Beauty transposon mutagenesis strategy. (A) Constructs used for production of NOD-PrmSB and NOD-SBtson lines. The transposon construct, pTrans-SA-IRESLacZ-CAG-GFP_SD:Neo, has been described (Horie et al. 2003). The transposase construct pRP1345, comprising SB10 transposase driven by the mouse proximal protamine 1 promoter, has also been described (Fischer et al. 2001). IR/DR: inverse repeat/direct repeat transposase recognition motifs. (B) Breeding scheme for SB transposon mutagenesis. NOD-SBtson mice (lines 1 and 2) were mated to NOD-PrmSB mice (lines 1 and 2). Double-positive male offspring (seed mice) were backcrossed to wild-type NOD females to produce G1 mice carrying potential transposon insertions. (C) Mice carrying transposon insertions that activated the polyA trap were detected by fluorescence under UV light prior to weaning.
Diabetes monitoring
Mice were tested once per week for elevated urinary glucose using Diastix reagent strips (Bayer Diagnostics). Mice with a positive glycosuria reading (>110 mmol/L), confirmed by a positive glucose reading (>15 mmol/L) using Advantage II Glucose Strips (Roche), were diagnosed as diabetic.
Data availability
Mouse lines are available upon request.
Generation of transposition events in NOD mice using SB transgenic NOD lines
To perform germline mutagenesis of the NOD mouse, we used two SB constructs previously used in mice (Figure 1A). The pRP1345 construct contains the transposase gene under the control of the proximal protamine 1 (Prm1) promoter, which restricts expression to spermatogenesis and limits transposon mutations to the germline, thus preventing somatic mutations (Fischer et al. 2001). The SB transposon construct (sb-pTrans-SA-IRESLacZ-CAG-GFP-SD:Neo) contains splice acceptor and donor sequence motifs encompassing a promoter trap comprising the lacZ gene, and a polyA trap with the gene encoding enhanced green fluorescent protein (GFP) driven by the CAG promoter (Horie et al. 2003). This construct enables efficient identification of mutant mice in which the transposon has inserted in a gene, as activation of the polyA trap results in ubiquitous GFP expression, i.e., mutant mice fluoresce.
Table 1. Oligonucleotides used as splinkerettes and primers for LM-PCR. (a) Primer names and purposes are based on the previously described LM-PCR protocols (Keng et al. 2005; Largaespada and Collier 2008).
The transposase and transposon were introduced separately into NOD mice to establish independent transgenic NOD lines for each SB component. Briefly, SB transposon and SB transposase transgenic mice were produced by pronuclear injection of linearized constructs into fertilized NOD/Lt (NOD) oocytes. NOD-Tg(Prm-sb10) mice (called NOD-PrmSB hereafter) harbor the transposase, and NOD-TgTn(sb-pTrans-SA-IRESLacZ-CAG-GFP-SD:Neo) mice (called NOD-SBtson hereafter) harbor the transposon. The breeding scheme to generate mutant NOD mice is outlined in Figure 1B: mice from the two transgenic lines are mated, bringing together the two components of the SB system, and transposition occurs within the sperm cells of the double-positive hemizygous "seed" males. These NOD seed males are mated to wild-type NOD females to generate G 1 litters. G 1 pups with potential transposon-disrupted genes are efficiently identified before weaning by visualization of bodily GFP expression under UV light (Figure 1C). This early detection allows cage space to be minimized; only those litters that contain fluorescent pups are weaned and kept for additional analysis.
To assess the feasibility of the SB mutagenesis system in the NOD mouse strain, two NOD-SBtson lines were established for which the number of copies of the transposon within the transgene concatemer was determined by Southern Blot analysis ( Figure 2A) and the site of transgene integration was determined by ligation-mediated PCR (LM-PCR) ( Figure 2B). These lines were bred with two established NOD-PrmSB lines, in which expression of SB transposase in testes had been confirmed by quantitative RT-PCR (data not shown), to produce seed males. NOD seed males (i.e., double-positive, hemizygous for both SB constructs) were backcrossed to NOD females to produce G 1 mice. Three different breeding combinations of transposon/transposase transgenic strains gave rise to GFP-positive offspring at the rate of 2.0%, 2.3%, and 2.7% respectively ( Figure 2B). A fourth combination did not produce any mutant pups (0%), but breeding of this fourth combination was stopped before reaching a similar total of G 1 mice to the other combinations ( Figure 2B). Eleven GFP-positive G 1 mice were sired, confirming that these transgenic lines can facilitate SB transposition events on the NOD genetic background and resultant GFP-positive mice can be detected.
Identification and prioritization of transposition sites in GFP-positive G 1 NOD mice
LM-PCR followed by sequencing and genome alignment was used to identify the transposition sites for nine of the 11 GFP-positive mice (Table 3). Of the two GFP-positive mice whose insertion sites were not determined, one died before characterization. It was not clear why the second GFP-positive mouse was refractory to site identification by LM-PCR despite using six different restriction digest/PCR combinations. Consistent with the published >30% rate of local chromosomal hopping (Keng et al. 2005), seven of the nine identified transposition sites fell on the same chromosome as the transposon donor concatemer. Of the seven insertion sites derived from the NOD-SBTsonL1 concatemer on chromosome (Chr) 10, five were on Chr10, with the other two identified on Chr1 and Chr4. The two insertion sites arising from the NOD-SBTsonL2 concatemer on Chr1 were both mapped to Chr1. LM-PCR also indicated that each GFP-positive mouse contained a single, rather than multiple, transposon insertion site.
Once GFP-positive NOD mice and their transposon mutations are identified, there are two options. One option is to generate cohorts of mice from every GFP-positive G 1 mouse and monitor diabetes onset. Although such a full-scale phenotype-driven approach is aimed at identifying novel genes not suspected to play a role in diabetes, as well as known and putative candidate genes, this option requires substantial animal housing capacity and monitoring numerous cohorts for diabetes over a >200-d time course. Like many investigators, we have limited resources and this option was not feasible. However, the SB transposon mutagenesis strategy allows for a second option: prioritizing mutant mice based on the genes that are disrupted and establishing homozygous mutant lines for expression and diabetes monitoring. We therefore prioritized transposon-disrupted genes based on the following criteria: 1) The transposon insertion site is predicted to disrupt the gene in some way. Has the transposon inserted within an exon, an intron, or in a regulatory region? Is the mutation predicted to disrupt gene expression? An insertion that is predicted to completely abrogate expression of the normal gene product would be of high priority; however, in some instances a predicted hypomorphic allele may also be of value.
2) The gene is a known or putative candidate for a described mouse and/or human T1D susceptibility locus. T1D loci and candidate genes are curated in a searchable form at T1Dbase (https://t1dbase.org/) (Burren et al. 2011).
3) The gene, not previously considered as a putative T1D susceptibility gene, encodes a known protein involved in an immune-related molecular pathway that could affect T1D pathogenesis (e.g., cytokine or chemokine signaling/regulation, pattern recognition pathways, costimulatory molecules, apoptosis, regulation of immune tolerance), but is not likely required for general immune cell development. 4) The gene, not previously considered as a putative T1D susceptibility gene, is expressed in relevant cells (e.g., immune cell subsets, β cells) as determined in the first instance using gene expression databases [e.g., Immunological Genome Project (https://www.immgen.org/)] (Heng et al. 2008), followed by additional expression analyses if needed.
Of the nine transposon insertion sites identified (Table 3), only four were located within or nearby (within 5 kb) annotated genes or expressed sequence tags (ESTs): SB4 in intron 1 of Slc16a10; SB7, 4.6 kb 5′ of Serinc1; SB8, 5 kb 5′ of AK018981; and SB9 in intron 2 of Pard3b. In particular, the transposon within Pard3b inserted in the opposite orientation to the direction of transcription of the gene, so it seems unlikely that it would disrupt expression of Pard3b, which encodes a cell polarity protein most highly expressed in the kidney (Kohjima et al. 2002). There is, however, an antisense gene, Pard3bos1, that overlaps Pard3b approximately 102 kb downstream of the transposon insertion site. Thus, it is more likely that the polyA gene-trap has been activated by splicing into this antisense gene. The remaining five transposons inserted in regions currently without any annotated genes, suggesting either the presence of unannotated genes or the presence of cryptic polyA sequence motifs, as the GFP polyA trap was activated.
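The "within or nearby (within 5 kb)" criterion above is a simple interval-proximity check against the genome annotation. A minimal sketch of that check is shown below; the chromosome coordinates and gene spans are placeholder values chosen for illustration, not the actual GRCm38 coordinates of the insertion sites.

```python
# Illustrative interval check: flag annotated genes lying within 5 kb of a transposon
# insertion site. Coordinates and gene spans are hypothetical placeholders.
WINDOW = 5_000  # bp, the proximity window used in the text

genes = [  # (chrom, start, end, name) -- hypothetical annotation records
    ("chr10", 10_000_000, 10_120_000, "Slc16a10"),
    ("chr10", 10_300_000, 10_340_000, "Serinc1"),
]

def genes_near(insertion_chrom, insertion_pos, annotation, window=WINDOW):
    """Return genes on the same chromosome whose span lies within `window` bp of the insertion."""
    hits = []
    for chrom, start, end, name in annotation:
        if chrom != insertion_chrom:
            continue
        if start <= insertion_pos <= end:
            distance = 0                      # insertion falls inside the gene
        else:
            distance = min(abs(insertion_pos - start), abs(insertion_pos - end))
        if distance <= window:
            hits.append((name, distance))
    return hits

print(genes_near("chr10", 10_050_000, genes))   # inside the first gene -> distance 0
```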
The following observations were made regarding the prioritization of the nine identified transposon insertion sites. Only one fell within a T1D susceptibility locus (Pard3bos1 in Idd5.1) (Burren et al. 2011), but unfortunately this mouse (SB9) died before it could be bred. AK018981 has no publicly available information except that it was sequenced from RNA isolated from adult mouse testes. Both Serinc1 and Slc16a10 have been previously disrupted in mice: Serinc1 on a mixed C57BL/6 × 129 genetic background (Tang et al. 2010) and Slc16a10 on a C57BL/6 background (Mariotta et al. 2012). Neither knockout mouse strain was reported to develop spontaneous disease. The strains either have not been analyzed for immunological phenotypes (Slc16a10) or have only been analyzed in small cohorts (n ≤ 4) with no differences observed (Serinc1). Despite this, Serinc1 and Slc16a10, although not directly attributed in the literature with immune-related roles, have functions that could be postulated to affect immune cell responses and/or β-cell activity, which could be revealed in the context of the "sensitized" NOD genetic background (i.e., the NOD mouse strain enables detection of mutated genes that increase or decrease diabetes incidence). Notably, according to expression databases (The Immunological Genome Project and BioGPS) (Heng et al. 2008; Wu et al. 2009), both genes were highly expressed in macrophages, an immune cell population with a key role in T1D pathogenesis (Driver et al. 2011; Jayasimhan et al. 2014). SB4 and SB7, which were both male G 1 mice, did not carry the transposase construct; consequently, secondary jumping of the transposon was not possible in their offspring. SB4 and SB7 G 1 mice were thus prioritized for establishment of new lines, bred to homozygosity, and investigated for the effect of their transposon insertions.
Characterization of transposon effects in two prioritized GFP-positive G 1 NOD mice
The transposon insertion site in the SB7 mouse (NOD.Serinc1 Tn(sb-Trans-SA-IRESLacZ-CAG-GFP-SD:Neo)1.7Tcb) localized 4.6 kb upstream of Serinc1 (Table 3, Figure 3A). SERINC1 facilitates the synthesis of serine-derived lipids, including the essential membrane lipids phosphatidylserine and sphingolipid (Inuzuka et al. 2005). These are important components of membrane structures known as "ordered membrane domains" or "lipid rafts," which are required for appropriate signaling in immune cells (Szoor et al. 2010; Yabas et al. 2011). Due to the position of the transposon insertion, we postulated that, rather than completely disrupting expression of Serinc1, the mutation may affect gene regulation. RT-PCR analysis of homozygous SB7 splenic RNA using primers within GFP and Serinc1 identified two fusion transcripts in addition to the normal transcript. The first contains GFP spliced to sequence upstream of exon 1 with normal splicing of the entire gene. The second contains GFP spliced directly to exon 2 of Serinc1 (Figure 3A). The skipped exon 1 encodes the first 13 amino acids of the Serinc1 coding sequence. The production of any SERINC1 protein from these fusion transcripts is unlikely because there is a stop codon following the GFP sequence and no obvious internal ribosomal entry site prior to the Serinc1 sequence, but this remains to be tested. In either case, the presence of the transposon insertion results in a relatively small, but significant, decrease in expression of Serinc1 as measured by quantitative RT-PCR using a TaqMan probe spanning the exon 1/2 splice junction (Figure 3B). This splice junction is used in both the "normal" Serinc1 transcript and the fusion transcript that includes exon 1; therefore, the reduction in "normal" Serinc1 expression is greater than measured by this assay. These analyses suggest that, rather than completely disrupting expression of Serinc1, this mutation modulates expression and represents a hypomorphic allele. Nonetheless, homozygous mutant SB7 mice exhibited a similar diabetes incidence compared to wild-type littermate females (Figure 3C), indicating that a minor reduction in Serinc1 expression does not affect T1D pathogenesis.
The transposon insertion identified in the SB4 mouse (NOD.Slc16a10 Tn(sb-Trans-SA-IRESLacZ-CAG-GFP-SD:Neo)1.4Tcb) localizes to intron 1 of Slc16a10 (Table 3, Figure 4A), which encodes the aromatic amino acid transporter SLC16A10 (also known as TAT1) (Mariotta et al. 2012; Ramadan et al. 2006). It is becoming increasingly evident that regulation of amino acid transport is crucial for the proper regulation of immune cell activation and function (Nakaya et al. 2014; Sinclair et al. 2013; Thompson et al. 2008), and also impacts glycemic control (Jiang et al. 2015). We predicted that the strong splice acceptor encoded by the transposon would result in splicing from exon 1 of Slc16a10 into the transposon sequence, thus truncating the Slc16a10 transcript. Expression analysis of homozygous mutant SB4 mice showed that the Slc16a10 transcript was not detected (Figure 4B), which was associated with a significant increase in diabetes incidence compared to wild-type littermate females (Figure 4C). This result demonstrates that SB transposon mutagenesis can be used to identify novel genes that affect T1D pathogenesis.
Moreover, SB4 is a promising mutant mouse line for investigating the role of Slc16a10 and aromatic amino acid transport in macrophage function and the development of T1D in NOD mice.
DISCUSSION
We demonstrate here that SB transposon mutagenesis can successfully generate both null and hypomorphic mutations in the NOD mouse. A significant advantage of this strategy is the use of the GFP reporter to screen out, before weaning, those mice unlikely to be carrying a functional mutation, thereby significantly reducing mouse handling and housing. Mutation sites can then be determined in individual GFP-positive mice and prioritized for further analysis based on the requirements of the investigator. We used our prioritization criteria to select two mutant NOD mice (SB4 and SB7) for further characterization and subsequently found that disruption of Slc16a10 expression in NOD mice resulted in an increased T1D incidence. Although further studies are required to determine how Slc16a10 and amino acid transport contribute to diabetes pathogenesis, this result indicates that SB transposon mutagenesis can be used as a complementary approach to other T1D gene discovery strategies.
Figure 3 (caption). RT-PCR led to the identification of two fusion transcripts: one that splices from GFP (green) to sequence upstream of exon 1, resulting in otherwise normal splicing of Serinc1 (middle diagram), and another that skips exon 1 (bottom diagram). (B) Expression analysis was performed using RNA isolated from tissues of wild-type (wt/wt) and homozygous mutant (sb/sb) littermates (n = 4). Expression was determined by quantitative real-time PCR. Exon locations of Serinc1 primers are indicated by arrows in (A). Error bars represent ± SEM. Statistical significance for the difference in expression was obtained using pairwise t-tests. (C) The cumulative diabetes incidence was determined for age-matched female cohorts monitored concurrently. Pairwise comparisons of diabetes incidence curves were performed using the log-rank test.
We observed 2-3% fluorescent G 1 offspring, which is lower than that reported for a similar mutagenesis scheme using the same transposon construct (7%) (Horie et al. 2003). If one aims for a single transposition event per G 1 offspring, it would be expected that 20% of the G 1 offspring should fluoresce [i.e., 40% of the mouse genome contains exons/introns (Sakharkar et al. 2005), with a 50% chance of the transposon landing in the correct orientation to trap the polyA sequence motif]. There are a number of possibilities that could explain a lower rate of fluorescent G 1 offspring, pointing to improvements that can be made to our screen. The NOD-SBtson "mutator" transgenic lines we generated harbored few copies of the transposon. A higher copy number in the donor transposon concatemer may increase mutation efficiency due to more "jumping" transposons in the sperm of NOD seed males (Geurts et al. 2006). Use of a more efficient transposase could also increase transposition efficiency. As the first described transposon for use in vertebrates, the SB system has been the most widely used and developed. Improvements over the first-generation SB transposases, such as that used in our study, have yielded 100-fold increases in transposition efficiency (Mates et al. 2009).
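As a quick check of the 20% expectation quoted above, the arithmetic is simply the product of the fraction of the genome lying in genes and the probability of a correctly oriented insertion; the sketch below also contrasts this with the observed 2-3% rates. This is a minimal calculation under the stated single-transposition assumption, not a statistical model.

```python
# Expected fraction of G1 offspring that fluoresce under a single-transposition model:
# genome fraction containing exons/introns times probability of correct orientation.
genic_fraction = 0.40        # ~40% of the mouse genome lies in exons/introns (Sakharkar et al. 2005)
correct_orientation = 0.50   # transposon must land in the orientation that activates the polyA trap
expected_rate = genic_fraction * correct_orientation
print(f"Expected GFP-positive rate: {expected_rate:.0%}")   # 20%

observed_rates = [0.020, 0.023, 0.027]   # rates reported for the three productive crosses
for r in observed_rates:
    print(f"Observed {r:.1%} is roughly {expected_rate / r:.0f}-fold below expectation")
```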
Too high a mutation rate, however, is not necessarily favorable. For example, increasing transposon copy number by using transgenic mice with large transposon concatemers (>30 copies) may lead to local chromosomal rearrangements in subsequent offspring and complicate characterization of causative transposon mutations (Geurts et al. 2006). Increasing transposon number and/or transposase efficiency will also lead to GFP-positive offspring with multiple gene mutations. It would then require additional work to identify and confirm all the gene mutations in a given mouse, as well as additional breeding to segregate mutations and test mutant mice with only one gene mutation, all of which increases breeding times and cage costs. Thus, an advantage of a lower efficiency is that it leads to mice with only one gene mutation, eliminating the need for extensive segregation analysis and facilitating more efficient gene identification and prioritization. Although we did not establish and characterize all of our mutant mouse lines due to limited resources, cryopreservation of sperm from mutant male mice could be used to allow archiving of mutants for future investigation. Empirically, it will be up to an individual laboratory to determine how many offspring they can efficiently screen and how they will prioritize mutant mice for subsequent analysis, based on the mutation rate of their transgenic lines and available resources.
Interestingly, several of our transposon insertion sites mapped to regions at some distance from annotated genes. Although it is possible that the GFP could be activated by splicing into a cryptic polyA site, polyA gene-trap strategies have been successfully used to identify novel unannotated genes (Zambrowicz et al. 1998). However, it may be difficult to predict a priori if an unannotated gene will be of interest, especially if it has little to no sequence homology with known genes. In this regard, the SB mutagenesis approach enables a phenotype-driven (i.e., forward genetics) approach to test novel genes that might not otherwise be targeted using a candidate gene approach (i.e., reverse genetics). SB mutagenesis may also benefit characterization of regions (e.g., Idd loci) that are known to contain putative susceptibility genes, but for which the "natural" causative alleles have not yet been identified. Saturation mutagenesis could effectively be performed in these regions by generating a transgenic NOD mouse line containing a transposon concatemer near the region of interest and taking advantage of the propensity of SB transposons to reintegrate close to the donor transposon concatemer (Keng et al. 2005). Nonetheless, investigating unannotated genes is riskier (i.e., the disrupted gene may not affect T1D) or more costly in the case of saturation mutagenesis (i.e., more mutant lines need to be generated and characterized). Hence, prioritizing genes based on function postulated to contribute to T1D pathogenesis may be more favorable to most labs.
Figure 4 Analysis of Slc16a10 expression and diabetes incidence in SB4 mutant mice. (A) Schematic diagrams of Slc16a10 gene and transcript with transposon insertion. Slc16a10 consists of six exons and gives rise to a 5394-bp spliced transcript (top diagram). The transposon insertion occurs within intron 1 and is predicted to prevent splicing between exons 1 and 2, instead generating a truncated transcript [exon 1 - transposon (gray box)] and a transcript initiated by the CAG promoter and GFP gene [transposon (green box) - exon 2] (bottom diagram). (B) Detection of transcript expression was performed by RT-PCR using RNA isolated from tissues of wild-type (wt/wt) and homozygous mutant (sb/sb) littermates. Spliced products for exon 1/exon 2 or exon 1/exon 6 were not detected in any tissues tested from homozygous mutant mice. Expression analysis is representative of three mice per genotype. (C) The cumulative diabetes incidence was determined for age-matched female cohorts monitored concurrently. Pairwise comparisons of diabetes incidence curves were performed using the log-rank test.
Transposon mutagenesis is one of a range of techniques that can be used to identify gene variants that affect development of T1D. Conventionally, "artificial" null alleles for candidate genes have been generated in other strains and bred onto the NOD background by serial backcrossing. This approach, however, results in the null allele being encompassed by a "hitchhiking" congenic interval from the other strain, which may also affect T1D susceptibility (Simpfendorfer et al. 2015;Armstrong et al. 2006;Leiter 2002;Kanagawa et al. 2000). Although NOD ES cell lines are available (Hanna et al. 2009;Nichols et al. 2009;Ohta et al. 2009), there are relatively few reports of their use in targeting genes (Kamanaka et al. 2009;Morgan et al. 2013). Alternatively, random mutagenesis using N-ethyl-N-nitrosourea (ENU) does not require ES cells or prior knowledge about candidate genes. ENU mutagenesis, however, creates tens to hundreds of mutations per mouse. Substantial breeding and sequencing, more than for SB transposon mutagenesis, would be required to segregate and test individual ENU mutations for their effect on T1D susceptibility (Hoyne and Goodnow 2006). Finally, emerging gene-editing techniques using the CRISPR-Cas9 system (Ran et al. 2013) can facilitate hypothesis-driven investigation of known and putative candidate genes in NOD mice (Ran et al. 2013;Li et al. 2014). Modifications of the CRISPR-Cas9 system also allow a range of strategies to be used, including the production of conditional alleles, insertion of reporters, activation or repression of alleles, and specific gene editing allowing the recreation of particular variants affecting immune function (Pelletier et al. 2015). Nonetheless, this approach requires genes to be specifically targeted, whereas SB transposon mutagenesis is random and may identify genes not otherwise considered. Thus, a combination of different approaches for gene discovery and characterization of allelic effects is available and will likely prove useful for understanding the genetic architecture of T1D.
The increasing number of causative "natural" alleles identified in human populations and inbred mouse strains will undoubtedly aid our understanding and prediction of genetic risk for T1D, as well as aid future clinical trials in selecting appropriate patient treatment groups based on their genetic profile (Bluestone et al. 2010;Polychronakos and Li 2011). Nonetheless, many of the identified T1D loci and underlying causative alleles have subtle biological effects that are not therapeutically amenable or are difficult to investigate due to tissue availability. Generating random "artificial" null alleles in the NOD mouse provides an alternative strategy to test and identify both putative and novel genes that: (i) have larger diabetes effects when more grossly disrupted; (ii) represent potential drug targets; and (iii) are less likely to be identified in population-based studies of natural variation. The NOD mouse exhibits a number of immunological abnormalities that are associated with T1D pathogenesis and provides a "sensitized" background to investigate the effect of artificial mutations upon the development of T1D (Driver et al. 2011;Ridgway et al. 2008;Jayasimhan et al. 2014). Our study indicates that SB transposon mutagenesis in NOD mice is feasible and provides a new strategy that combines the advantage of both forward genetics (random mutagenesis) and reverse genetics (gene prioritization) for the potential discovery of new genes that affect T1D pathogenesis. | 7,256.2 | 2015-09-30T00:00:00.000 | [
"Biology"
] |
Controlling self-assembly of diphenylalanine peptides at high pH using heterocyclic capping groups
Using small angle neutron scattering (SANS), it is shown that the existence of pre-assembled structures at high pH for a capped diphenylalanine hydrogel is controlled by the selection of N-terminal heterocyclic capping group, namely indole or carbazole. At high pH, changing from a somewhat hydrophilic indole capping group to a more hydrophobic carbazole capping group results in a shift from a high proportion of monomers to self-assembled fibers or wormlike micelles. The presence of these different self-assembled structures at high pH is confirmed through NMR and circular dichroism spectroscopy, scanning probe microscopy and cryogenic transmission electron microscopy.
Hydrogels composed of short peptides have rapidly gained attention over the past ten years, owing to their high biocompatibility, ease of synthesis and tunability [1][2][3] . Due to their resemblance to the native extracellular matrix, these materials have been used to culture various cell lines, release drug molecules in a controlled manner and promote neural outgrowth [4][5][6][7][8] . While the amino acid sequence used plays a major part in determining the physical and chemical properties of the hydrogel, short peptides often require an aromatic capping group at their N-terminus to induce gelation 9,10 . The hydrophobic interactions associated with this capping group combine with hydrogen bonding of the peptide backbone to drive self-assembly in these systems. A variety of functional capping groups have been employed, from photo-switchable moieties, to fluorophores, to heterocycles [11][12][13][14] . These capping groups play as important a role in the final hydrogel structure as the amino acid sequence selected.
Small angle neutron scattering (SANS) is an extremely useful tool when studying the native environment of a hydrogel. Whereas microscopy techniques such as transmission electron microscopy (TEM) and atomic force microscopy (AFM) are liable to suffer from the presence of sample preparation artefacts, small angle scattering is conducted on the native hydrogel, providing an accurate statistical three dimensional perspective of the actual hydrogel structure. SANS measurements have been extensively used to probe the native structure of peptide-based hydrogels, most often on hydrogel samples which already exhibit a well-defined network [15][16][17][18][19][20][21][22][23][24][25][26] . Less well studied is small angle scattering on transitions within gelators; however there are examples of SANS being used to examine the transition of a hexapeptide from ribbons to fibers 27 and using SANS to look at the evolution of a pyromellitamide organogel over several days 28 . Curiously, there are limited examples of small angle scattering being used to probe the early stages of gelation in a supramolecular peptide hydrogel 15,29 , perhaps owing to difficulties in obtaining usable time resolution.
Herein, we probe the early stages of gelation using small angle neutron scattering and show that the choice of N-terminal heterocyclic capping group plays a vital role in determining the mechanism of self-assembly for the dipeptide hydrogels indole-diphenylalanine 1 and carbazole-diphenylalanine 2, which is likely controlled through the presence of pre-formed aggregates at high pH. From analysis of the SANS scattering profile of 2, it can be seen that minimal changes occur over 4 h, suggesting that fibers or wormlike micelles spontaneously form at high pH. This is supported through cryogenic TEM (cryo-TEM) and AFM imaging. This extends the applicability of findings previously established for naphthalene-capped dipeptide hydrogels 30 , by showing that the self-assembly of N-capped short peptides at high pH is more than likely controlled by the hydrophobicity of the heterocyclic capping group 31 .
Results and Discussion
Indole-diphenylalanine 1 and carbazole-diphenylalanine 2 ( Fig. 1) were synthesized using solid phase peptide synthesis, employing a 2-chlorotritylchloride resin and standard Fmoc chemistry (see Supplementary Information for details on synthesis, preparation of gels, SANS, rheology, spectroscopy and microscopy measurements). Yields and characterization of these dipeptides and their subsequent hydrogels were in agreement with that previously reported 32,33 . These gelators were selected owing to their vastly different hydrogel stiffnesses (3 × 10 5 Pa for 1 vs. 600 Pa for 2) 32,33 , which arises from a very different fiber arrangement within the hydrogel network. Indole-diphenylalanine 1 is composed of thick, bundled fibers (> 100 nm) with minimal cross-linking 32 , whereas carbazole-diphenylalanine 2 consists of highly interwoven 3 nm fibers 33 . We theorized that this difference in gel network was likely due to different self-assembly processes occurring for each gelator, something which we then examined in greater detail.
To trigger gelation in these dipeptides, a pH switching mechanism using glucono-δ-lactone (GdL) was employed 34 . As GdL dissolves rapidly yet hydrolyses to gluconic acid slowly, this allows for a slow, controlled and homogeneous pH drop throughout the sample, compared to the use of mineral acids. For time resolved neutron scattering measurements (Fig. 2), hydrogels were prepared in D 2 O (at pD 9) and quickly transferred to a demountable titanium SANS cell which was then placed in a 20 position sample changer and measurements undertaken. For pre-formed gels, the samples were allowed to age in the demountable cell overnight to reach a fully formed network structure before measurements were performed. All hydrogels were formed at 1% (w/v). SANS measurements were performed over a q range of 0.01-0.4 Å −1 , which corresponds to probing a length scale of 1.6-63 nm. It should be noted that for these measurements, time zero (i.e. t = 0 min, pD = 9) corresponds to the peptide sol as confirmed by rheological measurements (see Supplementary Fig. S1), before the addition of any GdL to the gelator. For indole-diphenylalanine 1 (Fig. 2a), a distinct difference in scattering profile between the sol (t = 0 min, pD = 9) and gel (t = 240 min, pD = 4) is evident. The evolution of this scattering profile over four hours is shown in Fig. 2a, with the majority of the changes occurring in the first hour. Using SasView 35 , the data were fitted to a cylindrical model, which, from existing AFM and TEM measurements on this network 32 , is physically realistic due to the limited cross-linking between fibers. It can be seen in Table 1 that over the first hour, whilst the radius of the fibers decreases slightly, possibly due to a dynamic rearrangement process, the length of the fibers dramatically decreases. This is to be expected, as the growth of an interwoven gel network will result in a greater frequency of intersecting fibers, thus resulting in a lower effective fiber length.
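For readers unfamiliar with the cylindrical model used in this fitting, the sketch below computes the orientation-averaged form factor P(q) of a rigid cylinder with numpy/scipy; it is a minimal illustration of the underlying expression only (scale, contrast and polydispersity omitted), whereas the actual fits were performed with SasView's full cylinder model. The radius and length used here are illustrative values, not the fitted parameters from Table 1.

```python
# Minimal sketch of the orientation-averaged rigid-cylinder form factor P(q) underlying
# the SANS fits (scale and contrast omitted); SasView's "cylinder" model implements the
# same expression with full fitting machinery.
import numpy as np
from scipy.special import j1

def cylinder_pq(q, radius, length, n_alpha=400):
    """Normalized P(q) for a cylinder of given radius and length (same units as 1/q)."""
    alpha = np.linspace(1e-4, np.pi / 2, n_alpha)            # angle between q and cylinder axis
    qa = np.outer(q, np.cos(alpha)) * length / 2.0
    qr = np.outer(q, np.sin(alpha)) * radius
    amp = np.sinc(qa / np.pi) * (2.0 * j1(qr) / qr)          # np.sinc(x) = sin(pi x)/(pi x)
    return np.trapz(amp**2 * np.sin(alpha), alpha, axis=1)   # orientation average, P(q->0) = 1

q = np.linspace(0.01, 0.4, 200)                    # 1/Angstrom, as in the measurements
pq = cylinder_pq(q, radius=30.0, length=1000.0)    # radius/length in Angstrom (illustrative values)
print(pq[:3])
```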
For carbazole-diphenylalanine, 2 (Fig. 2b), there is no such evolution of the scattering pattern over time. There is a slight change between sol and pre-formed gel scattering patterns at low q, suggesting a change in the large-scale network, which is consistent with network formation on a larger scale than could be probed in these measurements. However, when fit to a flexible cylinder model, the majority of the scattering pattern shows no difference, and this is also reflected in the fitted outputs obtained from this sample (Table 1; see also Supplementary Figs S2 and S3). The presence of a peak in the sol and gel scattering patterns at q = 0.19 Å −1 suggests a high fiber monodispersity. The lack of change in the scattering profile from sol to gel over 4 h means that self-assembled structures similar to those comprising the hydrogel network must be present at high pH.
Attempts were made to visualize the evolution of hydrogel networks of 1 and 2 using transmission electron microscopy. Both cryo-TEM and normal TEM using negative staining were performed. Snapshots of hydrogel formation for 1 and 2 were taken each hour over four hours, in order to correlate the timescale of network evolution with that observed through SANS.
For hydrogels of indole-diphenylalanine 1 after one hour, only long, sparse fibers are visible (Fig. 3a). Over the course of four hours, these fibers laterally aggregate, forming bundles ( Fig. 3b and c). The changes seen in these cryo-TEM measurements cannot be directly correlated to the results obtained from SANS, as these > 100 nm fibers fall outside the q range measured. However, taken together, the results indicate a hierarchical self-assembly process occurring across multiple length scales.
For carbazole-diphenylalanine 2, even after only one hour, a fibrous network consisting of long, intersecting fibers is visible (Fig. 3d). This network does not change notably over time (Fig. 3e,f; this can be better visualized using the negative stain TEM images, see Supplementary Fig. S4), with the fiber radius from cryo-TEM corresponding very well to that determined from SANS.
The cryo-TEM data for both gelators, taken together with SANS results, suggests a different mechanism of self-assembly for each gelator. As the amino acid sequence of each gelator is identical, this must be controlled by the choice of capping group. That the more hydrophilic indole capping group does not form self-assembled fibers or wormlike micelles at high pH shows that previous design rules established for naphthalene-capped short peptides 31 appear applicable to heterocycle-capped peptides, with the more hydrophobic carbazole capping group yielding self-assembled structures at high pH.
Previously, the formation of wormlike micelles in the self-assembly of naphthalene-capped diphenylalanine peptides at high pH has been reported 30 . Addition of a variety of salts triggered gelation in these wormlike micelle-based systems 15 . The effect of electron donating and withdrawing substituents at the 6-position of the naphthyl group was investigated 31 , with hydrophobicity shown to be the main driving force behind self-assembly at high pH. The SANS and cryo-TEM data presented in our work seem to support this model in terms of the role of hydrophobicity in controlling self-assembly at high pH. Now that the role of the heterocyclic capping group in self-assembly at high pH has been established, we sought to define the nature of these high pH self-assembled structures. 1 H NMR of the peptides dissolved in basic D 2 O (pD 10) was performed, and shows a sharp set of signals for indole-diphenylalanine, 1 (Fig. 4a), which matches its spectrum in DMSO 33 . This suggests that in basic environments, 1 is well solvated and a high proportion of monomers exists. This may also indicate the presence of self-assembled structures undergoing fast exchange processes. Carbazole-diphenylalanine 2 in basic D 2 O, however, shows no sharp NMR signals aside from the solvent peak (Fig. 4b), with all the remaining proton resonances broadened out over the entire baseline. Lowering the pD to 9 yielded very similar spectra (see Supplementary Fig. S5). This is consistent with assembly of the molecule into a "frozen-in" supramolecular structure exhibiting limited exchange and a consequent effect of immobility on spin-spin relaxation.
Atomic force microscopy (AFM) was performed on the peptide sols, spread coated onto a mica substrate. For 1, at all concentrations that were imaged (0.1 to 1% (w/v)), only amorphous aggregates were observed (Fig. 4c). However for 2, it can clearly be seen that fibers are present at concentrations as low as 0.01% (w/v) (Fig. 4d). Both large and small fibers can be seen, with the small fibers having a diameter of 3-4 nm, which is consistent with AFM previously performed on hydrogels of 2 33 . The presence of larger fibers is most likely due to aggregation during sample drying. These images clearly confirm that self-assembled structures, either fibers or wormlike micelles, are present in sols of 2 but not 1. Shear rate measurements have previously been used in conjunction with imaging to confirm the presence of wormlike micelles for short peptides 30 . Here, shear rate sweeps were performed on 1% (w/v) solutions of 1 and 2, which both show shear thinning behaviour at high shear rates and possess viscosities at 1 s −1 of 2 and 10 mPa•s for 1 and 2, respectively. This implies that 2 may exist at high pH as wormlike micelles. This is less likely for 1, therefore dynamic light scattering (DLS) was conducted on a 1% (w/v) sol of 1 to check for the presence of aggregates. Using DLS, a peak centred at approximately 150 nm was observed, confirming the presence of aggregates. It should be noted however, that the standard deviation of this peak was significant (66 nm), which is either suggestive of polydispersity of the aggregates or dynamic behaviour of the aggregates (i.e. rapid exchange of monomers between the aggregates and the solution phase). Based upon the aforementioned AFM and NMR measurements, it is likely that both of these processes are present for sols of 1 at high pH.
Conclusions
In conclusion, we show that the selection of N-terminal capping group on a short peptide plays a crucial role in the self-assembly mechanism for diphenylalanine-based hydrogels. Indole-diphenylalanine 1 forms dynamic small aggregates in basic water, before self-assembling upon the addition of a pH trigger to give a highly bundled fiber network. Replacing indole with the more hydrophobic carbazole resulted in the formation of fibers or wormlike micelles at high pH. A suite of complementary techniques such as SANS, cryo-TEM, AFM, rheology and 1 H NMR have been used to elucidate these different self-assembly pathways across different timescales. These results are in line with earlier studies on naphthyl-based dipeptides 30 , where self-assembly at high pH appears to be controlled by the hydrophobicity of the capping group for these peptides. This has important implications for the future rational design of responsive short peptide hydrogels bearing different heterocyclic capping groups. Controlling the presence of pre-formed aggregates is especially useful, as in these systems gelation can be triggered by mixing with buffers or cell media, with salts in the media screening charges on these aggregates. This will result in hydrogels in which formation does not have to be triggered by an acid, which is advantageous for potential applications in cell culture and biomedicine. | 3,192 | 2017-03-08T00:00:00.000 | [
"Chemistry",
"Materials Science"
] |
Enhanced Energy Conversion by Turbulence in Collisionless Magnetic Reconnection
Magnetic reconnection and turbulence are two of the most significant mechanisms for energy dissipation in collisionless plasma. The role of turbulence in magnetic reconnection poses an outstanding problem in astrophysics and plasma physics. It is still unclear whether turbulence can modify the reconnection process by enhancing the reconnection rate or energy conversion rate. In this study, utilizing unprecedented high-resolution data obtained from the Magnetospheric Multiscale spacecraft, we provide direct evidence that turbulence plays a vital role in promoting energy conversion during reconnection. We reached this conclusion by comparing magnetotail reconnection events with similar inflow Alfvén speed and plasma β but varying amplitudes of turbulence. The disparity in energy conversion was attributed to the strength of turbulence. Stronger turbulence generates more coherent structures with smaller spatial scales, which are pivotal contributors to energy conversion during reconnection. However, we find that turbulence has negligible impact on particle heating, but it does affect the ion bulk kinetic energy in these two events. These findings significantly advance our understanding of the relationship between turbulence and reconnection in astrophysical plasmas.
Introduction
Magnetic reconnection is a prevalent process for releasing energy in various explosive space and astrophysical phenomena. Magnetic energy is rapidly converted into plasma kinetic and thermal energy during reconnection (Parker 1957; Sonnerup 1984; Schindler et al. 1988; Zhou et al. 2019). Reconnection is not limited to large-scale current sheets, such as the Earth's magnetotail neutral sheet and magnetopause current sheet, but also occurs in small-scale current sheets resulting from chaotic magnetic field lines in turbulence, which is a multiscale energy cascade process that regulates the transfer of energy, mass, and momentum (Retinò et al. 2007; Sahraoui et al. 2010; He et al. 2019; Phan et al. 2018; Xu et al. 2023). Turbulence plays a crucial role in controlling energy dissipation in plasma.
Reconnection and turbulence are intimately correlated, and their interplay/coupling has been the subject of extensive research in recent decades. Reconnection has been discovered in the turbulent bow shock, in the highly turbulent magnetosheath downstream of the bow shock, and in the solar wind, as reported in various studies (Retinò et al. 2007; Yordanova et al. 2016; Vörös et al. 2017; Phan et al. 2018; Wang et al. 2023; Zhong et al. 2022). On the other hand, reconnection also drives turbulence, leading to turbulent reconnection as observed in the Earth's magnetosphere (Eastwood et al. 2009; Huang et al. 2012; Osman et al. 2015; Fu et al. 2017; Zhong et al. 2018; Zhou et al. 2021; Li et al. 2022). One well-known mechanism for generating turbulence in reconnection is the repeated formation of kinetic-scale magnetic flux ropes in filamentary currents and their interactions (Che et al. 2011; Daughton et al. 2011; Wang et al. 2016, 2023). In turbulent reconnection, magnetic field lines are entangled and induce considerable variations in the out-of-plane direction, in contrast with laminar reconnection, where the reconnection layer is structured and quasi-2D. A large number of energetic particles is produced in turbulent reconnection (Ergun et al. 2018, 2020a, 2020b). Recent studies have highlighted that secondary reconnection occurs in the turbulent outflow, contributing substantially to the overall energy release (Lapenta et al. 2015; Zhou et al. 2021; Yi et al. 2023).
The role of turbulence in reconnection is an important scientific question in plasma physics and astrophysics. Lazarian & Vishniac (1999) proposed a turbulent reconnection model (known as the LV99 model) to explain fast reconnection in the presence of turbulence. They suggest that wandering field lines could open the exhaust angle, leading to fast reconnection (Matthaeus & Lamkin 1986; Lazarian & Vishniac 1999; Vishniac et al. 2012; Lazarian et al. 2020). Recent magnetohydrodynamic (MHD) simulations show that external forces driving turbulence facilitate the conversion of magnetic energy into plasma kinetic energy, thereby increasing the reconnection rate (Sun et al. 2022), while a 2D particle-in-cell (PIC) simulation of reconnection with an imposed turbulent force found that turbulence enhances the energy conversion rate but does not obviously increase the reconnection rate (Lu et al. 2023). Conversely, 3D PIC simulations demonstrate that the reconnection rate and total energy conversion rate in turbulent reconnection (turbulence driven by reconnection) closely resemble those in laminar reconnection from 2D simulations (Daughton et al. 2014). Until now, the role of turbulence in reconnection remains a controversial and open question. This paper provides direct observational evidence that turbulence enhances energy conversion in reconnection by using Magnetospheric Multiscale (MMS) spacecraft observations in the terrestrial magnetotail.
This study utilized various instruments on board MMS. The Flux Gate Magnetometer instrument and the Electric Double Probes provide the magnetic field and electric field data with time resolutions of 128 and 8192 samples per second in burst mode, respectively (Ergun et al. 2016; Lindqvist et al. 2016; Russell et al. 2016; Torbert et al. 2016); the Fast Plasma Investigation (FPI) supplies the plasma moments with a time resolution of 0.03 s for electrons and 0.15 s for ions in burst mode (Pollock et al. 2016); the Fly's Eye Electron Proton Spectrometer and the Energetic Ion Spectrometer were used to measure the energetic electron and ion data, respectively, with time resolutions of 0.3 and 20 s (Blake et al. 2016; Mauk et al. 2016).
Observations
MMS recorded nearly 30 magnetotail reconnection events between 2017 and 2020, encompassing a broad range of inflow Alfvén speed ∈ [400, 3000] km s −1 and plasma beta ∈ [0.01, 3]. Since energy conversion and particle acceleration during reconnection depend on inflow parameters such as the inflow Alfvén speed V A,in and plasma beta β in (Phan et al. 2013, 2014; Lu et al. 2019; Yi et al. 2019), we single out two turbulent reconnection events that have similar V A,in , β in and guide field strength but distinct magnitudes of the turbulent field for comparison. This procedure helps to highlight the effects of turbulence on reconnection. In the magnetotail, the ambient level of turbulence is relatively weak compared to that in the solar wind and magnetosheath. Hence, turbulent reconnection in this paper means that reconnection is primarily initiated in a laminar current sheet, then drives turbulence and evolves into a turbulent state.
Figure 1 provides an overview of the MMS1 observations from 15:40 UT to 15:58 UT on 2017 July 6. This event was observed by MMS near [−24.1, 1.3, 4.5] R E (Earth radii) in the magnetotail. MMS observed a clear reversal of ion bulk flow from tailward to earthward, accompanied by a change of the magnetic field B z from negative to positive, indicating that the spacecraft encountered a tailward retreating X-line. Although the magnetic field B x was mostly negative during this period, it frequently approached zero. The plasma β is mostly greater than 0.5 between 15:42 and 15:52 UT, suggesting that MMS was inside the magnetotail plasma sheet. Additionally, Hall electromagnetic fields, typical evidence for the ion diffusion region, were detected around 15:46 UT (Rogers et al. 2019). Figure 1(f) shows that energetic electrons and ions with energies up to 100 keV were observed in this reconnection region. A few energetic electron flux bursts were observed in the tailward flow, while the flux exhibits consistent enhancement in the earthward flow. The inflow region is identified between 15:51:10 and 15:51:18 UT, when the outflow is absent and a large, stable |B X |, together with notably decreased density and electron flux, was observed by MMS (shown in Figure 1).
As seen in Figure 1(a), the magnetic field exhibits considerable fluctuations. The power spectrum of the magnetic field follows a power-law spectrum, with a slope close to −1.67 in the inertial range, from 0.01 to 0.1 Hz, which is consistent with previously reported power-law spectra in plasma turbulence (e.g., Leamon et al. 1998; Sahraoui et al. 2009, 2013; Huang et al. 2012, 2014, 2017; Hadid et al. 2015; Breuillard et al. 2016; Ergun et al. 2018; Zhou et al. 2021). Figure 1(b) displays the disturbed magnetic field dB, which is the high-pass filtered magnetic field above 0.05 Hz. Here, 〈|dB|〉 is used to quantify the fluctuation level of the observed turbulence, calculated as 〈|dB|〉 = 〈[∫ (B PSD,x + B PSD,y + B PSD,z) df]^1/2〉, where B PSD,x,y,z is the magnetic field power spectral density (PSD) with a frequency greater than 0.05 Hz, df is the frequency bandwidth, and 〈〉 represents the time average. In this event, 〈|dB|〉 is approximately 2.2 nT, and the relative amplitude of the fluctuation 〈|dB|/|B 0 |〉 is about 0.5, where B 0 is the low-pass filtered magnetic field below 0.05 Hz (shown in Table 1). Here, we employ high-pass and low-pass filtering on the magnetic field to extract disturbances generated by turbulent reconnection and to retain the reconnected laminar current sheet without interference from disturbances. We also calculated the root mean square of dB and B 0 to quantify the amplitude of turbulence. We find that the calculated turbulence levels differ among methods. Nevertheless, a consistent observation is that the ratio of turbulence levels between a strong turbulent reconnection (STR) and a weak turbulent reconnection (WTR) consistently hovers around 2 to 3.
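A minimal numerical sketch of this fluctuation-level estimate is given below: the PSD of each magnetic field component is integrated above the 0.05 Hz cutoff and the square root of the sum is taken. The synthetic field values, sampling rate, and Welch segment length are illustrative stand-ins for the MMS fluxgate magnetometer data, not the actual event parameters.

```python
# Sketch of the turbulence-amplitude estimate described above: integrate the magnetic
# PSD above 0.05 Hz per component and take the square root of the sum. B is an (N, 3)
# array of B_x, B_y, B_z sampled at fs Hz (synthetic values stand in for MMS data).
import numpy as np
from scipy.signal import welch

def fluctuation_level(B, fs, f_cut=0.05):
    """Return |dB| = sqrt(sum over components of the integrated PSD above f_cut)."""
    total = 0.0
    for comp in range(B.shape[1]):
        f, psd = welch(B[:, comp], fs=fs, nperseg=4096)
        band = f > f_cut
        total += np.trapz(psd[band], f[band])
    return np.sqrt(total)

rng = np.random.default_rng(0)
fs = 128.0                                           # burst-mode fluxgate rate, samples/s
t = np.arange(0, 600, 1 / fs)
B = 10.0 + rng.normal(scale=2.0, size=(t.size, 3))   # synthetic field, nT
print(f"|dB| ≈ {fluctuation_level(B, fs):.2f} nT")
```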
MMS recorded another reconnection event on 2017 July 12 (Figures 2(a)-(j)), during which two consecutive flow reversals from tailward to earthward were observed. Similar to the aforementioned event, the flow reversals coincided with the polarity changes of B z from negative to positive. This suggests the possibility of either passing two sequentially tailward retreating X-lines or the oscillation of a single X-line along the Sun-Earth line in the magnetotail (Zhou et al. 2019). The inflow region is between 14:22:30 and 14:25:08 UT. Figure 2(k) shows that the magnetic field power spectrum presents a slope of −1.76 in the inertial range, close to the Kolmogorov index. The fluctuation level 〈|dB|〉 is 0.65 nT and 〈|dB|/|B 0 |〉 is 0.14. These observations confirm that both events are turbulent reconnection events, with varying degrees of turbulence strength. Energetic electrons and ions with energies up to 100 keV were also observed in this event, similar to the previous one.
It is worth noting that 〈|dB|〉 of 2.2 nT ranks 2nd and 〈|dB|〉 of 0.65 nT ranks 24th among the 29 surveyed reconnection events. As a result, we refer to the first reconnection event as an STR and the second one as a WTR henceforth. The fluctuating magnetic energy in the STR is almost 1 order of magnitude larger than that in the WTR. Moreover, we have also examined 〈|dB|〉 and the power-law index of the magnetic power spectrum in the central plasma sheet before the onset of reconnection. We find that 〈|dB|〉 in the prereconnection plasma sheet is small (less than 0.25 nT in both events; see Figure A1 of the Appendix), and the magnetic power spectrum in the plasma sheet preceding reconnection exhibits a spectral index of −3.4 and −2.6 within the inertial range in the STR and WTR, respectively (see Figure A2 of the Appendix). The different slopes between the prereconnection plasma sheet and the reconnection outflow region, along with a quiet plasma sheet prior to reconnection, indicate that the turbulence observed during reconnection is not a result of preexisting turbulence in the plasma sheet but rather is driven by reconnection.
Energy Conversion
First, we investigate the impact of turbulence on the energy conversion rate (J • E) during reconnection. The electric current density J is estimated at the tetrahedron barycenter using the curlometer technique based on the four spacecraft's measured magnetic fields (Dunlop et al. 2002). The electric field E has been averaged over the four spacecraft. Figure 3(a) compares the probability density distribution (PDF) of J • E in the two events. It can be observed that the PDF of J • E in both events spans both signs, implying that energy exchange in turbulent reconnection goes both ways. The PDF of J • E in the STR is much broader than that in the WTR, which indicates that the local energy exchange in the STR is greater than that in the WTR. Furthermore, the average J • E in the STR is nearly 10 times greater than that in the WTR (shown in Table 1). Note that 〈J • E〉 varies depending on the chosen time intervals for averaging; however, there is always a five- to tenfold difference between the two values if the selected interval covers the major flow reversal. Figure 3(b) presents the PDF of J • E normalized using the inflow parameters, where n in , V A,in , and B in are the inflow plasma density, ion Alfvén speed, and magnetic field, respectively. The normalized J • E is also larger in the STR, although the PDF is less broad compared to that in Figure 3(a) because B 0 is smaller in the WTR than in the STR. Figure 3(c) exhibits the PDF of J • E/n normalized by n in , which represents the energy gain of a particle pair (one ion and one electron) per unit time. It is noticeable that the PDF of J • E/n has a heavier tail and wider distribution in the STR. The average J • E/n in the STR is still greater than that in the WTR. Thus, it can be safely concluded that turbulence promotes energy exchange between electromagnetic fields and plasma and also the net energy conversion from fields to plasma.
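The curlometer estimate of J used here can be sketched with tetrahedron reciprocal vectors, as below; the spacecraft positions, magnetic fields, and electric fields in the example are placeholder numbers chosen only to make the sketch runnable, not actual MMS measurements, and the barycentric refinements of the full technique are omitted.

```python
# Sketch of a curlometer-style estimate of J and J·E from four-point measurements,
# using tetrahedron reciprocal vectors; the numbers below are illustrative placeholders.
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

def reciprocal_vectors(r):
    """Reciprocal vectors k_a (1/m) for spacecraft positions r of shape (4, 3), in metres."""
    k = np.empty_like(r)
    for a in range(4):
        b, c, d = [i for i in range(4) if i != a]
        cross = np.cross(r[c] - r[b], r[d] - r[b])
        k[a] = cross / np.dot(r[a] - r[b], cross)
    return k

def current_density(r_km, B_nT):
    """Curlometer current density (A/m^2): J = (curl B)/mu0, curl B ~ sum_a k_a x B_a."""
    r = np.asarray(r_km, dtype=float) * 1e3     # km -> m
    B = np.asarray(B_nT, dtype=float) * 1e-9    # nT -> T
    k = reciprocal_vectors(r)
    curlB = np.sum(np.cross(k, B), axis=0)
    return curlB / MU0

r_km = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]])   # tetrahedron, km
B_nT = np.array([[5, 0, 0], [5, 0, 2], [5, 2, 0], [7, 0, 0]])      # B at each spacecraft
E_mVm = np.array([[0.5, -0.2, 0.1]] * 4)                           # E at each spacecraft

J = current_density(r_km, B_nT)              # A/m^2
E = np.mean(E_mVm, axis=0) * 1e-3            # four-spacecraft average, V/m
print("J (nA/m^2):", J * 1e9, " J.E (nW/m^3):", np.dot(J, E) * 1e9)
```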
To gain a better understanding of the factors contributing to the larger energy exchange in the STR compared to the WTR, we express J • E as |J||E| cos θ, where θ is the angle between J and E, and we examine which variable is responsible for the evident difference in J • E. Figure 4(a) illustrates that the PDFs of θ maximize at approximately 90° and gradually decrease toward 0° and 180°, implying that the electric field E tends to be perpendicular to the electric current J and that the difference of J • E between the two events is not caused by the angle between J and E. As illustrated in Figures 4(b) and (c), regardless of STR or WTR, large J • E mainly corresponds to large current density and electric field. Both the current density |J| and the electric field |E| are greater in the STR than in the WTR.
Figures 5(a) and (b) provide a comparison of J ⊥ • E ⊥ and J || • E || in these two events. The broader PDF of J ⊥ • E ⊥ compared to J || • E || suggests that energy exchange primarily occurs through the perpendicular channel rather than the parallel channel. The generalized Ohm's law allows J • E to be decomposed into J • E = J • E MHD + J • E ni , where E MHD = −V MHD × B is the MHD electric field, V MHD is the plasma bulk velocity (the mass-weighted average of the ion and electron bulk velocities), and E ni is the nonideal electric field, including the electric field contributed by the divergence of the electron pressure tensor and the electron inertial term. Note that the Hall term does not contribute to energy conversion because J • (J × B) = 0.
To evaluate E MHD and E ni accurately, it is important that the ion/electron bulk velocity is measured as accurately as possible. Here we compare the electric current derived by the curlometer technique (denoted as J c ) and that directly calculated from the plasma density and velocity (denoted as J p ) to validate the accuracy of the ion/electron bulk velocity. We find that the correlation between J c and J p is good for the STR event, with a correlation coefficient larger than 0.8, while it is poor in the other event, with a correlation coefficient less than 0.5, which implies that the particle data of the WTR event are not very accurate; hence, we compared J • E MHD and J • E ni for the STR only. Figure 5(c) compares the contribution of the MHD term and the nonideal term to J • E for the STR. The analysis indicates that the nonideal term is the main contributor to J • E.
Coherent Structures and Current Filaments
Numerous studies have demonstrated the significance of kinetic-scale intermittent structures and current filaments in energy conversion during turbulent reconnection (Ergun et al. 2020a; Huang et al. 2022; Zhou et al. 2021). Therefore, we investigate the relationship between the coherent structure (CS)/current filament (CF) and energy conversion in this study. Here, the multispacecraft partial variance of increments (PVI) method is employed to identify the CSs (Greco et al. 2008, 2009; Chasapis et al. 2018). The PVI index for each spacecraft pair is calculated as PVI ij (t) = |B i (t) − B j (t)| / 〈|B i (t) − B j (t)|²〉^1/2, where B(t) is a time series of the magnetic field, and the pair i, j = 1, 2, 3, 4 indicates the different MMS spacecraft. For each time instant, the PVI index is determined as the average value of PVI ij over the six pairs of the different spacecraft (Chasapis et al. 2018). To identify a CS, the peak value of the PVI index should exceed the threshold value 〈PVI〉 + σ PVI , where σ PVI is the standard deviation of the PVI index over the whole interval (Greco et al. 2008). The boundary of each CS is defined by the rms of the PVI index (Califano et al. 2020; Man et al. 2022).
Note to Table 1: β in = n e K B (T i + T e )/P B is the inflow plasma β, where P B is the magnetic pressure; n e is the plasma density; T i and T e are the ion and electron temperature, respectively; and K B is the Boltzmann constant. The energetic electron rate (EER) is defined as the ratio of the differential energy flux between the energetic electrons (47-550 keV) and thermal electrons (50 eV-25 keV), while the energetic ion rate (EIR) is defined as the ratio between the energetic ion (45-200 keV) flux and thermal ions (250 eV-25 keV). B g represents the guide field strength.
Figure 2. The same format as Figure 1, except for the turbulent reconnection event observed on 2017 July 12 (the WTR event). Due to incomplete burst mode data coverage for the entire period in this event, we used survey mode data to conduct the spectral analysis, with a maximum frequency of 4 Hz, as shown in (k).
Finally, we identified 191 CSs in the STR and 116 CSs in the WTR based on the CS definition. The corresponding occurrence rates of CSs in the STR and WTR events are approximately 19.1 and 9.3 minute −1 . In addition, we evaluated the number of CFs. A CF is required to have a peak current density larger than 2J rms , where J rms is the rms of the current density in the entire time interval. The boundary of the CF is determined by the value of J rms (Califano et al. 2020; Man et al. 2022). Our findings reveal 104 CFs in the STR and 58 CFs in the WTR (see Table 2). The corresponding occurrence rates are approximately 10.4 minute −1 and 4.6 minute −1 , respectively. These results suggest that stronger turbulence induces more intermittent structures in reconnection, and the ratio of the occurrence rate of CFs between the two events is consistent with that of the CSs.
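A minimal sketch of the multispacecraft PVI calculation and the CS threshold described above is given below; the input is a dictionary of time-aligned (N, 3) magnetic-field arrays, one per spacecraft, and the synthetic random data stand in for the MMS measurements (structure boundaries and peak-finding details of the full procedure are omitted).

```python
# Sketch of the multispacecraft PVI index and coherent-structure flagging described in
# the text; synthetic data stand in for the four time-aligned MMS field measurements.
import numpy as np
from itertools import combinations

def pvi_index(B):
    """Average PVI over the six spacecraft pairs: PVI_ij = |B_i - B_j| / sqrt(<|B_i - B_j|^2>)."""
    pvi_pairs = []
    for i, j in combinations(sorted(B), 2):
        dB = np.linalg.norm(B[i] - B[j], axis=1)
        pvi_pairs.append(dB / np.sqrt(np.mean(dB**2)))
    return np.mean(pvi_pairs, axis=0)

def coherent_structure_mask(pvi):
    """Flag samples exceeding <PVI> + sigma_PVI, the structure threshold used in the text."""
    return pvi > pvi.mean() + pvi.std()

rng = np.random.default_rng(1)
N = 5000
B = {f"mms{i}": 10 + rng.normal(scale=1.5, size=(N, 3)) for i in range(1, 5)}
pvi = pvi_index(B)
print(f"{coherent_structure_mask(pvi).mean():.1%} of samples exceed the CS threshold")
```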
Figure 6 displays the proportion of data points within the CSs conditioned on the binned (J • E)/(J • E) rms in the two events. Approximately 12% and 28% of the total data points in the WTR and STR, respectively, are contained in the CSs. Among data points with J • E > 4(J • E) rms , about 72% (38%) of them are located in the CSs in the STR (WTR). About 78% (64%) of the data points with J • E < −4(J • E) rms are located in the CSs in the STR (WTR). It is noteworthy that there is an obvious asymmetry in the profile of the WTR, with the proportion corresponding to positive J • E being remarkably lower than that corresponding to negative J • E. The reason behind this asymmetry remains unclear and warrants further investigation. Since the PVI method may not detect all the CSs in the reconnection region, such as vortices (Sundkvist et al. 2005; Roberts et al. 2016; Hou et al. 2021), the contribution of CSs to the intense J • E may be even higher than estimated here.
Spatial Scale of Magnetic Fields and CF Thickness
We performed further analysis on the variation of the spatial scale of the magnetic field, L_dB, as a function of J · E. The spatial scale of the magnetic field is determined using the expression L_dB = |B|/|∇B|, where |∇B| denotes the norm of the Jacobian matrix of the magnetic field, i.e., |∇B| = [Σ_ij (∂B_i/∂x_j)²]^(1/2). The elements of the Jacobian matrix are estimated within the MMS tetrahedron by assuming a linear magnetic gradient within the tetrahedron (Shen et al. 2003).
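A sketch of the linear-gradient estimate underlying L_dB, assuming simultaneous field vectors and positions from the four spacecraft; we solve the linear model dB_k ≈ dr_k · G by least squares, which is one standard way to realize the tetrahedron estimate (the exact procedure of Shen et al. 2003 may differ in detail).

```python
import numpy as np

def magnetic_scale(B, r):
    """L_dB = |B| / |grad B| from four-point measurements.

    B: (4, 3) field vectors, r: (4, 3) spacecraft positions.
    Assumes B varies linearly across the tetrahedron.
    """
    r_c = r.mean(axis=0)
    B_c = B.mean(axis=0)
    dr = r - r_c                      # (4, 3) positions about barycenter
    dB = B - B_c                      # (4, 3) field differences
    # Least-squares fit of dB ~ dr @ M, where M is the 3x3 gradient matrix
    M, *_ = np.linalg.lstsq(dr, dB, rcond=None)
    grad_norm = np.linalg.norm(M)     # Frobenius norm of the Jacobian
    return np.linalg.norm(B_c) / grad_norm
```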
Figure 7(a) shows L_dB conditioned on (|J · E|)/(J · E)_rms. We find that the average L_dB is typically smaller in the STR than in the WTR, and the average L_dB is below 1 d_i,in (d_i,in is the inflow ion inertial length) in both events. Moreover, the average L_dB gradually decreases as J · E increases in both events. Figure 7(b) compares the PDF of L_dB within the CSs between the two events. The PDF of L_dB in the STR shifts toward smaller scales compared to that in the WTR. Additionally, the L_dB corresponding to the peak PDF in the STR is also smaller than that in the WTR, which is in accordance with Figure 7(a). These findings indicate that L_dB within the CSs in the STR is typically smaller than that in the WTR. The average L_dB within the CSs is 0.4 d_i,in and 0.73 d_i,in in the STR and WTR, respectively.
Figure 7(c) presents the proportion of CFs with different CF thicknesses. The CF thickness is determined by multiplying the CF normal speed by the duration. To estimate the normal speed of the CS, a timing analysis on the magnetic field is performed (Russell et al. 1983). To obtain reliable results, it is required that the cross-correlation coefficient of the magnetic field among the four spacecraft be greater than 0.9 (refer to Man et al. 2022 for details). The CF thickness has been normalized by d_i,in. We can see that most CFs have thicknesses around 0-0.6 d_i,in. However, the proportion of CFs with thickness greater than 1.0 d_i,in in the WTR is higher than that in the STR, implying that there are more sub-ion-scale CFs in the STR than in the WTR. The average thickness of the CFs in the WTR is 1 d_i,in, which is larger than the average thickness in the STR, approximately 0.5 d_i,in (refer to Table 2). This result agrees well with the distribution of the spatial scale L_dB shown in Figures 7(a) and (b).
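A sketch of the timing analysis used to estimate the boundary normal speed (and hence thickness = speed × duration); lag estimation via cross-correlation and the least-squares solve for the slowness vector are standard, but the array names are our own, and the paper's correlation-quality cut (>0.9) is noted rather than implemented.

```python
import numpy as np

def timing_normal_speed(B_series, positions, dt):
    """Estimate boundary normal speed from four-spacecraft timing.

    B_series: list of four 1-D arrays (e.g., one field component),
    positions: (4, 3) spacecraft positions, dt: sample spacing [s].
    Solves (r_k - r_1) . m = tau_k for the slowness vector m = n / V.
    """
    ref = B_series[0] - np.mean(B_series[0])
    taus, drs = [], []
    for k in range(1, 4):
        sig = B_series[k] - np.mean(B_series[k])
        corr = np.correlate(sig, ref, mode="full")
        lag = np.argmax(corr) - (len(ref) - 1)    # delay in samples
        # (the paper additionally requires cross-correlation > 0.9; omitted)
        taus.append(lag * dt)
        drs.append(positions[k] - positions[0])
    m, *_ = np.linalg.lstsq(np.array(drs), np.array(taus), rcond=None)
    speed = 1.0 / np.linalg.norm(m)               # km/s if r in km, t in s
    normal = m * speed
    return speed, normal

# thickness = speed * crossing_duration, then normalized by d_i,in
```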
Discussions and Conclusions
Table 1 presents a comparison of the energetic ions and electrons produced in the STR and WTR. It shows that the STR generated slightly more energetic ions and electrons than the WTR. Specifically, the maximum energetic electron rate (EER), defined as the ratio between the differential energy flux of energetic electrons (>47 keV) and that of thermal electrons (50 eV-25 keV), is approximately twice as high in the STR as in the WTR. Similarly, the maximum energetic electron flux is also greater in the STR. Additionally, both the energetic ion flux and the energetic ion rate (EIR) are about 2 times higher in the STR than in the WTR. However, the difference in the energetic particle fluxes and EER/EIR between the two events is minor. Hence, it can be concluded that the strength of turbulence does not significantly affect the production of energetic particles in reconnection.
In addition, there was no prominent difference in the degree of ion/electron bulk heating between the two events. The average electron temperature in the outflow region, ⟨T_e,out⟩, is ∼1515 eV in the STR and ∼1910 eV in the WTR. Here, the outflow region is defined as the region where the outflow speed is greater than 0.2 V_A,in. The average electron temperature in the inflow region, ⟨T_e,in⟩, is ∼970 eV (1250 eV) in the STR (WTR). Hence, the average electron heating was ΔT_e ∼ 546 and 666 eV in the two reconnection events, respectively. Notably, these values agree remarkably well with the empirical formula ΔT_e ∼ 0.017 m_i V_A,in² ∼ 570 eV proposed in Phan et al. (2013). In terms of ion temperatures, the average inflow ion temperature ⟨T_i,in⟩ is ∼1942 eV (∼1980 eV) in the STR (WTR), and the increase of the ion temperature in the outflow region, ΔT_i, is ∼2356 eV (∼2414 eV) in the STR (WTR). The observed ion heating ΔT_i is about 0.07 m_i V_A,in², which is half of the empirical value of 0.13 m_i V_A,in² reported by Phan et al. (2014), namely, ΔT_i/T_i,in = 0.14/β_i,in. A caveat needs to be given that the ion temperature may be underestimated in the outflow region since high-energy ions are beyond the energy coverage of the FPI instrument used in this study to measure the ion temperature.
Note (Table 2). The second and third columns represent the number of CSs and their occurrence rate; the fourth and fifth columns represent the number of CFs and their occurrence rate. The sixth column represents the average CF thickness. Here, the CF thickness is normalized by d_i,in.
Figure 6. The proportion of data points within CSs conditioned on the binned J · E/(J · E)_rms, where (J · E)_rms is the rms of J · E in the corresponding event. (J · E)_rms is about 0.01 nW m−3 and 0.07 nW m−3 in the WTR and STR, respectively.
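As a quick consistency check of the Phan et al. (2013) scaling quoted above, the following arithmetic (our own illustration, not from the paper) recovers the inflow Alfvén speed implied by ΔT_e ∼ 570 eV.

```python
import numpy as np

M_P = 1.6726e-27        # proton mass [kg]
EV = 1.6022e-19         # 1 eV [J]

dTe = 570 * EV                          # empirical electron heating
V_A_in = np.sqrt(dTe / (0.017 * M_P))   # from dTe = 0.017 * m_i * V_A^2
print(f"V_A,in ~ {V_A_in/1e3:.0f} km/s")   # ~1800 km/s, a plausible
                                            # magnetotail inflow Alfven speed
```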
To gain insights into the partition of the released magnetic energy in turbulent reconnection, we roughly assessed the energy balance for the two events. Assuming that the converted magnetic energy during reconnection is totally absorbed by the plasma from the inflow region, we obtain the following relation based on the principle of energy conservation:

⟨J · E⟩ L_x L_y L_z δt = n_in V_z L_x L_y δt (ΔW_i + ΔW_e),   (2)

where ⟨J · E⟩ is the averaged energy conversion rate (see Table 1); L_x, L_y, and L_z represent the lengths of the effective reconnection region (ERR), which generally refers to the reconnection region where effective energy conversion takes place, in the x-, y-, and z-directions, respectively; V_z denotes the inflow speed; n_in is the inflow plasma number density; and ΔW_i and ΔW_e are the average energy gains per ion and electron, respectively. The estimated increments of the bulk kinetic energy and the thermal energy (1.5 k_B n_in ⟨ΔT_i,e⟩) for the inflow plasma are listed in Table 3. By substituting a typical inflow speed of V_z ∼ 0.1 V_A,in into Equation (2), we find that L_z ∼ 1 d_i,in in the STR and L_z ∼ 5 d_i,in in the WTR. The estimated thicknesses of the ERR are reasonable (e.g., Nakamura et al. 2006). Notably, the relatively thicker ERR in the WTR is also consistent with our analysis of L_dB and the CF thickness. It becomes evident from Table 3 that the enhanced magnetic energy conversion in the STR is mainly manifested as increased ion bulk kinetic energy compared to that in the WTR. The increased ion bulk kinetic energy in the STR is probably due to the stronger magnetic tension force in the STR than in the WTR. Magnetic tension depends on the magnetic field intensity and gradients, which are larger in the STR. Moreover, the role of magnetic tension in reconnection is to produce a net acceleration of the plasma, which mainly involves the ions. It is also possible that what MMS observed is premature turbulence in the fast flows. With the development of turbulent reconnection, the instabilities and turbulence can eventually transform bulk kinetic energy into thermal energy.
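A small numeric sketch of solving Equation (2) for L_z; the input values below are placeholders of roughly the right order of magnitude (the actual event values are in Tables 1 and 3), so only the procedure, not the numbers, should be taken from this.

```python
# L_z = n_in * V_z * (dW_i + dW_e) / <J.E>   (L_x, L_y and dt cancel)
EV = 1.6022e-19                     # 1 eV in J

n_in = 0.1e6                        # inflow density [m^-3], placeholder
V_z = 0.1 * 1.8e6                   # inflow speed ~ 0.1 V_A,in [m/s]
dW = 5e3 * EV                       # energy gain per ion+electron pair [J]
JE = 0.07e-9                        # averaged conversion rate [W/m^3]

L_z = n_in * V_z * dW / JE          # ERR thickness [m]
d_i = 7e5                           # inflow ion inertial length [m], placeholder
print(f"L_z ~ {L_z/d_i:.1f} d_i,in")   # order-unity, as found in the paper
```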
Magnetic reconnection not only converts magnetic energy to bulk kinetic energy, which may then be further converted to thermal energy, but also directly transfers magnetic energy to thermal energy. In turbulent reconnection, bulk kinetic energy cascades from large scales to small scales, ultimately dissipating at the smallest scales (the dissipation range) in the form of disordered thermal energy. Various mechanisms may account for the dissipation, such as wave-particle interactions (Khotyaintsev et al. 2019) or secondary reconnection (Zhou et al. 2021). Nevertheless, reconnection persistently drives bulk flow, the energy of which is partially converted to thermal energy. Thus, an equilibrium might be achieved between the bulk kinetic energy and thermal energy. The partition of magnetic energy between plasma bulk kinetic energy and thermal energy during reconnection is still an outstanding question and remains a subject for future research.
In the following, we provide some hints as to why J · E is typically larger in the STR than in the WTR. Figure 4 demonstrates that intense J · E generally occurs in regions with large |J| and |E|. We have shown that the CF thickness and the local scale of the magnetic fields are larger in the WTR than in the STR. It is reasonable to expect that a smaller spatial scale corresponds to a larger current density, as J ∼ B/L according to Ampere's law, where L is the spatial scale of the magnetic fields. Furthermore, it is shown that J · E_ni dominates over J · E_MHD. Since both the divergence of the electron pressure tensor and the electron inertia term, each of which varies inversely with spatial scale, contribute to E_ni, E_ni is also greater in regions with smaller L. On the other hand, it is possible that the turbulence produced in the STR can break thicker CFs into multiple thinner CFs. While these thinner CFs may individually occupy a smaller volume compared with the thicker CFs, the higher number of CFs in the STR may ensure that the total volume occupied by the CFs in the STR is not significantly smaller than that in the WTR. Therefore, the integrated J · E is also larger in the STR.
In summary, we have analyzed the effect of turbulence amplitude on energy conversion in magnetotail reconnections with similar inflow conditions. The most prominent result is that the reconnection with stronger turbulence exhibits a higher energy exchange/conversion rate than the reconnection with weaker turbulence, i.e., turbulence tends to enhance the energy conversion during reconnection. Stronger turbulence generates more coherent structures with smaller spatial scales, which contribute significantly to energy conversion during reconnection. Furthermore, turbulence impacts the ion bulk kinetic energy and exerts a minor positive influence on the generation of energetic particles. This study contributes to a more comprehensive understanding of the role of reconnection-driven turbulence in reconnection. These results hold significant potential for advancing our comprehension of the nonlinear interplay between reconnection and turbulence in a broader astrophysical context.
It is worth noting that the turbulence strength varies under similar inflow conditions. To understand why the two reconnection events produce turbulence with different amplitudes, we performed a preliminary statistical analysis based on the 30 magnetotail reconnection events observed by MMS between 2017 and 2020. We find a clear correlation of the magnetic field disturbance in the outflow (dB_t) with V_A,in and β_in. The inflow disturbance (dB_in) also exhibits a positive correlation with dB_t (not shown here). For instance, the inflow disturbance (dB_in) in the WTR and STR is approximately 0.2 and 1.6 nT, respectively. Therefore, we speculate that reconnections with similar inflow conditions producing distinct strengths of turbulence may be caused by different fluctuation levels in the inflow region. Nevertheless, the correlation between dB_in and dB_t does not necessarily imply a causal relationship.
Note (Table 3). The second and third columns represent the average electron and ion temperature enhancement; the fourth and fifth columns represent the average electron and ion bulk kinetic energy in the outflow region. The sixth and seventh columns represent the increments of the electron and ion thermal energy. n_in is the plasma density in the inflow region.
Fluctuations in the outflow region may leak into the inflow region, stirring it (e.g., Lapenta 2008). These inflow fluctuations, in turn, impact the reconnection, creating a cyclic process that ultimately contributes to the formation of different turbulence levels. The effect of inflow parameters on reconnection-driven turbulence falls beyond the scope of this paper and warrants further investigation.
Figure 1. Observations of a turbulent reconnection by MMS on 2017 July 6 (referred to as the STR event). From top to bottom are (a) three components of the magnetic field and the magnetic field strength; (b) high-pass filtered magnetic field above 0.05 Hz; (c) electron number density; (d) three components of the ion bulk velocity and (e) the electric field; (f) energetic and (g) thermal electron differential energy flux; (h) energetic and (i) thermal ion differential energy flux; (j) plasma β; and (k) trace of the power spectral density (PSD) of the magnetic field. The magenta straight lines represent linear fits of the PSD. The black and green vertical dashed lines indicate the ion cyclotron frequency (f_ci) and the lower hybrid frequency (f_lh), respectively. Due to the small spacing among the four spacecraft, the observations from all four spacecraft are similar; therefore, only MMS2 observations are presented here. The red vertical line represents the selected inflow region.
Figure 3. PDFs of (a) the energy exchange rate (J · E); (b) (J · E) normalized by q n_in V_A,in² B_in; and (c) (J · E)/n. The red and blue curves represent the STR event (2017 July 6) and the WTR event (2017 July 12), respectively.
Figure 4. PDFs of (a) the angle θ between the electric current J and the electric field E. (b) and (c) show scatterplots of |J · E| as a function of the current intensity |J| and the electric field intensity |E| in the STR and WTR, respectively. Both the color and the size of the dots represent the value of |J · E|, with larger sizes representing higher values.
Figure 7. (a) Average value of L_dB conditioned on (|J · E|)/(J · E)_rms in the two turbulent reconnection events. The vertical bars in panel (a) represent the standard error of the mean; (b) PDF of L_dB within CSs; (c) PDF of the normalized CF thickness. Here, L_dB and the CF thickness are normalized by d_i,in.
The left-hand side of Equation (2) represents the total released magnetic energy in the ERR during the time interval δt, while the right-hand side of Equation (2) represents the total energy gain of the particles flowing into the ERR during δt. The estimated increments of the bulk kinetic energy and thermal energy for the inflow plasma are listed in Table 3.
Figure A1. Overview of the STR event. From top to bottom are (a) three components of the magnetic field and the magnetic field strength; (b) the magnetic field strength; (c) the electron number density; (d) three components of the ion bulk velocity and (e) the electron bulk velocity; and (f) plasma β.
Table 1
Key Parameters Describing the Two Reconnection Events Observed by MMS on 2017 July 6 and 2017 July 12. The parameter ⟨dB⟩ refers to the averaged amplitude of the high-pass filtered magnetic field above 0.05 Hz, representing the magnetic field fluctuation. The relative amplitude of the fluctuations is expressed by ⟨|dB|/|B_0|⟩, where B_0 is the low-pass filtered magnetic field below 0.05 Hz.
Table 2
Statistics of the CS and CF in the Two Events
Table 3
The Estimated Increment of the Bulk Kinetic Energy and Thermal Energy for the Inflow Plasma in the WTR and STR
"Physics"
] |
Corporate governance, capital structure, and performance in family and non-family firms
This study aimed to examine the effect of corporate governance and capital structure on the performance of family and non-family firms. Corporate governance is measured by the board of directors and independent commissioners. Capital structure is measured by the debt-to-equity ratio, and performance is proxied by return on equity. The data in this research are secondary data, and the sampling method is purposive sampling. The samples are 11 family firms and 30 non-family firms. The data analysis uses multiple regression to identify the effect of the independent variables on the dependent variable. The results of the research show that capital structure has a significant impact on the performance of non-family firms but not on that of family firms.
INTRODUCTION
The financial performance of a firm is a significant factor considered by investors when investing. Investors can assess the financial performance of a firm from the financial statements it issues. Conflicts of interest between the agent and the principal often occur in a firm. The firm's financial performance cannot be improved if there is a conflict of interest between the agent and the principal, often referred to as an agency conflict (Brigham and Ehrhardt, 2011). Agency problems in the firm can occur because of asymmetric information. Asymmetric information arises between managers and shareholders when one party has information not held by the other party (Brigham and Ehrhardt, 2011).
Corporate governance is a form of management carried out by a firm in its operations. A good corporate governance structure can determine the success or failure of a company. The success of a firm is primarily determined by its strategic characteristics, which include the strategy of implementing a good corporate governance system. Good corporate governance can improve firm performance (Putri, 2020). At the time of this study, the country was being hit by the Covid-19 pandemic, which caused enormous losses. For a firm to survive, consistency and optimization in implementing good corporate governance are required.
Firms running their business need funds for operations and business development, obtained through debt and equity. Agency theory focuses on potential conflicts of interest that arise from information asymmetry between principals and agents (Jensen and Meckling, 1976). The use of debt in the firm is expected to reduce agency conflict (Crutchley and Hansen, 1989). According to capital structure theory, adding debt when the capital structure is already above the optimal target can decrease the value of the company. Previous research on the effect of capital structure on performance has not shown consistent results.
Research in Indonesia has not yet tested the influence of corporate governance and capital structure on the performance of family and non-family firms listed on the Indonesia Stock Exchange.
Corporate governance in family firms in Indonesia is still mostly based on family relationships, without the implementation of good corporate governance that could improve company performance. The success of enforcing good corporate governance is largely determined by the quality of management, namely the commissioners as supervisors and the directors as executors. Based on the research gaps stated previously, this study examines the effect of corporate governance and capital structure on firm performance. The samples in this study were grouped into family and non-family firms.
The urgency of this research lies in examining whether there are differences in company performance between family businesses and non-family businesses, i.e., whether a company built around a family business structure performs very differently from a non-family business.
Agency theory
Agency theory discusses the relationship between agents and principals. The agent is the management of the company, while the principal is the shareholder. The agent and principal are bound by a contract that states their respective rights and obligations. Principals provide facilities and funds to run the company. Agency theory arises because of asymmetric information between principal and agent, asymmetric interests between principal and agent, and unobservable behavior or bounded rationality. Given these three conditions, the principal and agent will each prioritize their own welfare. The agent will try to maximize his prosperity by expecting large compensation, while shareholders will maximize welfare through maximum dividend distribution. The agency problem can arise because of the asymmetric information between managers and investors; the conflict occurs when one party has information that the other party does not have. Usually, the managers have more information than the investors. The manager significantly influences the optimal firm capital structure (Brigham and Ehrhardt, 2011). Family firms in Indonesia tend to employ family members as agents in their firms. The involvement of the family as an agent can trigger agency conflicts within the company.
Corporate governance and performance
Corporate governance can be defined as a set of laws, rules, and procedures that affect a firm's operations and the decisions made by its managers (Brigham and Ehrhardt, 2011). According to Lukviarman (2004), experts agree that the corporate governance system adopted by Indonesia follows the pattern of the Continental European system. Shleifer and Vishny (1997) explain that corporate governance can protect minority parties in a firm from exploitation by managers and controlling or majority shareholders. Good corporate governance in a firm is expected to improve firm performance. The quality of the firm's management, in which the commissioner is the supervisor and the director is the executor, will determine good corporate governance. The existence of an independent board of directors and commissioners is a vital governance role in the company. An independent commissioner is a commissioner who is not an employee of the company, is not a relative or family member, is not a majority shareholder, and has no serious business interest in the company. Putri (2020) found that corporate governance has a positive effect on firm performance. A family firm is a company whose main shareholder is a family and whose management positions are controlled by family members. The characteristics of a family-owned company will differ from those of a company that is not family-owned.
Based on the results of these previous studies, the following hypotheses are formulated:
H1a: The board of directors has a positive effect on the performance of family firms.
H1b: The board of directors has a positive effect on the performance of non-family firms.
H1c: Independent commissioners have a positive effect on the performance of family firms.
H1d: Independent commissioners have a positive effect on the performance of non-family firms.
Capital structure and performance
Several theories explain the choice of capital structure. Trade-off theory says that the firm will exploit the tax-deduction benefit on the cost of debt in connection with the leverage in its capital structure; the level of corporate leverage lies in the capital structure, and it is therefore used to balance the capital structure against the cost of debt. The pecking order theory states that financial decisions follow a hierarchy in which internal funding sources take precedence over external ones. If the firm does use external funding, loans are prioritized over additional capital from new shareholders. Jensen and Meckling (1976) argue that using debt can reduce agency conflict because the agent cannot take actions that harm the shareholders, since the firm's free cash flow is used to pay the firm's debt interest obligations.
The theory proposed by Modigliani and Miller (1963) adds an element of tax to their capital structure analysis. Modigliani and Miller (1963) suggest that firms with debt have a higher value than firms without debt, and the financial performance of firms with debt will also be better. Firms with debt have an interest expense that can be used as a tax deduction. At the same time, firms with high debt also carry high risk because of the interest costs (high risk, high return). An optimal capital structure can improve firm performance, so the firm needs to determine it. According to the pecking order theory, firms prefer to use internal cash funds first in an optimal capital structure and will issue securities only if external funding is required. Research conducted by Nugraha (2013) and Mai and Setiawan (2020) found that capital structure has a positive effect on firm performance. Based on the results of these previous studies, the following hypotheses are formulated:
H2a: Capital structure has a positive effect on the performance of family firms.
H2b: Capital structure has a positive effect on the performance of non-family firms.
Data description
The type of data used is secondary, quantitative data taken from the Indonesia Stock Exchange website. The population in this study comprises the 170 manufacturing firms listed on the Indonesia Stock Exchange from 2016 to 2019. The sample was taken using purposive sampling, which selects samples based on previously known population characteristics. The sample consists of 11 family firms and 30 non-family firms.
Dependent variable
The dependent variable used in this study is return on equity (ROE). Firm performance reflects how, and how many, financial resources are available to carry out the firm's production activities. Firms show their performance through their annual financial reports. This study calculates return on equity as the ratio of net profit to total equity.
Corporate governance
The independent variables in this study follow Henry (2010) for the proxies used. The proxies for corporate governance are the size of the board of directors and the percentage of independent commissioners. Board size is measured by the number of directors in the company. The independent commissioner variable is measured by the ratio of the number of independent commissioners to the total board of commissioners.
Capital structure
Capital structure is measured by dividing total debt by total equity. Jensen and Meckling (1976) state that debt is a mechanism to unite the interests of managers and shareholders. This study uses the debt-to-equity ratio, i.e., the ratio of total debt to total equity.
Control variable
The control variables used in this study refer to the research conducted by Vieira (2016). One control variable is firm size, measured by the total assets of the sample firms. The other control variable is the age of the firm from its listing on the Indonesia Stock Exchange until the research period. The linear regression models used to test the research hypotheses are as follows:

Model 1: ROE(FF)i,t = β0 + β1 DIRi,t + β2 KOMINi,t + β3 DERi,t + β4 SIZEi,t + β5 AGEi,t + ɛi,t
Model 2: ROE(NFF)i,t = β0 + β1 DIRi,t + β2 KOMINi,t + β3 DERi,t + β4 SIZEi,t + β5 AGEi,t + ɛi,t

In Model 1, ROE(FF)i,t measures the performance of family firm i in period t, proxied by return on equity. In Model 2, ROE(NFF)i,t measures the performance of non-family firm i in period t with the same proxy. DIRi,t denotes the board of directors of firm i in period t, KOMINi,t the independent commissioners, DERi,t the debt-to-equity ratio, SIZEi,t the firm size, and AGEi,t the firm age.
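A minimal sketch of estimating Model 1 in Python; the column names, the file name, and the use of firm dummies to absorb fixed effects are our own choices (the paper estimates a fixed effect panel model), so treat this as illustrative rather than the authors' code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# df: one row per firm-year with hypothetical columns
# ROE, DIR, KOMIN, DER, SIZE, AGE, and a firm identifier
df = pd.read_csv("family_firms_panel.csv")   # hypothetical file

# Fixed-effect specification via firm dummies (within-style estimate)
model = smf.ols("ROE ~ DIR + KOMIN + DER + SIZE + AGE + C(firm)", data=df)
result = model.fit()
print(result.summary())   # coefficients, t-stats, p-values, R-squared
```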
Data analysis method
This study examines the effect of corporate governance and capital structure on firm performance. Based on the research objectives, the analytical method used is multiple linear regression.
The analysis procedure begins by cleaning the data of outliers and testing the classical assumptions: normality, multicollinearity, autocorrelation, linearity, and heteroscedasticity. To test the hypotheses, multiple linear regression was used, taking into account the F statistic, R-squared, and the significance values.
Panel Data Regression Model Selection Test
To estimate model parameters with panel data, three techniques (models) are often used: (a) the common effect model, which estimates the parameters by pooling cross-section and time series data as a single unit without accounting for differences across time and entities (individuals); (b) the fixed effect model, which assumes that the intercepts differ across individuals while the slopes are the same; and (c) the random effect model, which assumes that each firm has a different intercept, where the intercept is a random (stochastic) variable.
Family firms
The descriptive statistics shown in Table 1 reveal that the maximum value of ROE was 0.279 and the minimum value was 0.011. The average value was 0.117, with a standard deviation of 0.068.
Model selection test results
Based on the model tests, the Chow test, which compares the common effect with the fixed effect model, yielded a chi-square probability value of 0.000, meaning the Chow test prefers the fixed effect model. Furthermore, the Hausman test, which compares the fixed effect with the random effect model, yielded a chi-square probability value of 0.029, meaning the Hausman test prefers the fixed effect model because the probability value is <0.05. Therefore, the fixed effect model is used in this research.
Hypothesis testing results
After all research variables were confirmed to be normally distributed and free from classical assumption violations (multicollinearity, autocorrelation, and heteroscedasticity), multiple linear regression testing was carried out. From Table 2, the number of directors, independent commissioners, capital structure, firm size, and firm age have no effect on performance as proxied by return on equity (ROE), as shown by probability values > 0.05. Table 3 shows the descriptive statistics for non-family firms: the maximum value of ROE is 1.399 and the minimum value is 0.008. The average value is 0.211, with a standard deviation of 0.288.
Classic assumption test
To test the normality of the non-family firm data, the Jarque-Bera test (JB test) was applied. The test results show a probability of 0.118 > 0.05, so the data are normally distributed and the normality assumption is met. The multicollinearity test was done by calculating the correlation coefficients of the variables; the results indicate no multicollinearity between the independent variables. A heteroscedasticity test was carried out using the White test, which shows a probability value of 0.277 > 0.05, so the regression model used in this study does not exhibit heteroscedasticity. The autocorrelation test was carried out by calculating the Durbin-Watson value (DW test). The test results show a DW value of 2.334; therefore, there is no autocorrelation in the regression model.
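A sketch of the diagnostic battery described above using statsmodels; `result` is assumed to be a fitted OLS result as in the earlier snippet, and the thresholds follow the paper's 0.05 convention.

```python
from statsmodels.stats.stattools import jarque_bera, durbin_watson
from statsmodels.stats.diagnostic import het_white

resid = result.resid
exog = result.model.exog

jb_stat, jb_pvalue, skew, kurtosis = jarque_bera(resid)
print("normality ok:", jb_pvalue > 0.05)          # Jarque-Bera test

lm_stat, lm_pvalue, f_stat, f_pvalue = het_white(resid, exog)
print("homoscedastic:", lm_pvalue > 0.05)         # White test

dw = durbin_watson(resid)
print("Durbin-Watson:", dw)   # values near 2 suggest no autocorrelation
```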
Model selection test results
Based on the model test results, the Chow test, comparing the common effect with the fixed effect model, obtained a chi-square probability value of 0.000. Furthermore, the Hausman test was carried out, comparing the fixed effect with the random effect model. The chi-square probability value obtained was 0.029, so the Hausman test prefers the fixed effect model due to the probability value of <0.05. Therefore, the fixed effect model was used for this study. Table 4 reveals that the board of directors and independent commissioner variables have no effect on performance as proxied by return on equity (ROE), as shown by probability values > 0.05. Meanwhile, the debt-to-equity ratio and firm size have a positive and significant effect on return on equity, while firm age has a negative and significant effect.
The effect of corporate governance on performance in family firms
Based on the results of testing hypothesis H1a, the regression coefficient for the number of directors is -9.47E-05, with a probability of 0.9945, greater than alpha 0.05. It can be concluded that the number of directors has no effect on the performance of family firms. The results of this study are in line with the research of Buallay et al. (2017) and Putri (2020): the board of directors in a family firm has not been able to improve the performance of the family company. For hypothesis H1c, the regression coefficient for the independent commissioner variable is 0.051, with a probability of 0.834. It can be concluded that independent commissioners have no effect on the performance of family firms. This result is in line with the research conducted by Dervish (2009), but not with Putri (2020), Prasinta (2012), and Widyati (2013). The presence of independent commissioners in family firms has not been able to monitor the running of the firm, so their supervision has not improved firm performance. An independent commissioner in a family firm is a formality; independent commissioners do not use their independence to oversee the policies of the directors in the firm.
Effect of capital structure on performance in family firms
The regression coefficient of the debt-to-equity ratio variable is 0.051, with a probability of 0.058. This indicates that capital structure, as proxied by the debt-to-equity ratio, does not affect the performance of family firms: a larger or smaller debt-to-equity ratio does not affect the volatility of the firm's financial performance. Thus, the debt-to-equity ratio does not affect the firm's financial performance. These results align with the research conducted by Azis & Hartono (2017) and Fachrudin (2011).
The effect of firms size and firms age on performance in family firms
Based on the hypothesis testing results, the regression coefficient for the firm size variable is -0.047, with a probability of 0.1092. These results indicate that firm size has no effect on firm performance; the size of a firm is no guarantee of good performance. This result is in line with the research conducted by Fachrudin (2011).
The regression coefficient for the firm age variable is -0.001, with a probability value of 0.8296 > 0.05. These results indicate that firm age does not affect the performance of family firms: how long the firm has been in existence does not affect its performance.
The effect of corporate governance on performance in non-family firms
The regression coefficient for the number of directors is -0.466, with a probability of 0.15 > alpha 0.05. The number of directors has no effect on the performance of non-family firms. The results of this study are not in line with the results of research conducted by Widyati (2013). The number of directors in the firm has not been able to improve firm performance.
The regression coefficient for the independent commissioner variable is 0.507, with a probability of 0.07. It can be concluded that independent commissioners have no effect on the performance of non-family firms. Independent commissioners in non-family firms have not been able to monitor the running of the company; their supervision has not been able to influence the behavior of managers in an effort to improve firm performance. However large the share of independent commissioners, their supervision of the firm's management has not improved the firm's financial performance. These results are in line with the research conducted by Buallay et al. (2017).
Effect of capital structure on performance in non-family firms
The regression coefficient of the debt-to-equity ratio variable is 0.972, with a probability of 0.04 < 0.05. This indicates that capital structure, as proxied by the debt-to-equity ratio, has a positive and significant effect on the performance of non-family firms. A higher debt-to-equity value indicates that the composition of the firm's debt is higher than that of its own capital, and the higher the level of debt, the greater the burden of debt and interest payments borne by the company. In line with trade-off theory, the firm uses the benefit of a tax reduction on the cost of debt in connection with the leverage in its capital structure. The use of debt by non-family firms can reduce the taxes borne by the firm, and this reduction increases firm performance in terms of profitability. These results are in line with research conducted by Nugraha (2013), Kristianti (2016), and Mai and Setiawan (2020).
The effect of firms size and firms age on performance in non-family firms
Based on the hypothesis testing results, the regression coefficient for the firm size variable is 2.671, with a probability of 0.041. These results indicate that firm size has a positive and significant effect on firm performance.
The regression coefficient for the firm age variable is -0.617, with a probability value of 0.005 < 0.05. These results indicate that firm age has a negative and significant effect on the performance of non-family firms.
CONCLUSION
This study separates the sample into groups of family firms and non-family firms. For the family firms, it was concluded that corporate governance (proxied by the board of directors and independent commissioners), capital structure (proxied by the debt-to-equity ratio), firm size, and firm age did not affect performance. For the non-family firms, the board of directors and independent commissioners do not affect performance, while capital structure, firm size, and firm age do. This research has several limitations, including the observation period and the variables used. Further research is suggested to extend the research period so that the results obtained are more accurate, and to use other proxies for the variables that can capture the performance of family and non-family companies. This research is useful for practitioners in providing information related to company performance for family and non-family companies.
"Business",
"Economics"
] |
Discovering missing reactions of metabolic networks by using gene co-expression data
Flux coupling analysis is a computational method which is able to explain co-expression of metabolic genes by analyzing the topological structure of a metabolic network. It has been suggested that if the genes of two seemingly fully-coupled reactions are not highly co-expressed, then these two reactions are not fully coupled in reality, and hence, there is a gap or missing reaction in the network. Here, we present GAUGE as a novel approach for gap filling of metabolic networks, which is a two-step algorithm based on a mixed integer linear programming formulation. In GAUGE, the discrepancies between experimental co-expression data and predicted flux coupling relations are minimized by adding a minimum number of reactions to the network. We show that GAUGE is able to predict missing reactions of E. coli metabolism that are not detectable by other popular gap filling approaches. We propose that our algorithm may be used as a complementary strategy for the gap filling problem of metabolic networks. Since GAUGE relies only on gene expression data, it can be potentially useful for exploring missing reactions in the metabolism of non-model organisms, which are often poorly characterized, cannot grow in the laboratory, and lack genetic tools for generating knockouts.
Although correct predictions of essential genes are very important for a metabolic network model to be considered reliable, identifying essential genes experimentally is a very hard and time-consuming task and, additionally, requires specific genetic tools for generating knockouts. Therefore, this type of data may not be available for many organisms. This is also true of 13C labeling data and metabolomics data, which are required for the OMNI 14 and minimal extension 15 methods, respectively. The four last methods in Table 1 usually take a draft metabolic network that cannot produce biomass and try to add a minimum number of reactions to the model such that the biomass-producing reaction can carry flux [16][17][18][19]. In the present study we aim to use gene expression data for finding the gaps. Today, transcriptomes are relatively easy to obtain, which makes them attractive sources of information for gap analysis of metabolic networks.
Flux coupling and gene co-expression
Flux coupling analysis (FCA) is a computational method that determines, for each pair of reactions i and j in a metabolic network, how their fluxes (v_i and v_j) depend on each other 20. Two reactions i and j are "fully coupled" if they always have proportional flux values, v_i = c v_j, where c is a constant indicating the ratio of v_i to v_j. If zero flux through one reaction, v_i = 0, always implies zero flux through the other reaction, v_j = 0 (but not vice versa), j is said to be "directionally coupled" to i. If two reactions are not flux-coupled, they are defined to be "uncoupled" 20. It has been previously shown that genes encoding fully coupled reactions show higher levels of co-expression than other genes 21. Later, it was suggested that flux coupling relations may change considerably when the network becomes more complete, i.e., when reactions are added to the network 22. More precisely, it was shown that with a more complete network, the coupling relations become more consistent with the experimental data on gene co-expression 22.
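As an illustration of how such coupling relations can be checked computationally, the sketch below tests whether reaction j is directionally coupled to reaction i by maximizing |v_j| under steady state with v_i forced to zero; this is a simplified check of our own (full FCA, e.g., as implemented in F2C2, is more involved).

```python
import numpy as np
from scipy.optimize import linprog

def directionally_coupled(S, bounds, i, j, tol=1e-9):
    """True if v_i = 0 forces v_j = 0 under S v = 0 and flux bounds.

    S: stoichiometric matrix (metabolites x reactions),
    bounds: list of (lb, ub) per reaction.
    """
    n = S.shape[1]
    fixed = list(bounds)
    fixed[i] = (0.0, 0.0)                  # block reaction i
    vmax = 0.0
    for sign in (+1.0, -1.0):              # maximize v_j in both directions
        c = np.zeros(n)
        c[j] = -sign                       # linprog minimizes, so negate
        res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]),
                      bounds=fixed, method="highs")
        if res.status == 3:                # unbounded: v_j can be nonzero
            return False
        if res.status == 0:
            vmax = max(vmax, abs(res.x[j]))
    return vmax < tol
```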
GAUGE: A novel Gap Analysis method by Using Gene Expression data
In this work, we present a novel gap analysis method, GAUGE, which uses FCA of metabolic networks together with publicly available gene expression data in order to propose a strategy for gap finding and gap filling. It has been previously suggested that the existence of a pair of fully coupled reactions with (nearly) uncorrelated gene expression indicates a gap in the network 22. In other words, we hypothesize that such reactions must be directionally coupled or uncoupled in the complete network.
As a test case, we analyze the gaps in the genome-scale metabolic model of E. coli. The goal of our method is to improve a metabolic network such that there is maximum consistency between experimental gene expression data and theoretical flux coupling relationships. For this purpose, we first find gaps by identifying pairs of fully coupled reactions with "low" gene co-expression. Then, in a two-step algorithm based on a mixed integer linear programming (MILP) formulation, we try to minimize the inconsistencies by adding a minimum number of reactions from a dataset of all known reactions to the network.
Methods
Suppose that we have an incomplete metabolic network model with known gene-protein-reaction relationships. The goal of GAUGE is to suggest extra reactions whose addition to the network makes gene co-expression relations become more consistent with flux coupling relations. Here we present a brief description of this method and the data we used.
Metabolic network model. The metabolic network model of E. coli, iJR904, which was reconstructed in 2003 23, was used as the model. This model includes 904 genes and 1075 reactions and, therefore, contains a considerable number of gaps compared to our current knowledge of E. coli metabolism. Universal dataset of reactions. The universal dataset of reactions was obtained from KEGG 26. The initial version of the dataset comprised 10882 reactions. From this set, we excluded reactions that have the same metabolite as both a substrate and a product. In addition, identical reactions that have the same metabolites under different names were manually excluded from the dataset. The final version of this dataset, referred to as the "universal" dataset, includes 9587 reactions. If the addition of reactions from KEGG could not resolve some of the inconsistencies, we use another universal dataset, namely the set of exchange reactions for all of the metabolites in the model.
Calculating gene coupling relations. In the first step, gene pairs (g1, g2) are found such that deletion of g1 inactivates all the reactions associated with g2, and vice versa. We need to find such gene pairs since our experimental data concern the co-expression of genes, not reactions. For this purpose, the following procedure is performed for every pair of metabolic genes in the model. First, g1 is removed from the model. Then, the reactions that cannot carry nonzero flux after the removal of this gene are identified. If all of the reactions associated with g2 are inactivated, then g2 is said to be coupled to g1. If g1 is coupled to g2 and vice versa, then we say that g1 and g2 are fully coupled. This procedure ensures that the two genes entirely depend on each other's function, and that they do not exhibit multiple functions which would justify independent gene expression. Figure 1 is a simple network that shows the difference between the coupling of genes and reactions. As indicated, R4 and R5 are fully coupled reactions. However, according to the above procedure their corresponding genes, G1 and G2, are not fully coupled. This is because G1 is also associated with R7 and its function is independent of the function of G2. Furthermore, R7 and R10 are also fully coupled reactions. However, since there is an "or" relationship between the genes of R10, G1 is not dependent on any of them. We computed gene coupling relations for all of the gene pairs in the model. FCA for finding inconsistencies. As a preprocessing step, the biomass-producing reaction is removed from the model in order to avoid a large set of fluxes being detected as fully coupled 20,27. Consequently, for each biomass component that could not be exported from the model, an export reaction was added. Therefore, all biomass components were allowed to be exported independently.
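A sketch of the gene-coupling test above using COBRApy; the file path and gene IDs are placeholders, and this shows only one direction of the test (whether knocking out g1 inactivates all reactions of g2).

```python
import cobra
from cobra.flux_analysis import find_blocked_reactions

model = cobra.io.read_sbml_model("iJR904.xml")   # placeholder path

def gene_coupled(model, g1_id, g2_id):
    """True if deleting g1 blocks every reaction associated with g2."""
    g2_rxns = {r.id for r in model.genes.get_by_id(g2_id).reactions}
    with model:                                   # changes revert on exit
        model.genes.get_by_id(g1_id).knock_out()
        blocked = set(find_blocked_reactions(model))
    return g2_rxns <= blocked

# full coupling requires the test to hold in both directions, e.g.:
# gene_coupled(model, "b0001", "b0002") and gene_coupled(model, "b0002", "b0001")
```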
In the next step, flux coupling relations are calculated for every pair of reactions using F2C2 28. Then, from the list of coupled gene pairs (previous section), those pairs that are linked to at least one pair of fully coupled reactions are selected. Now, for each of these gene pairs, if the gene expression values are uncorrelated based on the wet-lab experimental data, i.e., the Pearson correlation coefficient of the gene expression values is below a certain threshold, the corresponding fully coupled reaction pairs are labeled as inconsistent. Such cases are considered potential candidates for gap filling.
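For completeness, a short sketch of the correlation screen; `expr` is an assumed genes-by-samples mapping, and the 0.2 threshold matches the value used later in the paper.

```python
from scipy.stats import pearsonr

def inconsistent_pairs(expr, coupled_gene_pairs, threshold=0.2):
    """Return coupled gene pairs whose expression profiles are uncorrelated.

    expr: dict mapping gene id -> 1-D array of expression values.
    """
    flagged = []
    for g1, g2 in coupled_gene_pairs:
        r, _ = pearsonr(expr[g1], expr[g2])
        if abs(r) < threshold:        # fully coupled but not co-expressed
            flagged.append((g1, g2, r))
    return flagged
```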
MILP formulation.
A two-step MILP formulation is used for resolving the discrepancies observed above.
The inputs of the algorithm are: (i) the reaction pairs identified as discrepancy candidates; and (ii) the universal dataset of metabolic reactions. The goal is to add the smallest possible number of reactions from the universal dataset to the network, such that the highest possible number of inconsistencies is resolved.
We assume two consecutive steps for fixing model gaps. First, we assess whether the addition of new reactions from the KEGG dataset or changing the reversibility of irreversible reactions can change the coupling type of the candidate reaction pairs. If some of the discrepancies cannot be resolved in this way, we then check whether the addition of exchange reactions can fix the model gaps.
MILP formulation of the first step. In the first step, the algorithm takes the S, U and U_e matrices and the L, H, R, D, D_e and IRR sets as inputs, which are defined as follows. S, U and U_e contain the stoichiometric coefficients for the reactions in the original model, the "universal" dataset of KEGG reactions and the universal dataset of exchange reactions, respectively. L is the set of fully coupled reaction pairs whose corresponding genes are uncorrelated, i.e., their Pearson correlation coefficient values are below a certain threshold. Similarly, H is the set of fully coupled reaction pairs whose corresponding genes are highly correlated, i.e., their Pearson correlation coefficient values are above a certain threshold. In this study, 0.2 and 0.8 are chosen as the thresholds for the reaction pairs in L and H, respectively. R, D and D_e are the sets of reactions of the original model, the universal dataset of KEGG reactions and the universal dataset of exchange reactions, respectively. Finally, IRR is the set of irreversible reactions in the original model.
(Figure 1 caption, continued) However, their associated genes, G1 and G2, are not fully coupled since G1 can also code for R7. As another example, R7 and R10 are fully coupled reactions. However, the gene associated with R7 (G1) is not coupled to any of the genes associated with R10 (G3 and G4). This is because there is an "or" relationship between G3 and G4.
By imposing the following constraints, the output of the algorithm will be the maximum number of inconsistencies between gene co-expression and flux coupling relations that can be resolved. Note that we first use the U matrix to select reactions for addition to the model from the KEGG dataset. If some of the inconsistencies cannot be resolved this way, we run the MILP again with U_e and D_e as inputs, to select candidate reactions from the dataset of exchange reactions.
Constraint (1) imposes a stoichiometric mass balance on all of the metabolites:

S v + U y = 0,   (1)

where v and y are vectors that contain the fluxes through the reactions of the original model and the universal dataset, respectively.
To count the number of resolved inconsistencies, we need another constraint. In Constraint (2), every i represents a pair of fully coupled reactions with fluxes u_i and w_i, whose corresponding genes are uncorrelated, and λ_i = u_i/w_i is a constant:

|u_i − λ_i w_i| ≥ ɛ d_i,   (2)

where d is a binary vector such that d_i = 1 if the two reactions of the reaction pair i are not fully coupled anymore after the network completion, and d_i = 0 otherwise. Here, ɛ is used to avoid numerical errors of the MILP solver; in this work, we assume that ɛ = 10⁻⁶.
Note that Constraint (2) is not presented in the form of a linear constraint. However, it can be shown that the application of four linear constraints (3)-(6), involving auxiliary binary vectors e, f and g and a sufficiently large value M, is equivalent to Constraint (2). It is also necessary to ensure that all of the reaction pairs in H remain fully coupled after the addition of reactions from the dataset. Constraint (7) is used for this purpose:

p_j − μ_j q_j = 0 for every pair j in H,   (7)

where every j represents a fully coupled reaction pair with fluxes p_j and q_j and, similar to Constraint (2), μ_j = p_j/q_j is a constant.
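To make the big-M linearization concrete, here is a sketch in PuLP of encoding "d = 1 implies |u − λw| ≥ ɛ" with auxiliary binaries; the paper's exact constraints (3)-(6) are not reproduced verbatim, so this is one standard encoding of our own, not the authors' formulation.

```python
import pulp

M, EPS, LAM = 1e4, 1e-6, 2.0              # big-M, tolerance, coupling ratio

prob = pulp.LpProblem("indicator_demo", pulp.LpMaximize)
u = pulp.LpVariable("u", -M, M)
w = pulp.LpVariable("w", -M, M)
d = pulp.LpVariable("d", cat="Binary")    # 1 if the pair is decoupled
e = pulp.LpVariable("e", cat="Binary")    # deviation on the positive side
f = pulp.LpVariable("f", cat="Binary")    # deviation on the negative side

# d = 1 forces u - LAM*w >= EPS (via e) or LAM*w - u >= EPS (via f)
prob += u - LAM * w >= EPS - M * (1 - e)
prob += LAM * w - u >= EPS - M * (1 - f)
prob += e + f >= d                        # at least one side must hold
prob += d                                 # objective: maximize d

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.value(d), pulp.value(u), pulp.value(w))
```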
Whenever the capacity constraints are known, it is important to include them in the MILP:

LB ≤ v ≤ UB,   (8)
LB_D ≤ y ≤ UB_D.   (9)

Constraints (8) and (9) constrain the fluxes of the reactions in the original model and the dataset between the specified lower and upper bounds. Since we are looking for the possibility of changing the reversibility of some reactions, all of the reactions in the original model are considered reversible and have negative lower bounds.
By applying the above-mentioned constraints, it is possible to find solutions in which the inconsistent reaction pairs of L are not fully coupled anymore. To obtain the best solution, the following objective function is used:

maximize Z = Σ_i d_i.   (10)

Using this objective function, one can determine the maximum number of inconsistencies that can be resolved by the MILP.
MILP formulation of the second step. When the above-mentioned MILP is solved, the maximum number of inconsistencies that can be resolved, say Z*, is determined. Now, in the second step of GAUGE, the goal is to minimize the number of reactions that should be added to the network or made reversible to resolve the inconsistencies. The inputs and constraints of the second-step MILP are the same as in the first step, with four additional constraints (11)-(14). In Constraint (11), binary variables b_l are used for counting the number of added reactions from the universal dataset:

y_l ≤ M b_l,   (11)

so that b_l = 1 if the corresponding dataset reaction carries nonzero flux; in other words, by imposing this constraint, b_l = 1 if y_l > 0.
We also consider the possibility of making some reactions reversible for resolving model inconsistencies. To count the number of reactions that are made reversible, the following constraints are added for every reaction m in IRR:

v_m ≥ −M h_m,   (12)
v_m ≤ −ɛ h_m + M(1 − h_m).   (13)

Constraints (12) and (13) ensure that if an originally irreversible reaction takes a negative flux after network completion, the binary variable h_m is set to 1. Again, M is a large value and ɛ = 10⁻⁶. Now, to ensure that the maximum possible number of inconsistencies is resolved in the second step, we apply the following constraint:

Σ_i d_i = Z*.   (14)

Constraint (14) fixes the sum of the elements of d to the maximum value obtained by solving the first MILP. Finally, to ensure that a minimum number of reactions is added to the network or made reversible to resolve the inconsistencies, we use the following objective function for the second MILP:

minimize Σ_l b_l + Σ_m h_m.   (15)

Altogether, by solving this MILP, we find the minimum number of modifications that should be made to the network to maximally resolve the network inconsistencies.
Alternative solutions. For calculating alternative solutions, additional constraints are iteratively added to the second MILP and the problem is solved again. This procedure is repeated until all of the optimal solutions are found. For each previously found solution q = 1, ..., Q (where Q is the number of solutions already identified), a constraint of the form

Σ_{l: b_l = 1 in solution q} (1 − b_l) + Σ_{l: b_l = 0 in solution q} b_l ≥ 1   (16)

is added, which ensures that every new solution vector differs from the previously found solutions in at least one element. We should emphasize that if some of the network inconsistencies are not resolved by adding reactions from KEGG or by making some reactions reversible, the addition of exchange reactions is considered. For this purpose, the following changes are made to the above-mentioned MILPs: instead of U and D, U_e and D_e are used as inputs; Constraints (12) and (13) are no longer considered; and all of the irreversible reactions of the original model take lower bounds of zero.
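A sketch of the iterative integer-cut loop in PuLP; `prob` and the binary variables `b_vars` are assumed to come from a second-step model like the one sketched earlier, and the cut shown is the standard no-good cut rather than the paper's exact formulation.

```python
import pulp

def enumerate_solutions(prob, b_vars, max_solutions=50):
    """Collect alternative 0/1 patterns of b by adding no-good cuts."""
    solutions = []
    while len(solutions) < max_solutions:
        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        if pulp.LpStatus[prob.status] != "Optimal":
            break                              # no further solutions
        pattern = {v.name: round(v.value()) for v in b_vars}
        solutions.append(pattern)
        # no-good cut: the next solution must differ in at least one b
        ones = [v for v in b_vars if pattern[v.name] == 1]
        zeros = [v for v in b_vars if pattern[v.name] == 0]
        prob += (pulp.lpSum(1 - v for v in ones)
                 + pulp.lpSum(v for v in zeros)) >= 1
    return solutions
```

In practice, one would also fix the objective to its optimal value before adding cuts, so that only alternative optima, rather than merely feasible points, are enumerated.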
Code availability. The code is implemented for the COBRA Toolbox and is available at https://github.com/zhalehhosseini/GAUGE. GapFind/GapFill, Smiley and GrowMatch methods. GapFind/GapFill 29, Smiley 12 and GrowMatch 13 were used for gap filling and comparison with GAUGE. Single gene knockout data for GrowMatch and growth profiles for Smiley were obtained from the EcoCyc database 30. For GapFind, the COBRA Toolbox implementation was used 31. For Smiley, GrowMatch and GapFill, we used in-house implementations of these algorithms.
Results and Discussion
Application of GAUGE: E. coli as a case study. Here, we describe the use of GAUGE to resolve the inconsistencies between experimental gene co-expression data and in silico flux coupling relationships of the iJR904 metabolic network of E. coli 23. Characterizing a pair of metabolic genes as a "correlated" or "uncorrelated" pair requires a cutoff for the computed co-expression values. In this study, we define L as the set of reaction pairs with absolute correlation coefficients of less than 0.2 and H as the set of reaction pairs with absolute correlation coefficients of greater than 0.8. The goal of GAUGE is to change the coupling type of the reaction pairs in L while keeping the coupling type of the reaction pairs in H unchanged. For the iJR904 metabolic model, L and H include 134 and 41 reaction pairs, respectively. The existence of each inconsistent reaction pair in L implies that there are some missing reaction(s) in the network. Addition of such reactions to the model will change the coupling type of the reaction pair in L from full coupling to another type of flux (un)coupling relation. We used GAUGE to resolve the inconsistencies by adding reactions from KEGG and changing the reversibility type of reactions, or by adding exchange reactions to the model.
Computing globally optimal solutions for resolving the inconsistencies. The inconsistencies can be resolved in two different ways: all at once, or one at a time. Clearly, these two approaches may result in different sets of predicted gap-filling reactions. Figure 2 is a simple example that shows this difference. In this figure, suppose that (1 and 2) and (3 and 4) are inconsistent reaction pairs. Trying to resolve these inconsistencies one by one results in the addition of reactions 5 and 6 for pair (1 and 2) and reactions 10 and 11 for pair (3 and 4). However, if these cases are resolved together, reactions 7, 8 and 9 are the minimal set of reactions that need to be added to the network. In order to obtain a globally minimal solution, we input the inconsistent reaction pairs all at once to the first step of the algorithm to calculate the maximum number of cases that can be resolved. GAUGE identified consistency-restoring suggestions for 132/134 pairs of L. Out of the 132 inconsistency cases, 54 were resolved by adding reactions from KEGG, 2 were resolved by forcing irreversible reactions to carry flux in the backward direction, and the rest by allowing the exchange of metabolites between the extracellular space and the cytoplasm. At minimum, the addition of 31 KEGG reactions and 18 exchange reactions and a change of reversibility type for 1 reaction are needed to resolve these 132 cases. Detailed information about these results and the procedure for computing alternative solutions are described in the Supplementary file.
Here we discuss a few examples of inconsistencies resolved by GAUGE predictions for which evidence from databases or the literature exists regarding their presence in E. coli.
Methylglyoxal metabolism. Figure 3a shows a part of methylglyoxal metabolism. GLYOX and MGSA are two reactions in this pathway which are fully coupled in iJR904 while their Pearson correlation coefficient is less than 0.2. GAUGE predicts 5 separate reactions to resolve this inconsistency. Interestingly, for two of these reactions (R02260 and R09796), evidence can be found for their presence in E. coli [32][33][34] . In addition, one other reaction, R00203, is catalyzed by an enzyme which is known to be encoded in the E. coli K12 genome. More precisely, R00203 is catalyzed by lactaldehyde dehydrogenase (E.C. number 1.2.1.22). This enzyme is encoded in the E. coli genome and catalyzes the conversion of l-lactaldehyde to l-lactate. It has also been shown that this enzyme catalyzes the conversion of methylglyoxal to pyruvate (reaction R00203) in E. coli 35 . However, the K m for this conversion is higher than for the conversion of l-lactaldehyde to l-lactate. It should be noted that R00203 and R00205 in the figure differ in the cofactor used by their catalyzing enzymes.
Folate metabolism. Figure 3b shows part of the folate metabolism pathway. In this figure, ADCS and DHPS2 form an inconsistent reaction pair. GAUGE predicts R03066 to be added to the model. This reaction is catalyzed by dihydropteroate synthase (E.C. number 2.5.1.15), which is encoded by a gene present in the E. coli genome (b3177). Additionally, based on the KEGG database, this reaction is present in the folate metabolism pathway of E. coli K12.
Tartrate metabolism. TARTD and TARTRt7, another inconsistent pair found by GAUGE, are shown in Fig. 3c. GAUGE predicts the addition of R01751, which is catalyzed by tartrate decarboxylase (E.C. number 4.1.1.73). d-malate oxidase is an enzyme with the same E.C. number, encoded by the b1800 gene in the E. coli model, and interestingly, this enzyme is also annotated as "putative tartrate dehydrogenase" in E. coli.

Purine and pyrimidine biosynthesis. DHORTS and ORPT are an inconsistent reaction pair shown in Fig. 3d. GAUGE could not identify any reactions in KEGG to resolve the inconsistency in this case. In addition, no change in reversibility types can resolve it. However, GAUGE predicts the addition of exchange reactions for orotate or s-dihydroorotate. The gene for transporting orotate into the cell is also known to be present in E. coli 36 .

Comparison of GAUGE results with other gap filling methods. We have run the GapFind/GapFill 29 , Smiley 12 and GrowMatch 13 algorithms on the same metabolic network to compare their results with GAUGE. We should emphasize here that there is no standard benchmark for comparing gap analysis methods. Each method uses different kinds of inputs and searches for different types of gaps. In addition, false negativity, true negativity, or even false positivity cannot be defined for the results of gap analysis methods, since comprehensive and perfect knowledge about the metabolism of organisms does not exist. Therefore, we can only compare the results of different gap analysis methods by searching for evidence for the reactions predicted by each method and calculating the frequency of supported predictions for each method. GrowMatch solves two MILPs to add and remove reactions for resolving the NGG (in silico no growth vs. in vivo growth) and GNG (in silico growth vs. in vivo no growth) cases, respectively. Since GAUGE only predicts reactions to be added to the model, only the MILP for resolving NGG cases was run to obtain comparable results. Altogether, 37 NGG cases were identified. Every NGG case was used separately and all of the alternative optimal solutions were calculated for each case. Of these cases, 18 could be resolved using one of the three possible strategies, namely, addition of reactions from KEGG, changing irreversible reactions to reversible ones, and addition of exchange reactions. A total of 69 reactions were predicted to be added to the model or to have their reversibility type changed.
Smiley is a method that resolves the inconsistency between observed in vivo growth phenotypes and predicted in silico growth patterns. This algorithm uses information on growth profiles on different carbon and nitrogen sources as input and solves an MILP formulation to add a minimum number of reactions to the model to resolve false negative model predictions. Reactions were selected from the KEGG dataset or from a dataset of exchange reactions. Using Smiley, 34 false negatives were identified, and 17 of these 34 cases could be resolved. By calculating all alternative solutions, the algorithm predicted a total of 55 reactions for gap filling.
GapFind/GapFill finds no-production metabolites in the model and adds a minimal set of reactions to restore the connectivity of these metabolites to the rest of the network. Using GapFind, 64 inconsistent metabolites were found in iJR904. Of these cases, 63 could be resolved using one of the three possible strategies, namely, addition of reactions from KEGG, changing irreversible reactions to reversible ones, and addition of exchange reactions. This method predicted 84 reactions to be added to the model or to have their reversibility type changed.
Since all these algorithms resolve the inconsistency cases one by one, to obtain comparable results we input each inconsistent reaction pair separately to GAUGE and identified all of the possible alternative optimal solutions for each case. GAUGE predicted 89 reactions as candidates for being added to the model or being made reversible.
In the next step, the correctness of the predictions of each algorithm was validated by: (1) looking for the presence of a link between these reactions and a gene in the E. coli genome annotations in the KEGG database; in other words, if, according to the KEGG database, a gene from the E. coli genome can code for the catalyzing enzyme of the predicted reaction, we assume that this reaction can occur in this organism; (2) performing BLASTP against the E. coli K12 genome; more precisely, the best hits in the E. coli genome with a BLASTP E-value of less than 10^-20 are considered as potential coding genes for the predicted enzyme activities in E. coli.
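As a rough illustration of validation step (2), the sketch below runs NCBI BLAST+ against a local E. coli K12 protein database and keeps the best hit per query below the stated E-value cutoff. The file names, database name, and wrapper are assumptions for illustration, not the authors' actual pipeline.

```python
import subprocess

# Hypothetical file/database names; adjust to your local setup.
QUERY = "predicted_enzymes.fasta"   # protein sequences for predicted reactions
DB = "ecoli_k12_proteins"           # pre-built with `makeblastdb -dbtype prot`
E_CUTOFF = 1e-20                    # threshold used in the paper

# Run BLASTP in tabular mode (outfmt 6: qseqid, sseqid, ..., evalue, bitscore).
result = subprocess.run(
    ["blastp", "-query", QUERY, "-db", DB,
     "-evalue", str(E_CUTOFF), "-outfmt", "6"],
    capture_output=True, text=True, check=True)

# Keep the first (best) hit per query that passes the E-value cutoff.
best_hits = {}
for line in result.stdout.splitlines():
    fields = line.split("\t")
    query, subject, evalue = fields[0], fields[1], float(fields[10])
    if evalue < E_CUTOFF and query not in best_hits:
        best_hits[query] = (subject, evalue)

print(f"{len(best_hits)} predicted enzymes have a candidate coding gene in E. coli K12")
```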
We also searched the literature to find evidence regarding the presence of the enzyme activities in E. coli predicted by GAUGE. Detailed information about the reactions predicted by each method is presented in Supplementary Tables S2, S3, S4 and S5. Figure 4 shows the percentage of correct predictions for each algorithm. As shown, GapFind/GapFill and Smiley have the most successful predictions. This observation is presumably due to the logic behind these algorithms. Smiley tries to correct the false negative predictions of the model grown on different media. When the cell can grow in vivo on a medium, it must have the capability to convert the available nutrients into biomass precursors. Therefore, a failure of the in silico prediction definitely implies missing reactions, which leads to precise predictions by Smiley. The same is true for GapFind/GapFill. It is not reasonable to have a metabolite in the cell with no production route, and GapFind searches for these metabolites. Therefore, there is a high probability that its predictions are correct. The reason for the lower accuracy of GrowMatch predictions may be that NGG cases are not necessarily caused by missing reactions. For example, the reason may be that another isozyme which is missing from the model catalyzes the same reaction. In the case of GAUGE, the algorithm was tested based on gene co-expression data obtained with Pearson correlation. Using more accurate and complete gene expression datasets, choosing different thresholds for the co-expression of genes, and applying better correlation measures 37 can potentially improve the predictions of GAUGE.
"Orthogonality" of gap analysis methods. Figure 5 shows the Venn diagram of the number of gap-filling reactions predicted by each method. As shown, among the reactions predicted by GAUGE only three reactions are in common with the results of Smiley, GrowMatch or GapFind/GapFill. If only positively validated reactions are considered, there will be no common reactions between predictions of GAUGE with other methods (See Supplementary Figure S2). These results show that GAUGE can predict different sets of reactions for being added to the model during gap filling. This finding was expected, as the logic behind our method is principally different from other gap filling approaches. In other words, GAUGE can be used as a complementary strategy to the existing strategies for filling the gaps of metabolic networks.
We also investigated in which biological pathways the predicted gap-filling reactions are involved. Only the reactions which were positively validated are considered. The pathways for each method are shown in Supplementary Figure S3. As shown, there are some biological pathways that are captured by only a single method. For example, lipopolysaccharide metabolism, riboflavin metabolism, nitrogen metabolism, valine, leucine and isoleucine degradation, lysine biosynthesis, d-alanine metabolism, and synthesis and degradation of ketone bodies are pathways that are only identified by GAUGE. In other words, Smiley, GrowMatch and GapFind/GapFill are unable to explore the missing reactions of these pathways. The identification of biological pathways that are unique to a single method shows that each gap analysis method examines specific parts of metabolism that are not considered by the other methods. This may result from the fact that each method looks for model errors from a particular point of view, and the application of popular methods like Smiley, GrowMatch or GapFind/GapFill does not eliminate the need for GAUGE.
As a final note, it should be mentioned that although GAUGE is based on MILPs, in practice it works acceptably fast. For example, for the iJR904 network, when inconsistencies are resolved one by one, the mean computation time of GAUGE is ~13 seconds on a PC. The computation time for resolving all inconsistencies at once is ~30 minutes.
Robustness analysis of GAUGE.
In order to investigate how sensitive GAUGE is to the lack of GPRs, we randomly removed some of the GPRs from the model and ran GAUGE on the resulting networks. Two groups of 100 random networks were generated in which 10 and 40 percent of the reaction GPRs were removed, respectively. Then, for each network, gene coupling relations were calculated and inconsistent reaction pairs were identified. All of the alternative solutions were computed for resolving each inconsistency. The results are shown in Fig. 6. When 40 percent of the reaction GPRs are removed from the E. coli network, the accuracy of predictions decreases from 36 percent to about 30 percent. Therefore, GAUGE predictions are not significantly affected by varying degrees of GPR coverage. This 6 percent reduction in accuracy is probably due to the fact that the deletion of some GPRs makes some genes fully coupled to each other; GAUGE will then mistakenly predict some reactions to be added to the model to resolve the inconsistency of these cases.
Another method for the robustness analysis of GAUGE is to randomly remove reactions from the model and analyze what percentage of them can be recovered using GAUGE. In the Supplementary file, we explain that this analysis is not suitable for evaluating GAUGE, since there is not a high probability that the removed reactions are associated with a fully coupled gene pair with low co-expression.
Conclusion
In the present work we have developed a gap analysis method, GAUGE, to resolve cases where in silico flux coupling relationships are not in agreement with experimental gene co-expression patterns. GAUGE resolves the inconsistencies by adding reactions from the KEGG database, changing the reversibility type of reactions, or allowing the exchange of metabolites between the cytoplasm and the extracellular space. We tested GAUGE on the iJR904 metabolic network model of E. coli, a model known to contain a large number of gaps. We were able to identify missing reactions that may not be recognizable by other gap filling methods. Therefore, GAUGE can be used as an alternative and complementary strategy for gap filling of metabolic networks. Usually, methods that use topological flaws of the network, such as dead-end metabolites, are preferred for gap filling, since these methods can be applied without the need for an experimental dataset. For instance, obtaining gene essentiality data for every gene in an organism is not a simple task, and such data are not available for many organisms. A benefit of GAUGE is that it uses a type of experimental data which is readily available for many organisms. Another beneficial feature of GAUGE is the possibility of finding globally optimal solutions, instead of solving the inconsistencies case by case. This approach is also considered in the very recent study of Hartleb et al. 38 . In that study, the authors present GlobalFit, a bi-level optimization method, which identifies a minimal set of model changes to achieve a model that correctly predicts all of the experimental growth and non-growth cases.
Here, we have validated our results by searching the literature and databases and also by performing BLASTP to find genomic evidence of genes. It should be noted that there is a newer version of the E. coli metabolic network model, iJO1366, which was reconstructed in 2011 39 . We also looked for the predicted reactions in this version of the model. Interestingly, some of these reactions are included in iJO1366. These reactions are presented in Supplementary Tables S2, S3, S4 and S5. We should note that in the universal dataset of reactions used in this study, all of the reactions are included without considering their directionalities. It would certainly be a valuable analysis to compute the Gibbs free energy change for each reaction to see in which direction it would carry flux. However, this is not a necessary step in the validation of our gap filling results, since the addition of reactions in either of the two directions will resolve the identified gaps. More clearly, as shown in Fig. 2, the addition of reactions 5 to 11 (in forward or reverse directions) will change the coupling type of reactions 1 and 2 and reactions 3 and 4 from fully coupled to directionally coupled. If one needs to add the reactions predicted by GAUGE to a metabolic network, the Gibbs free energy changes should be computed to know in which direction the reactions should be added.
One should note that there is not a large number of inconsistent reaction pairs in the model. As shown in Supplementary Figure S3, the majority of genes are involved in a low number of full coupling relations, while a large number of genes are not fully coupled to any other genes (not shown in the graph). In addition, Supplementary Figure S4 shows that those genes which are associated with a larger number of reactions are generally involved in a lower number of full coupling relations. Therefore, fully coupled gene pairs which are associated with fully coupled reaction pairs are not frequent in metabolic networks. However, the results presented here show that even in these situations, GAUGE can successfully predict novel reactions to be added to the model. Another point is that other inconsistencies may exist between experimental gene co-expressions and theoretical flux coupling relations. One such inconsistency is when a highly co-expressed gene pair is not associated with fully coupled reactions. However, in this case one cannot draw any conclusion about the incorrectness of the model. The high co-expression may exist, for example, for functionally related genes, while these genes need not be fully coupled. Another point is that if some specific biochemical pathways are activated in the cell, some genes may no longer be highly co-expressed. The environmental conditions which activate these pathways may not be captured in the experimental gene expression data. Therefore, having highly co-expressed gene pairs with no fully coupled reactions does not mean that the model should be modified, e.g., that reactions should be deleted from the model. Using more comprehensive gene expression data may decrease the number of such inconsistencies. Furthermore, as our results suggest, only certain gaps can be found, and filled, based on gene expression data. Moreover, regulation of protein expression may occur at the post-transcriptional level, which again means that gene expression data might not be sufficient for comprehensive gap finding. Despite these shortcomings, we show that GAUGE can be used in practice to find and fill metabolic gaps, and its performance is comparable to other well-known, widely used gap filling tools. Therefore, it is relevant to use transcriptional-level gene expression data for gap filling.
GAUGE is presented here as a potential strategy for gap analysis of metabolic networks that predicts different sets of reactions for addition to the model. Several parameters can be adjusted to improve the predictive power of GAUGE. Setting different thresholds for low and high correlation coefficients is one such parameter. Additionally, instead of computing Pearson correlation coefficients of gene expressions, a Boolean version of the expression values may be considered, i.e., expression values higher and lower than a certain cut-off are considered as expressed or not expressed, respectively. Then, the genes which are always expressed together can be identified and labeled as fully coupled gene pairs. Another possibility is to use other measures of correlation, like mutual information, instead of Pearson correlation coefficients. The gene expression data can also have an important effect on the results predicted by GAUGE. As mentioned above, the completeness of the dataset can affect the correlation of gene expressions, which in turn may affect the inconsistencies found between experimental observations and model predictions. For obtaining an optimized version of GAUGE with more reliable results, all of the above-mentioned points should be taken into account.
One may also think of using protein abundance data, e.g., from proteomic databases, instead of gene co-expression data. However, we should note that protein abundance data may contain more noise, including more false negatives, compared to gene expression data, which in turn may result in more unreliable predictions. On the other hand, in the study of Notebaart et al. 21 it was shown that there is a good correlation between gene co-expressions and flux coupling relations.
We should also note that one way to improve the GAUGE predictions is to use a BLAST-weighted dataset of reactions, as in strategies used recently 18,19 . In this way, the presence of unrelated or orphan reactions may be reduced in the possible solutions of GAUGE.
"Computer Science"
] |
Residual Stress Evaluation in Friction Stir Welding of Aluminum Plates by Means of Acoustic Emission and Ultrasonic Waves
The assessment of residual stress in structures is essential for optimizing their design. This paper focuses on how acoustic emission signals caused by tensile loading of friction stir welded aluminum plates are expected to vary depending on the residual stress. To this aim, the distribution of residual stresses in two friction stir welded aluminum specimens was first evaluated by ultrasonic stress measurement. AE signals were then produced during tensile tests and captured using AE sensors. The obtained AE signals were analyzed using statistical features including the crest factor, the cumulative crest factor and the sentry function. It was found that the crest factor could be used to identify the presence of residual stresses and that the trends of the sentry function are in good agreement with the results of the crest factor and cumulative crest factor.
INTRODUCTION
Residual stresses are described as the self-balanced stresses inside the material after a manufacturing process, in the absence of any thermal gradients or external loads [1]. Friction stir welding (FSW) is a solid state welding process introduced in 1991 at TWI, in which a pin serves as a rotating tool and is embedded into the adjoining edges of the plates to be welded with a pre-defined tilt angle [2][3][4]. This manufacturing procedure is considered in many advanced engineering fields, ranging from aerospace [5] to the railway and shipbuilding industries, where the strength and plastic properties of FSW welded specimens can be significantly higher than those of MIG and TIG welded ones [6].
Over the years, different methods have been suggested to measure residual stress for both fusion and solid state welding processes in order to obtain reasonable assessments of its value. These methods can be divided into three main categories: destructive, semi-destructive and non-destructive methods.
The destructive and semi-destructive techniques are founded on measuring the deformations produced by releasing residual stresses upon material removal from the workpiece. The hole-drilling method is one of the most effective methods, providing good reliability and accuracy for stress measurement. First applied by Mathar in 1934 [7], this technique is standardized by ASTM: E837 [8][9][10], and the majority of studies have extensively employed it to validate other stress measurement methods [11,12].
Non-destructive evaluation of residual stresses is an essential step in the design of structures as well as in the estimation of their reliability under real service conditions. Non-destructive methods involving X-ray diffraction, neutron diffraction and the ultrasonic technique often measure some parameter that is associated with the stress. The development of such techniques, accompanied by semi-destructive methods like the incremental hole-drilling technique, is appropriate for stress measurement and structural health monitoring.
Acoustic Emission Method
Acoustic emission (AE) is the result of the elastic propagation of waves generated in a material by the rapid release of stored energy. Various factors, including defects at the microscopic and macroscopic scale, are responsible for AE generation in materials [13]. Intensive AE may take place during plastic straining, which is directly connected with dislocation movements. In fact, when a mobile dislocation is suddenly arrested during plastic straining, the kinetic energy is transferred among atoms through a considerable increase of their vibration amplitude: such vibration propagates in the form of elastic waves. At stresses exceeding the yield strength, AE largely results from a rapid increase of the dislocation density [14][15][16][17].
Studies conducted on acoustic emission have shown that AE has a high potential for investigating material behavior. Kaiser [15] clearly demonstrated that internal stress wave generation is a widely occurring phenomenon and that acoustic emission is basically related to deformation. Tatro [18] and Schofield [19] extended the work of Kaiser through comprehensive studies of the relation between AE and deformation in distinct materials.
Moreover, in the past decades, several studies have been carried out on the signal analysis of acoustic emission in materials subjected to stress. Sedek et al. made use of AE signals for monitoring the reduction of residual stresses in a pressure vessel using the RMS value, confirming that this parameter can be used for evaluating the variation of the stresses inside the material [20]. Venkitakrishnan et al. utilized the acoustic emission technique for analyzing residual stress changes due to secondary processes [21]. By using acoustic sensors and feature extraction, Shao et al. could identify laser-induced thermal damage in metallic materials. In order to de-noise the acoustic signals, they exploited the ensemble empirical mode decomposition (EEMD) method [22]. Newman used AE to predict the fatigue crack growth of friction stir welded joints of aluminum, validating a research scope similar to the present work [23]. Additionally, some studies have employed AE as an online method to investigate the crack behavior and the fracture toughness of specimens [24,25].
Ultrasonic Stress Measurement
In ultrasonic stress measurement, there is a linear relationship between the velocity of the ultrasonic wave and the stresses inside the material. This correlation within the elastic limit is called the acoustoelastic effect and shows that the flight time of the ultrasonic wave changes linearly with the stress inside the material. The applicability of the acoustoelastic effect for stress measurement was demonstrated by Crecraft [26]. Longitudinal waves propagated parallel to the surface are called longitudinal critically refracted (L CR ) waves, and Egle and Bray demonstrated that the L CR waves have the highest sensitivity to material stress [27]. Furthermore, Sadeghi et al. [28] investigated the longitudinal residual stress distribution through the thickness of 5086 aluminum plates welded by friction stir welding. In their study, four different frequencies of ultrasonic transducers were employed so as to analyze the stresses at different depths of the material.
Although there has been much research on residual stress analysis in friction stir welding employing FEM, there has been no attempt so far to investigate this issue using the acoustic emission technique. The main goal of this study is a feasibility study for the evaluation of the residual stresses produced by friction stir welding using AE and ultrasonic stress measurement. Therefore, the ultrasonic method was first employed to measure the longitudinal residual stress in the FSW joint using the L CR waves. The results obtained by the ultrasonic stress measurement were then verified by employing the standard hole-drilling technique. Additionally, the acoustic emission technique was employed to investigate the validated residual stress obtained by the ultrasonic method. The sentry function, crest factor and cumulative crest factor methods were applied in order to analyze the longitudinal residual stress in FSW.
It was concluded that AE investigation and L CR waves can be used as non-destructive methods for the evaluation of longitudinal residual stresses of aluminum plates joined by friction stir welding, and a good agreement was achieved between the results. It was also found that AE features including the crest factor, cumulative crest factor and sentry function can identify the stress severity of friction stir welded specimens.
L CR method
The residual stress measurements using L CR waves can be conducted with various ultrasonic setups. As is usual in experimental configurations, a transmitter transducer supplies longitudinal waves at the first critical angle parallel to the surface of the material, and two transducers then detect the waves. Egle and Bray [27] presented the relationship between the relevant uniaxial stress and the change in the measured time-of-flight of the wave as:

$$\Delta\sigma = \frac{E}{L\,t_0}\,(t - t_0) \qquad (1)$$

In (1), Δσ is the variation of stress, L is the acoustoelastic coefficient for the L CR wave propagated parallel to the considered stress, and E is the elastic modulus. Furthermore, t is the time-of-flight measured experimentally on the tested sample, and t 0 is the time-of-flight for a stress-free, isotropic sample at ambient temperature. Further details about L CR waves can be found in the literature [29][30][31]. The acoustoelastic coefficient is measured by a uniaxial tensile test carried out on specimens cut from the examined workpiece. For a fixed distance between transducers, the time-of-flight of the L CR wave rises in the presence of tensile stress and decreases under compressive stress. Finally, by measuring the acoustoelastic constant and the variation in the time-of-flight induced by the welding, the welding residual stresses can be determined.
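A minimal sketch of how equation (1) turns a measured time-of-flight into a stress estimate is shown below; the numeric values are placeholders for illustration only, not the calibration of this study.

```python
def lcr_stress(t_flight_ns: float, t0_ns: float,
               E_MPa: float, L_coeff: float) -> float:
    """Uniaxial stress change from the acoustoelastic relation
    delta_sigma = E * (t - t0) / (L * t0)  -- equation (1)."""
    return E_MPa * (t_flight_ns - t0_ns) / (L_coeff * t0_ns)

# Placeholder values (E ~ 71 GPa for 5086 aluminum; L from a hypothetical
# tensile calibration; times-of-flight in nanoseconds):
sigma = lcr_stress(t_flight_ns=24_510.0, t0_ns=24_500.0,
                   E_MPa=71_000.0, L_coeff=2.2)
print(f"Estimated longitudinal stress: {sigma:.1f} MPa")
```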
Sentry function
Over the years, several different approaches have been suggested and successfully applied to high-frequency AE signals to analyze material behavior in various types of structural applications such as concrete structures, pipelines and rotating machinery. Although mechanical and AE information can be investigated separately to determine residual stress behavior, a comprehensive residual stress analysis cannot be achieved when one is taken into account and the other is neglected. Hence, in order to perform a deeper evaluation of residual stress using AE, a function that integrates both the mechanical and acoustic energy information [32,33] is utilized. This function is called the sentry function and is expressed as the logarithm of the ratio between the strain energy E(x) and the acoustic energy E as (x):

$$f(x) = \ln\!\left(\frac{E(x)}{E_{as}(x)}\right) \qquad (2)$$

where x is the test driving variable in the form of strain or displacement.
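A rough sketch of computing the sentry function of equation (2) from a load-displacement record and cumulative AE event energies follows; the array names and the trapezoidal strain-energy estimate are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sentry_function(displacement, load, ae_energy_cum):
    """f(x) = ln(E(x) / E_as(x)): strain energy over cumulative AE energy.

    displacement, load : arrays sampled during the tensile test (same length)
    ae_energy_cum      : cumulative AE event energy at the same sample points
    """
    displacement = np.asarray(displacement, dtype=float)
    load = np.asarray(load, dtype=float)
    ae_energy_cum = np.asarray(ae_energy_cum, dtype=float)
    # Strain energy as the running trapezoidal integral of load vs displacement.
    strain_energy = np.concatenate(
        ([0.0], np.cumsum(np.diff(displacement) * (load[:-1] + load[1:]) / 2.0)))
    f = np.full(strain_energy.shape, np.nan)
    valid = (ae_energy_cum > 0) & (strain_energy > 0)
    f[valid] = np.log(strain_energy[valid] / ae_energy_cum[valid])
    return f
```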
Crest Factor
Most previous research focused on statistical features extracted from the AE signals, such as energy, standard deviation, duration and counts [34][35][36], but higher order statistical features such as kurtosis and crest factor have recently been applied for condition monitoring in rotating machinery [37][38]. These features are good indicators for detecting structural damage and engineering faults [39]. One of the statistical features employed in this work is the crest factor, using RMS and energy. The crest factor is defined as the ratio of the peak amplitude of the signal to its RMS, where the RMS is the second statistical moment used as a measure of the energy of a signal. It should be noted that the RMS is directly proportional to the amount of energy in the signal, meaning that signals with higher peak amplitude are attributed a higher crest factor.
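The two features just defined are simple to compute; a minimal sketch is given below. The per-hit segmentation of the AE stream is assumed to be done upstream.

```python
import numpy as np

def crest_factor(signal: np.ndarray) -> float:
    """Crest factor: peak amplitude of the signal over its RMS."""
    rms = np.sqrt(np.mean(signal ** 2))
    return np.max(np.abs(signal)) / rms

def cumulative_crest_factor(hit_signals) -> np.ndarray:
    """Running sum of per-hit crest factors over a sequence of AE hits."""
    return np.cumsum([crest_factor(s) for s in hit_signals])
```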
In this work, the AE technique, using the sentry function and the crest factor, is employed to investigate the residual stress behavior of two friction stir welded specimens with dissimilar welding conditions.
EXPERIMENTAL DETAILS
The process parameters of FSW include the feed rate, rotational speed, pin diameter and shoulder diameter. Samples S1 and S2 were manufactured by the FSW process, joining two 150×100×8 mm plates made from 5086 aluminum. The selected welding parameters for the samples are listed in Table 1.
The FSW tool is made from H13 tool steel, while the FSW process is accomplished using a vertical milling machine.
During the FSW process, a steel fixture is used to clamp the specimens at the sides and a back plate is employed to support the bottom of the specimens (as shown in Figure 1). The measurement devices, shown in Figure 2, include the ultrasonic software, an ultrasonic box with a sampling rate of 100 MHz and equipment for time-of-flight measurement. The automatic table is driven by 3 stepper motors, enabling it to move the transducers along 3 axes with a resolution of 1 μm. Laser cutting and CNC milling were used to create the wedge from poly(methyl methacrylate) (PMMA), sold under the trademark Plexiglas, which keeps the distance between the probes constant. A simple setup based on three transducers is employed for the residual stress measurement, and a frequency of 5 MHz is used for the probes in order to generate waves at the surface of the material. Moreover, plastic screws exert a constant pressure on the transducers, and a couplant layer between the transducer and the wedge helps make the measurement of the longitudinal residual stress more accurate.
Measurement devices for ultrasonic stress measurement
The standard uniaxial tensile test is used to determine the acoustoelastic coefficient based on the rearranged form of (1):

$$L = \frac{E\,(t - t_0)}{t_0\,\Delta\sigma}$$

Locating the TOF measurement devices on the tensile test specimens (cut from the workpiece in accordance with the sheet-type sample of the ASTM: E8 standard) makes it possible to measure the time-of-flight t of the L CR wave. During the tensile test, the time-of-flight t is measured, while a stress relief treatment is conducted beforehand to determine the flight time of the stress-free sample (t 0 ). Using the standard tensile test machine, the tensile stress is increased gradually and the time-of-flight t of the wave is measured at each step. In order to measure E and L in the weld zone and in the parent material (PM) separately, each of these zones needs to be subjected to the tensile stress test. Consequently, the acoustoelastic coefficient L is calculated for the two welded specimens in the weld metal and in the PM. The results of the acoustoelastic coefficient measurement are listed in Table 2. Since the use of L CR waves for stress measurement is still regarded as a method under development, another stress measurement technique such as the hole-drilling method (according to ASTM: E837) was adopted to verify the ultrasonic stress measurement results. In this paper, 3 points are considered for this semi-destructive technique. The first strain gauge is located on the weld centerline and the two others are attached at distances of 15 and 30 mm from the weld centerline (as shown in Figure 3). The hole-drilling method is based on measuring the strains released by the incremental drilling of a hole of 1.78 mm diameter and 2 mm depth using a strain gauge rosette; the biaxial stress field is then computed employing the equations established by ASTM: E837 [40][41].
AE devices
AE events were acquired using acoustic emission software (AEWin) along with a Physical Acoustics Corporation (PAC) PCI-2 data acquisition system with a maximum sampling rate of 40 MHz. A PICO sensor, a broadband, resonant-type, single-crystal piezoelectric transducer from PAC, was employed as the AE sensor. The sensor has a resonance frequency of 453.12 kHz with an optimum operating range of 100-750 kHz. In order to provide good acoustic coupling between the sensor and the specimen, grease was applied to the surface of the sensor. The signal was detected by the sensor and enhanced by a 2/4/6-AST preamplifier. The gain selector of the preamplifier was set to 40 dB, while the test sampling rate was 1 MHz with 16 bits of resolution between 10 and 100 dB. Prior to starting the experiments, the data acquisition system was calibrated using the pencil lead break procedure. This procedure generates waves at the specimen surface that are used for device calibration. At the same time, the velocity and attenuation of the AE waves were measured several times at different locations between the sensors. Afterwards, AE signals were captured during the tensile test while signal descriptors such as amplitude, rise time, duration, counts, and energy were computed by the AE software (AEWin). Figure 4 shows the AE equipment needed for recording AE events during the tensile test.
RESULTS AND DISCUSSION
Figure 5 shows the longitudinal residual stress measured by the ultrasonic method in samples S1 and S2. It can be clearly seen that sample S1 has higher longitudinal residual stress than sample S2. In fact, the maximum residual stress value is 64.7 MPa in sample S1, while this value is 45.3 MPa in sample S2. In addition, the residual stress peak occurs at the advancing side (AS) of the welding zone for both samples, due to the non-symmetric temperature distribution and to the greater material flow occurring at the AS compared to the retreating side (RS), which lead to higher residual stress at the AS. In order to validate the results obtained by ultrasonic stress measurement, the hole-drilling method is exploited. Figure 6 shows the longitudinal residual stress measured by the L CR waves and the hole-drilling method on sample S1. It can be observed that the differences between the ultrasonic and hole-drilling results are less than 3.81 MPa (9.07%), 2 MPa (4.03%) and 3.74 MPa (13.85%) at distances of 0, 15 and 30 mm from the weld centerline, respectively, which indicates the agreement between the results. In order to introduce an online method for the evaluation of residual stress, the specimens were analyzed using AE features. In fact, the effect of residual stress intensity on the AE behavior of the welded components is investigated when they are subjected to a tensile load. Figures 7 and 8 show the crest factor distribution of the AE signals captured by the AE transducers during the tests for samples S1 and S2. It can be observed from Figures 7 and 8 that the crest factor distribution can be divided into 4 regions, as follows: Region 1 (elastic deformation): This region is associated with the elastic portion of the load-time diagram, where the exerted stress is lower than the yield stress. At this step, when the applied stress is removed, the material deforms elastically and returns to its original shape. As a consequence, only some weak AE signals exist in this region. The AE behavior indicates that there are no substantially active microstructural defects in the specimens as a result of the applied load.
Region 2 (uniform plastic deformation): When the yield point is passed, some fraction of the deformation becomes non-reversible. The physical mechanisms that lead to plastic deformation in the specimens are commonly a consequence of dislocations. The tensile loading applied to the sample first causes it to behave elastically. However, once the load exceeds a threshold (the yield strength), the deformation rises more rapidly in the uniform plastic deformation region than in the elastic region. During tensile deformation, the material shows strain hardening, in which the amount of hardening varies with the extent of deformation. At the start of this region, the strain hardening effect is negligible. This fact can be identified by the uniform appearance of the AE signals, with a lower crest factor in comparison with the AE signals in region 3.
Region 3 (strain hardening): In this region, the plastic deformation causes the specimens to strengthen. This strengthening occurs mostly due to dislocation movement and dislocation generation within the crystal structure of the sample. In region 3, as loading proceeds, a significant increase of the crest factor value is noticed for the specimens.
Region 4 (non-uniform plastic deformation, necking): The necking event arises in this region, when comparatively large amounts of strain are localized in a small zone of the specimens. It should be noted that, before deformation, the specimens have heterogeneities such as local variations or flaws in dimensions or composition that result in local fluctuations in stresses and strains. Furthermore, a considerable increase in the crest factor is discerned in region 4, and these peaks in the crest factor could signal the existence of different events in this region.
The maximum load values of the specimens can be observed in Figures 7 and 8. It is noticeable that the maximum load in sample S2 is higher than the peak load in sample S1, which could be explained by the presence of different levels of residual stress in the specimens.
According to the results of the ultrasonic stress measurement, sample S2 has lower residual stress than sample S1 due to the dissimilar welding parameters. As can be seen from Figures 7 and 8, the distribution of the crest factor differs between the mentioned regions of the specimens. In the case of sample S2, the value of the crest factor increases uniformly during region 3, but there is a burst-type appearance of the crest factor for sample S1 in this region. In region 4, there is a uniform increasing trend in the crest factor value for sample S2, whereas in sample S1 this trend is not uniform. Other interesting differences between the AE behaviors of the specimens can be noted by plotting the cumulative crest factor.
As can be clearly seen from Figures 9 and 10, there are two main slope changes in the cumulative crest factor plots, but these slope changes vary significantly between the specimens. In fact, for sample S2, the first slope gradually increases and this trend does not change until the final failure. In addition, the slope of the second line is greater than the slope of the first line. The slope variation in sample S1 shows the completely opposite trend, meaning that there is an abrupt increase in the slope of the first line while there is a decreasing trend in the slope of the second line. Therefore, the cumulative crest factor behavior for the specimens with high and low residual stress varies significantly: this could therefore be employed as an appropriate indicator of the residual stress intensity. Another method for investigating the residual stress using AE is plotting the sentry function trends of the specimens. As previously discussed, the sentry function combines the strain energy, i.e., the mechanical energy determined using the load-displacement diagram, and the cumulative AE energy, which is the summation of the AE event energies. The sentry function trends obtained for samples S1 and S2 are shown in Figures 11 and 12, respectively.
It can be seen from Figures 11 and 12 that the sentry function diagram shows several drops, but one of them is dominant compared to the others. In fact, the major drop of the sentry function in sample S2 is lower than in sample S1. In addition, there is a decreasing trend in the sentry function diagram of sample S1 after the major drop, while an increasing trend is observed in sample S2 after the sudden drop. This means that the sentry function trend is a practical technique for predicting the severity of the residual stress, as its results are in good agreement with the results obtained by the ultrasonic method. This is due to the fact that the variation in the amount of residual stress leads to the release of strain energy, different defects and also dislocations in the microstructure of the specimens, which generate AE events with energy content. Therefore, different AE behavior is possible depending upon the magnitude of the welding residual stress. Additionally, it should be noted that the sentry function trend is in good agreement with the trends of the crest factor and cumulative crest factor diagrams. Since it is possible to employ the crest factor and sentry function trends online during loading, this would be very helpful for monitoring structures under their design loading in order to prevent undesired failures. Hence, all the mentioned AE methods can be good tools for evaluating different levels of residual stress in friction stir welding.
CONCLUSIONS
The main goal of this study was to show how acoustic emission signals caused by different levels of residual stress are expected to vary during tensile tests of friction stir welded aluminum plates. To reach this goal, the residual stresses previously measured by the L CR ultrasonic method were validated with the hole-drilling technique. The specimens were then exposed to the tensile stress test while the AE signals were recorded simultaneously and then analyzed. Based on the obtained results, it can be concluded that:
1- The crest factor of the AE signals can be employed to identify the presence of different levels of residual stress in the specimens during loading. Since it is possible to follow the crest factor trends online, it would obviously be helpful to use their variations as an alarm to prevent undesired failures caused by residual stress.
2- Ultrasonic stress measurement can be an effective non-destructive testing method for the measurement of residual stress in friction stir welding, but AE has some benefits in comparison with this method, such as the ability to evaluate structures during loading. The comparison between the ultrasonic and AE results showed an acceptable agreement; hence, the different levels of residual stress can be determined from the measured AE signals.
Finally, it can be concluded that AE investigation could be used as an online non-destructive method for in-situ monitoring of the residual stress intensity in real structures produced by friction stir welding.
Figure 3. Hole-drilling setup for friction stir welding of aluminum plates.
Figure 6. Verification of the ultrasonic stress measurement by the hole-drilling method on sample S1.
Figure 7. Distribution of crest factor for sample S1.
Figure 8. Distribution of crest factor for sample S2.
"Engineering",
"Materials Science",
"Physics"
] |
Recognition of Distress Calls in Distant Speech Setting: a Preliminary Experiment in a Smart Home
This paper presents a system to recognize distress speech in the homes of seniors to provide reassurance and assistance. The system aims at being integrated into a larger system for Ambient Assisted Living (AAL) using only one microphone with a fixed position in a non-intimate room. The paper presents the details of the automatic speech recognition system, which must work under distant speech conditions and with expressive speech. Moreover, privacy is ensured by running the decoding on-site and not on a remote server. Furthermore, the system was biased to recognize only a set of sentences defined after a user study. The system was evaluated in a smart space reproducing a typical living room where 17 participants played scenarios including falls, during which they uttered distress calls. The results showed a promising error rate of 29% while emphasizing the challenges of the task.
Introduction
Life expectancy has increased in all countries of the European Union in the last decade. Therefore, the share of people who are at least 75 years old has strongly increased, and solutions are needed to satisfy the wish of elderly people to live as long as possible in their own homes. Ageing can cause functional limitations that, if not compensated by technical assistance or environmental management, lead to activity restriction [1][2]. Smart homes are a promising way to help elderly people live independently at their own home; they are housings equipped with sensors and actuators [3][4][1][5]. Another aspect is the increasing risk of distress, among which falling is one of the main fears and lethal risks, along with hip blockage or fainting. The most common solution is the use of kinematic sensors worn by the person [6], but this imposes some constraints on everyday life, and worn sensors are not always a good solution because some persons can forget or refuse to wear them. Nowadays, one of the best suited interfaces is the voice-user interface (VUI), whose technology has reached maturity and which avoids the use of worn sensors thanks to microphones set up in the home, allowing hands-free and distant interaction [7]. It has been demonstrated that VUIs are useful for systems integrating speech commands [8].
The use of speech technologies in the home environment requires addressing particular challenges due to this specific environment [9]. A rising number of smart home projects consider speech processing in their design. They are related to wheelchair command [10], vocal commands for people with dysarthria [11][8], companion robots [12], and vocal control of appliances and devices [13]. Due to experimental constraints, few systems were validated with real users in realistic conditions, as in the SWEET-HOME project [14], during which a dedicated voice-based home automation system was able to drive a smart home through vocal commands with typical users [15] and with elderly and visually impaired people [16].
In this paper we present an approach to provide assistance in a smart home for seniors in case of a distress situation in which they cannot move but can talk. The challenge comes from expressive speech, which differs from standard speech: is it possible to use state-of-the-art ASR techniques to recognize expressive speech? In our approach, we address the problem by using the microphone of a home automation and social system placed in the living room, with ASR decoding and voice call matching. In this way, the user must be able to command the environment without having to wear a specific device for fall detection or for physical interaction (e.g., a remote control too far from the user when needed). Though microphones in a home are a real breach of privacy, in contrast to current smartphones, we address the problem using an in-home ASR engine rather than a cloud-based one (private conversations do not go outside the home). Moreover, the limited vocabulary ensures that only speech relevant to the command of the home is correctly decoded. Finally, another strength of the approach is to have been evaluated in realistic conditions. The paper is organised as follows. Section 2 presents the method for speech acquisition and recognition in the home. Section 3 presents the experimentation and the results, which are discussed in Section 5.
Method
The distress call recognition is to be performed in the context of a smart home which is equipped with e-lio 1 , a dedicated system for connecting elderly people with their relatives, as shown in Figure 1. e-lio is equipped with one microphone for video conferencing. The typical setting and the distress situations were determined after a sociological study conducted by the GRePS laboratory [17], in which a representative set of seniors was included.
From this sociological study, it appears that this equipment is set on a table in the living room in front of the sofa. In this way, an alert could be given if the person falls because of the carpet or cannot stand up from the sofa. This paper presents only the audio part of the study; for more details about the global audio and video system, the reader is referred to [18].
Speech analysis system
The audio processing was performed by the software CIRDOX [19] whose architecture is shown in Figure 2. The microphone stream is continuously acquired and sound events are detected on the fly by using a wavelet decomposition and an adaptive thresholding strategy [20]. Sound events are then classified as noise or speech and, in the latter case, sent to an ASR system. The result of the ASR is then sent to the last stage which is in charge of recognizing distress calls.
In this paper, we focus on the ASR system and present different strategies to improve the recognition rate of the calls. The remainder of this section presents the methods employed at the acoustic and decoding levels.
Acoustic modeling
The Kaldi speech recognition toolkit [21] was chosen as the ASR system. Kaldi is an open-source state-of-the-art ASR system with a large number of tools and strong support from the community. In the experiments, the acoustic models were context-dependent classical three-state left-right HMMs. Acoustic features were based on Mel-frequency cepstral coefficients: 13 MFCC coefficients were first extracted and then expanded with delta and double delta features and energy (40 features). Acoustic models were composed of 11,000 context-dependent states and 150,000 Gaussians. The state tying is performed using a decision tree based on a tree-clustering of the phones. In addition, off-line fMLLR linear transformation acoustic adaptation was performed.
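Kaldi's actual front-end differs in detail (its own MFCC implementation, CMVN, etc.), so the sketch below uses librosa purely as an illustration of assembling a comparable 40-dimensional feature vector (13 MFCCs + deltas + double deltas + energy); all names are assumptions.

```python
import librosa
import numpy as np

def extract_features(wav_path: str, sr: int = 16000) -> np.ndarray:
    """Approximate 40-dim front-end: 13 MFCCs + delta + delta-delta + energy."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)      # (13, T)
    d1 = librosa.feature.delta(mfcc)                        # (13, T)
    d2 = librosa.feature.delta(mfcc, order=2)               # (13, T)
    energy = librosa.feature.rms(y=y)                       # (1, T')
    # Trim to a common number of frames before stacking.
    T = min(mfcc.shape[1], energy.shape[1])
    feats = np.vstack([mfcc[:, :T], d1[:, :T], d2[:, :T], energy[:, :T]])
    return feats.T  # (T, 40)
```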
The acoustic models were trained on 500 hours of transcribed French speech composed of the ESTER 1&2 (broadcast news and conversational speech recorded on the radio) and REPERE (TV news and talk-shows) challenges as well as from 7 hours of transcribed French speech of the SH corpus (SWEET-HOME) [22] which consists of records of 60 speakers interacting in the smart home and from 28 minutes of the Voix-détresse corpus [23] which is made of records of speakers eliciting a distress emotion.
Subspace GMM Acoustic Modelling
The GMM and Subspace GMM (SGMM) both model the emission probability of each HMM state with a Gaussian mixture model, but in the SGMM approach, the Gaussian means and mixture component weights are generated from phonetic and speaker subspaces along with a set of weight projections.
The SGMM model [24] is described by the following equations:

$$p(x \mid j) = \sum_{m=1}^{M_j} c_{jm} \sum_{i=1}^{I} w_{jmi}\, \mathcal{N}(x;\, \mu_{jmi}, \Sigma_i)$$
$$\mu_{jmi} = M_i v_{jm}$$
$$w_{jmi} = \frac{\exp(w_i^T v_{jm})}{\sum_{i'=1}^{I} \exp(w_{i'}^T v_{jm})}$$

where x denotes the feature vector, j ∈ {1..J} is the HMM state, i is the Gaussian index, m is the substate and c_jm is the substate weight. Each state j is associated with a vector v_jm ∈ R^S (S is the phonetic subspace dimension) from which the means μ_jmi and mixture weights w_jmi are derived, and it has a shared number of Gaussians, I. The phonetic subspace M_i, weight projections w_i^T and covariance matrices Σ_i, i.e., the globally shared parameters Φ_i = {M_i, w_i^T, Σ_i}, are common across all states. These parameters can be shared and estimated over multiple recording conditions.
A generic mixture of I Gaussians, denoted as the Universal Background Model (UBM), models all the speech training data for the initialization of the SGMM.
Our experiments aim at obtaining SGMM shared parameters using SWEET-HOME data (7 h), Voix-détresse (28 min) and clean data (ESTER+REPERE, 500 h). Regarding the GMM part, the three training datasets are simply merged into a single one. The authors of [24] showed that the model is also effective with large amounts of training data. Therefore, three UBMs were trained on the SWEET-HOME data, Voix-détresse and the clean data, respectively. These three UBMs contained 1K Gaussians each and were merged into a single one, mixed down to 1K Gaussians (the closest Gaussian pairs were merged [25]). The aim is to bias the acoustic model specifically towards the smart home and expressive speech conditions.
Recognition of distress calls
The recognition of distress calls consists in computing the phonetic distance of a hypothesis to a list of predefined distress calls. Each ASR hypothesis H_i is phonetized and every voice command T_j is aligned to H_i using the Levenshtein distance. The deletion, insertion and substitution costs were computed empirically, while the cumulative distance γ(i, j) between the symbol sequences of T and H is given by Equation 1:

$$\gamma(i, j) = d(T_i, H_j) + \min\{\gamma(i-1, j-1),\ \gamma(i-1, j),\ \gamma(i, j-1)\} \qquad (1)$$

where d(T_i, H_j) is the local cost of aligning symbol T_i with symbol H_j.
The decision to select or not a detected sentence is then made according to a detection threshold on the aligned symbol (phoneme) score of each identified call. This approach accommodates some recognition errors such as word endings or slight variations. Moreover, in many cases a misdecoded word is phonetically close to the correct one (due to similar pronunciation). From this, the CER (Call Error Rate, i.e., distress call error rate) is defined as:
$$\mathrm{CER} = \frac{\text{Number of missed calls}}{\text{Number of calls}} \qquad (2)$$

This measure was chosen because of the content of the Cirdo-set corpus used in this study. Indeed, this corpus is made of sentences and interjections. All sentences are calls for help, without any other kind of sentences like home automation orders or colloquial sentences, and therefore it is not possible to determine a false alarm rate in this framework.
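A toy sketch of the matching step follows: compute an edit distance between the phonetized hypothesis and each predefined call, and accept when the length-normalized score passes a threshold. The phoneme strings, unit costs, and threshold value are illustrative assumptions, not the system's actual configuration.

```python
def levenshtein(a: str, b: str, ins=1.0, dele=1.0, sub=1.0) -> float:
    """Cumulative edit distance gamma(len(a), len(b)) between symbol strings."""
    m, n = len(a), len(b)
    g = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        g[i][0] = i * dele
    for j in range(1, n + 1):
        g[0][j] = j * ins
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0.0 if a[i - 1] == b[j - 1] else sub
            g[i][j] = min(g[i - 1][j - 1] + cost,   # substitution / match
                          g[i - 1][j] + dele,       # deletion
                          g[i][j - 1] + ins)        # insertion
    return g[m][n]

# Hypothetical phonetized call forms (one character per phoneme).
CALLS = ["eljoapEldysekur", "ZpømepaRleve"]

def detect_call(hypothesis_phonemes: str, threshold: float = 0.25) -> bool:
    """Accept if the best per-symbol aligned score is below the threshold."""
    best = min(levenshtein(hypothesis_phonemes, c) / max(len(c), 1)
               for c in CALLS)
    return best <= threshold
```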
Live Experiment
An experiment was run in the experimental platform of the LIG laboratory in a room whose setting corresponds to Figure 1, equipped with a sofa, a carpet, 2 chairs, a table and e-lio. A Sennheiser SKM 300 G2 ME2 omnidirectional microphone was set on the cupboard. In these conditions, the microphone was at a distance of more than 2 meters from the speaker (distant speech conditions). The audio analysis system consisted of the CIRDOX software presented in Section 2, which was continuously recording and analysing the audio streams to detect the calls.
Scenarios and experimental protocol
The scenarios were elaborated after field studies made by the GRePS laboratory [17]. These studies made it possible to specify the fall context, the movements during the fall as well as the person's reaction once on the floor. Phrases uttered during and after the fall were also identified ("Blast! What's happening to me? Oh shit, shit!"). The protocol was as follows [18]. Each participant was introduced to the context of the research and was invited to sign a consent form. The participants played four fall scenarios, one blocked-hip scenario and two other scenarios called "true-false", added to challenge the automatic detection of falls by the video analysis system. If a participant's age was under 60, he or she wore a simulator which hampered mobility and reduced vision and hearing to simulate aged physical conditions. Figure 3 shows a young participant wearing the simulator at the end of a fall scenario. The average duration of an experiment was 2 h 30 min per person. The experiment was very tiring for the participants and it was necessary to include rehearsals before starting the recordings so that each participant felt comfortable and was able to fall securely.
Voice commands and distress calls
The sentences of the AD80 corpus [19] served as the basis for the language model used by our system. This corpus was recorded by 43 elderly people and 52 non-aged people in our laboratory and in a nursing home to study the automatic recognition of speech uttered by aged speakers. It is made of 81 casual sentences, 31 vocal commands for home automation and 58 distress sentences. An excerpt of these sentences in French is given in Table 2; the distress sentences identified in the field study reported in Section 3.1.1 were included in the corresponding part of AD80. Utterances of some of these distress sentences were integrated into the scenarios, with the exception of the two "true-false" scenarios.
Acquired data: Cirdo-set
In this paper we focus on the detection of distress calls; therefore we do not consider the audio events detected and analysed on the fly, but only the full recordings of each scenario. These data were transcribed manually using Transcriber [26], and the speech segments were then extracted for analysis.
The targeted participants were elderly people who were still able to play the fall scenarios securely. However, recruiting such a population proved very difficult, and part of the participants were people under 60 years old, who were invited to wear a special suit [18] which hampered their mobility and reduced their vision but had no effect on speech production. Overall, 17 participants were recruited (9 men and 8 women). Among them, 13 participants were under 60 and wore the simulator. The aged participants were between 61 and 83 years old.
When they played the scenarios, some participants produced sighs, grunts, coughs, cries, groans, pantings or throat clearings. These sounds were not considered during the annotation process. In the same way, speech mixed with the sound produced by the fall was ignored. In the end, each speaker uttered between 10 and 65 short sentences or interjections ("ah", "oh", "aïe", etc.), as shown in Table 1.
Sentences were often close to those identified during the field studies ("je peux pas me relever - I can't get up", "e-lio appelle du secours - e-lio call for help", etc.), while some were different ("oh bein on est bien là tiens - oh I am in a sticky situation"). In practice, participants cut some sentences (e.g., inserted a delay between "e-lio" and "appelle ma fille - call my daughter") and uttered some spontaneous sentences, interjections or non-verbal sounds (e.g., groans).
Off line experiments
The methods presented in Section 2 were run on the Cirdo-set corpus presented in Section 3.1.3.
The SGMM model presented in Section 2.2 was used as the acoustic model. The generic language model (LM) was estimated from French newswire collected in the Gigaword corpus. It was a 1-gram model with 13,304 words. Moreover, to reduce the linguistic variability, a 3-gram domain language model (the specialized language model) was learnt from the sentences used during the corpus collection described in Section 3.1.1, with 99 1-grams, 225 2-grams and 273 3-grams. Finally, the two language models were combined.
The interest of such a combination is to bias the recognition towards the domain LM, while, when the speaker deviates from the domain, the general LM makes it possible to avoid the recognition of sentences leading to "false-positive" detections. Results on manually annotated data are given in Table 3. The most important performance measures are the Word Error Rate (WER) of the overall decoded speech and of the specific distress calls, as well as the Call Error Rate (CER, cf. Equation 2). Considering distress calls only, the average WER is 34.0%, whereas it is 39.3% when all interjections and sentences are taken into account.
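The excerpt does not state how the two LMs were combined; linear interpolation is the usual choice, sketched below with an illustrative weight (not taken from the paper) that biases recognition towards the domain LM.

```python
import math

def interpolated_logprob(word, history, p_dom, p_gen, lam=0.9):
    """Linear interpolation of a domain n-gram LM with a generic 1-gram LM.
    `p_dom` and `p_gen` are callables returning probabilities; `lam` close
    to 1 biases recognition towards the domain LM. The value 0.9 is an
    illustrative assumption."""
    p = lam * p_dom(word, history) + (1.0 - lam) * p_gen(word)
    return math.log(p) if p > 0.0 else float("-inf")
```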
Unfortunately, and as mentioned above, the corpus used does not allow a false alarm rate to be determined. Previous studies based on the AD80 corpus showed recall, precision and F-measure equal to 88.4%, 86.9% and 87.2% [19]. Nevertheless, that corpus was recorded in very different conditions (text read in a studio), in contrast to those of Cirdo-set.
Discussion
These results are quite different from those obtained with the AD80 corpus (with aged speakers and speaker adaptation), for which the WER was 14.5% [19]. There are important differences between the recording conditions of AD80 and of the Cirdo-set corpus used in our study that can explain this performance gap:
• AD80 is made of readings by speakers sitting in a comfortable position in front of a PC and the microphone;
• AD80 was recorded in near-field conditions, in contrast to the distant setting of Cirdo-set;
• Cirdo-set was recorded by participants who fell on the floor or were blocked on the sofa. They were encouraged to speak the way they would if they were really in these situations. We thus obtained expressive speech, but there is no evidence that the pronunciation would be the same as in the real conditions of a fall or a blocked hip.
Regarding the CER, its global value of 26.8% shows that 73.2% of the calls were correctly recognized; furthermore, with the exception of one speaker (CER = 71.4%), the CER is always below 50%, so more than 50% of the calls were recognized. For 6 speakers, the CER was below 20%. This suggests that a distress call could be detected if the speaker is able to repeat the call two or three times. However, if the system does not identify the first distress call because the person's voice is altered by stress, that person is likely to feel more and more stress, and as a consequence subsequent calls would be more difficult to identify. Likewise, our corpus was recorded in realistic but not real conditions, and frail elderly people may not be adequately simulated by healthy adults. Even a relatively small number of missed distress calls could render the system unacceptable to potential users, and therefore further efforts in this regard are needed.
Conclusion and perspectives
This study addressed automatic speech recognition for smart-home applications, that is, in distant-speech conditions and especially in realistic conditions very different from those of classical corpus recording, where the speaker reads a text.
Indeed, in this paper we presented the Cirdo-set corpus, made of distress calls recorded in distant-speech and realistic conditions simulating a fall or a blocked hip. The WER obtained at the output of the dedicated ASR was 36.3% for the distress calls. Thanks to a filtering of the ASR hypotheses at the phonetic level, more than 70% of the calls were detected.
These results, obtained in realistic conditions, give a fairly accurate idea of the performance that can be achieved with state-of-the-art ASR systems for end-user and specific applications. They were obtained in the particular case of the recognition of distress calls, but they can be extended to other applications in which expressive speech must be considered because it is inherently present.
As stated above, the results obtained are not sufficient to allow use of the system in real conditions, and two research directions can be considered. Firstly, speech recognition performance may be improved thanks to acoustic models adapted to expressive speech; this would require recording corpora in real conditions, which is a very difficult task. Secondly, it may be possible to recognize the repetition, at regular intervals, of speech events that are phonetically similar; this last method does not require accurate recognition of the speech. Our future studies will address this problem. | 4,526.2 | 2015-09-11T00:00:00.000 | [
"Computer Science"
] |
EFFECT OF DATA QUALITY ON WATER BODY SEGMENTATION WITH DEEPLABV3+ ALGORITHM
INTRODUCTION
Image segmentation, particularly semantic image segmentation, is a crucial task in computer vision. This process involves labelling each pixel in an image with a class, thereby providing a detailed understanding of the image at a granular level. Semantic image segmentation has broad applications ranging from autonomous driving to remote sensing (Subramanian et al., 2022; Ramiya et al., 2016), medical image analysis, and robotics (Badrinarayanan et al., 2017; Long et al., 2015; Zhao et al., 2017).
One of the state-of-the-art architectures that have significantly impacted the field of semantic image segmentation is DeepLabV3+ (Chen et al., 2018; Sunandini et al., 2023). This architecture combines the strengths of atrous convolutions, spatial pyramid pooling modules, and an encoder-decoder structure, thereby enhancing boundary detection and dealing effectively with objects of different scales (Yang et al., 2018; George et al., 2023).
However, the performance of DeepLabV3+, like that of any other deep learning model, depends on the quality of the training data. The more accurate and diverse the training data, the better the model performs in segmenting images (George et al., 2023). This relationship is particularly noticeable in tasks such as segmenting water bodies from satellite images, which form the focus of this study (Harika et al., 2022).
Our objective is to delve deeper into this dependency on data quality. We investigate the influence of data quality when training the DeepLabV3+ algorithm to segment water bodies from satellite images. We scrutinize scenarios where there is a mismatch in the masks of the training data, or where the training data contain mixed water quality, such as clear and turbid water or water with floating vegetation. It is essential to understand whether excluding such cases improves or degrades the model's performance (Jean et al., 2019; Harika et al., 2022).
There is a general belief that cleaner and more consistent training data leads to better performance in machine learning models. However, when dealing with real-world scenarios, such as satellite image segmentation, data inconsistencies are often the norm rather than the exception. Hence, it is important to understand the behaviour of models like DeepLabV3+ under these circumstances, as they reflect realistic conditions that these models would encounter in actual deployments (Volpi and Tuia, 2016).
In the following sections, we present our findings, shedding light on the influence of data quality on the performance of DeepLabV3+ in the task of segmenting water bodies from satellite images.
Dataset
This study utilizes an open-source Kaggle dataset that contains satellite images and masks captured by the Sentinel-2A and Sentinel-2B satellites. The images and masks in the dataset have three bands: red (R), green (G), and blue (B), capturing both the spectral and spatial information essential for analysis. Figure 1 shows an RGB image (left) and the corresponding water/no-water mask, with the water class shown in white pixels (right).
Data Processing
The dataset consists of images and masks of varying sizes; to maintain uniformity and avoid bias when training the model, all images were resized to 256x256 pixels. This resizing ensures consistency in the data, allowing for efficient processing and analysis. The masks were recoded: background pixels were labelled as 0, representing areas without water, while water pixels were labelled as 1, indicating the presence of water.
After being resized and recoded, the dataset is split into training, testing, and validation sets in a ratio of 80:10:10. The training set, comprising 80% of the data, is used to train the model. The testing set, comprising 10% of the data, is used to evaluate how well the model performs on new, unseen images. The validation set, also comprising 10% of the data, is used to measure the model's performance after each iteration and helps in adjusting the hyperparameters based on its feedback. Splitting the dataset into separate sets allows for thorough model development and ensures that the model is tested on unbiased data to assess its ability to classify water effectively.
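A minimal preprocessing sketch is given below; the file paths and the binarisation threshold are illustrative assumptions, since the paper does not detail its implementation.

```python
import numpy as np
from PIL import Image

def load_pair(img_path, mask_path, size=(256, 256)):
    """Resize an RGB image and its mask to 256x256 and recode the mask to
    {0: no water, 1: water}. The >127 threshold for white pixels is an
    assumption about the mask encoding."""
    img = np.asarray(Image.open(img_path).convert("RGB").resize(size)) / 255.0
    mask = np.asarray(Image.open(mask_path).convert("L").resize(size, Image.NEAREST))
    return img, (mask > 127).astype(np.uint8)

def split_indices(n, seed=42):
    """Shuffled 80:10:10 train/test/validation split of n sample indices."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train, n_test = int(0.8 * n), int(0.1 * n)
    return idx[:n_train], idx[n_train:n_train + n_test], idx[n_train + n_test:]
```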
Data Refining
In the process of data refining, the objective was to enhance the performance of the model. To achieve this, three separate experiments were conducted, each involving the training of a new instance of DeepLabV3+. These experiments refine the data through techniques such as removing images with mismatched masks and images with floating vegetation.
Experiment 1:
Only the pre-processing described earlier (resizing to 256x256) was performed in this experiment; no refinement of the data was made.
Experiment 2:
From the original dataset, images with incorrect masks were removed to ensure that the network learns from correct masks only. With accurate training data, the model was expected to give better segmentation results. In Figure 2, it can be seen that there is land in the image, but the NDWI (Normalized Difference Water Index) labelled that land as water in the mask.
Figure 2. Sample RGB image (left) and the incorrect mask (right), where land was identified as water.
Experiment 3:
For the final experiment, images containing mixed water quality, such as turbid water and floating vegetation, were removed. This also addressed class imbalance: the dataset originally contained fewer samples of turbid water and floating vegetation than of good-quality water. By removing these fewer samples, the dataset was refined to enhance the model's performance. Figures 3 and 4 show a turbid water image and a floating vegetation image with their respective masks.
Selection of DeepLabV3+ Architecture for Water Body Segmentation
DeepLabv3+ has distinguished itself as a potent tool for semantic image segmentation, a technique that assigns every pixel of an image to a specific class label, facilitating intricate analysis of images. The model's application ranges from augmenting the perception capability of autonomous vehicles to improving the precision of medical diagnoses. The choice of DeepLabv3+ for this study lies in its exceptional architecture that accommodates various scales and contexts within images and its proficiency in capturing details regardless of scale, a characteristic that aligns with the properties of our dataset.
The architecture of DeepLabv3+ incorporates an advanced Atrous Spatial Pyramid Pooling (ASPP) module and an encoder-decoder structure. This combination aids in maximizing the contextual information derived from the images while ensuring that fine details are not lost. The foundation of this architecture is the Xception model, a high-performing convolutional neural network used to extract preliminary features from input images.
The ASPP module, the core of the DeepLabv3+ architecture, utilizes filters at diverse scales concurrently. This technique enables the model to capture contextual information from different sized areas in the image, an indispensable feature for our dataset, given the varying sizes of water bodies. Once this multi-scale feature extraction is complete, these features become inputs for the decoder.
The decoder's role in the DeepLabv3+ architecture is to refine the segmentation results using the extracted features. It upsamples these feature maps to a higher resolution and combines them with earlier-stage feature maps from the network. This blend of high-level contextual information and detailed spatial information allows the decoder to generate precise and granular segmentation maps.
The output from DeepLabv3+ is a categorically labelled image, noted for its crisp object boundaries and comprehensive contextual understanding. The incorporation of state-of-the-art techniques such as depth-wise separable convolutions enhances its computational efficiency. As a result of this, and by leveraging the capabilities of the ASPP module and the encoder-decoder structure, DeepLabv3+ serves as an ideal choice for our study, providing a perfect blend of efficiency and accuracy for the task of water body segmentation.
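The paper does not name its implementation framework. One common way to instantiate this architecture is via the segmentation_models_pytorch library, sketched below; note that a ResNet encoder stands in for the Xception backbone described above, since the exact pretrained encoder is an assumption.

```python
import torch
import segmentation_models_pytorch as smp

# Binary water / no-water segmentation on 256x256 RGB inputs.
model = smp.DeepLabV3Plus(
    encoder_name="resnet34",      # stand-in encoder; the paper describes Xception
    encoder_weights="imagenet",
    in_channels=3,
    classes=1,
)

x = torch.randn(4, 3, 256, 256)   # a batch of resized images
logits = model(x)                 # -> (4, 1, 256, 256)
probs = torch.sigmoid(logits)     # per-pixel water probability
```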
Accuracy and Loss plots
Accuracy and loss plots provide valuable insights into the learning process of a model during training. As training progresses, the model adjusts its parameters to improve accuracy and reduce loss. With more epochs, accuracy improves as the model learns the data's patterns. Ideally, validation accuracy converges with training accuracy, indicating effective generalization. Loss decreases as the model's predictions align better with true values. Eventually, the reduction in loss flattens as the model captures most relevant patterns and further adjustments have diminishing returns.
Metrics
Relying on accuracy as the sole evaluation metric may lead to an incomplete assessment of model performance, particularly in scenarios involving imbalanced datasets. While accuracy considers both classes, precision, recall, and F1 score focus specifically on the target class.
Accuracy = (TP + TN) / (TP + TN + FP + FN) (1)

Precision quantifies the proportion of correctly predicted positive instances relative to the total predicted positives, whereas recall measures the proportion of true positive (TP) instances identified by the model.

Precision = TP / (TP + FP) (2)

Recall = TP / (TP + FN) (3)

F1 score harmonizes precision and recall into a single metric, providing a balanced assessment that accounts for both false positives (FP) and false negatives (FN).

F1-Score = (2 × Precision × Recall) / (Precision + Recall)
By incorporating precision, recall, and F1 score alongside accuracy, a more comprehensive and nuanced understanding of the model's performance within the target class can be attained, enabling more informed decision-making.
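These metrics can be computed directly from the predicted and reference binary masks, as in the sketch below.

```python
import numpy as np

def segmentation_metrics(pred, true):
    """Pixel-wise accuracy, precision, recall and F1 for binary masks;
    `pred` and `true` are arrays of 0/1 labels (water = 1)."""
    pred, true = pred.astype(bool).ravel(), true.astype(bool).ravel()
    tp = np.sum(pred & true)
    tn = np.sum(~pred & ~true)
    fp = np.sum(pred & ~true)
    fn = np.sum(~pred & true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": (tp + tn) / pred.size,
            "precision": precision, "recall": recall, "f1": f1}
```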
Experiment 1:
The accuracy plot obtained when DeepLabV3+ was trained with all images (n = 6422) shows a gradual increase in accuracy to around 90% (Figure 5). However, the loss values shown in Figure 6 remain relatively high, indicating that the model faced some challenges in reducing its errors. The metrics obtained for this experiment are summarized in Table 2.
Experiment 2:
The accuracy plot obtained when DeepLabV3+ was trained after removing RGB images with incorrect masks (n = 1581) shows an improvement in model accuracy (Figure 7). The accuracy steadily improved and reached approximately 95%. However, the loss values remain relatively high (Figure 8), indicating that the model continued to encounter challenges in minimizing its errors. In terms of evaluation metrics, only a marginal change is observed in the results.
Experiment 3:
When DeepLabV3+ was trained with images of clear water bodies and correct masks, the accuracy and loss plots demonstrate remarkable improvement. Despite using fewer images, the overall accuracy surpassed 95% (Figure 9). This demonstrated that the model's ability to generalize improved when there was less variation in the training data. Moreover, the validation accuracy and loss converge closely with their training counterparts, showcasing the model's robustness. This indicates enhanced learning capabilities compared to the previous experiments, evident in the gradual decrease in loss over time (Figure 10). The precision score increased from 0.4827 to 0.9792 when RGB images of turbid water and floating vegetation were removed. The enhanced performance of the model underscores the importance of dataset cleaning and its influence on training deep learning models for image segmentation tasks.
CONCLUSION
In this study, we investigated the influence of data refinement on model predictions. Our findings underscore the importance of data quality in improving model performance. Prior to refining the data, the model achieved a precision of 0.4692. However, after implementing data refinement techniques, including the removal of mismatched masks and mixed water quality images, notable improvements in precision were observed. The precision increased to 0.4827 after removing mismatched masks and further improved to an impressive 0.9792 after excluding mixed water quality images. These results highlight the significance of addressing data quality issues to enhance model accuracy and reliability. By prioritizing data refinement, researchers and practitioners can optimize model performance and minimize potential errors. Overall, this study underscores the crucial relationship between data quality and model predictions, emphasizing the need for meticulous data refinement to achieve more accurate and reliable results. | 2,647.4 | 2023-09-05T00:00:00.000 | [
"Environmental Science",
"Computer Science"
] |
Changes to household food shopping practices during the COVID-19 restrictions: Evidence from the East of England
Measures to control the spread of COVID-19 have changed the way we shop for food and interact with food environments. This qualitative study explored food shopping practices in the East of England, a large diverse region including coastal, urban and rural settings. In 2020/2021 we interviewed 38 people living in the region and 27 professionals and volunteers providing local support around dietary health. Participants reported disruption to supermarket shopping routines; moving to online shopping; and increased reliance on local stores. COVID-19 has impacted disproportionately upon lower-income households and neighbourhoods. The longer-term implications for dietary health inequalities must be investigated.
household consumption and spending habits in relation to food (Criteo Coronavirus Survey, 2020).
Supermarkets are the dominant format of grocery retailing in the UK (Wrigley et al., 2009; Degeratu et al., 2000; Miller, 1997). In many countries, including the UK, one of the only retail outlets open during lockdown was supermarkets (Martin-Neuninger and Ruby, 2020). Retail grocery sales have risen steeply since the first UK lockdown in March 2020 (McKevitt, 2021). Similarly, the number of new supermarkets opening across the UK doubled in 2020, with the biggest increase in the East of England (Makwana, 2021).
Little is yet known about the effects of the pandemic on household food shopping practices (Leone et al., 2020). Emerging research indicates that, after an initial period of 'stocking up' and buying more than usual, consumers have adopted new strategies, including shopping less frequently but buying more when they do go shopping, and buying more convenience foods (Faour-Klingbeil et al., 2021; Laguna et al., 2020). If practices like these become established in the longer term, there may be negative impacts on dietary health inequalities, including obesity.
As a result of the pandemic, the food environment is changing and the ways in which people can interact with it have been severely disrupted. Large sections of the population now have less income than before, and food budgets have been reduced as a result. Combined, these represent a significant shift in the structural drivers of dietary health inequalities. From a public health and interventionist perspective, this paper aims to investigate: how residents in the East of England responded to changes in their local food environments (as a consequence of the mitigation measures); how they shopped for food during the COVID-19 pandemic; and the implications this has for their longer-term food practices, health and wellbeing.
Method
The findings presented are drawn from a larger qualitative study which aimed to understand how COVID-19 affected local food systems, household food practices and efforts to mitigate dietary health inequalities in the East of England (Thompson et al., 2020). From May 2020 to March 2021, we conducted remote semi-structured interviews with: i) individuals living in the East of England; and ii) professionals and volunteers providing support in relation to food access and/or dietary health in the region. Semi-structured interviews are suitable for collecting data on complex and sometimes sensitive topics, like feeding the family and food shopping, because they allow the researcher to collect open-ended data, to explore participant thoughts, feelings and beliefs about a particular topic, and to delve deeply into practices and beliefs (Dejonckheere and Vaughn, 2019). A total of 65 participants were interviewed (38 residents and 27 professionals and volunteers).
See Supplementary File 1 for a completed COREQ (consolidated criteria for reporting qualitative research) checklist, in order to ensure comprehensive reporting of this qualitative study.
Study site and recruitment
All data were collected (remotely) in the East of England (see Fig. 1). The region covers Bedfordshire, Cambridgeshire, Essex, Hertfordshire, Norfolk and Suffolk. It is a diverse area covering coastal, urban and rural settings. There is also a mixture of both socio-economically deprived and affluent areas (see Fig. 2).
The region's prevalence of both obesity and hospital admissions involving a diagnosis of malnutrition is higher than the national average (NHS Digital, 2020). The East of England also contains substantial local clusters of populations at a higher risk of food insecurity (Smith et al., 2018; BSNA, 2018).
Resident and professional/volunteer participants were recruited via the National Institute of Health Research (NIHR) Applied Research Collaboration (ARC) East of England website and on social media sites.
The study was also shared amongst our academic and community networks, who passed the details to relevant individuals and community organisations, newsletters or their own social networks. Some hard copies of study information leaflets were also distributed to residents receiving food parcels through the local authority scheme in one county in the region. Potential participants resident in the East of England were directed to complete an online screening questionnaire to provide their contact details, postcode, working status, household composition and typical food shopping practices (for example, who does the food shopping for the household and whether or not they have food delivered). Where it was not possible for participants to complete the screening questionnaire themselves online it was completed by a member of the research team via the telephone with the participant, prior to the interview taking place.
Ethical issues and informed consent
Ethics approval for the study was granted by the [REMOVED FOR REVIEW]. Participants were informed about what would happen to their data and their right to withdraw from the study. Informed consent was obtained from all participants. Where possible, participants were sent electronic consent forms to return via email. When this was not possible, consent forms were completed over the phone with a member of the research team. In these cases (n = 9), consent was given verbally, audio-recorded, and the form was signed electronically by the researcher on behalf of the participant.
Sample
The professional/volunteer participants were sampled for diversity, to ensure participation from a range of organisations and services (Table 1). We took a purposive approach to the resident sample (Table 2), with a focus on: households with infants and/or school-aged children; families eligible for free school meals (FSM); low-income households or those on state benefits; those aged 70 years and over; households with people who were self-isolating or shielding due to a health condition; and households with key workers. Seventy-one residents and 36 professionals/volunteers were invited to take part or responded to our call for participants. Of these, 33 residents and 9 professionals/volunteers dropped out of the study before interview. Drop-out occurred due to changes in circumstances, unforeseen time constraints, a change of mind, or non-response for unknown reasons. Recruitment and data collection continued until similar themes started to emerge from new interviews (after 65 interviews: 38 with residents and 27 with professionals).
Data collection
Semi-structured interviews were conducted remotely via telephone or video call, lasting between 30 and 60 minutes. Participants were offered the option of either telephone or video call. We offered this choice in order to: (i) ensure that participants who did not have access to the necessary equipment or software required for video calls (e.g. laptop, Zoom) were still able to take part should they want to; and (ii) provide the increased opportunities to build rapport that virtual face-to-face interactions via video calls (as opposed to telephone calls) allow. Six of the seven authors were involved in data collection (CT, LH, AD, EM, SR and RF, all female academic researchers with PhDs and experience of using qualitative research methods).
Topic guides (see Supplementary File 2) were developed by the research team. We examined previous literature on food practices and used our own, collective, previous work on food shopping, food poverty, and food provisioning to further inform the guides. Our patient and public involvement (PPI) colleagues informed the design of the topic guides, fed back on drafts, and facilitated pilot interviews to further refine the questions (see Acknowledgements).
Resident interviews explored the perspectives and experiences of the pandemic and associated restrictions, general food practices, and changes to food routines as a result of COVID-19. Interviews with professionals/volunteers focused on understanding local provision for supporting dietary health before the pandemic, how these were impacted by the restrictions, and how local communities responded to the crisis.
Data analysis
Interviews were audio-recorded, transcribed verbatim and pseudonyms were assigned to each participant. Anonymised transcripts were uploaded to the qualitative data management software NVivo and subject to thematic analysis (Braun and Clarke, 2014). Open coding was used to identify and categorise practices and episodes related to food and food shopping. Selective coding was used to identify the values and motivations that linked codes. We used open and then selective coding in order to: (i) facilitate the emergence of new theoretical possibilities and concepts (open coding); and (ii) integrate and pull together the developing analysis (selective coding) (Braun and Clarke, 2014).
A coding frame was developed by the research team (all seven authors: CT, LH, AD, RF, EM, SR and WW) and refined to capture the main concepts. This was achieved by each of the authors coding an initial subset of the transcripts and then comparing codes (combining, separating, and adding as necessary) during a series of coding meetings. The resulting coding frame helped to establish themes, describing sets of consistent practices in relation to food shopping (Aronson, 1994). All of the transcripts were then coded according to the coding frame by three of the authors (CT, LH and RF). An additional Research Fellow (HW, see Acknowledgements), with prior experience of qualitative methods, also assisted with coding the interview transcripts.
Findings
The pandemic presented a range of challenges for food shopping. There was a general and widespread claim from participants, that they were buying more food and spending more money on food shopping than they had done before COVID-19. As a result, people said they were much more aware of the amount of food they were buying and eating and were able to reflect on it in detail. Eating at home all or most of the time meant increased labour in terms of food provisioning, especially food shopping, for most households. The notable exception to this was some of the older people who lived alone. For them, staying indoors and social isolation reduced their interest in, and the quantity and quality of, the food they ate. Those on very low incomes, especially those reliant on welfare benefits, found it difficult to manage their food budget because they were unable to 'shop around' for the best prices at multiple stores. The in-store environment also became increasingly difficult for those with caring responsibilities and/or with health and mobility issues. Overcrowding, queueing, and hostility from other shoppers all made food shopping more difficult and strenuous.
Three distinct behavioural responses to these challenges emerged from the analysis: (1) changing supermarket shopping routines; (2) moving to online shopping; and (3) increased reliance on smaller local stores. These are described in the sections below.
(1) Changing supermarket shopping routines

1 Whereby the participant stated they were shielding due to a health condition or other related reason (not due to age). Excludes participants shielding in other groups: low-income n = 2 and school-aged children n = 1.
Supermarket shopping continued to be the preferred or most dominant food shopping activity during the pandemic for many of the people we spoke to. However, shopping at the supermarket became more difficult, due to the restrictions, and was described as less safe, due to the risk of COVID-19 infection. To continue getting their food from the supermarket in a way acceptable to them, people needed to change their routines and practices. Broadly, participants reported trying to limit the frequency of shopping trips. They went to the supermarket less often but bought more and 'stocked up' when they were there. For some, with access to a car, this included shopping for and delivering food to relatives and neighbours. Increased levels of planning were needed to successfully carry out supermarket food shopping. A mother of school-aged children, Juliette, who was shielding at various periods throughout the pandemic, explained that she was shopping less often but carefully planning and buying a lot more when she did go shopping: "I would probably do a main shop in a supermarket on the Monday and then on a Thursday or Friday I'd pop to one of the cheapie shops like Home Bargains or wherever, sort of like on my lunch break at work and get a few extra bits … and just top up for the weekend … now I write a list and do one weekly shop … then I know exactly how much I'm spending. Whereas before when you are going into the shops two or three times a week you're not really keeping a track of what you are spending. So now I know exactly what I'm spending. I don't know if that's a good thing or not." As can be seen in Juliette's account, the restrictions also hampered her ability to 'shop around' and go to different shops to get the best value and her preferred items.
In addition to frequenting or avoiding certain stores, some people changed the times they went shopping, going very early in the morning or later at night to avoid other shoppers or to ensure that they arrived soon after the shelves were restocked. Impulse buying or 'popping out' to the shops was, at best, difficult and, at worst, impossible. Much more preparation was needed for food shopping. People planned meals well in advance and wrote shopping lists to make sure they got everything they needed without having to go out again. As one participant commented, they had become more 'regimented' in their approach. People also reported travelling further to larger, often out-of-town, stores because these stores stocked more produce and tended to be more spacious, making social distancing easier. Emma, a key-worker and mother of school-aged children, explained why she preferred larger stores: "So now instead of going to small shops, like where we live … the supermarkets are quite a bit smaller, we, instead we are travelling further to go to the larger shops, because you can get everything you need at any one place, the aisles are wider, these sorts of things. It just gives you more of a sense of security." Discount stores were sometimes perceived as less safe and less welcoming during the restrictions. Christine, who lives with mobility issues, explained why she favoured some stores and avoided others: "We also found that [Waitrose] … a very gentle atmosphere … being extremely sensible with their jobs, with shielding, I've just found it a much nicer place to shop … Mainly Sainsbury's for the same reason, there's more space in the aisles and the staff are less pushy towards an old lady with a walking stick who is struggling to do the shopping … And COVID has made shopping an unpleasant experience and I've been grateful to be at Waitrose where staff are still talking to people, because a lot of places people are … so frightened they've become angry and they will take it out on anyone around them for any reason, it's quite a volatile … And as you go lower down the financial chain and the food chain and the social chain it gets nastier and nastier. So there are places … I'm actively avoiding Aldi because there have been fights with the security guards outside about whether or not you go in with your children or whether or not somebody is well enough to go in or whether or not you should wait another five minutes in the rain."

3.1. The difficulties of changing supermarket shopping routines

Strategies such as travelling further, engaging in less frequent but more expensive shopping trips, and changing shopping times were not possible for everyone. Such practices typically required access to a car, or at least reliable public transport (which was sometimes not possible to access during the restrictions), mobility, sufficient funds, adequate food storage facilities at home, and a reasonably flexible schedule. For some people, going out shopping was difficult or impossible and went against advice to isolate and/or stay at home. There was a range of support provided to help with food shopping (both statutory and community) but this was not always easy to access or well-advertised.
Adam, a local authority employee, helped to organise community support during the pandemic and explained the particular difficulties faced by older people who needed help with food shopping: "[People ring up and tell me] that 'I'm over 70 and I shouldn't really go out and there's a lockdown and I'm too … frail to stand up in the queue for 40 minutes to get into the shop, then carry all my shopping because I can't get a taxi. All of those little things, well I usually get my pension to pay for my shop, but I can't go and collect my pension because the shop's shut and I can't queue outside'. So it's all of those little things that start to build up." The restrictions in place in supermarkets around limiting the number of people in-store at any one time and trying to maintain social distancing often meant that food shopping trips took a lot longer, involved lots of queuing and waiting outside, prevented people from going shopping together, and left people unable to sit down and rest or use the toilet in store (these facilities were suspended). Even getting to the shops to do the food shopping was difficult for those with caring responsibilities. Those being cared for were often in the 'shielding' category and were therefore not supposed to leave home. However, respite care to allow carers to go shopping was also typically suspended due to the mitigation measures. Taking the person they cared for with them was often not appropriate or even possible, nor could they be left at home alone, meaning carers faced multiple challenges. June, the manager of a carers' support group, outlined these challenges: "So people in their 50s that have dementia and … their carers, again, you know, shopping for them is huge because obviously they either have to have someone to come and sit with their, the person that they care for or take them with them and, you know, again Corona has had a huge impact on that because that's not easy to be taking someone out of their routine pattern because with dementia, the most, the more routine you can have, the better." The coping strategies of changing shopping times and locations described above were not always possible for those engaged in complex care regimes. For instance, Lucy lived in a rural area and cared for her daughter with multiple health needs. To do her food shopping, Lucy needed an additional carer to help out, even if this meant leaving her other daughter at home to cover her care. When this was not possible, she tried to go shopping very early in the morning while her daughter was still asleep. Although this was far from ideal for the family: "Because Nicola my daughter has got Down's syndrome, it's a two-man job, one of you can't look after her on her own, you know she can't be left … you can't go to the toilet and leave her downstairs. There's got to be two people so she can go to the loo or get a drink or whatever. Yeah, so we went quite early in the morning while she was still in bed to avoid them having to cope too much."

(2) Moving to online shopping

For those who could not or did not want to leave home, shopping online (mostly from supermarkets for home delivery) was one option for those who had the skills and resources to do so. It required access to a computer and the internet, IT and literacy skills, a bank or credit card with which to pay, and enough funds to meet the minimum purchase threshold for delivery (typically £40-£80). At one point, there were also restrictions on the quantity of items you could purchase.
Those who were shielding, caring and/or isolating could not leave their home, meaning some people started shopping online for the first time during the restrictions. As Adam, a local authority employee quoted above, explained "the online shopping world is so hard to navigate if you've never done it before, then to get a slot was very difficult." Those who were shielding and had been identified by the Government as clinically vulnerable as per the national Shielded Patient List (SPL) were entitled to priority delivery slots with supermarket chains, but these were often unavailable at the start of lockdown. However, there was considerable confusion expressed about these services. People were unsure whether or not they were eligible for a priority delivery slot and there was uncertainty about how they could access these 'special slots' or how the supermarket would know they were eligible. Even for those who regularly did online shopping before the pandemic, it became much more difficult once the restrictions were introduced and demand increased. Participants from a variety of backgrounds expressed frustration at not being able to secure a delivery slot and food deliveries arriving with missing products or unsuitable substitutions. Some participants told us that they sat at the computer for hours on end, sometimes all night, trying to secure a delivery slot. This was particularly difficult for those with food allergies, health conditions, larger families, or those shopping for more than one household.
As with changes in practices when physically going to the supermarket, online shopping was used to 'stock up', bulk buy and to obtain food on behalf of relatives and neighbours, with a similar practice of shopping less often but making bigger orders. Aimee, a mother of school-aged children in receipt of free school meals, explained how her household's eating and shopping habits changed during the mitigation measures. As Aimee outlines, 'stocking up' typically meant opting for longer-life food that would see them through to the next shop, rather than fresh produce: "So yeah, so we ate less takeaway, but … I think we ate more frozen and cupboard food because the Tesco deliveries were so hard to get and I was also shopping for my neighbour, she's over 60 and she's got lupus so it was really important that she didn't go out … And the Tesco shops were so difficult to get delivered that it was like let's stock up, let's have lots of freezer food and lots of cupboard food." For those with access to a car, 'click and collect' (i.e. ordering food online then collecting from the store) was popular because it meant they were able to access food more quickly. These collection slots were sometimes easier to obtain than delivery slots and service charges were often cheaper or free. It also meant participants could combine their click and collect trip with other essential errands in the same locality.
Click and collect shopping during the restrictions could mean having to wait around at the store, sometimes for hours outdoors, until the order was ready to collect. Demand for these services was high and adherence to social distancing measures slowed the service down. In order to reserve a timely collection slot, it was sometimes necessary to select a store relatively far away if there were none available closer to home. As a result, having access to a car was typically necessary to make good use of this service. Those without the resources and skills to order online and/or without access to a car could find themselves at a disadvantage, which was often a problem for older people. Helen, who was retired and advised to stay at home (her age classified her as more vulnerable to COVID-19), was unable to drive or do online shopping: "And this is why I find it difficult losing my [driving] licence, I can't really get to any big supermarkets or the corner store … No. Never ordered online. Don't really know how to work buying online, never bought anything online. I've always bought my own."

(3) Increased reliance on smaller local stores

The final shift in food shopping practices that emerged from the data was that of 'going local'. Whilst changing supermarket shopping habits and moving online allowed people to continue to access produce and services during the restrictions, it involved a lot more time, effort, and expense than households would normally expend on food shopping. An alternative strategy was to avoid supermarkets altogether (both physically and online) and rely more on local shops. Such outlets were within reasonable walking distance, were smaller and/or independent stores, and some had delivery services. For some people it was an active decision to use these stores. The pandemic and the mitigation measures prompted some households to reassess their food practices, especially in relation to where they did their shopping, and made them think about what they would like to do differently. In these cases, the challenges of shopping at the supermarket during lockdowns further convinced them of the need to minimise their use of these spaces and engage more with local food environments and independent businesses. This was viewed positively and described as a healthier, more sustainable and ethical way of buying food. Sheila, a local community organiser, explained some of the positive changes she had seen in her local area: "Because that's been another big change in the way that people do their shopping. So for example we've got loads more people now using a milkman, we've got a load more people now having their vegetables delivered to their door, even butchers delivering to people's houses. So in a way it's opened up a new way [of food shopping]. So it's not all negative about people, the way that they are eating, the way that they are shopping. And I think it's proved that you can eat in the same way or a better way but even by not going out getting your daily shopping." In contrast to supermarket shopping practices during the pandemic, 'going local' often entailed shopping more frequently. People reported buying from a range of different local shops (including farm shops) and shopping little-but-often, with a preference for fresh produce.
Anne, an older person living in a small town with her husband, was very positive about her change in routine: "One thing I have started doing more, we have a garden centre on the outskirts that also grows its own vegetables and has really nice meat and stuff and other good local produce, so I have been going there more than I was before." Again, issues of inequality and resources differentiated how these practices were enacted. For some, especially those with lower incomes, shopping locally was less of a lifestyle choice and more of a necessity because they lacked the resources (e.g. a car) to go to large 'out-of-town' supermarkets. In particular, those without a car and living where public transport was less accessible became more reliant on smaller local shops for their food shopping, as Ruth, who works for a local authority public health team, explained: "A lot of people don't have cars to go drive to the big superstores. They're buying it a lot more expensively at the local shops and, you know, it's very difficult for some people." Local food environments vary greatly. In less affluent areas there are generally fewer outlets that sell healthful or fresh produce. Added to which, the strategies described above (such as using butchers, 'milkmen', and having veg boxes delivered) tend to be more expensive and are not available in all areas. 'Going local' in a more socioeconomically deprived area could mean relying on convenience/express stores, corner shops, and petrol stations. As well as stocking a relatively small range of products and lacking in fresh produce, participants reported that these outlets were also more expensive. Rebecca, who had to manage on a reduced income during the restrictions, was so frustrated by the higher prices charged at her local shop that she challenged the staff about it: "There is a local shop who was charging a fortune for certain things which I popped in a couple of times and just said to them you really shouldn't be doing it, it's not really right, but not … I wasn't buying those things particularly."
Discussion
This paper reports on how residents in the East of England responded to changes in their local food environments and how their food shopping practices changed as a result of the pandemic mitigation measures. While supermarket shopping continued to be the preferred or most dominant food shopping activity, people were forced to adapt the way they used these spaces by varying shopping times, shopping for others, and reducing the frequency of shopping trips. Stocking-up and buying more food than usual was frequently used to offset these changes. Online shopping was a popular and necessary alternative to shopping in-store, but this excluded groups without the necessary resources or skills and frustrated those who had to try to deal with existing systems that could not cope with the increased demand.
Study limitations
This paper does not address or describe the range of community and statutory schemes that were supporting local people to access food. Nor does it examine the important role that food parcels and help from food banks played in getting food to people who simply could not access or afford enough (or any) food from other means. We did collect data on these issues and acknowledge the food work done by support organisations across the region. However, there was not the scope to explore it within this paper.
The study was both cross-sectional and virtual (i.e., remote interviews online). We only spoke to each participant once and, due to COVID-19 restrictions, we were unable to meet with participants in person or observe their food shopping practices in real time (Thompson et al., 2013; Dickinson et al., 2021). A valuable piece of further research would be to follow up participants post-pandemic to investigate to what extent they have maintained changes to their food shopping practices and to explore the impacts of these changes on health and wellbeing.
Despite these limitations, the paper contributes to the emerging literature on food practices in the context of the pandemic with respect to: supermarkets, dietary inequalities and digital exclusion, and interactions with local food environments.
Contributions to the literature
As echoed by our findings, supermarkets remain the dominant choice of food retail for UK shoppers. The supermarket is central to the modern food shopping experience (Thompson et al., 2013; Dickinson et al., 2021; Bowlby, 1997). Supermarket shopping is entrenched in consumer culture and it is, perhaps, inevitable that supermarkets were granted 'essential' status during the pandemic, thus providing a legitimate reason for leaving the house. There remains a paucity of qualitative research on food shopping practices and explorations of how consumers make decisions in consumption spaces (Robson et al., 2020). This study partially addresses that gap by examining reported routine food shopping strategies and adaptations in response to COVID-19 restrictions. Further qualitative research is needed to understand how consumers experienced and responded to COVID-19-related changes to the in-store environment identified in quantitative studies, such as consumer (dis)empowerment, routine disruption, and emotional fallout (Brown and Apostolidis, 2022).
Online food shopping for delivery or collection rose sharply in 2020/21 (Chenarides et al., 2021). COVID-19 and the associated mitigation measures have accelerated the shift to online grocery provisioning. This shift will likely benefit affluent households at a faster rate than less affluent ones because they have the capacity to meet minimum spend requirements, pay delivery costs, and take advantage of the cost savings associated with bulk buying (Cummins et al., 2020). This study highlights the experiences of those unable to access online food shopping, particularly older people, and explores the barriers they face. To date, most studies of online food shopping have focused on younger cohorts. There is limited research looking at older populations and what shapes their decisions to try (or not) online food shopping (Blitstein et al., 2020). Dickinson et al. (2021) found that none of the older people they studied before the pandemic used online shopping services, preferring in-person shopping as it enabled social interaction and exercise opportunities. Understanding why some groups do not shop for food online and how digital exclusion impacts diet is key to informing interventions and planning to improve access to healthy and affordable food (Bezirgani and Lachapelle, 2021). The East of England includes rural and coastal areas that are not well served by food delivery services (Hart, 2021). This means that some groups are being excluded by both digital and geographical barriers, preventing them from accessing online food shopping.
One of the possible medium- to long-term changes to the food system as a result of the pandemic is re-localisation. There has been an increase in the use of local food retailing as some households shop closer to home and source food from a wider range of retailers (Cummins et al., 2020). Our findings suggest that this potential re-localisation is having unequal impacts and could serve to further widen inequalities. In part, this can be explained by the ways in which lockdowns and social distancing affected the mode of transport used to travel to food retailers, with individual transport methods (especially walking and driving) favoured at the expense of public transport (which reduces the capacity to remain socially distant) (Moslem et al., 2020). Mobility has changed as a result of the pandemic (Braut et al., 2022) and investment in public transport is necessary to reverse the polarised trends in localisation suggested by our findings, namely: those on lower incomes potentially remaining less able to travel, affecting their ability to access healthier food, while those living in higher-income neighbourhoods continue to benefit from better quality food environments. Research from Italy suggests that people living in areas with less access to quality foods were more likely to have poorer diets and less likely to experience improvements to their diets as a result of the mitigation measures, especially lockdowns (Pietrobelli et al., 2020).
In the UK, convenience stores (local stores which tend to stock less healthy and more expensive food) have reported a 39% increase in sales (Lee, 2020), which makes a strong case for targeted interventions in and around these outlets.
Interventions and programmes to tackle access and affordability of food in lower-income neighbourhoods have typically been in the form of either subsidised incomes (welfare benefits) or food being made available at reduced or no cost (typically from food banks or social supermarkets). More recently, policy makers have considered the manipulation of affordability through fiscal measures to promote healthier behaviours, such as the consumption of healthier foods (Monsivais et al., 2021). Post-COVID lockdowns, a range of interventions to improve healthier food purchasing at local retailers has been identified, including healthy food subsidies, produce prescription (for fresh food) (Xie et al., 2021) and healthy community stores (Kaur et al., 2022). At present, evidence for the effectiveness of such community-based and local retailer interventions is limited by factors such as small sample sizes, limited follow-up, and limited measurement of dietary and health outcomes. Added to which, relatively little is known about the potential for healthy food subsidies to improve diets among the general population (Monsivais et al., 2021).
Conclusion
The COVID-19 pandemic has changed eating practices, food environments, and food shopping behaviours. The real challenge is to monitor whether these changes endure beyond the pandemic, as they look set to, and how they further amplify existing dietary health inequalities. Our research indicates, as with health and social inequalities more generally, that the food shopping practices of vulnerable groups have been disproportionately and negatively impacted by the pandemic and the resulting restrictions. Marginalised and vulnerable groups are more likely to live in poorer neighbourhoods, and the disadvantages arising from poorer quality environments in these neighbourhoods (including food environments) amplify individual disadvantages and vulnerabilities (Nogueira et al., 2014).
There have been calls for the food industry to take action to help mitigate the impacts and make it easier and more affordable for people to buy healthier food. This may be necessary in order to repair some of the damage done in the pandemic-related shift towards greater consumption of highly processed long-life foods (Tan Et Al., 2020). Quantitative research reveals that, in British households, the substantial and persistent increase in calories consumed at home more than offset reductions in calories eaten out during 2020, with the largest increases reported in low-income households. Further, although quantity increased, there was little or no improvement in diet quality (O'Connell et al., 2022). The retail food environment is both shaped by and responsive to consumer food shopping practices and preferences. Interventions to regulate food environments and increase access to affordable healthy food will be key to recovering from the pandemic and promoting public health.
Funding
This research is funded by the National Institute for Health Research (NIHR) Applied Research Collaboration (ARC) East of England. The views expressed are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health and Social Care. | 8,591.8 | 2022-09-01T00:00:00.000 | [
"Medicine",
"Economics"
] |
Tensile and Creep Characteristics of Sputtered Gold-Tin Eutectic Solder Film Evaluated by XRD Tensile Testing
In this paper, we describe the elastic-inelastic mechanical property measurements of a gold-tin (Au-Sn) eutectic solder film. Dual-source direct-current (dc) magnetron sputtering was employed to deposit a Au-20 weight % (wt.%) Sn film. A uniaxial tensile test with in situ X-ray diffraction (XRD) analysis was performed at temperatures ranging from room temperature (RT) to 373 K. The XRD tensile test enabled us to directly measure out-of-plane strain in the Au-Sn film specimen for Poisson's ratio determination. The mean Young's modulus and Poisson's ratio at RT were found to be 51.3 GPa and 0.288, respectively, which were lower than the bulk values. The Young's modulus decreased with an increase in temperature, whereas the Poisson's ratio showed no change with temperature. In addition, a creep test was carried out at various stresses and temperatures. The steady-state creep deformation behavior could be estimated on the basis of Norton's law. Information on the tensile and creep characteristics would be useful in designing Au-Sn-film-bonded microjoints used in micro-electromechanical systems (MEMS).
Introduction
In package assemblies for micro-electromechanical systems (MEMS), a soldered joint plays a key role as a significant electrical and mechanical interconnection. Several types of solder film deposited by sputtering are used to bond microelements made of silicon or glass. The reliability of package assemblies largely depends on the reliability of the solder film joints because such joints have a relatively low structural compliance compared with the jointed microelements. Therefore, the precise evaluation of a solder film's mechanical behavior is one of the significant issues for the development of reliable MEMS soldering packages.
To date, the investigation of the mechanical behavior of solder materials has been carried out using uniaxial tensile, torsion, and compression tests. (1)(2)(3)(4) The uniaxial tensile test, which is a common material test method for the mechanical characterization of materials, is often employed to study the elastic, inelastic, and creep properties of solder materials with milliscale dimensions. If a specimen has a solder-jointed section, bond strength can be estimated by the uniaxial tensile test. However, when the target material is a solder film, a mechanical test is very difficult to perform accurately because there are technical difficulties in preparing thin-film specimens, in applying a tensile force to the specimen, and in detecting a very small physical response from the film during the test.
To characterize a solder film material, we have conducted the uniaxial tensile test with in situ X-ray diffraction (XRD) analysis. The combination of the uniaxial tensile test and in situ XRD analysis enables us to examine the mechanical characteristics of materials with atomic-level resolution. (5)(6)(7)(8)(9)(10)(11) The objective of this work is to investigate the tensile and creep properties of a sputtered Au-Sn eutectic solder film at temperatures ranging from RT to 373 K. Young's modulus and Poisson's ratio were evaluated using the XRD tensile test. A creep test was also performed in order to obtain creep parameters utilized for the estimation of creep curves for the Au-Sn solder film.
Deposition of Au-Sn solder film
Au-Sn eutectic solder films were deposited using a dual-source direct-current (dc) magnetron sputtering apparatus developed by the authors. The composition of the films was set to be 80 weight % (wt.%) and 20 wt.% for Au and Sn, respectively, by controlling the electric power for each sputtering gun. During deposition, a fixed distance of 95 mm between the target and substrate was maintained, and pure argon, maintained at a constant pressure of 0.2 Pa, was employed as the working gas. Substrate temperature was not controlled. All the Au-Sn films were deposited onto diced single-crystal silicon (Si) chips covered with a silicon oxide (SiOx) thin layer originating from a thermal oxidation process. Before the deposition, the Si chip was rinsed in a solution of sulfuric acid and hydrogen peroxide to remove organic matter on the chip.
Using XRD and scanning electron microscope analyses, all the Au-Sn films we prepared were determined to have a polycrystalline columnar structure. The diameter of each column was roughly estimated to be 0.5 μm. During annealing at 516 K, the shape of the obtained XRD spectrum gradually changed, i.e., peak intensity gradually increased whereas peak width decreased. This indicates that the Au-Sn columns gradually changed into polycrystal grains. After annealing for 2 h, such changes were hardly observed because the film structure and its crystallinity had stabilized. Therefore, annealing for 2 h was adopted in the fabrication process of a specimen. Figure 1 shows a schematic of the process flow for fabricating a Au-Sn film specimen along with a photograph. We used a Si wafer as the starting material, which consisted of a 380-μm-thick Si wafer with a 1-μm-thick SiOx film layer produced by wet thermal oxidation on both sides of the wafer. After the wafer was diced to form strip chips measuring 13 × 22.5 mm², two square holes for fixing were simultaneously made on both sides of the chip by anisotropic wet etching with 20% tetramethyl ammonium hydroxide (TMAH) solution at 363 K, and then the Si portion beneath the gauge section was also etched. A UV thick photoresist, SU-8, was used to form an inverted specimen pattern. A 4-μm-thick Au-Sn solder film was deposited onto the photoresist pattern by magnetron sputtering, and then the pattern was removed to fabricate the gauge section of the Au-Sn film specimen by the lift-off technique. Annealing the Au-Sn film at 516 K for 2 h was carried out for the stabilization of the film structure and its crystallinity, as described above. Finally, both SiOx and Si layers on the entire surface of the chip were removed.
Specimen
The specimen consisted of a gauge section made of Au-Sn, hooking holes on Si grip ends, and a frame. (5) The gauge section, 4 mm long, 2 mm wide, and 4 μm thick, had a fillet section for reducing stress concentration during testing. In finite element analysis (FEA), applying a compulsory displacement of 1% of the gauge length to the entire specimen model yielded a longitudinal elastic strain of 0.617% at the straight part. This is because elastic deformation occurs not only at the parallel portion but also at the fillets on both ends of the gauge section. The trend of strain reduction depends on specimen shape only, irrespective of the mechanical properties input into the FEA. Thus, the elongation in the gauge section can be estimated to be 0.617 times the total elongation of the specimen if the elongation of the entire specimen is measured accurately during the test. (6) This correction works well in the elastic deformation range, but not in the inelastic range. In this study, the XRD tensile test was conducted within only the elastic deformation range of the Au-Sn solder films. Only specimens having dimensional tolerances within ±5% for all dimensions were subjected to tensile testing.

Figure 2 shows the uniaxial tensile tester developed for film specimens. The tester is composed of a piezoelectric actuator, a load cell, a linear variable differential transformer (LVDT), specimen holders, and a heater. (5,6) These components are placed in an actuator case with dimensions of 130 × 80 × 30 mm³. The actuator case has a lever for amplifying the actuator's displacement by a factor of five in the tensile direction. The actuator applies tensile force to the specimen, which is hooked on the specimen holders via the hooking holes. The load cell has an accuracy of 0.10% of the full scale (2 N). The LVDT, with a resolution of 3 nm, measures the relative displacement between the specimen holders, and is mounted on the actuator case and load cell. The heater, which produces a surface temperature of up to 973 K, is embedded in the specimen holders and transmits heat to the specimen via the holder. During tensile testing, specimen surface temperatures ranging from RT to 373 K were precisely controlled by monitoring temperature using a thermocouple attached to both grip ends of the specimen.
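To make the correction concrete, the short sketch below (illustrative only; the function and variable names are ours, not from the original work) converts an LVDT-measured total elongation into the estimated elastic strain in the straight part of the gauge section using the FEA-derived factor of 0.617.

```python
# Illustrative sketch (our naming, not from the paper): estimating the elastic
# strain in the straight gauge section from the measured total elongation,
# using the FEA-derived correction factor of 0.617.

CORRECTION_FACTOR = 0.617  # gauge-section strain / nominal specimen strain (FEA)
GAUGE_LENGTH_MM = 4.0      # length of the gauge section reported above

def gauge_strain(total_elongation_um: float) -> float:
    """Estimate the elastic strain in the gauge section from LVDT elongation (um)."""
    nominal_strain = (total_elongation_um / 1000.0) / GAUGE_LENGTH_MM
    return CORRECTION_FACTOR * nominal_strain

# Example: a 4 um total elongation is nominally 0.1% strain, but only
# about 0.0617% elastic strain in the straight part of the gauge section.
print(f"{gauge_strain(4.0):.4%}")  # -> 0.0617%
```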
XRD tensile test setup
As illustrated in Fig. 3 (a schematic of the tensile testing with in situ X-ray diffraction analysis for evaluating the out-of-plane strain of a film specimen), the tensile tester is mounted on an x-y-z-θ adjustable stage in a commercial XRD system (X'Pert MPD, Philips). In this test setup, the out-of-plane normal strain of a crystalline specimen can be measured directly with atomic resolution during tensile loading. If the scattering vector of the X-ray is normal to the crystalline specimen, the lattice spacing, d, in the out-of-plane direction is calculated using Bragg's law:

d = λ / (2 sin θ),

where λ is the wavelength of the X-ray (Cu Kα: 0.1541 nm) and θ is the diffraction angle. As tensile force increases during the test course, the lattice spacing in the out-of-plane direction decreases, causing the XRD peak to shift toward a higher angle. Thus, measuring the peak angle for a certain interval of tensile force, which can be maintained using a feedback control program, enables us to determine the out-of-plane strain in a specimen. The Gauss function was employed for XRD data fitting to determine the diffraction angle at the peak position. In this work, the theta-scanning time of a single XRD measurement was set to 60 s.
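As a minimal sketch of this conversion (assuming the reported peak positions are 2θ values; the function names are ours), the following computes the lattice spacing from a peak position via Bragg's law and the out-of-plane strain from the shift relative to the preload peak.

```python
import math

# Minimal sketch (assumes the reported peak positions are 2-theta values;
# function names are ours): lattice spacing and out-of-plane strain from
# XRD peak positions via Bragg's law, lambda = 2 d sin(theta).

WAVELENGTH_NM = 0.1541  # Cu K-alpha wavelength given in the text

def lattice_spacing_nm(two_theta_deg: float) -> float:
    """Out-of-plane lattice spacing d (nm) from the peak position 2-theta (deg)."""
    theta_rad = math.radians(two_theta_deg / 2.0)
    return WAVELENGTH_NM / (2.0 * math.sin(theta_rad))

def out_of_plane_strain(two_theta_preload_deg: float, two_theta_deg: float) -> float:
    """Strain relative to the lattice spacing at the preload peak position."""
    d0 = lattice_spacing_nm(two_theta_preload_deg)
    return (lattice_spacing_nm(two_theta_deg) - d0) / d0

# Example with the Au5Sn (223) peak near 78.0 degrees at preload: a shift to
# a higher angle gives a negative (contracting) out-of-plane strain.
print(out_of_plane_strain(78.00, 78.05))
```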
XRD analysis
Figures 4(a) and 4(b) show representative XRD curves around the AuSn (110) and Au5Sn (223) peaks at RT, respectively. The table in the figure indicates the increases in the spectrum peak angles of AuSn (110) and Au5Sn (223) with an increase in tensile stress. Although different XRD peaks corresponding to other orientations were observed in the XRD analysis, the intensities of these peaks were very low compared with those of both AuSn (110) and Au5Sn (223). When an initial tensile stress, that is, a preload, is applied to a Au-Sn specimen, the XRD peak angles are approximately 40.6 and 78.0 degrees, which correspond to typical angles of AuSn (110) and Au5Sn (223), respectively. As tensile stress increases, these peaks shift toward a higher angle. This indicates that the lattice spacings of the AuSn (110) and Au5Sn (223) planes, which lie parallel to the specimen surface, gradually decrease with increasing stress. For the same applied tensile stress, the amount of peak shift per unit tensile stress in Au5Sn (223) is about 2.5 times larger than that in AuSn (110). This is related to the difference in peak angle between the crystal planes. From Bragg's law, since the XRD peak position of Au5Sn (223) is higher than that of AuSn (110), an increase in the peak angle per unit strain in Au5Sn (223) is certain to be larger than that in AuSn (110). This fact suggests that higher-angle peaks give better resolution in out-of-plane strain calculation. The different peaks may provide different elastic responses if the grain size is comparable to the specimen size. However, the Au-Sn specimen size would in fact be much larger than the grain size, which is thought to be on the submicron scale, although grain size has not been measured experimentally. We assumed, therefore, that the effect of the difference in XRD peak on the elastic response of a micron-sized specimen could be ignored, and employed only Au5Sn (223) peaks for out-of-plane strain calculation in the Au-Sn film specimen. At 323 and 373 K, similar trends to those at RT were observed for the Au5Sn (223) peak, although the data are not provided here.

Stress-strain relation

Figure 5(a) shows the stress-strain relationships at RT. The open and closed plots represent out-of-plane strains from XRD analysis at Au5Sn (223) and AuSn (110), respectively. The specimen was tensioned within only the elastic deformation range. The tensile stress-longitudinal strain relationship is linear in the stress range below 0.15 GPa. Above 0.15 GPa, it exhibits a step shape. The step shape originates from the creep deformation of the Au-Sn solder film. The creep deformation occurred when tensile stress was kept constant using a feedback control program for out-of-plane strain measurement by XRD analysis. The amount of creep deformation gradually increases with an increase in the tensile stress applied. The slope of the dashed line, which is drawn by eliminating the creep effect from the solid line, provides a Young's modulus of 51.3 GPa. This value is approximately 24% lower than the electroplated Au-Sn bulk value. (4) On the other hand, a linear relationship between tensile stress and out-of-plane strain was obtained. The out-of-plane Poisson's ratio, corresponding to the ratio of the out-of-plane strain of Au5Sn (223) to the longitudinal strain, is found to be 0.288, which is almost the same as that of AuSn (110). This agreement indicates that the effect of the difference in crystal orientation, as determined by the XRD analysis of the out-of-plane strain, is small.
The measured Poisson's ratios are almost 30% smaller than the bulk value. (4) Figures 5(b) and 5(c) show the stress-strain relationships at 323 and 373 K, respectively. As temperature increases, the step shape of the solid lines observed at RT becomes clearer. The plateau regions in the stress-strain relationships at 323 and 373 K are, respectively, approximately 5 and 70 times larger than that at RT. The amount of creep deformation increases with increasing temperature and tensile stress. The Young's moduli at 323 and 373 K are 50.6 and 39.3 GPa, respectively, which are smaller than that at RT. On the other hand, the linear relationship between out-of-plane strain and tensile stress is observed regardless of temperature. It is considered that the effect of creep deformation on the XRD analysis is small. The Poisson's ratios at 323 and 373 K are 0.281 and 0.275, respectively, which are almost the same as that at RT. The Poisson's ratio exhibited none of the temperature dependence that was clearly observed in the Young's modulus. The relationship between the Young's modulus, E, and the temperature, T, can be expressed as a fitted function of temperature, where the units of E and T are GPa and K, respectively.
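Since the fitted E(T) expression is not reproduced above, the following is only an illustrative least-squares fit through the three reported moduli; taking RT as roughly 296 K is our assumption, and the resulting coefficients should not be read as the paper's.

```python
import numpy as np

# Illustrative only: a least-squares line through the three reported moduli.
# Taking RT as ~296 K is our assumption; the coefficients below are not the
# paper's fitted expression.

T = np.array([296.0, 323.0, 373.0])  # temperature, K
E = np.array([51.3, 50.6, 39.3])     # Young's modulus, GPa

slope, intercept = np.polyfit(T, E, 1)
print(f"E(T) ~ {slope:.3f} * T + {intercept:.1f} (E in GPa, T in K)")
```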
Creep test result
From the XRD tensile test results, creep deformation was found to definitely occur in Au-Sn solder films. In order to evaluate the creep deformation behavior, a creep test was carried out. Figure 6 shows the representative creep test result of the Au-Sn solder film obtained at 353 K. The solid line indicates the creep strain curve, and the open plots indicate the creep rate curve. The creep strain curve is clearly divided into three regions: transient, steady-state, and tertiary creep. At the beginning of the test, creep strain increases markedly because the Au-Sn film deforms rapidly, whereas the strain rate decreases drastically in a short time. In the steady-state region, a linear relationship between creep strain and time is observed. The creep strain rate reaches its minimum of around 0.24%/h and remains approximately constant for 23 h. Approximately 30 h after the test start, the creep strain rate rapidly increases, and the film fails when the test time exceeds 34 h. The total creep strain until failure is 8.8%, which is approximately 8 times larger than the Au-Sn bulk value. (4) The Au-Sn bulk typically consists of many grains on the micron scale, whereas the sputtered Au-Sn film tested is thought to have a columnar structure. In the creep test, tensile force is applied to the film along the in-plane direction of the specimen, which is orthogonal to the columns; therefore, grain boundary sliding is easily induced, leading to a large creep strain in the film.

Figures 7(a)-7(c) show representative creep curves of the Au-Sn solder films at 323, 353, and 373 K, respectively. In Fig. 7(a), at a constant tensile stress of 50 MPa, creep strain is found to be approximately 0.2% at 20 h after the stress application. The creep strain for a 20 h duration reaches 1.6% as the constant stress increases to 100 MPa. These creep deformation behaviors intensify markedly with temperature increase, as shown in Figs. 7(b) and 7(c). At the highest temperature, some of the specimens even failed within an hour. This indicates that, in the Au-Sn films, the mobility of dislocations or the grain boundary sliding that contributes to creep deformation would be more effectively activated at a higher temperature under a higher-stress condition.

Figure 8 shows the relationship between the steady-state creep strain rate and the constant tensile stress in the creep test. The steady-state creep strain rate was defined as the strain rate in the plateau region shown in Fig. 6. The steady-state creep strain rate increases with increasing stress and temperature, and the linear relationship in the logarithmic diagram indicates that the steady-state creep constitutive equation of Norton's rule is applicable to the Au-Sn solder films employed. Norton's law takes the form

ε̇c2 = Aσ^n,

where ε̇c2 is the steady-state creep strain rate, εc2 is the steady-state creep strain, σ is the applied tensile stress, A is the creep strain constant, and n is the stress exponent. A and n are temperature-dependent material constants, which can be determined by fitting the creep rate-stress relationship in Fig. 8. Figure 9 shows the creep constants as a function of temperature. The creep strain constant shows a linear increase with temperature; together with the fitted stress exponent, the resulting equations can be used for the structural design of Au-Sn-film-bonded microstructures where steady-state creep deformation has to be considered.
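Fitting Norton's law reduces to a linear regression in log-log space, since log ε̇c2 = log A + n log σ. The sketch below uses hypothetical rate-stress pairs (the paper's fitted values are not reproduced here) to show how A and n would be extracted.

```python
import numpy as np

# Hypothetical data, illustrative only: extracting the Norton's-law constants
# from steady-state creep rates, using log(rate) = log(A) + n*log(stress).

stress_mpa = np.array([50.0, 75.0, 100.0])     # assumed constant test stresses
rate_pct_per_h = np.array([0.01, 0.08, 0.40])  # assumed steady-state rates

n, log_A = np.polyfit(np.log(stress_mpa), np.log(rate_pct_per_h), 1)
A = np.exp(log_A)
print(f"stress exponent n = {n:.2f}, creep strain constant A = {A:.3e}")

# The fitted constants reproduce the steady-state rate via Norton's law:
predicted_rate = A * stress_mpa ** n
```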
Conclusions
In this study, XRD tensile and creep tests were carried out to measure the mechanical properties of Au-20 wt.% Sn solder films deposited by dual-source magnetron sputtering. In the XRD tensile test, an XRD peak shift to a higher angle with increasing tensile stress was clearly observed, which enabled us to estimate the out-of-plane Poisson's ratio of the film. The tensile stress-longitudinal strain relationship showed a step shape originating from the creep deformation in the film. The measured Young's modulus and Poisson's ratio at RT were 51.3 GPa and 0.288, respectively. The Young's modulus decreased with temperature increase, whereas the Poisson's ratio showed no temperature dependence.
In the creep test, all the Au-Sn solder films showed transient, steady-state, and tertiary creep behaviors. The total creep strain at the creep rupture point was larger than that of Au-Sn bulk. The steady-state creep behavior was fitted by Norton's law, and creep parameters for constructing a constitutive equation were obtained. The obtained parameters would be useful within the intermediate temperature range tested when a structural design including a Au-Sn-film-bonded microstructure is carried out. | 4,390.4 | 2010-01-01T00:00:00.000 | [
"Materials Science"
] |
Two new species of Anisotacrus Schmiedeknecht (Hymenoptera, Ichneumonidae, Ctenopelmatinae) with a key to Eastern Palaearctic species
Two species of genus Anisotacrus Schmiedeknecht, 1913, A. externus Sheng & Sun, sp. nov. and A. senticosus Sheng & Sun, sp. nov., collected from the Natural Reserve, Huairou, Beijing, are described and illustrated. A key to the Eastern Palaearctic species of Anisotacrus is provided.
Coloration (Fig. 1). Black, except for following: Face, clypeus except a small dark brown spot at the center of clypeal sulcus, mandible except teeth, malar space, maxillary palpi, labial palpi, ventral profiles of scape and pedicel, upper-posterior corner of pronotum, anterolateral portion of mesoscutum, tegula, subtegular ridge, fore and middle coxae, trochanters, hind trochantellus yellowish white. Dorsal profiles of scape and pedicel, basal portion of flagellum darkish brown; remainder of flagellum yellowish brown. Fore and mid femora, tibiae and tarsi, basal portion of hind tibia yellow brown. Hind tarsus brownish black. Posterior portion of second tergite, third and fourth tergites entirely brownish red. Metasomal sternites 2 and 3 whitish yellow, with small lateral longitudinal brown spots; sternites 4-6 almost entirely reddish brown. Ovipositor sheath irregularly blackish brown. Pterostigma and veins brownish black.
Etymology. The specific name is derived from the fore wing vein 2m-cu connecting to cubitus distal of lower-posterior corner of areolet.
Distribution. China.

Differential diagnosis. The new species is similar to A. albinotatus Kasparyan, 2007, but can be distinguished from the latter by the following combination of characters: malar space about 0.6 × as long as basal width of mandible; areolet triangular, 2m-cu connecting to cubitus slightly distal of areolet (Fig. 8); hind coxa and posterior tergites of metasoma black. Anisotacrus albinotatus: malar space about 0.9 × as long as basal width of mandible; areolet receiving vein 2m-cu basal of lower posterior corner; hind coxa, second and subsequent tergites reddish brown.
Head. Inner margins of eyes (Fig. 14) slightly concave near antennal sockets. Face (Fig. 14) 1.2 × as wide as long, almost flat, shagreened, with dense fine indistinct punctures and yellowish white setae; upper margin with median narrow smooth longitudinal stripe and a small median tubercle. Clypeus approximately 3.3 × as wide as long, shagreened, lateral portion with sparse fine punctures, apical median smooth, shiny; apical margin slightly arcuate. Mandible with sparse punctures and dense long yellowish white setae; lower tooth distinctly wider and slightly longer than upper tooth. Malar space about 0.4 × as long as basal width of mandible. Gena (Figs 15, 16), vertex (Fig. 16) and frons shagreened, with dense yellowish brown setae. Gena strongly convergent backward. Postocellar line approximately 0.6 × as long as ocular-ocellar line. Antenna with 38 flagellomeres; ratio of length from first to fifth flagellomeres: 2.4:1.3:1.1:1.0:1.0. Occipital carina complete, genal carina joining hypostomal carina above base of mandible.
Metasoma. First tergite (Fig. 20) approximately 2.7 × as long as posterior width, straight, shagreened, apical half with sparse fine indistinct punctures; latero-median and dorso-lateral carinae absent; ventro-lateral carinae complete; spiracle small, circular, distinctly convex, located at 0.5 of first tergite. Second and third tergites (Fig. 21) shagreened. Second tergite approximately as long as apical width, anteromedian portion with weak indistinct transverse wrinkles; thyridium transverse, distance to basal margin of second tergite about equal to its length. Third tergite (Fig. 21) parallel laterally, 0.85 × as long as wide. Ovipositor sheath (Fig. 22) 5.5 × as long as maximum width, 0.2 × as long as hind tibia, upper and lower margins almost parallel.

Coloration (Fig. 13). Black, except for following: Lateral side of face widely and irregularly, labrum, mandible except teeth and dorso-posterior corners of pronotum yellowish white. Maxillary and labial palpi dark brown. Fore and middle tibiae and tarsomeres 1-4 brownish yellow. Basal half of hind tibia except basal end and tibial spurs reddish brown. Posterior portion of tergite 2, tergite 3 entirely and tergite 4 except posterior margin brownish red. Pterostigma brownish black. Veins blackish brown.
Distribution. China.

Differential diagnosis. The new species is similar to A. xanthostigma (Gravenhorst, 1829), but can be distinguished from the latter by the following combination of characters: areolet distinctly quadrilateral; first tergite evenly convex, without longitudinal groove; second tergite as long as apical width; third tergite distinctly shorter than its width; mesoscutum, tegulae, all coxae and trochanters black. Anisotacrus xanthostigma: areolet triangular; first tergite with longitudinal groove; second tergite longer than apical width; third tergite as long as apical width; anterolateral portion of mesoscutum with yellow spots; tegulae yellow; fore coxae and parts of trochanters yellow.
Discussion
The male of Anisotacrus xanthostigma displays some distinct variation (Kasparyan and Khalaim 2007), including a wide range of coloration. The original description of A. iyoensis (Uchida, 1953) was based on two male specimens, and it is very similar to A. xanthostigma (Gravenhorst, 1829). The two species may be conspecific (Kasparyan and Khalaim 2007). To resolve this question, the type of A. iyoensis and more Eastern Palaearctic material will have to be studied in the future. | 1,135.4 | 2021-04-29T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
Whole-blood sorting, enrichment and in situ immunolabeling of cellular subsets using acoustic microstreaming
Analyzing undiluted whole human blood is a challenge due to its complex composition of hematopoietic cellular populations, nucleic acids, metabolites, and proteins. We present a novel multi-functional microfluidic acoustic streaming platform that enables sorting, enrichment and in situ identification of cellular subsets from whole blood. This single device platform, based on lateral cavity acoustic transducers (LCAT), enables (1) the sorting of undiluted donor whole blood into its cellular subsets (platelets, RBCs, and WBCs), (2) the enrichment and retrieval of breast cancer cells (MCF-7) spiked in donor whole blood at rare-cell-relevant concentrations (10 mL⁻¹), and (3) on-chip immunofluorescent labeling for the detection of specific target cellular populations by their known marker expression patterns. Our approach thus demonstrates a compact system that integrates upstream sample processing with downstream separation/enrichment, to carry out multi-parametric cell analysis for blood-based diagnosis and liquid biopsy blood sampling. A microfluidic device that uses acoustic microstreaming could lead to faster, cheaper and more efficient assaying of whole blood. The efficient separation of cell populations from whole blood is an essential preparatory step in many biological and clinical assays. However, conventional macroscale cell sorting techniques, including density-gradient-based centrifugation and immunolabeling-based separation, are limited by their separation sensitivity and the need for large sample volumes, as well as processing times and cost. This led Abraham Lee and colleagues at the University of California, Irvine, in the United States to develop a multifunctional microfluidic acoustic streaming platform that enables the sorting, enrichment and in situ identification of subsets of cells from whole blood. The device integrates upstream sample processing with downstream separation and enrichment, offering inexpensive and efficient multiparametric cell analysis for blood-based diagnosis and liquid-biopsy blood sampling.
INTRODUCTION
Efficient separation of cellular populations from whole blood is an essential preparatory step for many biological and clinical assays 1 . Conventional macro-scale cell sorting techniques include density-gradient-based centrifugation, immunolabeling-based separation such as fluorescence-activated or magnetic-activated cell sorting (FACS and MACS, respectively), and CellSearch. However, these techniques are limited by separation sensitivity, large blood sample volume requirements, multi-step interventions prone to artifacts, processing time, and cost. Lab-on-a-chip and microfluidic systems developed for microscale separation techniques have been gaining importance over the past decade. Many types of unique cell sorting and enrichment devices have been devised, but they are not usually able to work with unprocessed whole blood [2][3][4][5] . These technologies typically require pre-processed samples and cannot handle the physical properties and complex populations inherent in whole blood [6][7][8][9][10] . They are limited to either diluted 11,12 or lysed blood 13,14 , and in some cases, density-based centrifugation to reduce blood complexity and cell-cell interaction [15][16][17][18][19][20][21] .
The microfluidic cell sorting technologies that can handle whole blood generally fall into two categories: label-free sorting based on physical characteristics of the target cells, or biomarker labeling. For example, Davis et al. 22 utilized deterministic lateral displacement (DLD) to separate whole blood components. Nagrath et al. 23 used microposts and Stott et al. 24 utilized herringbone structures to capture EpCAM+ CTCs from whole blood. In another study, Karabacak et al. 25 integrated DLD with inertial focusing and magnetophoresis for negative isolation of CTCs from whole blood. Although these approaches successfully demonstrated target cell sorting, they were unable to enrich in a small volume (<50 μL).
Acoustic microstreaming has been demonstrated to focus, sort and enrich target cells/particles within the microchannel [26][27][28][29][30][31][32] . Studies have shown negligible adverse effects of ultrasonic actuation on cellular phenotypes, viability and functional capacity 9,21,27,28,33,34 . Acoustic microstreaming is a phenomenon of localized streaming patterns occurring near an oscillating liquid interface 29 . In lateral cavity acoustic transducers (LCATs), liquid-gas interfaces are formed by bubbles trapped in patterned dead-end side channels, which act as the acoustic microstreaming sources. Oscillation of these bubble surfaces is actuated by an acoustic energy source, leading to a second-order characteristic streaming viscous flow velocity, Us, which is maximum at the outer edge of the microstreaming vortices. With the slanted LCAT angles, this streaming flow also induces a bulk flow in the microchannel that 'squeezes' through an open streamline gap (termed dgap) between the outer-edge critical streamline of the vortex and the oscillating liquid-air interface. Particles trapped in the vortices form size-dependent orbits, where larger particles are pushed towards the vortex center (Figures 1a and c). Particles with radii smaller than dgap are released from the LCAT and are pumped along with the bulk flow. Therefore, LCATs are capable of bulk pumping of the sample while simultaneously separating particles/cells by size based on the bubble surface oscillation amplitude. The oscillation thus controls the trapping and releasing of particles, correlated roughly to the ratio between the bulk and streaming velocities, Ub/Us (Refs. 28,30). As depicted in Figure 1a, LCATs integrate the following sample processing steps: (1) separating whole blood into its cellular constituents, (2) enriching rare cells based on size, and (3) delivering fluorescent biomolecules (monoclonal antibodies) to selectively label target cells. These functions allow one to identify target cells based on two important characteristics, size and surface biomarkers, in a single device. Here we utilize the enrichment ratio (ER; materials and methods) as a metric to analyze the device performance 14,35,36 . It is defined as the enhancement of the target cell to background cell ratio from the device input to the output sample 36 . An ER on the order of ~100× to 1000× is clinically significant for subsequent gene profiling by RT-PCR, as discussed by Tong et al. 14,37 In our device, we enhanced the ER to 170× for a particle mixture (15 and 10 μm at an initial ratio of 1:100 000). We achieved 213× enrichment of MCF-7 cells with respect to WBCs when spiked at 10 mL⁻¹ in whole blood while ensuring the capture of all target cells and thus avoiding false negatives.
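From the definition above, the ER is simply the output target-to-background ratio divided by the input ratio. A minimal sketch follows; the output counts in the example are hypothetical, chosen only to reproduce a ~170× ratio.

```python
# Minimal sketch of the enrichment ratio (ER) as defined above: the
# target-to-background ratio at the output divided by that at the input.

def enrichment_ratio(target_in: float, background_in: float,
                     target_out: float, background_out: float) -> float:
    return (target_out / background_out) / (target_in / background_in)

# Hypothetical output counts chosen to reproduce a ~170x enrichment for the
# 15 um : 10 um particle mixture spiked at 1:100 000.
print(enrichment_ratio(1, 100_000, 17, 10_000))  # -> 170.0
```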
MATERIALS AND METHODS

Microfluidic device design
The LCAT chip is composed of 200 LCATs spaced 200 μm apart at an angle of 15° relative to the main channel. The main channel width and the side channel width were fixed at 500 and 100 μm, respectively. The height of the device was kept constant at 100 μm.
Microfluidic device fabrication
Microfluidic devices were fabricated using soft lithographic techniques. A silicon wafer was first cleaned with 2% HF to render it hydrophobic and dehydrated at 120°C for 15 min. Negative photoresist SU-8 2050 was spin-coated per the manufacturer's (Microchem) protocol for a 100 μm height. Following spin coating, it was soft-baked, exposed, post-baked, and developed. After hard baking at 200°C, it was kept in silane overnight. Poly(dimethylsiloxane) (PDMS; Sylgard 184, Dow Corning) base and curing agent were mixed at an 11.5:1 ratio and poured onto the mold. Following degassing in a desiccator, it was cured at 65°C overnight. The hardened PDMS was cut and peeled carefully. The inlet hole was punched with a 1.5 mm biopsy punch and the outlet was cut. After cleaning, the device was bonded onto a thin cover slip (Fisher Scientific) using a standard plasma procedure (Harrick). Following oxygen plasma treatment, the bonded device was kept on a hot plate at 65°C overnight to restore hydrophobicity.

Cell culture and staining

MCF-7 cells were cultured per the supplier's instructions in RPMI (Mediatech, Manassas, VA, USA) supplemented with fetal bovine serum (10%), penicillin (100 U mL⁻¹), and streptomycin (100 μg mL⁻¹; Mediatech). For biomarker staining of MCF-7 cells, anti-human EpCAM CD326 antibody conjugated with PE/Dazzle 594 (Biolegend) was used. After obtaining the cell pellet in an Eppendorf tube, the cells were blocked by adding 60 μL of staining buffer (1% BSA, 0.1% NaN3 in 1× PBS, pH 7.4) to re-suspend the pellet with 20 μL of FcR block (Miltenyi Biotec) for 10 min. An appropriate volume of antibody stain (3 μg mL⁻¹) was added and incubated for 30 min on ice. Following incubation, the cells were washed with staining buffer and collected by centrifugation at 1000 r.p.m. for 5 min. These prestained cells were observed under fluorescence and spiked into whole blood at appropriate concentrations.
For nuclear staining of WBCs, we added DAPI (4′,6-diamidino-2-phenylindole) stain (Fisher Scientific) at a concentration of 1 mg mL⁻¹ and incubated the sample for 10 min. Following that, we imaged the stained cells using the DAPI filter of the fluorescent microscope.
Blood sample collection and processing

Normal donor blood was obtained from the Institute for Clinical and Translational Science, UCI, under Institutional Review Board (IRB) approval. Whole blood with anti-coagulant was mixed with prestained MCF-7 cells to represent target cancer cells/rare cells. All normal donor blood was processed within 24 h after withdrawal. The blood cells after separation were stained using Wright-Giemsa Stain (Electron Microscopy Services). After making smears of whole blood and its separated components (RBCs and WBCs), the cells were fixed and stained using Wright-Giemsa stain for 5 min. The glass slide was then washed carefully in DI water and dried for observation under the microscope.
Microfluidic device experimental setup
The blood sample and 1× PBS buffer were supplied to the microfluidic devices by 1 mL syringes connected to a Harvard Apparatus syringe pump and an NE-500 (New Era) pump, respectively. Tubing with an inner diameter of 0.51 mm (Cole Parmer) was used to connect the microfluidic device to the syringe. This tubing was inserted into the inlet hole punched by a 1.5 mm biopsy punch, and the outlet was cut to ensure the release of the 20 μL trapped sample. Ultrasound gel was smeared on the piezoelectric transducer (Steminc) to ensure efficient transmission of the acoustic wave from the PZT to the PDMS device. The PZT was connected to a function generator (Agilent 33220A) and a voltage amplifier (Krohn-Hite 7500) to supply the square wave. To monitor the functionality of the microfluidic device, we connected a high-speed camera (Phantom, Vision Research) to a Nikon Eclipse L150 upright microscope. Fluorescence detection was performed using a 100 W, 488 nm laser for excitation and a 561 nm yellow-green laser for detection by a Canon DSLR mounted on an Olympus IX51 inverted microscope.
Isolation and separation of cells from whole healthy blood
The LCAT device (Figure 1b) was fabricated by soft lithography techniques (materials and methods). To initiate the testing, the device was first primed with 1× PBS buffer to form air-liquid interfaces, and then placed on a piezoelectric transducer (PZT) with ultrasound gel. Undiluted normal donor whole blood (10 μL) was then introduced into the inlet of the LCAT microchannel. The mechanical vibration from the PZT (controlled by a function generator) is coupled through the gel into the LCAT device to actuate the air-liquid interfaces. At 2 Vpp, the PZT vibration induced the pumping of plasma and small platelets in the blood, but RBCs and WBCs were trapped in the vortices formed by the microstreaming flow (Us). This resulted in the collection of 2 μL of fluid enriched for plasma and platelets from the outlet. As the PZT voltage is increased to 2.5 Vpp, the increase in Ub/Us results in RBCs being released from the microstreaming vortices and collected from the outlet. To increase the purity of the separated sample, excess blood was pipetted out of the inlet and replaced with 30 μL of PBS buffer. The device remained driven at 2.5 Vpp until all RBCs were flushed out by the buffer while WBCs remained trapped, as shown in Supplementary Video 1. After collecting all the RBCs from the outlet, the voltage was further increased to 4.5 Vpp to release the remaining trapped WBCs. The separated samples were kept in individual vials for inspection. The plasma and platelet sample was viewed in bright field at 40× on a Countess slide, while the whole blood and RBC samples were stained with Wright-Giemsa stain to distinguish the cells within them (Figures 2a and b). WBC samples were imaged on a fluorescent microscope using DAPI nuclear stain (Figure 2b).

Integration of size-based separation with biomarker-based immunolabeling

In some cases, rare cells such as circulating tumor cells or cancer-associated fibroblasts also exist in our circulatory system. Separation and enrichment of these rare cells are of prime interest for both prognosis and treatment monitoring. Post-separation analysis such as identification and enumeration can be performed by immunofluorescence and is unaffected by the presence of non-target cells in the output sample. Currently, despite considerable automation along with blood handling and separation, detection by immuno-staining of target cells is performed manually. The standard staining procedure takes at least one hour, and requires expensive centrifuges to collect cells maintained in suspension. To maximize the utility of lab-on-chip devices in a point-of-care setting, we have integrated size-based separation with biomarker-expression-based immunolabeling. This enables our device to integrate complete hematological separation of cellular subsets and on-chip immunofluorescent labeling to confirm the identity of captured cells of interest based on both size and biomarker, reducing turnaround time, minimizing labor-intensive sample preparation, and limiting cost. For proof of concept, MCF-7 breast cancer cells (2.1 × 10⁵ mL⁻¹) were prepared into a single-cell suspension. The LCAT device excitation voltage was set to capture the larger MCF-7 cells, followed by on-chip staining with anti-human EpCAM mAb conjugated to PE/Dazzle 594 (Biolegend). 30 μL of the MCF-7 cell suspension in PBS was pumped at 2.5 Vpp from the LCAT device inlet for 5 min. After the cells were trapped in the microstreaming vortices, the extra cell sample was removed and 20 μL of Fc block (Miltenyi Biotec) was added to the inlet.
Fc block was flowed for 5 min, followed by pumping 4 μL of CD326 (0.8 μg anti-EpCAM) antibody for 2 min (2 μL min⁻¹). To prevent the inlet from emptying and air entering the channel, 30 μL of staining buffer (1% BSA, 0.1% NaN3 in 1× PBS, pH 7.4) was added to the inlet for 3 min. The buffer also served to wash out any unbound antibodies in the channel. After the cells were immunolabeled in the device, the fluorescent cells were imaged on-chip using a fluorescent upright microscope (Figure 3a). Subsequently, the stained MCF-7 cells were released at 4.5-5 Vpp to analyze the fluorescent intensity and the morphology. We compared the expression level detected for MCF-7 cells stained using a standard benchtop procedure (0.8 μg, incubation time of 50-60 min) with that from the 15 min in situ chip-based staining procedure; a correlation of 91% demonstrates that a similar expression level was detected via on-chip staining despite the shortened staining time (Figure 3b).
After optimizing the immunostaining protocol, we sought to produce a more realistic case by spiking MCF-7 cells into normal donor whole blood samples (50 000 mL⁻¹). The spiked blood samples were run through the device, with the intent to separate the cancer cells from the blood. Platelets and RBCs were first extracted, followed by pumping of 30 μL of RBC lysing buffer. At this point, we immunostained the MCF-7 cells with anti-EpCAM antibody to discriminate them from the remaining WBCs on the device using the protocol described above, and released the mixture at 4.5-5 Vpp. The released sample of 20 μL was placed on a Countess slide and imaged using a fluorescent filter (Figure 3c), as well as bright field (Figure 3d). A 10× ratio increment (ER 9.71×) for fluorescently labeled MCF-7 cells with respect to WBCs confirms the identification and enumeration of target cancer cells based on both size and marker expression, which is particularly important for clinical blood samples.
Device optimization for analysis at rare concentrations

At this point, we had demonstrated a quick and portable technique for separating blood components and detecting target cancer cells by immunolabeling on a single device. While the size of circulating tumor cells ranges from 15-25 μm, the rate of occurrence is 1-10 cells per mL of whole blood, making them extremely difficult to track 38 . Since rare cell detection requires a large sample volume to process and to identify, enumerate and analyze, we used an external syringe pump. The set-up in Figure 4a shows two syringe pumps connected with a three-way valve for controlling the flow of the undiluted blood sample and the wash buffer. The LCAT device was primed with an aqueous solution of 10% glycerol in lipid to maintain the stability of the air-liquid interfaces and increase the time of operation 39 . Due to the external bulk flow, it was imperative to optimize the device with respect to flow rate and the voltage applied to the PZT. As a model system, we evaluated the device sensitivity and efficiency by utilizing polystyrene microparticles of varying sizes that resembled the cell diameters and concentrations of rare cells and WBCs in whole blood. Fifteen-micrometer particles were spiked at a concentration of 10 000 particles per mL in a solution containing one million 10 μm particles per mL. We obtained the highest ER of 8.5× at 25 μL min⁻¹ and 2.75 Vpp. At a constant flow rate, Ub, set by the syringe pump, the enrichment ratio falls at larger voltage values (reduced Ub/Us) due to non-specific trapping of 10 μm particles. However, at low voltage values (larger Ub/Us), a reduced ER was also observed due to partial release of the target 15 μm particles (Figure 4b). At 8.5× ER, there was still non-specific trapping in the microstreaming vortices. To increase the purity and ER, we devised a voltage-switching procedure by applying a short pulse at low voltage while keeping the bulk flow at the same rate (Ub) (Figure 1a). When Us decreases on reducing the PZT voltage to 2 Vpp, the increase in the Ub/Us ratio allows 10 μm particles to be released. To prevent 15 μm particles from releasing, this voltage is maintained for only 30 s. After that, the flow from the syringe pump was switched off and the voltage was increased to 3.5 Vpp for 1.5 min to remove 10 μm particles by the LCATs' own ability to generate bulk flow, as encoded in the sketch below. We removed 78.2% of non-target 10 μm particles (Figure 4c), achieving 30× ER from the same starting material as above. To test at a practical occurrence rate, we reduced the spiking concentration of 15 μm particles to 10 mL⁻¹ and achieved 170× ER with ~100% trapping efficiency (Figures 4d and e).
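The voltage-switching procedure can be viewed as a short schedule of (drive voltage, duration, pump flow) states. The sketch below encodes the sequence described above; the `apply_state` control hook and the open-ended trapping duration are our assumptions, not part of the reported setup.

```python
# Hypothetical sketch of the voltage-switching schedule described above.
# Each step: (PZT drive in Vpp, duration in s or None, syringe flow in uL/min).
SCHEDULE = [
    (2.75, None, 25.0),  # trapping: run until the spiked sample is processed
    (2.00, 30.0, 25.0),  # short low-voltage pulse releases 10 um particles
    (3.50, 90.0, 0.0),   # pump off: LCAT bulk flow flushes remaining non-targets
]

def run(apply_state):
    """Step through the schedule; `apply_state` is an assumed control hook."""
    for vpp, duration_s, flow_ul_min in SCHEDULE:
        apply_state(vpp=vpp, duration_s=duration_s, flow_ul_min=flow_ul_min)
```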
Device validation with spiked particles and cultured cancer cells from unprocessed blood

After validating the device with microparticle mixtures, the ability to isolate particles from normal donor whole blood was tested. We spiked 25 μm and 15 μm particles at concentrations ranging from 10 000 mL⁻¹ to 10 mL⁻¹. The flow for the spiked sample was 25 μL min⁻¹ and particles were trapped at 2.75 Vpp. We imaged the device using an upright microscope equipped with a high-speed Phantom camera, and observed at the beginning that the channel filled with RBCs, as shown in Figure 5a. After 14 min of flow, we switched the valve to introduce the wash buffer (1× PBS) at the same flow rate for 20 min, with voltage switching to maximize the release of non-target particles and achieve highly pure vortices (Figures 5b and c).
One of the major requisites for the downstream processing of target cells, such as genomic analysis, is the enrichment of target cells in a small volume. Here, after operating the device for 34 min, the PZT was switched off and the trapped cells were released in a 20 μL volume of buffer. We analyzed the original and trapped samples on a Countess slide using fluorescent imaging, as shown in Figures 5d and e. At the higher particle concentration of 10 000 mL⁻¹, with WBCs at the original concentration of one million cells per mL, the ER of the particles with respect to WBCs was 70×. We repeated the same procedure at low concentrations and achieved 479× ER for 15 μm particles and 531× for 25 μm particles at a 10 mL⁻¹ spiking concentration (Figure 5f).
As a final proof-of-concept experiment, the separation and enrichment of MCF-7 breast cancer cells from whole blood was demonstrated. MCF-7 cancer cells in suspension were immunofluorescently stained with anti-EpCAM antibody and then spiked into normal donor whole blood at concentrations ranging from 1000 mL⁻¹ to 10 mL⁻¹. The trapping and enrichment of MCF-7 cells was performed using the same protocol described in the sections above. After processing 400 μL of spiked blood sample (separately spiked with 25 μm particles, 15 μm particles, and MCF-7 cells) for 34 min, we could capture and image 3-4 particles (Figures 6a and b) and 4 cells (Figure 6c) when they were spiked at an initial concentration of 10 mL⁻¹ in whole blood, thereby giving ~100% trapping efficiency (Figure 6e). After counting the fluorescent MCF-7 cells in the final sample, we obtained 213× enrichment at a 10 mL⁻¹ spiking concentration (Figure 6f). In addition, we cultured the enriched MCF-7 and SKBR-3 cells for 3 days in conditioned media to verify the preservation of phenotypic and genotypic characteristics after ultrasound exposure, and obtained successful proliferation after acoustic exposure, as shown in the bright field images (Figure 6d). We stained the nuclei of MCF-7 and SKBR-3 cells in culture with DAPI and observed multiple stained nuclei, as shown in Figure 6d (right). We further stained MCF-7 cells with anti-EpCAM antibody and observed intact cell membranes, as shown in Supplementary Figure S2.
DISCUSSION
Rare cells in the circulatory system carry significance for improving cancer prognosis and monitoring treatment response 1 . The ultimate goal of liquid-biopsy-based detection of rare cells requires a multi-modal system that can process raw samples by separating, sorting and enriching for specific cell populations, and finally identifying the targeted cells via known biological markers 40 . Typically, each of these processes requires separate instruments and the transfer of samples across different platforms, resulting in lowered sensitivity and specificity, and added costs. As described in this paper, our LCAT rare cell isolation platform can process whole blood without dilution, separate out the cellular subcomponents, isolate and enrich the targeted size-based population, and perform in situ immunolabeling. There is no other liquid biopsy system that can achieve this level of integration and multi-modal functionality. For example, the first category of rare cell detection platforms relied on surface biomarkers (for example, EpCAM) and binding to antibody-coated surfaces to filter and concentrate rare cells such as CTCs. However, it is now well known that cancer cells can undergo epithelial-to-mesenchymal transition 41 . Another promising category of microfluidic systems is "label-free"; these are able to sort blood cells based on their sizes using known techniques such as deterministic lateral displacement (DLD) 22,42 , inertial microfluidics 43 , and acoustophoresis 20,21,44 . However, recent developments in oncology have shown heterogeneity and great size overlap between rare cells such as CTCs and healthy WBCs 19,45 . Being able to separate blood cell sub-populations with fine resolution to define a size threshold that ensures ~100% capture of targeted rare cells, and then identify the targeted cells within this concentrated population via gentle immunolabeling in situ, is a powerful combination of techniques needed for liquid biopsy. An integrated system that is multimodal, label-free, biomarker compatible, low cost and fully automatable has enormous potential to make clinical impact. In this paper, our device has demonstrated the enrichment of MCF-7 cells from whole blood at concentrations as low as 10 mL⁻¹, at 100% capture efficiency, verified by EpCAM biomarker staining in situ. In contrast, although hydrodynamic-vortex-based devices have demonstrated cancer cell enrichment of up to four orders of magnitude, they required significant blood dilution to avoid cell-cell interactions and the capture efficiency was only ~20% 11 . Another enormous advantage of this LCAT platform is the speed of operation for cell enrichment and identification. Considering just the downstream analysis and enumeration of target cells after enrichment, benchtop immunostaining and fixation could take several hours. However, to develop an inexpensive and automated point-of-care device, we labelled the captured target cells in situ within 15 min and achieved 91% correlation in the fluorescence intensity by optimizing the LCAT-pumped bulk flow.
While our system has great potential for liquid biopsy, it may be limited in throughput due to the low flow rate and the instability of LCATs over time. Inertial-microfluidics-based devices are high-throughput (mL min⁻¹) but require significant dilution. Our strategy to improve throughput is to parallelize LCAT devices and increase the total blood sample volume they can handle. We are also optimizing the length of the sorting microchannels to maximize microfluidic channel length while minimizing hydrodynamic resistance. Another possibility is to flow the enriched 20 μL sample through the device in a second pass, since the enrichment ratio would then likely be greatly improved.
Overall, this work demonstrates a highly promising approach for the integrated isolation, enrichment, release and identification of targeted blood cells, and ultimately could lead to the development of complete point-of-care blood diagnosis. The technology, based on acoustic microstreaming, processes whole blood and does not require centrifugation or benchside processing of separated blood components. Thus, an offshoot of this technology is to create a complete blood sample preparation front-end device able to separate each of the blood cellular components (platelets, RBCs and WBCs), as each of these components contains critical information for the diagnosis of physiologic and pathologic conditions such as infectious diseases and inflammatory responses.
CONCLUSIONS
An acoustic-microstreaming-based LCAT device that can process whole blood and all its cellular constituents by size-based sorting, enrichment and subsequent biomarker-based identification is reported. This is, to our knowledge, the first demonstration of a single microfluidic device that combines the advantages of both label-free detection and in situ immunolabeling to select for targeted specific cell types. It is also the first device that can separate and collect the major components of whole blood (a platelet-rich plasma sample, RBCs and WBCs). Rapid immunolabeling (15 min) was demonstrated while targeted cells were trapped in the microstreaming vortices. With these novel integrated functions, we demonstrated ~100% trapping efficiency of spiked polystyrene beads and MCF-7 cells in whole blood and achieved an ~200× enrichment ratio in the sorted sample of 20 μL for concentrations as low as 10 mL⁻¹. | 6,210.4 | 2018-02-26T00:00:00.000 | [
"Biology"
] |
Common Reed as a Renewable Energy Resource for Pellet Production
Renewable reed biomass has traditionally been used for various purposes. Currently, an energy-dependent society is returning to the use of natural energy sources. The paper is devoted to the study of reed reserves in the Lower Volga region, the production technology, and the quality assessment of pellets made from it. Reed mowing with further production of fuel pellets from the biomass will not only provide the local population or small enterprises with energy but also solve a number of environmental problems. The main environmental problem is the high fire hazard of reed beds. The main characteristics of the pellets that have been investigated by the authors are humidity, ash content, calorific value, and the composition of the ash residue. When granulating shredded cormophytic biomass, the main process parameter affecting the fuel pellet quality is the moisture content of the raw materials, which has been determined experimentally. To improve the consumer properties of the granules, binders were also experimentally selected. Binders should not reduce the calorific value or impair the composition of the ash residue.
Introduction
Common reed is widespread almost everywhere in Europe, except for deserts and the Arctic. From the economic point of view, the reed is a weed infesting all agricultural crops on irrigated lands, causing significant damage [1,3].
In spring and summer, reed beds create an increased fire hazard. In the vast majority of cases, fires in reed beds are caused by anthropogenic factors. A fire that starts in reed beds, especially in dry and windy conditions, quickly becomes uncontrollable. In spring, reed fires are systematic and massive [1].
Proper science-based control over the reed biomass can significantly reduce the risk of landscape fires and bring economic benefits from the production of commercial goods [1].
One of the promising ways to solve the issue of utilizing rapidly growing reed biomass is to use it as biofuel for small-size power plants, e.g., district boiler houses, peasant farms, small industrial enterprises, or private households. As a biofuel, it is advisable to produce pellets (fuel granules) from cormophytic reed biomass [1].
Recently, the demand for fuel pellets made of non-wood raw materials, including reed, has been growing, especially in regions suffering from a lack of other fuel resources such as wood and coal.
Producing fuel pellets from wood waste is widespread in many countries. In 2018, almost 36 million tons of wood fuel pellets were produced globally and, according to estimates by INFOBIO IAA experts, this volume is expected to increase 2.5-fold by 2028 [2].
Scientists of the Volzhsky Polytechnic Institute have studied the possible operational reserves of reed growing in the entire Volgograd region and particularly in the Volga-Akhtuba Floodplain (involving the territories of two administrative districts of the Astrakhan region). The most suitable technique to determine the productivity of the reed fields and evaluate the probable harvest is plot harvesting [3]. In granulating reed raw materials, one of the most important characteristics is the moisture content, which was determined experimentally.
Granulation can be performed with or without binders; however, the use of binders improves the consumer properties of reed pellets. Binders have also been selected experimentally.
To evaluate the finished products, i.e., reed biomass pellets, parameters such as pellet moisture, calorific value, ash content, and ash residue composition should be determined, since it is these characteristics that determine the operational and ecological parameters of the finished product. These parameters have also been determined experimentally by the authors.
Estimating the reserves of reed raw materials in the Volga-Akhtuba floodplain and the entire Volgograd region
Locating reed beds is an important task for determining the biological and operational reed resources and performing the feasibility study of its further rational use.
To study the operational reed reserves of the Volgograd region, requests have been drawn up and sent to agencies interested in reducing the fire hazard in reed growing areas, as well as natural regional parks and forest ranger stations. Based on the data obtained, a preliminary generalized analysis has been conducted for the reed beds in certain regional districts.
Reed beds growing in the Volga-Akhtuba Floodplain have been studied in more detail. Three districts of the Volgograd region (Central Akhtubinsky, Leninsky, and Svetloyarsky) and two administrative districts of the Astrakhan region (Akhtubinsky and Chernoyarsky) have been investigated. All these districts are fully or partially located in the floodplain.
To fulfill the tasks set, a field expedition was preliminarily arranged, the objective of which was to locate the commercial reed beds, determine the harvesting boundaries and the crop productivity, and estimate reserves of these areas and lands. In the course of field studies, the location of commercial beds and lands was identified. The beds and lands located have been plotted on topographic maps.
The work was performed from December 15, 2015 to February 1, 2016. Location work is most feasible in this period, since reed beds are easy to identify by their season-specific color; in winter they are most accessible for study, especially after a period of subzero (Celsius) temperatures that freezes the soil and forms a stable, strong ice cover on the water surface. Industrial harvesting is performed in winter; therefore, the study of operational reed reserves and the analysis of their condition from the point of view of suitability for producing solid biofuel were carried out at that time.
The task of locating commercial reed beds and determining the operational reed reserves in the Volgograd and Astrakhan regions has been divided into two stages.
The first stage is locating the commercial reed beds as such. The search expedition used photo equipment and the public Internet resources Yandex Maps and google.ru/maps, which allow accurate tracking of the commercial reed beds identified. To establish the administrative attribution of reed beds, cadastral maps of settlements or notable geographical objects were also used. To determine the reed bed area, the following procedure was applied: the survey region was located on a map using the Internet resources, and the scale was increased until reed beds could be identified on the map. The image was imported into the Compass graphic editor, in which the reed bed perimeter was outlined and the area of the resulting flat figure was determined in mm². Since the electronic map has a scale indicator, a reference square was constructed at this scale, whose area is easy to determine in true (m²) and relative units using the Compass graphic editor [3].
The second stage is a field study aimed at determining the operational reed reserves. At this stage, trips were made to the reed beds located during the first stage, and operational reed reserves were determined using the plot harvesting technique.
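The mm²-to-area conversion used in the first stage is simple proportional scaling against the reference square. The following minimal sketch illustrates it; the numbers and parameter names are invented for illustration and do not come from the study.

```python
# Illustrative sketch of the map-scale area conversion described above.
# The outlined reed bed area is measured in mm^2 in the graphic editor;
# a reference square drawn at the same map scale gives the conversion
# factor from editor units to true ground area. All values are hypothetical.

def bed_area_hectares(outline_area_mm2: float,
                      ref_square_area_mm2: float,
                      ref_square_true_m2: float) -> float:
    """Convert an outlined area (mm^2 on screen) to hectares on the ground."""
    m2_per_mm2 = ref_square_true_m2 / ref_square_area_mm2
    return outline_area_mm2 * m2_per_mm2 / 10_000.0  # 1 ha = 10,000 m^2

# Example: a 25 mm^2 reference square representing 100 m x 100 m (10,000 m^2);
# an outline of 340 mm^2 then corresponds to 13.6 ha.
print(bed_area_hectares(outline_area_mm2=340.0,
                        ref_square_area_mm2=25.0,
                        ref_square_true_m2=10_000.0))  # -> 13.6
```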
In [3], a procedure for defining reed bed productivity in site investigations is described. Plots of 1×1 m were laid out in a straight line in a reed bed. Control mowing was performed at these plots and, to calculate the reserves, the productivity determined from the control mowing was multiplied by the area of the commercial harvesting; reed beds with a productivity of more than 3 t/ha were considered commercial [4]. However, this method has shortcomings: laying straight routes in reed beds is very difficult even with special equipment; the number of plots must be large to reduce the error; and the control mowing error increases with decreasing plot size. For this study, it was therefore decided to use typical plot sampling within a reed bed, with the number of plots per bed varying from 5 to 7. This method makes it possible to consider the reed bed plots with the highest, average, and lowest yield without laying fixed routes in the beds, which is very difficult in the course of mowing, since it is impossible to clearly define locations and reference ties in dense, tall reed beds. The plot dimensions were 5×5 m. The reed was mowed using a weed whacker to ensure that no unmown reeds remained inside the perimeter. All reed mowed at a plot was collected and weighed using electronic scales, and the measurement results were recorded in a logbook specifying the place and time of measurement. Samples were also taken to determine the moisture content of reed in the beds studied; for this, the sampled material was placed in an airtight valved bag. In addition, 2-3 kg of reed material was sampled for further processing into fuel pellets.
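The yield and reserve arithmetic behind this plot sampling reduces to averaging the per-plot masses and scaling by area. The sketch below, with invented plot masses, shows the calculation under the assumptions stated in the text (5×5 m plots, 5-7 plots per bed, 3 t/ha commercial threshold).

```python
# Hedged sketch of the plot-sampling yield estimate described above:
# 5 x 5 m plots, mowed biomass weighed per plot, yield averaged and
# scaled to the commercial bed area. The masses below are invented.

PLOT_AREA_M2 = 5 * 5          # each sample plot is 5 x 5 m
M2_PER_HA = 10_000

def yield_t_per_ha(plot_masses_kg):
    """Average fresh-biomass yield (t/ha) over 5-7 sample plots."""
    mean_kg_per_m2 = sum(plot_masses_kg) / (len(plot_masses_kg) * PLOT_AREA_M2)
    return mean_kg_per_m2 * M2_PER_HA / 1000.0  # kg/m^2 -> t/ha

def bed_reserve_t(yield_tha, bed_area_ha, commercial_threshold_tha=3.0):
    """Reserve for one bed; beds below the commercial threshold are skipped."""
    return yield_tha * bed_area_ha if yield_tha > commercial_threshold_tha else 0.0

masses = [13.2, 15.0, 14.1, 16.3, 12.8]   # kg mowed per 25 m^2 plot
y = yield_t_per_ha(masses)                # ~5.7 t/ha, near the reported average
print(round(y, 2), bed_reserve_t(y, bed_area_ha=120.0))
```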
During the expeditions, an area of about 10,000 km² was surveyed, on which more than 16,000 ha of reed beds were identified (the entire Volga-Akhtuba Floodplain covers about 20,000 km²). The average yield of mowed and dry biomass is 5.7 and 4.6 t/ha, respectively, as can be seen from Table 1. Comparing the areas occupied by reed beds with a cadastral map, we may conclude that most of the beds are located on the territory of settlements and agricultural lands [3].
The production of reed fuel pellets requires the moisture content of the feedstock (mowed reed) to be within certain limits; otherwise, the initial reed biomass would require drying or moisturizing. From a technological point of view, drying is the much more complicated and costly operation. According to the results of the field studies in which samples were taken to determine the moisture content, in most cases the moisture content of reed biomass is quite suitable for producing fuel pellets without drying (Table 1). Only the sample taken in the Khutor Talovy area had an increased water content, which can be explained either by measurement error (an outlier) or, more likely, by adverse atmospheric effects (fog and fine rain during the survey).
The calculation results fully confirm the previously adopted hypothesis that the average reed yield lies within 4.22 to 7.18 t/ha.
The results can be considered reliable and used to estimate the reed yield and possible harvesting in the territories surveyed. The forecasted total possible harvest for all explored commercial beds is within 75 to 85 thousand tons, which is quite enough to power, for example, an enterprise for processing agricultural products, or to arrange the production of reed pellets for sale on the solid biofuel market.
To track the reed biomass rehabilitation in the vegetative period, monitoring of the reed biomass condition has been organized at the reference plots in the Volzhsky industrial area. Monitoring comprises periodic inspection of reference plots and sampling reed biomass for analysis.
Basic process parameters affecting the pellet quality
In the course of producing pilot batches of reed fuel pellets, the main factors affecting their quality have been established: the moisture content of chips, the chip size, and the die shaft torque.
Among the factors listed, the initial moisture content of the chips has the greatest impact on pellet quality. When granulating reed biomass with a feedstock moisture content exceeding 15-20%, the resulting fuel pellets are destroyed by the internal pressure of water formed during compression of the crushed mass. In this case, therefore, the raw materials used to produce fuel pellets should be dried to a suitable moisture value. Since drying is the most energy-intensive operation in fuel pellet production (up to 70% of the energy consumption falls on drying the biomass [5]), acceptance control allows optimizing the reed fuel pellet production process by drying only the feedstock whose moisture does not meet the standard requirements.
Determining the moisture content of the mowed reed biomass is important for other reasons as well. Firstly, during long-term storage of mowed reed biomass to be used as a raw material in fuel pellet production, an increase in the moisture content of the raw materials to 15-20% or more creates the conditions for the vital activity of substrate microflora (bacteria, molds) to begin. The microflora activity is accompanied by a large biological heat release, the substrate is heated, and the process becomes even more active: the substrate begins to "burn". Secondly, with the initial biomass moisture content known, it is possible to choose the optimal operating conditions of the drying unit and to determine the amount of water or steam required to moisturize raw material with too low an initial moisture content. Thirdly, the relative moisture of the reed biomass was used to determine the dry biomass yield [6].
The moisture content of the reed samples from each plot was determined by two techniques: the analytical one and the Wile Bio Wood moisture meter. The discrepancy between the values obtained with the instrument and with the analytical technique did not exceed 5%. Studying the impact of the moisture content of the reed chips on the feasibility of granulation and the quality of the resulting pellets established the following. At too low a chip moisture content (less than 8%), granulation virtually does not occur: the reed chips do not agglomerate and pour freely through the die holes. At a chip moisture content of 8-14%, granulation occurs, but the ungranulated mass yield is rather large, so the process cannot be considered effective or used for commercial purposes; moreover, an increase in granulator operating time leads to chip burning in the gap between the rollers and the die, which is accompanied by smoke, the thermal effect being clearly visible on the pellet surface. Such a process is dangerous with respect to possible chip ignition in the granulator and the occurrence of a fire. A chip moisture content within 14-20% may be considered optimal: the pellet yield and quality reach their highest values. With a further increase in moisture content (over 20%), pellet quality deteriorates due to excess water, which deforms the pellet as it leaves the die; defects in the form of cracks are clearly visible on the pellet surface, and the pellet remains brittle after cooling (Fig. 1).
It should be noted that, in the course of granulation in a granulator with a flat rotating die and pressure rollers, the moisture content of the feedstock changes significantly. In the experiment, the granulator was fed with chips having a moisture content of 19.4%. In the gap between the rollers and the die, a significant amount of heat is released, the pressed chips are heated, and their moisture content decreases. The heating of the pressed material continues as it passes through the die holes. At the die outlet, the pellets have a moisture content of 12.1% and an elevated temperature. During cooling, the moisture content of the pellets decreases to 8.1%. Thus, in the course of granulation, the moisture content of the material decreases almost 2.4 times, from 19.4 to 8.1%, which should be considered when developing an industrial process.
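The moisture windows above amount to a simple decision rule. The sketch below encodes them as one possible reading of the text (the boundary values are the paper's; their assignment to closed or open intervals is our assumption) and reproduces the reported 2.4-fold moisture reduction.

```python
# A decision-rule sketch of the moisture windows reported above
# (<8 %: no granulation; 8-14 %: inefficient, burning risk;
# 14-20 %: optimal; >20 %: cracked, brittle pellets).

def granulation_regime(chip_moisture_pct: float) -> str:
    if chip_moisture_pct < 8:
        return "no granulation: chips pour freely through the die"
    if chip_moisture_pct < 14:
        return "inefficient: large ungranulated yield, burning risk"
    if chip_moisture_pct <= 20:
        return "optimal: highest pellet yield and quality"
    return "poor: excess water deforms and cracks the pellets"

for m in (6, 12, 17, 23):
    print(m, "%:", granulation_regime(m))

# Moisture loss through the press, as measured in the experiment:
# 19.4 % in the feed, 12.1 % at the die outlet, 8.1 % after cooling,
# i.e. a reduction by a factor of 19.4 / 8.1 ~ 2.4.
print(round(19.4 / 8.1, 1))  # -> 2.4
```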
The impact of the initial chip moisture content on pellet density has also been established. For the experiment, pellets made of fine chips with moisture contents of 10, 20, and 25% were sampled. As can be seen from the diagram in Fig. 2, when the chip moisture content deviates from the optimal value, the pellet density decreases and the quality deteriorates [1]. Improving the pellet quality without changing the granulation conditions can be achieved by adding auxiliary binding components (binders) to the chips. When choosing a binder, its cost, availability, and technological and environmental properties should be considered. In [7], the authors propose the following substances as binders: lignin, glycerin, stearin, and sodium stearate.
When testing the reed pellets obtained, potato starch was also used as a binding component. According to qualitative analysis, pellet samples in which potato starch was used as a binder showed good adhesion of the feedstock during compression and therefore, the highest density and strength. Also, it is worth noting that starch is the cheapest of the binders studied.
The reed fuel pellet quality was estimated according to ISO 17225-6:2014 "Solid biofuels - Fuel specifications and classes - Part 6: Graded non-woody pellets". According to this standard, the following indicators were studied: moisture content, %; ash content, %; (lower) calorific value, MJ/kg; and the content of macro- and microelements [8]. The moisture, ash, and macro- and microelement content studies were carried out in the Teplotekhnik Test Center; the test results are given in Table 2. Table 3 shows the elemental composition of the mineral (ash) part of the reed fuel pellets [9]. The study results showed the possibility of using reed to produce fuel pellets. The most suitable raw material is reed growing in relatively favorable environmental conditions, away from industrial enterprises. Reed beds located in the Volga-Akhtuba Floodplain natural park, as well as in the Svetloyarsky district of the Volgograd region and the Chernoyarsky district of the Astrakhan region, meet these conditions; the largest beds, and the most promising ones from the point of view of reed harvesting and processing, are located in these areas. | 3,984.4 | 2020-01-01T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
Development of Newly Formulated Nanoalumina-/Alkasite-Based Restorative Material
Purpose Nanotechnology offers considerable scope in dentistry to improve dental treatment, care, and the prevention of oral diseases through the use of nanosized biomaterials. This study assessed the effect of incorporating alumina nanoparticles (Al2O3 NPa) into the recently introduced alkasite-based restorative material (Cention N) on its mechanical properties and surface topographical features. Materials and Methods Alumina nanopowder was incorporated into the powder component of Cention N at 5 and 10% (w/w). The unblended powder was used as a control. Compressive strength was assessed using a universal testing machine. Surface microhardness and roughness were evaluated using the Vickers microhardness test and a surface profilometer, respectively. Surface topography was inspected using a scanning electron microscope (SEM). Data were analyzed by ANOVA and Tukey's test (P < 0.05). Results Incorporation of either 5 or 10% (w/w) Al2O3 NPa into the alkasite-based restorative material (Cention N) increased both its compressive strength and surface microhardness. This increase was significant with the lower concentration of Al2O3 NPa (5% w/w). Meanwhile, there was an increase in the surface roughness values of Cention N modified with either 5 or 10% (w/w) Al2O3 NPa; this increase was only significant in the case of 10% (w/w) Al2O3 NPa. Conclusion Incorporation of 5% (w/w) Al2O3 NPa into the newly introduced alkasite-based restorative material (Cention N) seems to produce a promising restorative material with high compressive strength and surface hardness without adversely affecting its surface roughness properties. Thus, implementing nanotechnology in the Cention N restorative material may prove helpful for a variety of clinical applications.
Introduction
Various direct filling materials are available in dental markets, ranging from amalgams to modern bulk-fill composites [1]. Amalgam and glass ionomer cement are considered basic filling materials: basic in terms of their long establishment, low cost, and simplicity of use. Moreover, they are usually applied in bulk without adhesive, are self-curing, and do not need complicated dental equipment [2].
However, the drawbacks related to amalgam, such as the relatively high coefficient of thermal expansion, the need for a matrix band during condensation, the unesthetic appearance, and the controversy concerning the safety of mercury, all played a role in the emergence of tooth-colored restorative materials [3]. Similarly, glass ionomer cement possesses poor mechanical properties, limited usage (unsuitable for stress-bearing situations), and low esthetic value, which led to the further development of resin-based composites [4].
Numerous improvements in direct filling materials have been made with dental composites and their accompanying adhesives in recent decades [1]. Polymeric restoratives have continued to develop into the direct restorative materials of choice, mainly due to their superior esthetic characteristics [5]. Composites have been the most widely used restorative materials in dentistry in recent years, with a wide variety of applications. Yet, they are considered expensive, time-consuming, and technique sensitive [6].
Consequently, dentists searched for a real alternative to silver amalgam, glass ionomer cement, and composites that is cost-effective, a fluoride-releasing product, quick, and easy to use without the complicated equipment and offers both strength and good esthetics [7].
Cention N is an "alkasite" restorative material that marks the start of a new age of restorative dentistry, much as compomers or ormocers did. It is essentially a subgroup of the composite resins [7]. It is a novel bulk-fill direct posterior restorative material. This new material uses an alkaline filler that can release acid-neutralizing ions [7].
It is self-curing with elective supplementary light curing. Cention N is radiopaque and releases fluoride, calcium, and hydroxide ions. Due to its dual-curing option, it can be utilized as a full-volume (bulk) replacement material [8]. Cention N has many advantages such as bulk placement, optimal physical/mechanical properties, better esthetics, and optional light curing [9]. The use of nanomaterials in dentistry is expected not only to enhance the properties and functionality of dental products but also to drive the development of innovative, novel products for the benefit of patients [10]. Nanosized materials exhibit exceptional properties according to their size. Metal and metal oxide nanoparticles have been investigated extensively due to their prospective wide applications [11].
Aluminum oxide, commonly referred to as alumina, with the chemical formula Al2O3, is a chemical compound of aluminum and oxygen with strong ionic interatomic bonding that produces its desirable material characteristics.
It can exist in several crystalline phases; alpha-phase alumina is the strongest and stiffest of the oxide ceramics. The desirable characteristics of alumina, such as high hardness, excellent dielectric properties, and good thermal properties, make it the material of choice for a variety of applications. Moreover, it offers excellent size and shape capabilities, with high strength and stiffness [12].
As the use of nanoparticles has become a significant area of research in the dental field, the purpose of this study is to evaluate the effect of incorporating alumina nanoparticles into the recently introduced alkasite restorative material, Cention N, on its compressive strength, surface roughness, microhardness, and surface microstructure.
According to the research hypothesis, adding Al2O3 NPa to Cention N would change its physical properties and surface microstructure.
Materials and Methods
A commercially available Cention N restorative powder (Cention, Ivoclar Vivadent AG, Liechtenstein, Lot Number X46009) was blended in various proportions with alumina nanoparticles (Sigma-Aldrich Co., St. Louis, MO, USA) with particle size measuring <50 nm by transmission electron microscope (TEM).
Specimen Preparation.
Specimen powders were made by blending 5% and 10% (w/w) alumina nanoparticle powder with the Cention N powder (particle size of 90 µm, as received from the manufacturer) by hand using a mortar and pestle for 10 min. The unblended powder was used as the control for all tests. The recommended powder/liquid (P/L) ratio of 1.8/1 for the Cention N restorative material was used for all prepared specimens. The 5 and 10% w/w alumina NPa powder fractions were added to the Cention N powder before proportioning the powder with the liquid; hence, the added alumina powder was accompanied by a corresponding reduction in the amount of Cention N powder.
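The proportioning described above is straightforward mass arithmetic: the alumina fraction replaces part of the Cention N powder, and the total powder mass sets the liquid mass through the P/L ratio. A minimal sketch follows, with a hypothetical batch size; only the 1.8/1 ratio and the 5/10% fractions come from the text.

```python
# Hedged sketch of the batch proportioning described above: the alumina
# fraction replaces part of the Cention N powder, and the total powder
# is then combined with liquid at the recommended P/L ratio of 1.8 : 1.

def batch_masses(total_powder_g: float, alumina_frac: float,
                 p_to_l_ratio: float = 1.8):
    """Return (Cention N powder, alumina NPa, liquid) masses in grams."""
    alumina_g = total_powder_g * alumina_frac
    cention_g = total_powder_g - alumina_g   # alumina replaces Cention N powder
    liquid_g = total_powder_g / p_to_l_ratio
    return cention_g, alumina_g, liquid_g

print(batch_masses(10.0, 0.05))  # 5 % w/w group:  (9.5, 0.5, ~5.56)
print(batch_masses(10.0, 0.10))  # 10 % w/w group: (9.0, 1.0, ~5.56)
```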
A total of 93 specimens were used in the study: 30 specimens for each mechanical test (compressive strength, surface microhardness, and surface roughness) and three representative samples, one per group, for scanning the surface microstructure.
A sectional Teflon mold (8 mm diameter × 2 mm thickness) was utilized to fabricate disc-shaped specimens used for surface microhardness, surface roughness, and color stability tests. At the same time, a stainless-steel split mold (4 mm in diameter and 6 mm in height) according to ISO standards was utilized to prepare cylindrical specimens for compressive strength testing. All specimens were stored in deionized water at 37 ± 1°C to equilibrate for 48 hours before testing.
Compressive Strength Test.
Compressive strength testing (Cs; MPa) was performed using the universal testing machine at a crosshead speed of 0.5 mm/min. It was calculated as Cs = 4Pf/(πD²), where Pf is the load (N) at fracture and D is the diameter of the specimen (mm) [13].
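The equation above is the standard compressive strength relation for a cylindrical specimen (load over cross-sectional area). The following sketch evaluates it; the example load is invented to land near the group mean reported later in Table 1.

```python
import math

# Compressive strength of a cylindrical specimen from the fracture load,
# Cs = 4*Pf / (pi * D^2), with Pf in newtons and D in millimetres so that
# the result comes out in MPa (N/mm^2). The example values are illustrative.

def compressive_strength_mpa(load_at_fracture_n: float, diameter_mm: float) -> float:
    return 4.0 * load_at_fracture_n / (math.pi * diameter_mm ** 2)

# A 4 mm diameter specimen failing at ~2547 N gives ~202.7 MPa,
# of the order of the 5 % (w/w) group mean reported in Table 1.
print(round(compressive_strength_mpa(2547.0, 4.0), 1))
```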
Surface Microhardness Test.
The Vickers hardness numbers (VHN) of the tested specimens were obtained using a microindentation tester (MMT-3 Digital Hardness Tester, Buehler Ltd., Lake Bluff, IL) by applying a load of 29.42 N on the specimens for 30 seconds. Five indentation measurements were carried out and averaged for each specimen [14].
Surface Roughness Test.
Using a surface profilometer (Surftest 211, Mitutoyo, Tokyo, Japan), the surface roughness of each specimen was measured at five distinct locations. The surface roughness cutoff value was 0.8 mm, and the stylus traversing range was 4 mm. The tracing diamond tip radius was 5 μm, and the measuring force and velocity were 4 mN (0.4 g) and 0.5 m·s⁻¹, respectively. For each specimen, the average roughness value (Ra, μm) was taken as the mean of the Ra values measured at the five locations.
Scanning Electron Microscopy (SEM).
The surface microstructure of the three samples representing the studied groups was examined using a scanning electron microscope (SEM; JEOL JSM-6510LV, Japan) operating at an accelerating potential of 30 kV and magnification up to ×10⁶. All specimens were coated with a thin layer of gold to minimize charging effects.
Compressive Strength.
The mean and standard deviation values for compressive strength are presented in Table 1. The 5% (w/w) Al2O3-NPa-modified Cention N group showed the highest compressive strength value (202.680 ± 7.558), while the control group (no addition) showed the lowest (173.787 ± 3.302). One-way ANOVA identified significant differences between the mean compressive strength values of the tested groups (P = 0.0012). Tukey's test showed no statistically significant increase in the compressive strength value of 10% (w/w) Al2O3-NPa-modified Cention N in comparison with the control group. On the other hand, there was a significant increase in the compressive strength values (P < 0.05) of 5% (w/w) Al2O3-NPa-modified Cention N when compared with both the 10% (w/w) and the control groups.
Surface Microhardness.
The mean and standard deviation values for surface hardness are presented in Table 1. The 5% (w/w) Al2O3-NPa-modified Cention N group showed the highest surface microhardness value (76.067 ± 2.682), while the control group exhibited the lowest (48.333 ± 2.645). One-way ANOVA identified significant differences between the mean surface microhardness values of the tested groups (P = 0.0001). Both the 5 and 10% (w/w) groups showed a significant increase in surface microhardness values when compared with the control group. The addition of the lower concentration of Al2O3 NPa (5% w/w) to Cention N significantly increased its microhardness values when compared with the higher concentration (10% w/w) group.
Surface Roughness.
The mean and standard deviation values for surface roughness are presented in Table 1. The higher-concentration Al2O3 NPa group (10% w/w) demonstrated the highest surface roughness value (0.1790 ± 0.0118), while the control group exhibited the lowest (0.1064 ± 0.0357).
One-way ANOVA showed significant differences between the mean surface roughness values of the tested groups (P = 0.0003). The surface roughness value of the 5% (w/w) group exhibited a slight, nonsignificant increase in comparison with that of the control group. However, the surface roughness value of the 10% (w/w) group exhibited a significant increase when compared with both the 5% (w/w) and the control groups.
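The analysis pattern used throughout these results (one-way ANOVA followed by Tukey's post hoc test at P < 0.05) can be reproduced as follows. The arrays are placeholders chosen to be of the right order of magnitude, not the study data.

```python
# A minimal sketch of the statistical pipeline described above: one-way
# ANOVA across the three groups followed by Tukey's HSD post hoc test
# (alpha = 0.05). The measurement arrays below are invented placeholders.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

control = np.array([172.1, 175.9, 171.8, 174.0, 176.3, 172.6])
npa_5   = np.array([205.3, 198.1, 210.2, 196.9, 203.8, 201.8])
npa_10  = np.array([178.4, 182.0, 176.5, 181.1, 179.9, 177.2])

f_stat, p_value = stats.f_oneway(control, npa_5, npa_10)
print(f"ANOVA: F = {f_stat:.2f}, P = {p_value:.4f}")

values = np.concatenate([control, npa_5, npa_10])
groups = ["control"] * 6 + ["5% NPa"] * 6 + ["10% NPa"] * 6
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```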
Scanning Electron Microscopy.
The SEM photomicrographs obtained in this study demonstrated an increase in the homogeneity and smoothness of the surface upon modification of the Cention N samples with 5% (w/w) NPa (Figure 1(b)) in comparison with Figure 1(a). Meanwhile, the higher concentration (10% w/w) exhibited small clusters due to agglomeration of the nanoparticle powder (Figure 1(c)).
Discussion
Nanomaterials are expected to enhance not only the properties and use of dental products but also the development of new products for the best benefit of patients [10]. The use of nanoscale materials, especially metal oxide nanoparticles such as Al2O3 NPa, has been investigated in this study because of their potential for a variety of applications due to their specific properties [11].
Compressive strength plays a particularly important role in the mastication process, since most masticatory forces are compressive [15]. Therefore, it is important to investigate whether compressive force contributes to fracture failure during mastication. The microhardness test is frequently used to evaluate a material surface's resistance to plastic deformation by penetration [16]. The research hypothesis was accepted, since the addition of Al2O3 NPa to Cention N did alter its physical properties. The two concentrations of Al2O3 NPa (5 and 10% w/w) increased the compressive strength of Cention N; however, this increase was only significant in the case of the lower concentration (5% w/w). Similarly, a significant improvement in surface hardness values was exhibited by the two groups of Cention N modified with both 5 and 10% (w/w) Al2O3 NPa, which was also more pronounced with the lower concentration (5% w/w).
The compressive strength and surface hardness improvement of Cention N containing 5% and 10% (w/w) Al2O3 NPa can be attributed to the small size of the Al2O3 particles supplemented into the glass fillers of the powder. These nanoparticles could occupy the empty spaces between the larger Cention N glass filler particles and act as additional binding sites for the organic monomer part of Cention N found in the Cention N liquid [17]. This monomer consists of four different dimethacrylates: urethane dimethacrylate (UDMA), tricyclodecane-dimethanol dimethacrylate (DCP), tetramethyl-xylylene-diurethane dimethacrylate (aromatic aliphatic UDMA), and polyethylene glycol-400 dimethacrylate (PEG-400 DMA), which interconnect (cross-link) during polymerization, resulting in strong mechanical properties and good long-term stability [18]. The lower compressive strength and surface hardness values at the higher Al2O3 NPa loading (10% w/w) compared with the lower loading (5% w/w) could be related to the propensity of Al2O3 NPa to agglomerate within the matrix at higher concentrations, exhibiting weak matrix interaction and resulting in lower mechanical properties [19]. Furthermore, these clumped particles may serve as defect centers, promoting the accumulation of stress-related damage [20].
This was supported by Schulze et al., who concluded that an increase in filler fraction does not necessarily lead to an increase in strength. This could be attributed to the fact that higher filler fractions could generate more defects that weaken the materials [21]. The findings of this study are consistent with Adachi et al., who reported that the addition of fillers in the form of alumina nanoparticles into a polymer serving as a matrix improved the mechanical behavior of the resulting composite material [22].
In contrast, the main problems encountered with the addition of higher concentrations of nanoparticles are the mixing and uniform distribution of the nanoparticles within the matrix material, because nanoparticles tend to agglomerate, thus weakening the polymer matrix [21].
In the present study, the average surface roughness (Ra) values of all tested Cention N specimens (control and modified groups) were within the 0.106-0.179 μm range. Uppal et al. [23] reported that the critical surface roughness value for bacterial colonization is 0.2 μm. A surface roughness higher than 0.2 μm is likely to significantly increase bacterial adhesion, dental plaque maturation, and acidity acting on material surfaces, thus increasing caries risk. In this study, all Cention N specimens presented surface roughness below this value, both before and after modification with Al2O3 NPa. The results exhibited an increase in the surface roughness values of Cention N modified with either 5 or 10% (w/w) Al2O3 NPa; however, this increase was only significant in the case of the higher-concentration group (10% w/w). This might be attributed to the increased possibility of agglomeration of Al2O3 NPa at the higher concentration, with a corresponding lack of homogeneity and interfacial bonding between the particles and the polymer matrix and hence an accompanying increase in surface roughness. The SEM examination of the samples was consistent with the roughness results, since the SEM photomicrograph in Figure 1(c) revealed small clusters at the higher Al2O3 NPa concentration when compared with both the control group (Figure 1(a)) and the lower-concentration group (Figure 1(b)).
This clustering tends to decrease the homogeneity of the sample surface [24]. The lack of water sorption and solubility tests, as well as the use of only two concentrations of alumina NPa added to Cention N, are regarded as limitations of this study.
Conclusions
Based on the results and within the limitations of this study, it can be concluded that the use of 5% (w/w) Al2O3-NPa-modified Cention N appears very promising. Modification of Cention N with 5% (w/w) Al2O3 NPa improved both compressive strength and surface hardness without compromising its surface roughness. Further assessments are needed to study the effect of this modification on properties such as color change as well as water sorption and solubility.
Data Availability
The SPSS data file used to support the findings of this study is available from the corresponding author upon request.
Ethical Approval
All procedures performed in the study followed the ethical standards of the institutional and/or national research committee.
This study was approved by the institutional review board. | 3,866.8 | 2021-07-26T00:00:00.000 | [
"Materials Science",
"Medicine"
] |
SAPFIR: A webserver for the identification of alternative protein features
Background Alternative splicing can increase the diversity of gene functions by generating multiple isoforms with different sequences and functions. However, the extent to which splicing events have functional consequences remains unclear and predicting the impact of splicing events on protein activity is limited to gene-specific analysis. Results To accelerate the identification of functionally relevant alternative splicing events we created SAPFIR, a predictor of protein features associated with alternative splicing events. This webserver tool uses InterProScan to predict protein features such as functional domains, motifs and sites in the human and mouse genomes and link them to alternative splicing events. Alternative protein features are displayed as functions of the transcripts and splice sites. SAPFIR could be used to analyze proteins generated from a single gene or a group of genes and can directly identify alternative protein features in large sequence data sets. The accuracy and utility of SAPFIR was validated by its ability to rediscover previously validated alternative protein domains. In addition, our de novo analysis of public datasets using SAPFIR indicated that only a small portion of alternative protein domains was conserved between human and mouse, and that in human, genes involved in nervous system process, regulation of DNA-templated transcription and aging are more likely to produce isoforms missing functional domains due to alternative splicing. Conclusion Overall SAPFIR represents a new tool for the rapid identification of functional alternative splicing events and enables the identification of cellular functions affected by a defined splicing program. SAPFIR is freely available at https://bioinfo-scottgroup.med.usherbrooke.ca/sapfir/, a website implemented in Python, with all major browsers supported. The source code is available at https://github.com/DelongZHOU/SAPFIR. Supplementary Information The online version contains supplementary material available at 10.1186/s12859-022-04804-w.
minor isoform frequency of 15% or more [2]. Alternative splicing leads to changes in the sequence of the mRNA transcript which can translate into changes in the protein product, including at the level of its localization in the cell, cellular function, stability or binding affinity [3]. Indeed, previous studies estimate that about 26% of protein domains and about 20% of localization signals are absent in some transcripts due to alternative splicing [4][5][6]. Alternative splicing can also change the expression of a gene by changing the stability of the mRNA or protein product [7].
In general, constitutive splicing events tend to be more conserved than alternative splicing events suggesting a role for alternative splicing in supporting species identity [8][9][10]. However, it remains unclear whether species-specific alternative splicing events result in species-specific alternative protein functions.
Recent advances in RNA sequencing technologies have allowed for transcriptome-wide analysis of differential splicing of mRNAs [11]. Deep sequencing of normal and diseased tissues has identified thousands of splice variants, underlining the potential of splicing as a regulator of cell function. However, identifying the functional differences between splicing variants is limited to empirical gene-by-gene studies [3,12]. It is unclear whether groups of genes with similar functions, or genes within the same pathways, are particularly prone to alternative splicing regulation. Ultimately, associating changes in splicing profiles with specific changes in cellular function continues to be challenging [13][14][15].
The impact of splicing on protein function is currently addressed by a small number of computational tools that provide information on exon or isoform functions (Table 1). These tools infer the effect of alternative splicing by predicting its impact on the presence of important protein features. For example, Exon Ontology annotates exon function by associating exons with information such as protein domains and post-translational modification sites, grouped in a hierarchical structure similar to the Gene Ontology [16]. On the other hand, IsoformSwitchAnalyzeR and tappAS compare splicing isoforms to determine the gain or loss of protein features, including functional domains and important motifs [17,18]. However, these tools suffer from the lack of a flexible, user-friendly interface, which reduces their capacity to adapt to different splicing analysis pipelines and to reach a wider user base.
To facilitate the prediction of the effects of splicing events on protein function, we created the Sherbrooke Alternative Protein Feature IdentificatoR (SAPFIR). SAPFIR is a flexible and easy-to-use webserver that identifies alternative protein features in individual genes or lists of genomic regions. SAPFIR's flexible parameter setup permits it to adapt to different splicing analysis pipelines.
1) Annotation of alternatively included protein features
The SAPFIR webserver aims to identify and characterize alternative protein features encoded in transcripts from user-defined genes or genomic regions. To do so, SAPFIR considers all genes, transcripts, and proteins as annotated in Ensembl, and all protein features as predicted by InterProScan, to determine those that are alternatively included. However, for a given gene, whether a protein feature is constitutive (present in all transcripts) or alternative (present only in some transcripts) depends on which transcripts are considered.
To provide flexibility to the user, we propose three largely independent standards to suit the needs of different studies. As illustrated in Fig. 1, isoforms can arise from alternative exons (transcripts 1 and 2), alternative transcription start sites or transcription termination sites (transcripts 2, 3 and 4), and other mechanisms. Non-coding transcripts may result from intron retention or frame shift (transcript 5). The first standard considers all coding transcripts, since including non-coding transcripts makes all predicted features alternative.
Transcripts with short coding regions (CDS) pose similar challenges and are frequently observed in the Ensembl annotation. To overcome this problem, we introduced a second standard where only transcripts with a long enough CDS are considered. To implement this standard, the transcript with the longest CDS in each gene is taken as the major isoform, and the CDS lengths of the other transcripts are compared to that of the major isoform to determine their CDS length ratios. Only transcripts whose ratios exceed a user-defined threshold are considered in this standard. The distributions of the gene-wise proportion of transcripts exceeding three different thresholds (0.25, 0.50, and 0.75) are illustrated in Additional file 1: Fig. S1C. In a quarter of genes, fewer than 40% of transcripts have a CDS longer than a quarter of that of their respective major isoform, in both human and mouse, indicating the abundance of transcripts with short CDS. It would therefore create significant biases if these transcripts were not excluded.
Fig. 1 Definition of alternative protein features, using an illustration of a hypothetical gene with predicted features. Whether a feature is alternative depends on which transcripts are considered. Notably, the transcripts considered by the Overlap CDS standard vary as a function of the feature in question. The CDS length ratio is set at 50%, which removes Transcript 4 from consideration. "Alt." indicates alternative, "Con." indicates constitutive
Finally, we propose a third standard, "Overlap CDS", to better identify variations caused by alternative splicing. In this standard, only transcripts whose CDS (the genomic region between the CDS start and end, disregarding exon-intron boundaries) entirely covers the genomic region defined by the start and end of the feature of interest are considered. Such transcripts are more likely to be regulated by alternative splicing, where an alternative region is usually flanked by two constitutive regions. This standard has the particularity that the transcripts considered may vary from feature to feature within a gene, as illustrated in Fig. 1.
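The three transcript-selection standards can be summarized schematically as below. This is a simplified sketch, not SAPFIR's actual code: the data structures are invented, and CDS length is approximated here by the genomic span (the real implementation would sum the exonic CDS pieces).

```python
# A schematic sketch of the three transcript-selection standards described
# above, using simplified, hypothetical data structures.

from dataclasses import dataclass

@dataclass
class Transcript:
    name: str
    coding: bool
    cds_start: int   # genomic CDS start
    cds_end: int     # genomic CDS end

def candidates(transcripts, standard, feature=None, ratio=0.5):
    coding = [t for t in transcripts if t.coding]
    if standard == "all_coding":
        return coding
    if standard == "cds_length":   # keep CDS >= ratio * longest CDS
        longest = max(t.cds_end - t.cds_start for t in coding)
        return [t for t in coding
                if (t.cds_end - t.cds_start) >= ratio * longest]
    if standard == "overlap_cds":  # CDS span must fully cover the feature
        f_start, f_end = feature
        return [t for t in coding
                if t.cds_start <= f_start and t.cds_end >= f_end]
    raise ValueError(standard)

# A feature is constitutive if present in every candidate transcript and
# alternative otherwise; which candidates are used depends on the chosen
# standard (and, for Overlap CDS, on the feature itself).
```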
2) SAPFIR web interface
For the identification of alternative protein features in individual genes, the user starts by providing the identity of the gene of interest, defines the threshold of the CDS length ratio, and chooses the protein feature prediction tool(s). The result is presented as a webpage containing two downloadable, searchable tables and one graph (Fig. 2A and Additional file 1: Fig. S2). The first table contains the protein features predicted as encoded in each transcript, with their genomic positions (Additional file 1: Fig. S2A). The second table indicates whether each feature is alternative or constitutive according to the standards described above (Additional file 1: Fig. S2B). Finally, the graph presents the position of the features in relation to the exons within each transcript (Additional file 1: Fig. S2C).
Rather than a single gene, users can also input a list of multiple genomic regions, which can be helpful particularly for high-throughput experiments such as RNA-seq. In this second function, SAPFIR annotates multiple genomic regions for InterProScan predicted protein features and compares the frequency of presence of the features in a list of regions of interest (referred to as the "target") against the frequency in a list of regions of control (referred to as the "background"). A suitable target list could be splicing events found to be affected by a change in cell condition identified by differential splicing analysis of RNA-seq experiments, and the background list could be splicing events not affected during the same process.
SAPFIR can accept genomic regions that correspond to splice junctions, exons, isoforms, or other user-defined regions. The user starts by providing the target and background lists and chooses the tools used to predict protein features. The result page contains a brief summary, along with downloadable tables of the enrichment analysis and of the feature annotation of the target and background lists (Fig. 2B and Additional file 1: Fig. S3). The summary reports the number of regions in both lists, the total number of domains identified, and up to the five most enriched features (Additional file 1: Fig. S3A). The enrichment table follows the same format as the table in the summary, plus links to the InterPro website for each entry. The annotated target and background tables contain the original input with the predicted features in each region (Additional file 1: Fig. S3B). The SAPFIR web interface also contains a help page with hyperlinks, screenshots, and figures explaining the functionality of the webserver and the interpretation of results, with example data (Additional file 1: Fig. S4).
1) Alternative feature annotation
To examine the capacity of SAPFIR to detect alternative protein features, we compared its results to those previously obtained manually in a study investigating the alternative splicing events following infection of mouse cells by reovirus [19]. As indicated in Fig. 3A and Additional file 2: Table S1, SAPFIR identified the alternative domains in 19 out of the 27 manually annotated exons. Most domains that were not identified by SAPFIR are present in adjacent regions of the same gene (Additional file 2: Table S1). Most importantly, SAPFIR detected 15 alternative domains that were missed by the manual curation and allowed larger coverage of the alternative splicing data set, resulting in the identification of 28 additional domains that were not discovered by manual inspection (Fig. 3A). Both manual curation and SAPFIR analysis found similar protein domains (Additional file 2: Table S1), suggesting that both methods have similar capacity to identify alternatively included protein features. However, using SAPFIR requires much less time and effort from the user, ensures consistency, and allows comprehensive prediction of protein features by including other prediction tools from InterProScan, thus facilitating the functional analysis of alternative splicing.
2) Alternative protein domains display species-specific splicing pattern
Previous studies showed that alternative splicing events are poorly conserved between species [8]. However, it was unclear whether these species-specific alternative splicing events lead to species-specific protein functions. Therefore, we compared the splicing patterns of the different human and mouse protein domains and identified those that are alternatively included in a species-specific manner. We first determined the Pfam-predicted protein domains in the human and mouse genomes and identified both conserved and species-specific alternative domains in human-mouse orthologs. As indicated in Fig. 4A, 60% of human domains and 45% of mouse domains were alternative based on the most relaxed standard of coding transcripts. The numbers of predicted alternative domains were reduced when only transcripts with long CDS were considered. Finally, only 14% of human domains and 7% of mouse domains were alternative using the standard most relevant to alternative splicing (Overlap CDS). In general, mouse domains are less alternative than their human counterparts, which is consistent with the differences in the number of transcripts per gene between the mouse and human genomes (Additional file 2: Table S2), and which also likely reflects differences in the isoform annotation processes between the two species. Interestingly, we found that the Zinc Finger C2H2-type domain (IPR013087), which is enriched in transcription factors, was the most frequently spliced protein domain in both the mouse and human genomes (Additional file 2: Table S3). This indicates that, while the number of alternative domains may vary, the basic functional requirement for regulating the function of this domain by alternative splicing is conserved.
To identify conserved and species-specific protein domains, we compared the domains of human and mouse orthologs as identified by Ensembl. As shown in Fig. 4B, most (>95%) protein domains were conserved between human and mouse, and most (>90%) constitutively spliced domains in one genome were also constitutively spliced in the other. In contrast, only 25% (999 out of 3997) of alternative human domains were alternative in mouse, and only 43% (999 out of 2299) of alternative mouse domains were alternative in human (Fig. 4C). Furthermore, 54% (1122 out of 2081) of human-specific domains found in the orthologs were alternative (Fig. 4D), much higher than for the common domains (11%, p < 2.2e-16). A similar result was observed in mouse (Fig. 4D). Accordingly, we conclude that the splicing pattern of alternative domains is in general less conserved across genomes than that of constitutively spliced domains.
3) The alternative splicing potential of protein domains is linked to domain and gene functions
Interestingly, we found that domains common to human and mouse display similar splicing potential, as shown by the high level of correlation, but that not all protein domains display the same level of alternative splicing potential (Fig. 5A). For example, we found that protein-protein interaction domains are highly alternative in general. Three of the five most alternative domains in both the human and mouse genomes were the Ankyrin repeat (IPR002110), the Nebulin repeat (IPR000900), and the IQ motif EF-hand binding site (IPR000048), which are among the most widely distributed protein-protein interaction motifs (Additional file 2: Table S4). On the other hand, the least alternative domains in both genomes were receptor or inhibitor domains (Additional file 2: Table S4). We conclude that protein functional domains do not all have the same potential for alternative splicing and that certain domain functions, such as protein-protein interaction, are particularly targeted for splicing-dependent regulation.
Since domains with different functions display different splicing potential, we asked whether proteins with different functions may display different propensities for regulation through alternative splicing. As indicated in Fig. 5B, we found that genes with different functions indeed have different levels of alternatively spliced domains. For example, in the human genome, the processes carried out by neurological organs contained the highest percentage of alternative domains, followed by the regulation of transcription and aging (Fig. 5B and Additional file 2: Table S5). Interestingly, while the regulation of transcription remained among the processes with the most alternative domains in mouse, mitochondrial gene expression and protein catabolic processes were the most alternative in mouse, instead of the neurological processes or aging observed in human (Fig. 5B). The biological process differences in alternative domains reflect the number of domains present in each group of genes, and not changes in the alternative splicing frequency of any given domain. Indeed, the alternative splicing frequency of each domain type mostly remained similar between the genes associated with these terms and the genomic average (Additional file 2: Table S6). We conclude that highly alternative domains are enriched in groups of genes involved in the same biological processes in a species-specific manner.
Fig. 4 Conservation of alternatively included Pfam-predicted domains. A Ratio of Pfam-predicted domains considered alternative in human or mouse. B Number of Pfam-predicted domains present in homologous genes between human and mouse. C Number of common domains shared between homologs, classified according to whether they are alternative or constitutive in either species, using the Overlap CDS standard. D Number of human- or mouse-specific domains in homologous genes that are constitutive or alternative, using the Overlap CDS standard
Conclusion and discussion
In this study, we implemented SAPFIR, a flexible and user-friendly webtool to facilitate the study of functional consequences of alternative splicing in human and mouse by linking variations in mRNA sequences to those in functional protein features (Figs. 1 and 2). Compared to existing tools, SAPFIR is more flexible in parameter setting and can be extended to other species, thus providing better capacity to adapt to various functional studies of alternative splicing [16][17][18].
SAPFIR is especially helpful to predict the functional impact of changes in splicing profile detected by RNA-seq by performing functional annotation for a list of genomic regions (Figs. 2 and 3). The splicing changes that affect the presence of important protein features are more likely to change the function of the proteins, thus providing a priority list for downstream validation and directions for further studies.
We find that although protein domains are largely conserved between human and mouse, the splicing patterns of alternative domains are less conserved than those of constitutive domains (Fig. 4), similar to what was observed on the exon level [8-10, 20, 21]. This finding reemphasizes the importance of genome specificity in functional analysis of alternative splicing. In addition, species-specific domains are particularly more alternative, in accordance with previous suggestions of alternative splicing as a source of protein functional innovation and adaptive benefit [20,22].
In human, genes with most alternative domains are related to neurological process, transcription regulation and aging (Fig. 5B). The most frequent domains found in each group are consistent with their respective functions (for example transmembrane ion transport for neurological process), and these domains are not particularly more alternative compared to the rest of genes in the genome (Fig. 5B, Additional file 2: Table S5 and S6). Previously it was shown that neural alternative splicing events regulate protein-protein interactions [23,24]. Our data suggest that neural alternative splicing could also regulate protein functions related to transmembrane ion transport (Additional file 2: Table S6). The alternative splicing patterns of several genes were associated with aging and age-related diseases, however the global functional impact remained unclear [25,26]. Here our data suggest that alternative splicing may regulate protein functions related to cellular structure (Additional file 2: Table S6).
SAPFIR currently relies on InterProScan to predict protein function from mRNA sequences and thus could benefit from improvements to the InterProScan algorithms or to tools for predicting protein functions in general. The quality of a SAPFIR analysis also depends on the quality of the upstream differential splicing analysis. A robust differential splicing analysis with high precision and accuracy will increase the number and quality of alternatively spliced events and produce a better description of the functional impact of changes in the alternative splicing profile. The functionality of SAPFIR can be further extended by incorporating additional features of interest, including protein-protein interaction sites, premature stop codons, and isoform expression or splicing profiles from cell lines and tissues.
Methods
Construction of SAPFIR database
To identify alternative protein domains in both human and mouse, we started by building a database housing all required data. Human (GRCh38 release 103) and mouse (GRCm39 release 103) genome annotations (.gtf files) and protein sequences (.fa files) were obtained from Ensembl [27]. Protein features were predicted by InterProScan v5.40-77.0 [28]. The APPRIS database was used to identify the principal isoform of protein-coding genes [29]. These data were combined into a local sqlite3 database as shown in Additional file 1: Fig. S1A. The numbers of coding genes, their transcripts, exons, coding regions (CDS), and predicted protein features are listed in Additional file 2: Table S2.
Web server implementation
The web server was constructed with Python v3.9.5 and Django v3.2.3, with two main functions: single gene annotation and multiple genomic regions annotation, as described below.
Single gene annotation
The first function of SAPFIR is to identify alternatively included protein features in a single gene. To do so, the gene, transcripts, exons, and domain features are retrieved from the database and presented as the first table on the output page. The genomic coordinates of predicted features are compared to each other to identify the domains common among transcripts; features with overlapping genomic positions are considered common. The features are then examined to determine whether they are present in all candidate transcripts, where the candidate transcripts consist of either (A) all coding transcripts, (B) transcripts whose CDS are longer than a user-defined fraction of the longest CDS in the gene, or (C) transcripts whose CDS cover the genomic region corresponding to the feature (referred to as the Overlap CDS standard). The result is displayed as the second table on the output page. Finally, the positions of the protein features relative to the exons are computed for all transcripts of the gene and plotted as a graph using Python v3.9.5. The final graph is displayed on the output page following the two tables described above.
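The overlap rule used to merge equivalent features across transcripts can be expressed in a few lines. This is a generic interval-overlap sketch under a half-open coordinate convention, not SAPFIR's actual source; the coordinates are invented.

```python
# Sketch of the overlap rule used above: features whose genomic intervals
# overlap are treated as the same (common) feature across transcripts.
# Intervals are half-open [start, end).

def overlaps(a_start: int, a_end: int, b_start: int, b_end: int) -> bool:
    return a_start < b_end and b_start < a_end

# e.g. a domain at 100-250 in transcript 1 and at 180-300 in transcript 2
# would be counted as one common feature:
print(overlaps(100, 250, 180, 300))  # True
```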
Multiple genomic regions annotation
The second function of SAPFIR is to identify enriched protein features in a list of genomic regions of interest (referred to as the "target") compared to a list of control genomic regions (referred to as the "background"). To do this, the two lists are compared with the database to annotate the predicted features that overlap with these regions using pybedtools, a Python wrapper for Bedtools [30,31]. The number of overlaps for each feature is then compared between the two lists, and a chi-square test is performed using scipy v1.7.0 to determine whether a feature is more frequent in either list [32]. The p-value of the chi-square test is then adjusted by the Benjamini-Hochberg procedure (Additional file 1: Fig. S1B). A fold change of enrichment is calculated as the ratio of the frequency in the target list over the frequency in the target and background combined, in order to avoid division-by-zero errors. Genomic regions from previously published data were provided as example data [19,33].
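A minimal sketch of the test described above, with the fold change defined as in the text; the counts are toy inputs, and SAPFIR itself performs the overlap counting with pybedtools:

```python
import numpy as np
from scipy.stats import chi2_contingency

def bh_adjust(pvals):
    """Benjamini-Hochberg adjustment of a vector of p-values."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    ranked = p[order] * len(p) / (np.arange(len(p)) + 1)
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]  # enforce monotonicity
    adjusted = np.empty_like(ranked)
    adjusted[order] = np.minimum(ranked, 1.0)
    return adjusted

def enrichment(target_counts, background_counts, n_target, n_background):
    """Chi-square test per feature; fold change = target frequency over
    the frequency in target and background combined (avoids division by zero)."""
    feats, pvals, folds = [], [], []
    for feat, t in target_counts.items():
        b = background_counts.get(feat, 0)
        table = [[t, n_target - t], [b, n_background - b]]
        _, p, _, _ = chi2_contingency(table)
        fold = (t / n_target) / ((t + b) / (n_target + n_background))
        feats.append(feat)
        pvals.append(p)
        folds.append(fold)
    return feats, folds, bh_adjust(pvals)
```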
Identification of homologous genes between human and mouse
Human and mouse homologs were retrieved through an Ensembl BioMart web session, along with the percentage identity both from the human to the mouse gene and from the mouse to the human gene. For each query gene, the target gene with the highest query identity was considered the best hit. Pairs of genes that are reciprocal best hits were considered homologs of each other. This process identified 18,213 pairs of homologs between human and mouse, which are listed in Additional file 2: Table S7.
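In spirit, the reciprocal-best-hit step reduces to the following sketch; the input format (tuples of query gene, target gene, percent identity) is a hypothetical stand-in for the BioMart export:

```python
def best_hits(pairs):
    """pairs: iterable of (query_gene, target_gene, percent_identity).
    Keep the highest-identity target for each query gene."""
    best = {}
    for q, t, ident in pairs:
        if q not in best or ident > best[q][1]:
            best[q] = (t, ident)
    return {q: t for q, (t, _) in best.items()}

def reciprocal_best_hits(human_to_mouse, mouse_to_human):
    """Pairs that are best hits in both directions are called homologs."""
    h2m = best_hits(human_to_mouse)
    m2h = best_hits(mouse_to_human)
    return [(h, m) for h, m in h2m.items() if m2h.get(m) == h]
```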
Identification of alternative domains associated with Gene Ontology (GO) terms
A list of 143 GO Slim generic terms was retrieved from the Gene Ontology project [34,35]. BioMart was used to retrieve the genes associated with each of these GO terms from the Ensembl database [36,37]. Protein domains in these genes were predicted using Pfam, a prediction tool covering many common protein domains [38]. Predicted domains were then compared between the transcripts of each gene to determine whether they were constitutive or alternative, using the Overlap CDS standard described above. Chi-square tests were performed using scipy v1.7.0 to determine whether Pfam-predicted domains were more alternative in genes associated with a GO term than the genomic average. The p-values of the chi-square tests were then adjusted by the Benjamini-Hochberg procedure where appropriate.
CDS: Coding regions
GO: Gene Ontology
SAPFIR: Sherbrooke alternative protein feature identificator
Supplementary Information
The online version contains supplementary material available at https://doi.org/10.1186/s12859-022-04804-w. Additional file 2: Table S1 Manual and SAPFIR annotation of splicing events affected by viral infection in mouse. Figure 5B. Table S7 Homolog genes in human and mouse genome | 5,437.6 | 2022-06-24T00:00:00.000 | [
"Computer Science",
"Biology"
] |
A Comparison of Interpolyelectrolyte Complexes (IPECs) Made from Anionic Block Copolymer Micelles and PDADMAC or q-Chitosan as Polycation
Block copolymers, synthesized via Atom Transfer Radical Polymerization from an alkyl acrylate and t-butyl acrylate with subsequent hydrolysis of the t-butyl acrylate to acrylic acid, were systematically varied with respect to their hydrophobic part by varying the alkyl chain length and the degree of polymerisation of this block. Depending on the architecture of the hydrophobic part, they had a more or less pronounced tendency to form copolymer micelles in aqueous solution. They were employed for the preparation of IPECs by mixing the copolymer aggregates with the polycations poly(diallyldimethylammonium chloride) (PDADMAC) or quaternized chitosan (q-chit). The IPEC structure as a function of the composition was investigated by static light and small-angle neutron scattering. For weakly associated block copolymers (short alkyl chain), complexation with polycation led to the formation of globular complexes, while already existing micelles (long alkyl chain) grew further in mass. In general, the aggregates became larger upon the addition of further polycation, but this growth was much more pronounced for PDADMAC compared to q-chit, thereby leading to the formation of clusters of aggregates. Accordingly, the structure of such IPECs with a hydrophobic block depended largely on the type of complexing polyelectrolyte, which allowed for controlling the structural organisation via the molecular architecture of the two oppositely charged polyelectrolytes.
Introduction
Mixtures of oppositely charged polyelectrolytes in aqueous solution typically lead to the formation of interpolyelectrolyte complexes (IPECs) [1], a process that is driven by the release of counterions [2]. IPECs are promising for a number of applications, such as in medicine and agricultural biotechnology [3], because of their rather variable options for solubilisation and binding. They have been studied for a long time, initially mostly for oppositely charged homopolyelectrolytes, which tend to form insoluble complexes around charge equilibrium and soluble complexes for a significant excess of polycation or polyanion [4]. However, one may also form more complexly built IPECs by combining ionic block copolymers possessing a second, hydrophobic block with oppositely charged polyelectrolytes. Such copolymer micelles are more complex colloidal aggregates that may contain a purely hydrophobic core, which is then surrounded by an IPEC shell and, still further outside, a corona of the excess polyelectrolyte that stabilizes such polymer colloids in aqueous solution. They are interesting as their hydrophobic core may be able to solubilize a hydrophobic cargo, and the IPEC shell then is a way to separate it from the aqueous surroundings and/or to allow for its controlled release from the core. In addition, the IPEC shell may itself be the location of the selective solubilization of more with PDADMAC, covering a wide range of molecular weight. The hydrophobic modification of PA was obtained by having 10 mol% of the monomeric units substituted by ones carrying a dodecyl alkyl acrylate chain. The complexes were characterized via DLS, SLS and SANS to obtain detailed information on the structure of the soluble complexes. It was found that the hydrophobic modification led to phase separation at lower amounts of added polyanion, and the biphasic region was larger than for the complexes with unmodified PA. Moreover, light scattering experiments proved that the complexes became larger upon approaching charge equilibrium. SANS data showed that these complexes tended to form clusters of aggregates with sizes of around 35 nm and smaller subunits of 2.5-5.0 nm in size.
Another commonly used polymer is chitosan, a cationic polysaccharide [19] that has received increasing attention in recent decades due to its unique biocompatibility, biodegradability, non-toxicity [20] and medical properties [21]. Accordingly, chitosan has also been widely applied in the field of polyelectrolyte complex formation, as reviewed some while ago [22]. Chen et al. [23] studied the structure of IPECs composed of chitosan and poly(methacrylic acid) (PMAA) in aqueous solutions by means of UV-vis spectroscopy, a fluorescence probe technique, and transmission electron microscopy (TEM). The results showed that at pH 4.0 and a molar ratio of chitosan to PMAA of 1:4, well-defined complexes of chitosan and PMAA were formed. For pH > 4.0, the degree of ionization of PMAA increased, but that of chitosan decreased. Moreover, the TEM results showed that the complex particles exhibited a very compact spherical structure. The size of the particles at pH = 3.0 was between 10 and 17.5 nm; at pH = 4.0, it was between 22.5 and 40.0 nm, and it then increased to about 75 nm for pH = 4.5.
Carvalho et al. [24] investigated the influence of pH, molecular weight, and polymer proportion on the formation of IPECs based on chitosan and dextran sulfate. It was stated that nanoparticles in the polycation-rich regime formed aggregates, while an excess of dextran sulfate reduced the size of the particles. Saether et al. [25] studied IPECs of alginate and chitosan using a one-stage process under high-shear conditions. They mainly focused on how the IPEC particle sizes and surface charge could be controlled by varying the preparation procedures and polymer characteristics. These complexes were prepared with varying rates and diameters of the dispersing elements of the homogenizers to examine the effect upon the IPECs formed. It was found that the size of the complexes decreased with increasing rate of homogenization. Additionally, the results showed that polyelectrolyte complexes made from chitosans and alginates with low molecular weights formed smaller complexes in comparison to those with high molecular weights.
Lima et al. [26] published a study about the formation and structure of IPECs composed of chitosan and poly(sodium methacrylate), produced by mixing solutions at different carboxyl-to-amine molar ratios, r_CA. Small-angle X-ray scattering (SAXS) experiments proved that at r_CA = 0.15, the structure of the formed aggregates was nearly spherical. When r_CA was raised to 0.75, the Kratky plot indicated the presence of elongated structures. For r_CA > 0.75, these structures had a tendency to collapse back to nearly spherical ones. While the particle structures became more elongated at r_CA = 0.75, the radius of gyration markedly dropped to 6 nm, indicating the occurrence of a collapse at this ratio.
Accordingly, complexes of chitosan or PDADMAC and polyacrylate have been studied to quite some extent. However, so far, only a few studies have been conducted using quaternized chitosan, which shows only a slight pH dependence of its complexation properties. Similarly, the complexation of diblock copolymers of polyacrylate with an additional hydrophobic block has been studied very little. Closing this gap was the centre of our work, in which we focused on amphiphilic block copolymers of the alkyl acrylate-sodium acrylate (AlkA-NaPA) type. Such systems have been described before and showed the formation of well-defined micelles [27]. More recently, we studied the aggregation behaviour of AlkA-NaPA copolymers, where we varied the alkyl rest from butyl to dodecyl, i.e., the extent of hydrophobicity of the micellar core. This study, by critical micelle concentration (cmc) determination and light and neutron scattering, showed that the tendency for micellization and the size of the formed micelles depended in a systematic way on the length of the alkyl chain of the acrylate and on the total length of the hydrophobic block [28]. This type of copolymer aggregate, where the internal polarity is controlled by the type of alkyl acrylate and the size of the hydrophobic domains is determined by the length of their polymeric unit, was then employed to form IPECs with quaternized chitosan (q-chit) and, for comparison, also with the well-studied synthetic polycation PDADMAC [29]. This is interesting as, in that way, one can create highly versatile polymeric colloids with a hydrophobic core of variable polarity and size, surrounded by an IPEC shell, which is highly polar, though water-insoluble, and has an affinity for solubilisation that depends on the precise composition of the IPEC. By employing polycations with a different type of chain backbone for complexation, there was an option for varying the IPEC structure, and by using q-chitosan, one had, in addition, a biopolymer that could lead to an overall better biocompatibility of the formed aggregates. It might be noted that we chose the Mw of the polycations such that their stretched lengths were rather equal and in the range of ~450 nm (see Supplementary Materials Section S5).
In our investigation, we worked at a constant concentration of copolymer micelles and increased the content of added polycation. For the copolymer, we employed hydrophobic blocks of butyl acrylate, hexyl acrylate, and dodecyl acrylate of different lengths (40-70 units) and, as the hydrophilic block, sodium polyacrylate (with 70-170 units). These copolymers had been shown before to vary largely with respect to their tendency to form micelles, which is very pronounced for dodecyl and rather weak for butyl chains [28].
In the experiments described here, we studied the phase behaviour and, in particular, the structure of the aggregates formed upon complexation with oppositely charged polycations, which were expected to bind to the surface of the polymeric micellar aggregates. These structural studies were conducted by means of light and neutron scattering.
Materials
The anionic amphiphilic alkyl acrylate-sodium acrylate (AlkA-NaPA) block copolymers were synthesized as described before, and their molecular characterization was also given in a previous publication [28]. Poly(diallyldimethylammonium chloride) (PDADMAC, Mw ~150 kDa) was purchased from Sigma Aldrich and used after drying in a freeze dryer without any further purification. The second polycation was quaternised chitosan (q-chit; Mw ~180 kDa), synthesized in our group by a modified version of a previously described synthesis [30,31]. This product had a degree of quaternisation of 0.785, a degree of acetylation of 0.185, and a degree of O-methylation of 0.17. The characteristic parameters for all polyanions employed are summarized in Table 1, where they are abbreviated according to the type and length of the hydrophobic block, with, for the dodecyl copolymer, an additional "s" for the short and "l" for the long poly(acrylic acid) (PAA) block, respectively. Milli-Q water was produced by a Millipore filtering system. D2O was obtained from Eurisotop (99.5% isotopic purity, Gif-sur-Yvette, France). Sodium hydroxide (99%) and sodium chloride (>99%) were obtained from Sigma-Aldrich (Taufkirchen, Germany). Toluene (>99.5%) from Fluka, methyl 2-bromopropionate (2-MBP, 98%) and hexane from Aldrich (Steinheim, Germany), N,N,N',N',N''-pentamethyldiethylenetriamine (PMDETA, 99%), hexyl acrylate (98%) and dodecyl acrylate (technical grade, 90%) from Sigma-Aldrich, and diethyl ether (>99.5%) from Carl-Roth GmbH (Karlsruhe, Germany) were used as supplied. tert-Butyl acrylate, n-butyl acrylate and dichloromethane were gifts from BASF (Ludwigshafen, Germany) and were used as supplied.
Preparation of Complexes
All samples were prepared in H 2 O or D 2 O. Before the preparation of the samples, D 2 O was filtered by passing through a PTFE filter with a 0.45 µm pore size in order to obtain dust-free samples. The degree of deprotonation for the stock solution of polyanions was adjusted to α = 1.0 (formally 1.2, as a slight excess of 20% of NaOH was added to ensure complete deprotonation). The stock solutions were prepared with a concentration of 10 g/L for polyanions and 15 g/L for polycations. The charge ratio, z, was defined as [+]/[−] and was changed from 0 to 0.4. The concentration of polyanion in the complexes was kept constant at 5 g/L for all the samples. First, the required amount of polyanion was added to a glass vial. Then, the solvents, H 2 O or D 2 O, were added into the vial to obtain the appropriately diluted solution before adding the polycation solution. As a last step, the stock solution of polycation was added dropwise under strong stirring in order to avoid any precipitation due to the local neutralization of the sample.
Methods
Zeta-potential measurements were performed on a Litesizer 500 by Anton Paar GmbH (Graz, Austria) at 25 °C. The zeta-potential ζ was calculated from the measured electrophoretic mobility μE via the Smoluchowski relation ζ = η μE/(ε0 εr), where η is the viscosity, ε0 is the permittivity of the vacuum and εr is the relative permittivity of the medium. The kinematic viscosities (ν) were measured with the automated viscosimeter iVisc by Lauda Scientific GmbH (Lauda-Königshofen, Germany), using a calibrated Micro-Ostwald capillary viscometer (Type Ic-II) at a constant temperature of (25.0 ± 0.1) °C. One premeasurement and five main measurements were performed. The dynamic viscosities (η) were calculated after density measurements at (25.0 ± 0.1) °C (DMA4500, Anton Paar GmbH). The samples were prepared in heavy water, D2O.
According to the Beer-Lambert law, the transmission T can be related to the attenuation coefficient (or turbidity) τ [32] via T = I_L/I = exp(−τ d), where I_L is the intensity of the beam traversing the sample of thickness d and I is the incident intensity. The transmission of the samples was measured with a Cary 50 spectrometer by Varian in a UV-Vis cuvette of thickness d = 10 mm or in quartz cuvettes with d = 2 or 1 mm.
The apparent molecular weight (Mw,λ^app) of the complexes was calculated from transmittance measurements at 632.8 nm according to Mw,λ^app = τ/(H c), with the optical constant given by H = 32π^3 n0^2 (dn/dc)^2/(3 N_Av λ^4), where N_Av is the Avogadro constant, n0 is the refractive index, dn/dc is the refractive index increment, c is the mass/volume concentration, and λ is the wavelength of light. Static light scattering (SLS) experiments were carried out with a CGS-3 (compact goniometer system) with a HeNe laser at 632.8 nm wavelength from ALV GmbH (Langen, Germany). Two avalanche photodiodes (APD) were used to detect the scattered light in an angular range of 40-140°.
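Combining the Beer-Lambert relation with the optical constant above gives a direct numerical route from measured transmission to an apparent molecular weight. A minimal sketch with illustrative numbers follows; the dn/dc value and the inputs are assumptions, not measured values from this study:

```python
import numpy as np

N_AV = 6.022e23   # Avogadro constant, 1/mol
lam = 632.8e-7    # wavelength in cm (632.8 nm)
n0 = 1.333        # refractive index of the aqueous solvent
dndc = 0.16       # refractive index increment in cm^3/g (assumed)

# optical constant H = 32 pi^3 n0^2 (dn/dc)^2 / (3 N_Av lambda^4)
H = 32 * np.pi**3 * n0**2 * dndc**2 / (3 * N_AV * lam**4)

def apparent_mw(T, d_cm, c):
    """T: transmission, d_cm: path length in cm, c: concentration in g/cm^3."""
    tau = -np.log(T) / d_cm   # Beer-Lambert: T = exp(-tau * d)
    return tau / (H * c)      # apparent molecular weight in g/mol

print(apparent_mw(T=0.9, d_cm=1.0, c=5e-3))  # ~1e7 g/mol for these inputs
```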
For the static light scattering experiments, the measured intensity had to be corrected and normalized for each angle as given by Equation (5):

R_θ = (CR1_sample − CR1_solvent)/CR1_toluene × R_θ,toluene (5)

where CR1 is the mean scattered intensity of the samples, normalized by the initial laser intensity I_mon. The background scattering of the solvent and cuvette was subtracted, and a calibration with toluene was applied, where R_θ,toluene is the Rayleigh ratio of toluene [33].
The scattered intensity I(q) of particles should exhibit an angular dependence according to Guinier's law, I(q) = I(0) exp(−q^2 Rg^2/3) with q = (4π n0/λ) sin(θ/2), where Rg is the radius of gyration, q is the modulus of the scattering vector, θ is the scattering angle, and λ is the wavelength of the light. From the intensity at zero angle, I(0), one can directly calculate the apparent molecular weight of the scattering objects, Mw^app = I(0)/(K c), via the optical constant K = 4π^2 n0^2 (dn/dc)^2/(N_Av λ^4), where dn/dc is the refractive index increment. The refractive index increment (dn/dc) of the synthesized polymers was measured with the instrument Orange 19" DN/DC (see Supplementary Materials Section S4). SANS measurements were performed at the PA20 spectrometer of the Laboratoire Léon Brillouin (LLB, Saclay, France). Three configurations were used, with 1.9, 8.3, and 18.8 m sample-to-detector distance (SD) and a wavelength of 6 Å. To reach higher q, we used an off-centred detector position at the shortest detector distance, 1.9 m. In these experiments, a q-range of 2.5 × 10^-2 to 3.2 nm^-1 was covered. Some additional SANS measurements were conducted on the V4 instrument at Helmholtz-Zentrum Berlin (HZB, Berlin, Germany). Three configurations were used, with sample-to-detector distance (SD) and collimation (C) of 1.35 m (SD) with C = 8 m, 8 m (SD) with C = 8 m, and 15.60 m (SD) with C = 16 m. Two wavelengths of 4.5 and 12 Å (the latter for SD = 15.60 m) were employed. In these experiments, a q-range of 2.3 × 10^-2 to 6.4 nm^-1 was covered.
The coherent scattering intensity was obtained after normalization to the detector cell efficiency using an incoherent scatterer (H2O), and the subtraction of empty cell scattering and electronic noise (Cd). The scattering curve was obtained by isotropic regrouping with respect to the scattering centre and, taking into account the transmissions, the differential cross-sections were calculated [34]. All data evaluation was conducted using the BerSANS (Aug 2014) software [35]. Subsequently, the data sets obtained for the three different configurations were merged. Data analysis was performed with SasView (5.0.5), an open-source scattering analysis software [36].
The SLD of the complexes (SLDcomp) was calculated from the sum of the volume fractions Φi of the polyanion or the polycation multiplied by the SLD of the corresponding component (i) (see Supplementary Materials Section S6.1).
Phase Behaviour
As IPECs are known to form precipitates or coacervates around equimolar charge conditions, we first studied the phase behaviour of the different block copolymer micelles at a constant concentration of 5 g/L upon the addition of increasing amounts of polycation. The result is shown in Figure 1; for PDADMAC, the phase boundary was always in the range of z ~ 0.45. For q-chit, it is similar for the more hydrophobic copolymers with dodecyl and hexyl acrylate, but for the butyl acrylate, the phase boundary was shifted to larger z, and increasingly so for a lower degree of polymerisation of the butyl acrylate. Interestingly, the phase behaviour was more sensitive for q-chit as the complexing polycation, and this followed the expected trend of higher solubility with decreasing hydrophobicity of the copolymer.
Another interesting point is that the complexes with PDADMAC at z ~ 0.45 precipitated for the polyanions containing the relatively shorter alkyl chains, i.e., Bu40, Bu68 and Hex37, while the complexes with dodecyl-containing polyanions formed coacervates.
Apparently, for the latter, the formation of hydrophobic domains did not allow the formation of a compacted structural arrangement, and instead a marked swelling of the systems took place. For the complexes with q-chit, precipitates were only formed with the least hydrophobic polyanion, Bu40, at z = 0.65, while coacervation was observed after preparation at z = 0.55 for Bu68 and at z ~ 0.45 for the complexes with Hex37, Do36l and Do36s. This indicated that q-chit enhanced the tendency for swelling with water, which may have been due to the OH-groups it carries along its backbone.
The zeta potential of the IPECs with the most and the least hydrophobic polyanions was measured and showed a similar behaviour upon complexation with PDADMAC or q-chit. However, the addition of q-chit led to a faster increase in zeta-potential (Table S1), which at first glance was surprising, as q-chit formed the more stable complexes, i.e., the ones for which colloidally stable solutions were still obtained at higher z (Figure 1). However, it also indicated that q-chit more effectively neutralised and bound to the anionic block copolymer aggregates, which is in good agreement with its tendency not to form clusters of these aggregates, as discussed later.
Turbidity Measurements
The apparent molecular weight (Mw,λ^app) of the complexes was calculated from the turbidity (τ) of the samples. In doing so, quite different behaviour was found for the complexation with PDADMAC and with q-chit, as shown in Figure 2. For q-chit, Mw,λ^app changed only a little, mostly increasing somewhat with increasing z, but in the case of Hex37 and Bu40 even decreasing slightly compared to the pure copolymer micelles; rather small aggregates were always formed. This means that the micelles became somewhat complexed by the q-chit and partly even rearranged in this neutralisation process, thereby explaining the rather constant, partly even decreasing Mw,λ^app. Clearly, the largest complexes were formed with the dodecyl acrylate, which, being the most hydrophobic of the alkyl acrylates employed, also formed the most well-defined copolymer micelles. Bu68 showed a smaller but consistent increase in Mw,λ^app.
The situation was quite different for complexation with PDADMAC, where already at low z values of up to 0.05 a substantial increase in Mw,λ^app by a factor of 10-100 took place. For still higher z values, the increase continued, but at a smaller rate. A correlation between the type of hydrophobically modified polyanion and Mw,λ^app hardly existed. Only at the lowest z were the highest values found, for Do36s and Do36l, but with increasing z the differences became smaller, and all complexes at z = 0.4 exhibited values of 1-2 × 10^8 g/mol.
Compared to the simple prediction for the masses by the addition of polycation and its complexation onto the existing aggregates (see Figure S1), the increase was always much more pronounced for the addition of PDADMAC, while for q-chit this simple model often described the situation well, especially for smaller values of z. It was also very interesting to look at the ratio of the Mw,λ^app values obtained for the same z value upon complexation with PDADMAC or q-chit, as shown in Figure 3. For the dodecyl polyacrylates, only somewhat larger values were observed for PDADMAC, while for Hex37 and Bu68 the increase, by a factor of 3 to 50, generally became larger with increasing z. For Bu40, this factor was almost constant at about 100. Apparently, the size increase seen in turbidity depended strongly on the type of hydrophobic polyanion.
Static Light Scattering (SLS)
In order to confirm the structural information obtained by turbidity regarding the size of the IPEC aggregates formed upon admixing either PDADMAC or q-chitosan to the anionic copolymer micelles, we performed comprehensive light scattering experiments in which the mixing ratio z = [+]/[−] was increased systematically up to a value of z = 0.4. At this value, all samples were visually homogeneous, whereas at higher values (above z = 0.4) precipitation or coacervation was observed for all systems.
Looking at the pure polyanions, one could observe the much weaker aggregation of the butyl- and hexyl acrylate copolymers compared to the dodecyl acrylate, which is in good agreement with previous studies on the self-assembly of the pure copolymers [28]. The apparent molecular weight (Mw^app) as a function of z is shown in Figure 4 and confirms the substantially different behaviour of the IPECs formed with q-chit (Mw = 180 kDa) and PDADMAC (Mw = 150 kDa), as already seen in the turbidity measurements (Figure 2; a direct comparison between the turbidity and static light scattering results is shown in Figure S1). For q-chit, a continuous increase (or a rather constant value for Do36s or Hex37) was observed, as expected for the addition of a polyelectrolyte onto an existing micellar copolymer aggregate. When comparing the increase with the one expected simply from assuming the stoichiometric addition of polycation to the existing structure of the anionic copolymer (Figure S1), one observed that this picture was quite well fulfilled for the complexes with q-chit. In contrast, PDADMAC addition was in good agreement only for Do36s, while for all other copolymers a much more marked increase was seen, demonstrating the formation of much larger complexes. As seen in the turbidity data, for the addition of PDADMAC there was a drastic increase in scattering intensity at low z values, which in light scattering was even more pronouncedly visible, already for the addition of very small amounts. This is a very intriguing difference that could indicate that the q-chit simply binds to the surface of the charged micellar aggregates, while the PDADMAC binds only partially at low z and instead bridges to other micellar aggregates, thereby leading to the formation of clusters of such aggregates, which on average are 5 to 100 times larger than the individual copolymer micelles, explaining the higher Mw^app observed for the PDADMAC complexes. Interestingly, this effect was similarly seen for Do36s and Do36l, which by themselves form well-defined copolymer micelles [28], as well as for the butyl acrylates, which alone form only rather ill-defined micellar aggregates. Apparently, the presence of PDADMAC transforms all the different copolymers at higher z values into complexes of about the same size. However, the relative increase in Mw was much more pronounced for the butyl acrylates, as they formed only rather small aggregates in the absence of polycation.
Interestingly, q-chit appeared to lead to a similar bridging and cluster formation for Do36s, but to a lower extent than PDADMAC. For the other AlkA-b-NaPA systems, it did not lead to such an effect at all, but instead fostered their transformation into compacted aggregates, which were then surrounded by the corresponding IPEC shell. In general, this indicated a stronger binding of the q-chit to the acrylate, resulting in more compact structures.
Looking at the aggregation numbers of the polyanions, given in Figure S2, for PDADMAC rather constant numbers of 5000-10,000 were found for z equal to 0.1 and higher for all the different polyanions studied. This also confirmed that the aggregates seen in light scattering were not simple complexes of a micellar type but, given this size, were more likely clusters of such micellar aggregates. In contrast, for q-chit these values increased from about 100 for Bu40 to 500 for Do36s. Apparently, here the size of the structures formed was strongly dependent on the type of polyanion, and correspondingly differentiated complex structures were formed.
The ratio of the apparent molecular weights of the IPECs obtained through static light scattering experiments for the same z value using the two different types of polycation, PDADMAC and q-chit, is shown in Figure 5. Although the ratio decreased slightly from z = 0.1 to 0.2, it remained largely unchanged upon further addition of polycation. This observation was consistent with the results obtained from the turbidity measurements, indicating that the increase in size depends on the hydrophobicity of the polyanion used.
Small Angle Neutron Scattering (SANS)
To gain more detailed structural insights, SANS experiments were conducted on selected copolymer micelles, Bu40 and Do36s, representing the cases of weak aggregation and of the formation of well-defined larger spherical micelles, respectively, complexed with either q-chit or PDADMAC. The obtained SANS intensity data as a function of q are shown in Figure 6. They were in good agreement with the light scattering data in that at low q, higher scattering intensities were always observed for the addition of PDADMAC compared to q-chit, and this effect was much more pronounced for Bu40 than for Do36s. Apparently, the smaller aggregates of the Bu40 could become more easily interconnected within a larger network by the addition of PDADMAC (Figure 6a,b). At the same time, they also became more compacted, as seen by the increase in the intermediate q-range of 0.1-0.2 nm^-1. The slope at low q for the PDADMAC complexes with Bu40 is ~−3.5 for the higher z values (see Figure S9), thereby indicating the formation of rather compacted structures larger than could be observed in the SANS experiments (which for the selected q-range means larger than at least 100 nm), but which were covered in the light scattering experiments shown before. The SANS data showed that for Do36s (Figure 6c,d), well-defined globular aggregates were present already without the addition of polycation (some increase toward lower q was seen, which was due either to some attractive interaction or to larger aggregates). They became somewhat bigger upon the addition of the polycation but apparently retained their globular structure. This effect was somewhat more pronounced for q-chit compared to PDADMAC, as expected from the fact that the mass per charge deposited in the IPEC shell was about twice as large for q-chit as for PDADMAC. In addition, an increasing intensity at low q was seen with increasing polycation addition, which was again more pronounced for PDADMAC compared to q-chit, and could be attributed to the formation of a polycation corona of the aggregates that also interconnected them.
The situation was very different for Bu40, where a much more marked increase in intensity in the mid-q range around 0.1 nm^-1 was seen upon the addition of both polycations. This means that here, initially, no larger self-assembled aggregates were present, and substantial aggregation (seen in the q-range of 0.08-0.4 nm^-1) was induced by the addition of the polycation, forming much larger and more compacted aggregates. The effect was generically similar for both polycations but more marked for q-chit.
This also led to the formation of a marked correlation peak, which was more pronounced for q-chit (Figure 6a) than for PDADMAC (Figure 6b), where it was obscured by the low-q increase. The overall scattering intensity around q ~ 0.1 nm^-1 was much higher for the addition of q-chit, showing that here apparently larger and more well-defined aggregates were formed than for the complexation with PDADMAC. Of course, it must also be noted that q-chit had a higher Mw per charged unit and, accordingly, more scattering power was generated in the process of complexation. At first glance, this was in strong disagreement with the observations of static light scattering (Figure 4), but in SANS, we looked at a much smaller size range. However, the much higher SLS intensity seen for PDADMAC was reflected in the SANS curves in the large upturn of intensity in the low q-range, while for q-chit only some increase was seen here. An explanation for such behaviour would be that PDADMAC leads to a much more pronounced interlinking of the different IPEC copolymer aggregates (here, one has to keep in mind that the size of these aggregates was ~5-10 nm, while the stretched lengths of the PDADMAC and q-chit chains were ~460 nm and ~440 nm, respectively; see Supplementary Materials Section S5). This more pronounced interlinking by PDADMAC was confirmed by the observation that the macroscopic viscosity of the IPEC solutions was higher by a factor of two for PDADMAC compared to the corresponding q-chit solutions (Table S4).
The peak positions were similar for complexation with q-chit and PDADMAC, which also indicated that the sizes of the formed complex aggregates were similar. However, the sharp increase at low q seen for PDADMAC showed that they were contained in much more interconnected clusters. In contrast, with q-chit, more isolated particles were formed, which still interacted repulsively in the case of Bu40 (correlation peak), while for Do36s, a more marked core-shell structure appeared to be visible. The SANS data of the complexes of PDADMAC with Bu40 were analysed in the low q range via a simple power law, which for z larger than 0.1 showed a scaling of I(q) ~ q^-3.5, indicating that here one seemed to see rather well-defined larger cluster domains (see Figure S9). On the other hand, the q-chit complexes with Bu40 showed a power law of I(q) ~ q^-1.5 for higher z, which indicated the formation of much more open structures (see Figure S8).
In order to quantify the scattering behaviour further in the mid-q range, which indicated the presence of globular structures on a scale of 5-20 nm, the SANS data of the complexes were fitted to the shape-independent Guinier law, thereby obtaining I(0) and the radius of gyration Rg. The q-ranges used for the fits can be found in Figures S4-S7. From I(0), the Mw of these aggregate structures was calculated according to Equation (S3), and the obtained values are given in Figure 7 and Table 2.
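The Guinier analysis described here amounts to a linear fit of ln I(q) versus q^2 in the chosen low-q window; a minimal sketch with synthetic data (not the measured curves) could look as follows:

```python
import numpy as np

def guinier_fit(q, I):
    """Fit ln I(q) = ln I(0) - Rg^2 q^2 / 3 over a low-q window."""
    slope, intercept = np.polyfit(q**2, np.log(I), 1)
    return np.exp(intercept), np.sqrt(-3 * slope)  # I(0) and Rg

# synthetic test curve: I(0) = 1.0, Rg = 8 nm, q in nm^-1 with q*Rg < ~1.3
q = np.linspace(0.05, 0.16, 20)
I = np.exp(-(q * 8.0)**2 / 3)
I0, Rg = guinier_fit(q, I)
print(I0, Rg)  # recovers ~1.0 and ~8.0
```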
For Do36s, Rg increased slightly from 8.0 nm for the pure micelle to less than 11 nm for z = 0.4. This means that here we saw a systematic but relatively small increase, in good agreement with the evolution of the Mw data shown in Figure 7. On the other hand, as seen in Table 2, the Rg of the pure Bu40 system was 3.2 nm, and upon the formation of aggregates by adding polycation, it increased up to 8.9 nm. For the Mw values, one found that for Do36s, they increased for both polycations by about 30%, from 1 × 10^6 g/mol to 1.3 × 10^6 g/mol. For Bu40, the situation was quite different: the addition of polycation into nBu40-b-AA167 micelles resulted in an increase by a factor of 6, from 1.4 × 10^4 g/mol to ~8 × 10^4 g/mol. q-Chitosan complexes were slightly larger than PDADMAC complexes, and the molecular weight of the complexes increased with increasing charge ratio. From the Mw values, one could straightforwardly calculate the aggregation numbers N_agg, assuming all polymers to be aggregated in these aggregates (see Supplementary Materials Section S6.3, Equations (S5)-(S7)). This shows (Table 2) that N_agg for Bu40 was always in the range of 3-6, while the polycation was contained in a number of 0.1-0.7 (see Table 2), thereby explaining that these complexes necessarily had to be bridged by polycation. For Do36s, the situation was much different: here, N_agg of the Do36s was rather constant at around ~85, and N_agg of the polycation increased systematically from around 0.9 to 3.8 (see Table 2). This explains why rather compact and only slightly interconnected aggregates were formed. As shown in Figure 7, for q-chit added to Do36s, the Mw values followed very nicely the theoretical prediction for simply adding the polycation onto the existing anionic copolymer micelles, while for PDADMAC the increase was somewhat larger. In contrast, for Bu40, a very substantial increase in Mw was seen, demonstrating that here the formation of compacted aggregates was largely driven by the presence of the polycation. This was further quantified by the rather low aggregation numbers given in Table 2.
An effective density, ρ_eff, calculated from Equation (S9), could also quantify the compactness of the IPECs. The effective densities of the IPECs based on Bu40 were much lower than those of the Do36s complexes. In general, q-chit complexes achieved higher effective densities than PDADMAC complexes, confirming again the higher compactness of the complexes with q-chit. Moreover, an increase in the z ratio, i.e., in the concentration of polycation, resulted in a decrease in the effective density of the IPECs.
As a next step of the quantitative analysis, we used the position of the correlation peak (q_max) seen for the nBu40-b-AA167 samples, as it should give the mean spacing d (= 2π/q_max) between the aggregates. Assuming all of the copolymer to be aggregated and all of the added polycation to be bound in a more or less compact IPEC shell (and assuming a homogeneous distribution of the complexes in space), we could proceed to calculate the volume fraction Φ of dispersed aggregates, for which we assumed a density ρ of 1.1 g/mL. Further assuming that the mean spacing d could be approximated by placing the aggregates on a primitive cubic lattice, we calculated their aggregation number, the radius of the core R, and the molecular weight Mw. The obtained values are summarized in Table 3; the values for Mw were generally rather constant in the range of 5-6 × 10^5 g/mol and thereby generically similar to the ones obtained from the intensity (Table 2), but varied less as a function of z and polymer. However, it has to be pointed out that the values in Table 2, arising from the absolute intensity, were more reliable, as for the values in Table 3 a homogeneous distribution of the aggregates in space was assumed, which, especially for Bu40, might not have been the case, as seen by the intensity upturn at low q, which indicated clustering. In addition, we saw much higher aggregation numbers in Table 3 (compared to the ones given in Table 2), which indicated that for the not-so-hydrophobic Bu40 copolymer, only a fraction of the molecules was really contained in these compacted aggregates, while a larger part was present in a less compacted form.
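A plausible reconstruction of the relations just described, under the stated assumptions (one aggregate per cell of a primitive cubic lattice of spacing d, total polymer mass concentration c, density ρ) and not the authors' printed formulas, reads:

\[
\Phi = \frac{c}{\rho}, \qquad
V_{\mathrm{agg}} = \Phi\, d^{3}, \qquad
R = \left( \frac{3\,\Phi\, d^{3}}{4\pi} \right)^{1/3},
\]
\[
M_w = \rho\, \Phi\, d^{3}\, N_{\mathrm{Av}}, \qquad
N_{\mathrm{agg}} = \frac{M_w}{M_{w,\mathrm{copolymer}}}.
\]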
Discussion
We studied the complexation of anionic copolymer aggregates with alkyl acrylates as the hydrophobic part and polyacrylates as the hydrophilic part. The hydrophobicity of the hydrophobic block was varied using alkyl chains from butyl to dodecyl, and the length of the hydrophilic block was also varied. The dodecyl polymer was sufficiently hydrophobic to form spherical micelles with an aggregation number of about 40-80, while the aggregates became smaller and less defined with decreasing length of the alkyl chain of the acrylates [28]. These anionic copolymer aggregates were complexed by the polycations PDADMAC or q-chitosan, leading to aggregates with a hydrophobic core of the alkyl acrylate surrounded by an IPEC shell of PAA and PDADMAC or q-chit, stabilised in aqueous solution by the remaining excess PAA chains. Especially for the butyl acrylates, which by themselves show a rather weak aggregation tendency, the addition of polycation induced a marked increase in aggregation on a local scale of ~5-8 nm, as well as the formation of larger structures, as seen by light scattering.
Similar-sized aggregates were formed upon complexation with PDADMAC or q-chit; however, for PDADMAC, light scattering and the low q-range of the SANS data showed a very marked increase in intensity, which could be interpreted as an interconnection of the different IPEC aggregates by the polycation. This was not surprising, since the polycations employed for complexation were much longer (~450 nm) than the average spacing between the IPEC aggregates, and SANS showed that, typically, less than one polycation was contained in an aggregate. However, despite the fact that q-chit and PDADMAC had a similar stretched length, they showed markedly different behaviour here. This may be attributed to the fact that q-chit had an intrinsically lower affinity for water and bound more strongly to the polyacrylate. As a result, it formed a more compacted IPEC shell, while PDADMAC could extend more easily out of the shell of the IPEC micelles into the aqueous surroundings, thereby being able to bridge to neighbouring aggregates. This different structural behaviour is depicted in Figure 8 and was confirmed by the macroscopic viscosity of the solutions of the different complexes, which was more than twice as high for PDADMAC complexes compared to those with q-chit (Table S4).
Figure 8. Sketch of the IPECs formed from either q-chit or PDADMAC. Red represents the hydrophobic core of the polyanion, while blue shows the hydrophilic block of the polyanion. q-chit is shown in green, and PDADMAC in dark purple.
Conclusions
Our experiments on IPEC formation with amphiphilic block copolymers with a variable extent of hydrophobicity of the hydrophobic block demonstrate how sensitively IPEC structures react to the parameter of hydrophobic modification. Accordingly, this is an important parameter in the design of IPEC systems. However, the molecular details of the complexing homopolyelectrolyte are also important, as seen in our experiments in the different behaviour of PDADMAC and quaternized chitosan, where, at the same polymer length, PDADMAC led to bridging and cluster formation while q-chit did not. Correspondingly, the choice of the complexing polyelectrolyte offers a facile way to control the structural properties of such micellar IPECs, and thereby their properties, which could be of interest for using them as tailor-made ionic assemblies, for instance, for delivery purposes.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/polym15092204/s1. Table S1: Zeta potential of the complexes of Bu40 and Do36s with q-chitosan and PDADMAC at different charge ratios z. Table S2: Refractive index increment dn/dc of the different polymers at 25 °C measured at 620 nm. Table S3: SLD of the complexes. Table S4: Viscosity measurements of Bu40 complexes. Figure S1: Direct comparison of the Mw^app values for the different copolymers upon complexation with PDADMAC (left) or q-chit (right) obtained from the turbidity and the static light scattering measurements. The theoretical molecular weight values are shown as UV_th and SLS_th. The theoretical Mw values were calculated with the assumption that the micelle size remains constant and the corresponding amount of polycation simply complexes these micelles. Figure S2: The aggregation number of the polyanion of the IPECs obtained by complexing solutions of AlkA-b-NaPA with different amounts of polycation, (a) q-chitosan or (b) PDADMAC, via SLS. Figure S3: The measured TTAB reference sample at the facilities HZB (dark grey), LLB (red) and the corrected LLB data (blue). Figure S4: The Guinier approximation plots for Bu40 complexes with q-chitosan at different charge ratios z (fitted q range ~0.01-0.03 Å^-1). Figure S5: The Guinier approximation plots for Bu40 complexes with PDADMAC at different charge ratios z (fitted q range ~0.01-0.02 Å^-1). Figure S6: The Guinier approximation plots for Do36s complexes with q-chitosan at different charge ratios z (fitted q range ~0.009-0.03 Å^-1). Figure S7: The Guinier approximation plots for Do36s complexes with PDADMAC at different charge ratios z (fitted q range ~0.009-0.03 Å^-1). Figure S8: The Power Law plots for Bu40 Complexes with | 11,554.2 | 2023-05-01T00:00:00.000 | [
"Materials Science",
"Chemistry"
] |
BCBimax Biclustering Algorithm with Mixed-Type Data
The application of biclustering analysis to mixed data is still relatively new. Initially, biclustering analysis was primarily used on gene expression data, which has an interval scale. In this research, we transform ordinal categorical variables into interval scales using the Method of Successive Intervals (MSI). The BCBimax algorithm is applied in this study with several binarization experiments that produce the smallest Mean Square Residual (MSR) at the predetermined column and row thresholds. Next, a row and column threshold test is carried out to find the optimal bicluster threshold. The existence of different interests in the variables for international market potential and the large number of Indonesian export destination countries are the reasons why an identification of the mapping of destination countries based on international trade potential is needed. The study's results, using the median of all data as the threshold, found that the optimal MSR is at a row threshold of 7 and a column threshold of 2. The number of biclusters formed is 9, covering 74.7% of the countries. Most countries in the biclusters come from the European continent, and a few countries from the African continent are included in the biclusters.
I. INTRODUCTION
In real data, it is possible to involve various kinds of data, which do not consist only of numeric or categorical data, but rather a mixture of both (mixed-type data). Choosing an approach for handling mixed data is still a challenge, especially for biclustering methods, because it depends strongly on the dataset. There are several approaches to conducting mixed data analysis, one of which is converting all variables into categorical variables and then proceeding with clustering analysis [1]. Another approach is to quantify categorical variables into numerical variables using a dimension reduction technique, followed by a clustering technique [2].
Cluster analysis is a method of grouping objects based on their similar characteristics [3]. The cluster analysis commonly used is one-way clustering. This analysis assumes that objects have similar characteristics in all rows or columns, so that objects in rows are grouped based on similarity in columns, or variables in columns are grouped based on similarities in rows. One-way clustering has been carried out separately, namely by grouping objects in rows using a distance matrix and then grouping variables in columns using a correlation matrix [4]. Clustering like this still has limitations for two-way data where one wants to know the relationship of a particular group of objects with a specific group of variables simultaneously. Biclustering is a development of cluster analysis that aims to group data simultaneously in two directions or two dimensions. The two-way clustering technique (biclustering) was then applied to the gene expression matrix, i.e., matrix data containing real numbers that show the activity of several different genes (rows) under experimental conditions (columns) [5].
Biclustering algorithms are classified into five categories: greedy iterative search, divide and conquer, exhaustive bicluster enumeration, iterative row and column clustering combination, and distribution parameter identification [6]. In this study, the chosen algorithm is the BCBimax algorithm, which belongs to the divide-and-conquer category. The BCBimax algorithm is a technique introduced to research market segmentation using customer pain points [7]. This algorithm is a development of the Bimax algorithm that avoids overlapping and also solves the problem of the Bimax algorithm producing too many small biclusters [8]. The method is fast and precise in producing optimal biclusters, so it has become a reference for other algorithms [9]. The BCBimax algorithm has been widely applied to identify customer segments based on behavior [10] and has been applied in the health sector to identify subgroups of lymphoma cancer survivors [8].
Biclustering analysis of mixed data is still relatively new because the analysis was initially used mostly on gene expression data, which has an interval scale. In addition, no algorithm has yet been developed that can accommodate mixed data directly. Therefore, in this study we first transform ordinal categorical variables into interval scales. One widely used method for transforming ordinal categorical data into an interval scale is the Method of Successive Intervals (MSI). The MSI transformation converts ordinal data into interval data by mapping the cumulative proportion of each category to its standard normal curve value [11].
The open economic system and globalization make international trade even more important, because every country uses it to analyze economic development and formulate economic policies [12]. Determining international trade potential requires studying many factors, such as cultural, economic, and demographic factors, which consist of mixed data types, namely numerical and categorical data. In determining market potential, each company also assigns different priorities to the variables. These differing interests in the international market potential variables, together with the large number of destination countries for Indonesia's exports, motivate the need to map destination countries based on international trade potential, so that export destinations can be targeted appropriately and efficiently. Several studies on grouping export destination countries have been carried out, such as grouping Turkey's export destination countries based on an assessment of market potential using factor analysis and K-Means [13], and grouping potential markets in developing countries using Markov chains [14].
Based on this discussion, this research aims to carry out a biclustering analysis using the BCBimax algorithm on international trade potential data, specifically for 103 countries. The results of this study are expected to serve as a reference for government policymaking in increasing exports to potential countries based on the biclusters formed.
A. Data
The variables used in this study were 15 variables: 10 numerical variables and 5 ordinal categorical variables. We extracted the data for variables X1–X9 and X14–X15 from the World Development Integration Data by the World Bank, variable X10 from the Global Sustainable Competitiveness Index (GSCI) [15], variable X11 using the calculation formulated in [16] and [17], variable X12 from Amphori [18], and variable X13 from FM Global [19]. The research data cover a total of 103 countries, as presented in Table I. R-Studio software was used as the data analysis tool.
B. Research Stages
This research comprises three stages: pre-processing, inverse and data transformation, and biclustering analysis. A complete explanation follows.
1) Pre-processing: At this stage, data cleansing is carried out by deleting records that contain missing values. After obtaining complete data for 103 countries, the next step is to standardize the numerical variables. Exploratory analysis is then performed on the standardized data to examine its initial characteristics.
2) Inverse and data transformation: The variables X2, X5, and X11 are inverted (multiplied by −1) so that their meaning is aligned with the other variables according to the definition of international trade potential. In addition, the ordinal variables X13, X14, and X15 are transformed into interval data using MSI. The stages of the MSI transformation are as follows [11]:
1. Calculate the frequency of observations in each category.
2. Calculate the proportion in each category.
3. From the proportions obtained, calculate the cumulative proportion for each category.
4. Calculate the Z value (standard normal quantile) of each cumulative proportion.
5. Determine the density value at each Z limit (the value of the standard normal probability density function at the abscissa Z) for each category, via f(z) = (1/√(2π)) exp(−z²/2), −∞ < z < +∞ (1).
6. Calculate the scale value for each category: scale value = (density at lower limit − density at upper limit) / (area under upper limit − area under lower limit) (2).
7. Calculate the score (transformed value) for each category: score = scale value + |minimum scale value| + 1 (3).
3) BCBimax Algorithm: The idea behind the BCBimax algorithm is to partition the binary data into three submatrices, one of which contains only 0 elements and can therefore be discarded. The algorithm is then applied recursively to the two remaining submatrices, U and V, obtained in step 3 of the procedure; the recursion ends when a submatrix containing only 1 elements is formed. To avoid overlapping, the next bicluster search uses an algorithm based on the submatrix that excludes the rows of the previous bicluster. The stages of the algorithm's work are shown in Fig. 1.
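The following is a minimal Python sketch of MSI steps 1–7 above; the function name msi_transform and the use of scipy's normal distribution are illustrative assumptions, not part of the original study (which used R-Studio).

```python
import numpy as np
from scipy.stats import norm

def msi_transform(ordinal):
    """Method of Successive Intervals: map ordinal codes to interval scores."""
    values, freq = np.unique(ordinal, return_counts=True)   # step 1: frequencies
    prop = freq / freq.sum()                                # step 2: proportions
    cum = np.cumsum(prop)                                   # step 3: cumulative proportions
    z = norm.ppf(np.clip(cum, 1e-12, 1 - 1e-12))            # step 4: Z values
    dens = norm.pdf(z)                                      # step 5: densities at the Z limits, eq. (1)
    dens_lo = np.concatenate(([0.0], dens[:-1]))            # density at the lower limit (0 at -inf)
    area_lo = np.concatenate(([0.0], cum[:-1]))             # area below the lower limit
    scale = (dens_lo - dens) / (cum - area_lo)              # step 6: scale values, eq. (2)
    score = scale + np.abs(scale.min()) + 1.0               # step 7: shifted scores, eq. (3)
    lookup = dict(zip(values, score))
    return np.array([lookup[v] for v in ordinal])

# Example: transform a 4-category ordinal variable such as X13
x13 = np.array([1, 2, 2, 3, 4, 1, 3, 4, 2, 1])
print(msi_transform(x13))
```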
4) Optimal bicluster selection:
The evaluation function is an indicator for measuring the performance of a biclustering algorithm. To choose the most optimal bicluster from several threshold trials, a comparison value is needed. In this study, the Mean Square Residue (MSR) evaluation function is used as the basis for selecting the optimal bicluster. Let a_iJ denote the average of row i of bicluster (I, J), a_Ij the average of column j, and a_IJ the average of all elements in the bicluster. Following [5], the residue of an element a_ij in the submatrix is r_ij = a_ij − a_iJ − a_Ij + a_IJ (4).
The quality of the bicluster can be evaluated by summing the squared residues over all elements; this quantity, hereinafter referred to as the MSR, is MSR(I, J) = (1 / (|I| × |J|)) Σ_{i∈I, j∈J} r_ij² (5), where |I| × |J| is the dimension (volume) of the bicluster, i.e., |I| is the number of bicluster rows and |J| the number of bicluster columns. The quality of a bicluster improves as the residue decreases and/or the volume of the bicluster increases. The quality of a group of biclusters can then be measured by the average MSR per volume [20]: average MSR per volume = (1/b) Σ_{k=1}^{b} MSR(I_k, J_k) / (|I_k| × |J_k|) (6), where b is the number of biclusters generated by a given algorithm.
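A minimal numpy sketch of equations (4)–(6); the function names are illustrative assumptions.

```python
import numpy as np

def msr(submatrix):
    """Mean Square Residue (Cheng & Church) of one bicluster, eqs. (4)-(5)."""
    row_mean = submatrix.mean(axis=1, keepdims=True)   # a_iJ
    col_mean = submatrix.mean(axis=0, keepdims=True)   # a_Ij
    all_mean = submatrix.mean()                        # a_IJ
    residue = submatrix - row_mean - col_mean + all_mean
    return (residue ** 2).mean()                       # divide by volume |I|*|J|

def avg_msr_per_volume(biclusters):
    """Average MSR per volume over a group of biclusters, eq. (6)."""
    return np.mean([msr(b) / b.size for b in biclusters])

# Example with two random binary "biclusters"
rng = np.random.default_rng(0)
bics = [rng.integers(0, 2, size=(7, 2)).astype(float),
        rng.integers(0, 2, size=(5, 3)).astype(float)]
print(avg_msr_per_volume(bics))
```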
A. Data Exploration
The initial characteristics of each variable can be seen from the boxplot in Fig. 2(a). The boxplot shows that almost all variables have extreme values, except for variables X4, X10, and X11, which do not contain outliers. This indicates that no country in the world has an extreme urban population percentage, competitiveness index, or cultural similarity index. Most of the medians tend to be negative (below zero), indicating that for most of the variables concerned, countries tend to have moderate to low values.
Fig. 2(b) shows the frequencies of the four categorical variables X12 to X15. For these four variables, the proportion of frequencies in each value is not significantly different, except for X15. For X15, countries with code 1, namely developing countries, dominate compared to transitional and developed countries.
B. Ordinal Data Transformation
Transformation using MSI changes the data scale from ordinal to interval. This is done because biclustering analysis can only be applied to data on a single scale, either all interval or all ordinal.
The results of the MSI transformation are shown in Table II. Variables X13, X14, and X15 are ordinal, so the MSI transformation is carried out on them, while X12 is not transformed because it is a binary variable. Variable X13 has 4 categories with almost the same proportion of samples in each category, around 0.2, as does X14, where the proportions in the three categories are not much different. For X15 the proportions are unequal: the majority of countries are developing countries (category 1), and a few are transitional countries (category 2). For the biclustering analysis, the values of X13, X14, and X15 are replaced with the values from the Interval column, which are the results of the transformation.
C. Biclustering Results
In the BCBimax algorithm, a binarization process is required before the biclustering analysis is carried out. The binarization process in this study was carried out in 4 trials, namely: 1) the system threshold value, 2) the median of all data, 3) the median of each variable, and 4) the average of all data (which equals the average of each variable, since the data are standardized). The purpose of the trials was to compare the average MSR/volume values of the biclusters formed. The test was applied with the smallest minimum numbers of rows and columns, namely minimum row = 2 and minimum column = 2, as shown in Table III.
Table III shows the comparison of the four threshold scenarios in the binarization of the scaled data matrix. In the first scenario, the system threshold produces a binary matrix with a proportion of 0 elements of 0.976. This proportion is very large and unbalanced, so no bicluster output is formed. In scenarios 2 to 4, fairly balanced proportions close to 0.5 are obtained and a large number of biclusters is produced, so it can be assumed that the binarizations of scenarios 2 to 4 provide sufficient information. Among these, the median of all data combines a balanced proportion with a small average MSR/volume value, so the median of all data was used as the basis for binarization of the data for the biclustering analysis with the BCBimax algorithm.
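A minimal sketch of the median-of-all-data binarization; the variable names and the random stand-in data are illustrative assumptions.

```python
import numpy as np

def binarize_by_global_median(X):
    """Binarize a standardized data matrix: 1 if an entry is above the median of all entries."""
    threshold = np.median(X)          # the study reports a threshold of about -0.098
    return (X > threshold).astype(int)

# Example: 103 countries x 15 standardized variables (random stand-in data)
rng = np.random.default_rng(1)
X = rng.standard_normal((103, 15))
B = binarize_by_global_median(X)
print(B.mean())   # proportion of 1 elements; the study reports 0.487 zeros, i.e. ~0.513 ones
```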
The BCBimax algorithm works by clustering all submatrices whose elements have a value of 1. In this study, 60 combinations of minimum row and column thresholds were tested. The minimum column threshold was taken from 2, 3, up to 7, where the highest threshold of 7 is half of the total number of research variables. The minimum row thresholds generally follow the column thresholds, namely 2, 3, up to 7; for further exploration, minimum row values of 10, 13, 15, and 20 were also tried, giving 60 trial threshold combinations in total. The highest number of biclusters is 41, resulting from the combination of the very small minimum row and column thresholds of 2. More biclusters are formed when the threshold is smaller; conversely, the larger the threshold value, the fewer biclusters are formed. (For the median of all data, Table III reports a threshold of −0.098, a proportion of 0 elements of 0.487, 41 biclusters, and an average MSR/volume of 0.006.) The average value of MSR per volume is presented in Fig. 3, which is read following the elbow method commonly used in k-means clustering, where the optimal number of clusters is located at the elbow at a certain point. In this case, a row and column threshold is informative if it has a small average MSR per volume and forms an angle at a point. The calculated average MSR per volume decreases significantly between threshold points and is then followed by relatively constant values; this occurs at the minimum threshold of row 7 and column 2, with an average MSR per volume of 0.0036. This threshold forms a total of 9 biclusters. Thus, the optimal bicluster group from the BCBimax algorithm is at the minimum threshold of row 7 and minimum column 2.
The characteristics of the optimal bicluster results are presented in Table IV. Based on the table, the BCBimax algorithm with the combination of row threshold 7 and column threshold 2 clusters 77 countries, or 74.7%, into the 9 biclusters formed. There are no overlapping countries in the biclusters; in other words, the countries in each bicluster are unique. Meanwhile, the 26 countries (25.3%) not included in any bicluster are countries with a negative average value after standardization, i.e., an average value smaller than the binarization threshold of the entire scaled data. This is consistent with the behavior of the BCBimax algorithm, which only finds elements of "1"; in this study an element is "1" when its value is greater than the median, according to the binarization described in Table III. Another reason countries are excluded from the biclusters is that they do not share the same international trade potential characteristics as other countries. Regarding variable membership, several variables appear in more than one bicluster. This overlap in variables is not a significant problem, considering that a country can have several indicators of international trade potential. The mapping of international trade potential in Indonesia's export destination countries is shown on the map in Fig. 4. In bicluster 1 (orange), the majority of members are found on the European continent, namely Belgium, Estonia, France, Luxembourg, Malta, and the United Kingdom, along with the United States on the American continent. Bicluster 2 (red) is also dominated by countries from Europe, namely Denmark, Finland, Germany, Greece, Hungary, Italy, Latvia, Lithuania, Norway, and Sweden, plus Japan, Australia, and New Zealand. This second bicluster has the most members compared to the other biclusters. Biclusters 3 and 4 are likewise dominated by European countries, given that European countries also dominate the sample in this study. Most members of bicluster 5 come from the Americas, as do those of bicluster 9. Most countries from the Asian continent are in biclusters 6 and 8, and bicluster 7 is dominated by countries from Africa.
Furthermore, to examine the similarity of trading potential, a membership plot is displayed in Fig. 5.
Each row in the graph represents a variable, while each column represents a bicluster group. The green rectangles depict trade potential variables with the same resemblance, so they are grouped into one bicluster. The brighter the middle color, the more countries have high international trade potential; the light color here is the light version of the box color on the side. Variables X7 and X14 have the brightest colors: 69.9% of countries have a high proportion of individual internet users and 67.9% of countries have high country income, so these two variables appear the most in the biclusters formed, namely in biclusters 1 to 6. Likewise for X2 and X5: 66.9% of countries have low import tariffs and 64.1% of countries are located close to Indonesia, so these two variables also appear frequently in the biclusters. Variables X8 and X15 have dark colors: only 34.9% of countries have a high per capita GDP and 43.7% of countries have a high economic development category.
IV. CONCLUSION
This study applied the BCBimax algorithm to international trade potential data for 103 countries. Based on an experiment with 60 row and column threshold combinations, the BCBimax algorithm produces an optimal bicluster at the row-column combination (7, 2), with an average MSR/volume value of 0.0036. The binarization threshold experiment showed that using the median of all data produces the most biclusters and an average MSR/volume that tends to be small compared to the other three experiments. All countries with high trade potential are well clustered and spread across 9 biclusters. The biclusters formed have unique country memberships, and the majority of countries on the European continent have good trade potential compared to countries from other continents. Several countries in Africa also have fairly good trade potential, which can be a consideration for the government in opening markets and cooperating. The variables that appear most frequently in the nine biclusters are X2 (import tariffs), X7 (proportion of individual internet users), and X14 (country income category), because the values of these variables are mostly above the median, meaning that the majority of countries in the world have low import tariffs, a high proportion of individual internet users, and middle-to-upper country income.
Fig. 3 Average value of MSR per volume
Fig. 4 Distribution map of the optimal bicluster results by country (without scale)
TABLE II: THE RESULTS OF THE TRANSFORMATION USING THE SUCCESSIVE INTERVAL METHOD | 4,692.8 | 2024-05-20T00:00:00.000 | [
"Computer Science",
"Biology"
] |
Terahertz detectors arrays based on orderly aligned InN nanowires
Nanostructured terahertz detectors employing a single semiconducting nanowire or graphene sheet have recently generated considerable interest as an alternative to existing THz technologies, owing to their ease of fabrication and above-room-temperature operation. However, the lack of alignment in nanostructure devices has hindered their potential for practical applications. The present work reports ordered terahertz detector arrays based on neatly aligned InN nanowires. The InN nanostructures (nanowires and nano-necklaces) were grown by chemical vapor deposition, and the InN nanowires were then successfully transferred and aligned into micrometer-sized groups by a "transfer-printing" method. Field effect transistors on the aligned nanowires were fabricated and tested for terahertz detection. The detectors showed good photoresponse as well as low noise levels. In addition, dense arrays of such detectors were fabricated, yielding a peak responsivity of 1.1 V/W from 7 detectors connected in series.
characterizations for the InN nanomaterial. On the growth side, it was found that during InN nanowire growth another material form, the nano-necklace, formed simultaneously; the crystal properties and growth mechanisms were then studied in detail. For device demonstration, instead of the conventional single-nanowire device geometry, THz detectors were fabricated from several aligned nanowires, and a THz detector array was further realized by integrating individual nanowire devices in series. The advantages of this approach lie in the following: 1. only nanowires with controlled density, position, and orientation are likely to find ultimate applications; 2. a THz wave has a typical focal spot larger than 100 μm, so if more than one subwavelength-sized device is connected, a stronger photoresponse can be generated. Hence, in the fabrication process we adopted a "transfer printing" approach developed by others 18,19, which enabled the alignment of groups consisting of 3 to 6 InN nanowires into designed areas for subsequent device fabrication. The resulting devices unequivocally demonstrate fairly good FET properties of the multi-nanowire sample, which produced a summed responsivity of 1.1 V/W from integrated device arrays. This result is important for many THz applications.
Results
The InN nanostructures were prepared by the chemical vapor deposition (CVD) method. Although the growth was intentionally designed for nanowires only, two kinds of InN morphologies were distinctly seen on the as-grown substrate after growth: nanowires and "nano-necklaces". It was found that the regions close to the source grow nanowires, whereas positions further away were covered mostly by nano-necklaces. Fig. 1a,b show the scanning electron microscope (SEM) images of InN nanowires and nano-necklaces on the parts of the substrate close to and far from the source, respectively. The co-existence of different morphologies suggests that the growth of the various InN nanostructures in CVD has narrow growth windows; thus, a slight variation of a growth parameter (such as the III/V ratio) can yield drastically different growth modes. In addition, the heads and other parts of these nanomaterials do not contain gold nanoparticles (see Methods) in the SEM images, indicating the vapor-solid growth mode 20. The role of the gold nanoparticles prior to growth is to increase the surface roughness to facilitate vapor nucleation. A magnified SEM image of a typical InN nano-necklace architecture is shown in Fig. 1c, which clearly exhibits that the typical nano-necklace consists of a number of connected and uniform beads. The major and minor waists range from 200 to 400 nm and 200 to 1000 nm, respectively.
The crystal structure of the as-obtained product was characterized by X-ray diffraction (XRD) (Fig. 2). All of the strong reflection peaks can be indexed to wurtzite-type InN (w-InN) 21. XRD peaks related to other phases (such as In and In2O3) were not found. Energy dispersive spectroscopy (EDS) measurements were performed to determine the chemical composition of the samples. Fig. 3a illustrates a typical SEM image of a sample region where InN nano-necklaces and nanowires co-exist. The EDS spectrum of the marked position in Fig. 3a is shown in Fig. 3b. Distinct In and N peaks confirm the InN elemental composition, while the silicon peak originates from the substrate. The oxygen peak is attributed to SiO2 and indium-oxide-related substances. The atomic ratio of In:N is close to 1:1, suggesting good stoichiometry of the material. The chemical composition and microstructure of the as-synthesized InN nanostructures were further studied by transmission electron microscopy (TEM). TEM images of a single InN nanowire and a nano-necklace sample are shown in Fig. 4a,c, respectively. The high-resolution (HR)-TEM image in Fig. 4b is a magnified part of the single nanowire, used to examine the detailed crystal structure. The measured d-spacing of the lattice fringes is 0.28 nm, matching well with the (0001) interplanar spacing of the wurtzite InN structure. The inset of Fig. 4b shows the corresponding selected area electron diffraction (SAED) pattern taken from the same nanowire, further confirming that the sample is single-crystalline and grows preferentially along the [0001] crystallographic direction. For the InN nano-necklace, Fig. 4c indicates that it consists of multiple single-crystalline connected beads, with no noticeable grain boundaries between them. The HRTEM image in Fig. 4d (with the fast Fourier transform (FFT) in the inset) shows fringes with a d-spacing of 0.31 nm. From the TEM studies, the growth pattern of the nano-necklaces can be deduced with the assistance of a w-InN unit cell (Fig. 4e): the growth direction of the nano-necklace is also determined to be [0001]; the equilateral trapezoid facets of the truncated hexagonal beads belong to the {10-11} family; and the edges between two facets lie along ⟨2-1-13⟩-type directions. Using the standard lattice parameters of w-InN, the angle between two corresponding facets on opposite sides of the major waist of the beads is calculated to be 124°, and that between two corresponding edges to be 116°. These are in good agreement with the observed values ranging from 116° to 124° in Fig. 4c. A schematic illustration of the nano-necklace is presented in Fig. 4e. The fractions of the {10-11} and {10-1-1} planes are not equal, which may be due to their different polarities: the former is In-polar while the latter is N-terminated. In other words, the as-obtained nano-necklaces (see Figs 1 and 4c) are not exactly the same as the simulated one in Fig. 4e; instead, the {10-11} planes are more favored than the {10-1-1} ones, owing to the different formation energies and hence different adatom diffusion abilities on these two types of side surfaces 22. In addition, similar multi-bead morphologies have also been observed in one-dimensional hexagonal wurtzite stacked-cone and zigzag AlN 23, GaN 24 and ZnO 25 nanostructures.
Photoluminescence (PL) measurements of the two kinds of structures were carried out (Fig. 5) to investigate the optical properties of the single InN nanowire and nano-necklace. Strong emission peaks around 1700 nm are clearly seen, corresponding to the near-band-edge (NBE) emission of InN. This IR emission peak position is consistent with the NBE emission observed from reported InN nanorods 26 and epitaxial films 27. Notably, the emission intensity from the nano-necklace is stronger, which is tentatively attributed to the larger material size as well as multi-scattering inside one nano-necklace acting as a three-dimensional cavity-like light resonator 28.
Although a single InN nanowire or nano-necklace can readily be used for THz detectors, such as single-nanowire FET detectors, randomly distributed nanostructures are less favorable for practical applications. For simplicity and wider impact, InN nanowires rather than nano-necklaces were selected for alignment and device fabrication. They were transferred from the growth Si substrates to patterned device substrates by a mechanical "transfer printing" process (Fig. 6a). The alignment direction is determined by the mechanical shear force due to the relative motion of the two substrates. The SEM image in Fig. 6b indicates that 3–6 nanowires can be successfully transferred into the trench arrays by the transfer-printing process. For the subsequent FET device fabrication, one group containing 5 nanowires was selected for demonstration. Source and drain contacts (10 nm Ti/20 nm Au) were defined and deposited by photolithography and e-beam evaporation (Fig. 6c) at the two ends of the nanowires. The gate area was defined by photolithography, and then a 300 nm thick Al2O3 gate insulator was deposited by e-beam evaporation (Fig. 6d). The gate electrode (50 nm Pt) was finished with the assistance of the focused ion beam (FIB) technique (Fig. 6e). A bow-shaped antenna with a radius of 100 μm was designed to partially resonate with the incident THz wave, with the left and right lobes of the antenna connected to the source and gate electrodes, respectively. The final device structure is shown in Fig. 6f. The typically asymmetric source-drain electrode design serves to enhance rectification and maximize the photovoltage readout.
With the above device geometry, FET characterization of a single array consisting of 5 InN nanowires was performed at room temperature. The source-drain current (I_DS) versus source-drain voltage (V_DS) curves under different gate voltages (V_G) from −10 V to 10 V are shown in Fig. 7a. I_DS evidently increases with increasing V_G, indicating typical n-type channel characteristics 29. Fig. 7b shows the I_DS–V_G transfer curve at V_DS = 0.5 V. I_DS increases as V_G increases from −30 V to 30 V, in accordance with Fig. 7a. It is noted that at high V_G the I_DS increase becomes nonlinear, possibly due to a high-current (~30 μA in one nanowire) annealing effect. Also, the I_DS at a given V_G is not well matched between Fig. 7a and b, which is attributed to the hysteresis of the I_DS–V_G curve. Unlike ZnO nanowire FETs 29 or carbon nanotubes 30, the device does not exhibit a very clear and high on/off ratio; the reason is the residual carrier density that is not swept by V_G. The threshold voltage (V_th) is found to be about −25 V to −30 V. In a nanowire FET, the carrier concentration n can be estimated by using the following equation 31:
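Since the equation itself is elided in this excerpt, the sketch below uses the standard back-gated nanowire FET model (cylinder-on-plane gate capacitance) as an assumption about the formula of ref. 31; all numerical values are illustrative placeholders, not the paper's measured parameters.

```python
import numpy as np

# Assumed standard nanowire FET estimates (stand-in for the elided equation of ref. 31):
#   gate capacitance (cylinder on plane): C = 2*pi*eps0*eps_r*L / arccosh(h/r)
#   carrier concentration:                n = C*|Vth| / (q*pi*r^2*L)
#   field-effect mobility:                mu = gm*L^2 / (C*V_DS)
q, eps0 = 1.602e-19, 8.854e-12
L, r, h, eps_r = 2e-6, 50e-9, 300e-9, 9.0   # channel length, NW radius, dielectric thickness, Al2O3 eps_r (assumed)
Vth, VDS, gm = -27.0, 0.5, 1e-7             # threshold voltage, bias, transconductance (assumed)

C = 2 * np.pi * eps0 * eps_r * L / np.arccosh(h / r)
n_cm3 = C * abs(Vth) / (q * np.pi * r**2 * L) / 1e6    # convert m^-3 -> cm^-3
mu_cm2 = gm * L**2 / (C * VDS) * 1e4                    # convert m^2/(V s) -> cm^2/(V s)
print(f"C ~ {C:.2e} F, n ~ {n_cm3:.1e} cm^-3, mu ~ {mu_cm2:.0f} cm^2/(V s)")
```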
Discussion
The THz-induced photovoltage vs. V_G is depicted in Fig. 8a. Obvious peaks are observed, and the maxima of the photovoltage occur around −20 to −22 V, close to V_th in the I_DS–V_G curves. Analysis suggests that this behavior is in accordance with the detection mechanism of a plasma-wave FET detector 33, in which the source-drain photovoltage is expressed in terms of the following quantities: q is the electron charge, u_a is the small amplitude of the ac wave, η is the ideality factor, k_B is the Boltzmann constant, T is the temperature, and U_G denotes the gate-to-source voltage minus V_th (here J_0 is the gate leakage current density, L is the channel length, m is the electron mass, C is the channel capacitance per unit area, and τ is the momentum relaxation time). Under these circumstances, the maximum of the photovoltage is reached at the condition U_G = −(ηk_BT/2) ln(1/κ). For different J_0, this U_G can be in the range of 0 to −5 V 33, although the precise value of the maximum-point U_G cannot be given here owing to other unavailable parameters. Nevertheless, the maximum occurs around the point V_G ≈ V_th with an offset of several volts, which accords well with the present device data in Fig. 8a. Further, to obtain responsivity values, the standard beam-spot geometric correction must be taken into account, R_V = ΔU·S_t/(P_t·S_a), where R_V is the responsivity, ΔU the measured photovoltage, P_t the incident power impinging on the beam spot, S_t the radiation beam spot area, and S_a the active area. S_t = πd²/4 = 1.77 × 10⁻⁶ m² and S_a = πR²/2 = 1.57 × 10⁻⁸ m² are easily calculated. However, the antenna scale is much smaller than the wavelength (~1 mm), so S_λ = πλ²/4 = 7.8 × 10⁻⁷ m² is taken for the active area. The responsivity spectra at 1000 μW with polarization parallel (orange) and orthogonal (red) to the antenna are plotted in Fig. 8b. The parallel measurement yields a much stronger intensity, which is typically due to the much higher THz wave absorption of the antenna in the parallel geometry. Notably, the peak responsivity of the parallel-geometry curve reaches 0.1 V/W. A tremendous advantage of the current device geometry lies in the ability to realize dense arrays of nanowire THz detectors. Hence, a pixel-like THz detector array was fabricated by larger-scale photolithography, transfer printing and FIB electrode deposition. A chip carrying 3 × 3 detectors is shown in the microscope image of Fig. 9a; each individual detector was designed in the same way as in Fig. 6. Wire bonding on this chip successfully enabled 7 operational devices. The active area of the chip is ~1.5 mm in size, on the same order as the incident THz wavelength. On this chip, if two or more detectors are connected in series, a stronger photovoltage output is naturally possible. Responsivity characterization was performed under the assumption that all incident wave power was absorbed. The results for different numbers of connected devices are shown in Fig. 9b. Clearly, 7 devices together give the highest responsivity of 1.1 V/W. Notably, the change of responsivity from 1 to 5 devices is not as large as from 5 to 7 devices, which is probably due to diffraction-associated non-linear absorption phenomena. Nevertheless, the integration of multiple devices not only produces a stronger photoresponse, it paves the way for practical applications including focal-plane pixel THz cameras, high-output photoconductive THz emitters, etc.
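A minimal sketch of the beam-spot-corrected responsivity estimate described above, assuming the relation R_V = ΔU·S_t/(P_t·S_λ); the photovoltage dU is a placeholder chosen to reproduce roughly the reported 0.1 V/W, not a measured value.

```python
import numpy as np

d, R_ant, lam = 1.5e-3, 100e-6, 1.0e-3     # beam diameter, antenna radius, wavelength (~1 mm at 290 GHz)
P_t, dU = 1000e-6, 4.4e-5                   # incident power (1000 uW); photovoltage dU is a placeholder

S_t = np.pi * d**2 / 4                      # beam spot area, ~1.77e-6 m^2
S_a = np.pi * R_ant**2 / 2                  # antenna active area, ~1.57e-8 m^2
S_lam = np.pi * lam**2 / 4                  # diffraction-limited area, ~7.8e-7 m^2, used instead of S_a
R_V = dU * S_t / (P_t * S_lam)              # beam-spot-corrected responsivity, V/W
print(f"S_t={S_t:.2e} m^2, S_a={S_a:.2e} m^2, S_lambda={S_lam:.2e} m^2, R_V={R_V:.2f} V/W")
```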
In summary, we have synthesized InN nanowires and nano-necklaces on Si substrates through the CVD process. The InN nanowires were successfully aligned into micrometer-sized trenches for controllable nanowire device fabrication. Remarkable FET characteristics of the aligned InN nanowire devices were then presented, and an electron mobility of 99 cm²/(V·s) was achieved. The FET detector rendered a 0.1 V/W peak responsivity, while multiple arrays of these detectors could be connected in series to increase the overall photoresponse, reaching 1.1 V/W.
Methods
InN nanostructures growth. The InN nanostructures were synthesized in a horizontal three-temperature-zone tube furnace system. A quartz boat containing 0.4 g of In powder (99.999%, Xiya Reagent) was placed in the middle of temperature zone II in the closed furnace. A 1.5 cm × 1.5 cm silicon substrate coated with a 60 nm gold colloid (BBI Solution) catalyst film was placed upside down on top of the quartz boat; in other words, the gold-colloid side of the substrate faced downward, in order to facilitate the growth of the nanomaterials. The carrier gas, argon (99.999% purity, Messer), was introduced at one end of the quartz tube at a constant flow rate of 100 sccm (standard cubic centimeters per minute). Temperature zones I and II of the furnace were simultaneously increased to 700 °C in 30 min. NH3 (99.99% purity, Messer) was then introduced at the same end of the quartz tube at a constant flow rate of 250 sccm once the set temperature was reached. The growth was maintained for 80 min. After the reaction the furnace was cooled to room temperature, and gray and black-colored products were found covering the silicon substrate.
Materials characterization. The morphologies and microstructures of the as-synthesized products were systematically characterized. The XRD data were obtained with a Philips X'Pert PRO diffractometer equipped with Cu Kα radiation (λ = 1.5418 Å). The accelerating voltage was set at 40 kV with a 40 mA flux, over the 2θ range of 10–90°. The field-emission SEM images were obtained on the Model Ultra 55 from Carl Zeiss SMT Pte Ltd, Germany, equipped with energy dispersive X-ray spectroscopy (EDS). The TEM images were obtained on the Model Libra 200 PE (200 kV) from Carl Zeiss SMT Pte Ltd, Germany.
PL characterization. The room-temperature spatially resolved PL spectra were acquired on a Model inVia Raman microscope from RENISHAW Co., UK, using a 514 nm laser as the excitation source.
Nanowire alignment. First, device substrates (500 nm SiO2 on Si) were patterned by photolithography to define the trenches into which the nanowires would be transferred; the trench width and spacing on the photoresist can be adjusted by designing different photolithography masks. Second, a nanowire growth substrate was brought into contact with the patterned device substrate (nanowires facing the device substrate). A pressure of ~2 kg/cm² was applied while the growth substrate was slid by several mm. After this, many of the nanowires detach from the growth substrate and transfer to the device substrate by inter-molecular forces. Finally, the photoresist layer on the device substrate was removed with acetone.
THz detection characterization. The source used is an AV1450C (vendor: CETC) signal generator (30 GHz) with frequency multipliers; the output wave is 290 GHz. The THz source has adjustable power (< 1 mW) and a beam diameter of ~1.5 mm. The FET drain contact was connected to a current amplifier, which converts the current into a voltage signal. The output voltage was read by a lock-in amplifier. The gate voltage was varied using a home-built voltage meter. The detector was mounted on a motorized X-Y translation stage to maximize the measured photoresponse.
"Physics"
] |
Equivalent Characterization on Besov Space
In Sobolev spaces, it is known that $\|f\|_{H^2(\mathbb{R}^2)} \sim \|f\|_{L^2(\mathbb{R}^2)} + \sum_{i=1}^{2} \|\partial_{x_i}^2 f\|_{L^2(\mathbb{R}^2)}$, where $\|f\|_{H^2(\mathbb{R}^2)} := \|f\|_{L^2(\mathbb{R}^2)} + \|\partial_{x_1}\partial_{x_2} f\|_{L^2(\mathbb{R}^2)} + \|\partial_{x_1}^2 f\|_{L^2(\mathbb{R}^2)} + \|\partial_{x_2}^2 f\|_{L^2(\mathbb{R}^2)}$. Note that the right-hand side of the definition of $\|f\|_{H^2(\mathbb{R}^2)}$ contains the mixed derivative norm $\|\partial_{x_1}\partial_{x_2} f\|_{L^2(\mathbb{R}^2)}$. This mixed derivative norm makes the calculation more complicated, or even infeasible, when estimating partial differential equations with some anisotropy property, like the Vlasov-Poisson equation [1, 2], in fractional Sobolev spaces [3]. So, separating variables becomes necessary and meaningful. In this paper, we aim to prove $\|f\|_{B^s_{p,r}(\mathbb{R}^n)} \sim \sum_{j=1}^{n} \|f\|_{B^s_{p,r,x_j}(\mathbb{R}^n)}$, which realizes the separation; i.e., the right-hand side does not contain the "mixed derivative" term, and each term only contains a fractional derivative with respect to a single variable. Thus, when it comes to estimating $\|f\|_{B^s_{p,r}(\mathbb{R}^n)}$ in solving partial differential equations, it is equivalent to estimating each $\|f\|_{B^s_{p,r,x_j}(\mathbb{R}^n)}$ individually. For other equivalent characterizations of Besov spaces, refer to [4-7] and the references therein.
Preliminaries
We first recall the definitions of Besov spaces; see [8]. Given $f \in \mathcal{S}$, the Schwartz class, its Fourier transform is $\hat f = \mathcal{F}f$ and its inverse Fourier transform is defined by $\mathcal{F}^{-1}f(x) = \hat f(-x)$. We consider $\varphi \in \mathcal{S}$ satisfying $\operatorname{supp}\varphi \subset \{\xi \in \mathbb{R}^n : 1/2 \le |\xi| \le 2\}$. Setting $\varphi_j(\xi) = \varphi(2^{-j}\xi)$ for $j \in \{1, 2, \dots\}$, we can adjust the normalization constant in front of $\varphi$ and choose $\varphi_0 \in \mathcal{S}$ satisfying $\operatorname{supp}\varphi_0 \subset \{\xi \in \mathbb{R}^n : |\xi| \le 2\}$, such that $\sum_{j \ge 0} \varphi_j(\xi) = 1$ for every $\xi \in \mathbb{R}^n$. The Besov norm is then, with the usual interpretation for $p = \infty$ or $r = \infty$, $\|f\|_{B^s_{p,r}} = \big(\sum_{j \ge 0} 2^{jsr} \|\mathcal{F}^{-1}\varphi_j \mathcal{F}f\|_{L^p}^r\big)^{1/r}$. Throughout this paper, all function spaces are defined on the Euclidean space $\mathbb{R}^n$; we will omit it whenever there is no confusion. Next, we present some known results which will be used later. The first one is the unit decomposition.
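Since the displayed formulas are elided in this excerpt, the following is a sketch of the one-variable Littlewood-Paley blocks presumably underlying the norms $\|f\|_{B^s_{p,r,x_j}}$; the normalization is an assumption following the standard construction.

```latex
\[
\varphi_k^{(j)}(\xi) := \varphi\big(2^{-k}\xi_j\big)\ (k \ge 1), \qquad
\varphi_0^{(j)}(\xi) := \varphi_0(\xi_j), \qquad
\sum_{k \ge 0} \varphi_k^{(j)}(\xi) = 1 \quad (\xi_j \in \mathbb{R}),
\]
\[
\|f\|_{B^s_{p,r,x_j}(\mathbb{R}^n)}
  := \Big( \sum_{k \ge 0} 2^{ksr}
     \big\| \mathcal{F}^{-1} \varphi_k^{(j)} \mathcal{F} f \big\|_{L^p(\mathbb{R}^n)}^{r} \Big)^{1/r}.
\]
```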
Lemma 1 (see [8], page 145). Assume that $n \ge 2$, and take $\varphi$ as in the definition of the Besov space. Then there exist functions $\chi_j \in \mathcal{S}(\mathbb{R}^n)$ ($j = 1, \dots, n$) such that the corresponding unit decomposition holds. Next, we recall the real interpolation characterization of Besov spaces.
Remark 3. The analogous statement also holds; its proof repeats the argument of Lemma 2 completely.
Equivalent Characterization
Now we are in a position to state and prove our theorems. First, we apply the Fourier multiplier theorem [9] to prove directly that $H^s_p(\mathbb{R}^n) = \bigcap_{j=1}^{n} H^s_{p,x_j}(\mathbb{R}^n)$; the space $H^s_p$ has the advantage that the factor $(1+|\xi|^2)^{s/2}$ is positive everywhere, which is fundamentally important when applying the Fourier multiplier theorem. For brevity, we denote $\langle\xi\rangle = (1+|\xi|^2)^{1/2}$, $\langle\xi_j\rangle = (1+|\xi_j|^2)^{1/2}$, and $\|f\|_{H^s_{p,x_j}} := \|\mathcal{F}^{-1}\langle\xi_j\rangle^s \mathcal{F}f\|_p$. We have the following equivalent norm theorem in Sobolev spaces.

Theorem 4. $H^s_p(\mathbb{R}^n) = \bigcap_{j=1}^{n} H^s_{p,x_j}(\mathbb{R}^n)$, with equivalent norms.

Proof. On the one hand, suppose $f \in H^s_p$, i.e., $\|\mathcal{F}^{-1}\langle\xi\rangle^s \mathcal{F}f\|_p < \infty$. Note that, for any $j = 1, \dots, n$, $\mathcal{F}^{-1}\langle\xi_j\rangle^s \mathcal{F}f = \mathcal{F}^{-1} m_1(\xi)\,\langle\xi\rangle^s \mathcal{F}f$, so we just need to show that $m_1(\xi) = \langle\xi_j\rangle^s/\langle\xi\rangle^s$ is an $L^p$ multiplier. To prove the assertion, we introduce an auxiliary function on $\mathbb{R}^{n+1}$ defined by $\tilde m_1(\xi, t) = (\xi_j^2 + t^2)^{s/2}/(|\xi|^2 + t^2)^{s/2}$. It is easy to verify that $\tilde m_1$ is homogeneous of degree 0 and smooth on $\mathbb{R}^{n+1} \setminus \{0\}$. The derivatives $\partial^\beta \tilde m_1$ are homogeneous of degree $-|\beta|$ and satisfy $|\partial^\beta \tilde m_1(\xi, t)| \le C_\beta (|\xi|^2 + t^2)^{-|\beta|/2}$ whenever $(\xi, t) \ne 0$, where $\beta$ is a multi-index of $n+1$ variables. In particular, taking $\beta = (\alpha, 0)$ and setting $t = 1$, we deduce that $|\partial^\alpha m_1(\xi)| \le C_\alpha (1+|\xi|^2)^{-|\alpha|/2} \le C_\alpha |\xi|^{-|\alpha|}$, which implies that $m_1(\xi)$ is an $L^p$ Fourier multiplier by the Mikhlin-Hörmander theorem [9] (page 446).
On the other hand, assume $f \in \bigcap_{j=1}^{n} H^s_{p,x_j}$, that is, $\|\mathcal{F}^{-1}\langle\xi_j\rangle^s \mathcal{F}f\|_p < \infty$ for every $j = 1, \dots, n$. Note that $\langle\xi\rangle^s = m_2(\xi) \sum_{j=1}^{n} \langle\xi_j\rangle^s$ with $m_2(\xi) = \langle\xi\rangle^s / \sum_{j=1}^{n} \langle\xi_j\rangle^s$. Similarly, we can verify that $m_2$ is an $L^p$ Fourier multiplier, which finishes the proof of Theorem 4.
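A sketch of the corresponding homogeneity bound for $m_2$, mirroring the argument for $m_1$ (the auxiliary function is an assumption consistent with the proof above):

```latex
\[
\tilde m_2(\xi, t) = \frac{(|\xi|^2 + t^2)^{s/2}}{\sum_{j=1}^{n} (\xi_j^2 + t^2)^{s/2}},
\qquad
|\partial^\alpha m_2(\xi)| = |\partial^\alpha \tilde m_2(\xi, 1)| \le C_\alpha |\xi|^{-|\alpha|},
\]
so that $m_2$ satisfies the Mikhlin--H\"ormander condition and
\[
\big\|\mathcal{F}^{-1}\langle\xi\rangle^{s}\mathcal{F}f\big\|_{p}
\;\lesssim\; \sum_{j=1}^{n} \big\|\mathcal{F}^{-1}\langle\xi_j\rangle^{s}\mathcal{F}f\big\|_{p}.
\]
```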
We now return to proving the equivalent characterization for Besov spaces. However, we cannot apply the same trick as in the Sobolev case directly, since the dyadic symbols are compactly supported and vanish on large sets; here $\varphi_k^{(j)}$ denotes the dyadic block of the unit decomposition for the $j$-th variable, as in the definition of Besov spaces.
Proof. We split the proof into the following two steps: Step I. To prove | 1,260 | 2021-04-26T00:00:00.000 | [
"Mathematics"
] |
Inferring topological transitions in pattern-forming processes with self-supervised learning
The identification and classification of transitions in topological and microstructural regimes in pattern-forming processes are critical for understanding and fabricating microstructurally precise novel materials in many application domains. Unfortunately, relevant microstructure transitions may depend on process parameters in subtle and complex ways that are not captured by the classic theory of phase transition. While supervised machine learning methods may be useful for identifying transition regimes, they need labels which require prior knowledge of order parameters or relevant structures describing these transitions. Motivated by the universality principle for dynamical systems, we instead use a self-supervised approach to solve the inverse problem of predicting process parameters from observed microstructures using neural networks. This approach does not require predefined, labeled data about the different classes of microstructural patterns or about the target task of predicting microstructure transitions. We show that the difficulty of performing the inverse-problem prediction task is related to the goal of discovering microstructure regimes, because qualitative changes in microstructural patterns correspond to changes in uncertainty predictions for our self-supervised problem. We demonstrate the value of our approach by automatically discovering transitions in microstructural regimes in two distinct pattern-forming processes: the spinodal decomposition of a two-phase mixture and the formation of concentration modulations of binary alloys during physical vapor deposition of thin films. This approach opens a promising path forward for discovering and understanding unseen or hard-to-discern transition regimes, and ultimately for controlling complex pattern-forming processes.
INTRODUCTION
Identifying and characterizing transitions in pattern-forming processes are key to our understanding and control of many systems in the physical, chemical, material, and biological sciences. These tasks can be particularly challenging when the transition from one type of microstructural pattern to another is continuous or ambiguous, as may be the case in dynamic and second-order phase transition processes. The reason is that the state of the material system may exhibit subtle changes in the microstructure when the process parameters controlling the formation of patterns vary. For example, in cellular dynamics, cytoskeletal protein actin filaments undergo a continuous isotropic-to-nematic liquid crystalline phase transition when polymerized 1. In chemical systems undergoing a precipitation reaction (Liesegang systems), one can control the transition from one precipitation pattern to another by regulating solid hydrogel and reactant concentrations [2][3][4][5]. When heated up, block copolymer thin films experience a smooth, thermally induced morphology transition from cylindrical to lamellar microdomains 6,7. Similarly, the co-sputter deposition of immiscible elements results in a variety of self-organized microstructural patterns depending on processing conditions [8][9][10][11].
The theory describing these types of transitions was first proposed by Landau 12 and later improved using renormalization group theory 13. In such models, the determination of a pattern transition relies on (i) the identification of a local order parameter that sharply changes from one value to another (i.e., the order parameter changes discontinuously as in first-order transitions) and (ii) straightforward symmetry-breaking considerations of that order parameter. However, in many cases, topological (structural) transitions [14][15][16][17] are gradual and elusive, making it more difficult to identify the appropriate indicators of microstructural topology transitions. Supervised learning techniques [18][19][20][21][22], which use predetermined labelling of distinct types of patterns for given process parameters, are commonly used to classify and predict transitions. However, generating these labels requires prior knowledge of the regimes in the process parameter space where the transition may occur. This limits the scope of this class of methods primarily to already known and at least partially explored transition cases. In contrast, unsupervised learning techniques [23][24][25][26][27][28][29][30] do not require hand labelling or other time-consuming, manual, and potentially subjective interventions. Many of these unsupervised approaches use dimensionality reduction and clustering to classify topological phases in latent space and detect the topological transitions.

FIG. 1. Workflow to identify transition regimes in pattern-forming processes via self-supervised learning. a We simulate the dynamical evolution of the physical system for a broad range of process parameters. Next, we project the final state of the microstructural pattern into a latent space (using a pre-trained ResNet-50 v2 34). We regress on these latent dimensions to estimate the original process parameters. b To detect specific classes of microstructural patterns, we evaluate the model error by predicting the corresponding initial process parameters. By measuring how the sensitivity of forming specific patterns changes for various input process parameters, we learn where the transition regime(s) might occur.
Like unsupervised methods, self-supervised learning does not require predefined labels for the target task. Instead of directly solving a clustering problem, self-supervised learning focuses on solving an auxiliary prediction problem that is easy to measure and closely related and semantically connected to the target task. Often the auxiliary problem requires predicting structure of the training data instead of some external labels, hence the "self-supervised" name. The promise of this method is that, while learning to solve the auxiliary tasks, for which labels can be easy to generate and do not require human supervision, we can learn the structure of the original problem for which the true labels are unknown. In the present case, we aim to learn the structure of the data in pattern-forming processes and use this knowledge to identify the transitions between different classes of patterns for which we do not have labels. Self-supervised approaches can lead to models achieving performance comparable to traditional supervised training, while reducing reliance on predefined labeled data 31,32. In computer vision, for instance, learning to predict whether two images are modified versions of the same original image allows one to construct a high-quality latent representation of the data, which can subsequently be used for solving an object recognition task 33. With all its advantages, it can be challenging to define an appropriate task for the self-supervised training. In the context of the present work, our target task is the identification and characterization of microstructure transitions when the process parameters controlling the pattern formation vary. We propose that a relevant auxiliary task is to solve the inverse problem of predicting input process parameters from observed microstructures. The theoretical motivation behind this inverse problem stems from the fact that in material systems that experience a topological transition (or critical point), the sensitivity of forming specific patterns varies depending on the input process parameter. This sensitivity can sometimes increase, as would be the case if a given process parameter acts as an order parameter, uniquely characterizing a regime in which a specific pattern exists. Or the sensitivity can decrease, as would be the case for systems that exhibit universality (i.e., the closer the parameter is to its critical value, the less sensitively the order parameter depends on the dynamical details of the system). Through our study, we found the inverse problem to be strongly related to our target task of identifying microstructure transitions. Interestingly, the most relevant signature for identifying microstructure regimes emerged from looking at changes in the neural network's uncertainty in solving the inverse problem. Our approach is akin to, but distinct from, confusion-based techniques 25,35, which attempt to learn a model that directly predicts phase transitions but side-steps the requirement for ground truth labels by training with pseudo-labels.
Figure 1 illustrates the steps of our approach. In the first step (Fig. 1a, top left panel), we performed high-fidelity phase-field simulations of pattern-forming processes in binary microstructures. The intent of this first step is to generate a large and diverse set of microstructural patterns as a function of the process parameters (e.g., phase fraction, phase mobility, deposition rate) and therefore generate a database of transition regimes where the microstructural patterns switch from one class of patterns to another when the process parameters change. As prototypical examples, we chose two pattern-forming processes, namely the spinodal decomposition of a two-phase mixture and the formation of various self-organized microstructural patterns of binary alloys during the physical vapor deposition of thin films. The first problem is chosen due to its simplicity in identifying different pattern-formation regimes as a function of the concentration and mobility of the two phases. The second problem is used to test the generalizability of our approach, as it displays hard-to-discern transitions from one class of patterns to another when the deposition process parameters change. The difficulty in this second problem lies in identifying clear transitions in microstructural patterns, which exhibit seemingly continuous and non-trivial changes of pattern classes when the input process parameters vary. In the second step of our approach, we represent these microstructural patterns in a latent space using a pre-trained convolutional neural network (CNN), namely ResNet-50 v2 34. The goal of this step is to obtain a low-dimensional representation of these patterns (Fig. 1a, right panel) in order to learn the structure of the data. While the ResNet model was trained on image data not similar to our simulations, it is still able to discern complex shapes and motifs, which allows different microstructural patterns to be distinguished. Such performance shows the robustness of this step. As described in the Supplementary Information (Supplementary Notes 1 and 2), we can obtain similar results using different pre-trained models. In this work, we settled on ResNet-50 v2 as a prototypical CNN model due to its wide accessibility and popularity. The latent dimensions obtained from the ResNet model are subsequently regressed with a dense, feedforward, deep neural network to predict the original input process parameters that led to specific microstructural patterns. This last step (Fig. 1a, bottom left panel) can be seen as solving our inverse problem (i.e., predicting the initial input process parameters from the output microstructural pattern). The error between the prediction and the ground-truth process parameters is then compared on a validation set. Transitions between specific classes of microstructural patterns are identified by evaluating the sensitivity of the model prediction error (Fig. 1b) as a function of the input process parameters.
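A minimal Keras sketch of the latent-embedding-plus-regression pipeline described above, assuming 256×256 grayscale microstructures tiled to 3 channels; all layer sizes, training settings, and the toy data are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np
import tensorflow as tf

# Pre-trained ResNet-50 v2 as a frozen feature extractor -> 2048-dim latent vectors
backbone = tf.keras.applications.ResNet50V2(include_top=False, weights="imagenet",
                                            pooling="avg", input_shape=(256, 256, 3))
backbone.trainable = False

# Dense feedforward regressor from latent space to process parameters (e.g., f and mobility)
regressor = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2048,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2),            # number of process parameters to recover
])
regressor.compile(optimizer="adam", loss="mae")

# Toy stand-in data: microstructure images and their generating parameters
images = np.random.rand(32, 256, 256, 1).astype("float32")
images = np.repeat(images, 3, axis=-1)   # tile grayscale to 3 channels
params = np.random.rand(32, 2).astype("float32")

latents = backbone.predict(tf.keras.applications.resnet_v2.preprocess_input(255 * images))
regressor.fit(latents, params, epochs=2, batch_size=8, verbose=0)
print("MAE on toy data:", regressor.evaluate(latents, params, verbose=0))
```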
RESULTS
Ambiguity of identifying microstructural transitions. We first explore the ambiguity of fingerprinting nontrivial transitions in pattern formation when those patterns are represented in latent space. We compare different dimensionality reduction techniques to represent the broad range of pattern configurations. Details on the phase-field models and latent embedding approach are provided in Methods.
Figure 2 illustrates how difficult and equivocal it is to identify transitions between different patterns as a function of the input process parameters. When taking the naive approach of performing principal component analysis 36 (PCA) directly on the original microstructure images for both problems (see panels a and b), we note that this projection technique is incapable of identifying or explaining transition regimes via distinct clusters. This assertion is especially true when trying to distinguish the different microstructural patterns forming in the case of the physical vapor deposition model when the principal components are colored as a function of the normalized deposition rate v_N (defined as the deposition rate normalized by the average bulk phase mobility of the system; see the exact definition in Methods).
To further illustrate this point, we present the results for two additional alternative projection techniques for both exemplar problems in Fig. 2, panels c-f. In this case, even though the pre-trained ResNet-50 v2 34 model accurately learns a compact representation of the microstructural patterns (i.e., from a 256 × 256 pixelated microstructure representation to a 2048-dimensional latent vector), the dimension of the latent space is still too large to obtain any visual and interpretable classification of the different types of patterns forming in the two processes investigated in this study. We used linear (PCA) and non-linear (uniform manifold approximation and projection 37,38, or UMAP) unsupervised embedding algorithms to further reduce the ResNet latent vectors into lower dimensions. Figure 3 shows the difference in explained variance when PCA is performed (i) directly on the actual high-dimensional microstructure images (denoted as ambient space) or (ii) on the already compact representation of the microstructures obtained from the ResNet model (denoted as latent space). We note that combining PCA and ResNet requires fewer principal components to capture most of the variance in our datasets, making the dimensionality reduction procedure (PCA/UMAP+ResNet) appealing and useful.
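A minimal scikit-learn sketch of the ambient-space vs. latent-space explained-variance comparison shown in Fig. 3; the random arrays are toy stand-ins for the microstructure images and the ResNet latent vectors.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
ambient = rng.standard_normal((500, 256 * 256))   # flattened microstructure images (toy)
latent = rng.standard_normal((500, 2048))          # ResNet-50 v2 feature vectors (toy)

for name, X in [("ambient", ambient), ("latent", latent)]:
    pca = PCA(n_components=50).fit(X)
    cum = np.cumsum(pca.explained_variance_ratio_)  # cumulative explained variance
    print(f"{name}: {cum[:5].round(3)} ... {cum[-1]:.3f} after 50 components")
```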
FIG. 2. a-b These panels show the projection using a linear embedding technique (PCA) directly on the microstructure images (defined as dimensionality reduction on the ambient space). c-f The second and third rows show the projection of the ResNet-50 v2 latent vectors using PCA (panels c and d) and UMAP (panels e and f). Latent vectors are colored according to relevant input process parameters for both problems, namely the phase fraction (f) for spinodal decomposition and the normalized deposition rate (v_N) for physical vapor deposition. The definition of the normalized deposition rate is provided in Methods. For the microstructure insets across both the spinodal and physical vapor deposition problems, yellow denotes the A phase and purple the B phase.

FIG. 3. Cumulative explained variance versus ranked principal components when using PCA directly on the actual microstructure images (ambient space) or on the latent representation (latent space). a Explained variance for the spinodal decomposition problem. b Explained variance for the physical vapor deposition problem. Ambient space denotes dimensionality reduction when PCA is performed directly on the high-dimensional microstructure images, while latent space denotes the projection when PCA is performed on the already compact representation of the microstructures obtained from the ResNet model.

Unfortunately, even with this improvement in the latent representation of the microstructural patterns, regardless of the embedding technique used in combination with ResNet, we observe that there is no intuitive, distinct clustering of data points indicating where exactly the transitions are. This statement holds even in the simple case of the spinodal decomposition problem (Fig. 2, panels c and e), where the transition is expected to occur at a 50/50 phase fraction. When colored as a function of the phase fraction f, the latent representation of the microstructural patterns continuously changes from one structure type to another (i.e., from 'B'-rich spherical-precipitate patterns when f = 0.3 to 'A'-rich spherical-precipitate patterns when f = 0.7). This latent representation does not clearly disentangle the different classes of patterns and cannot provide a priori any information on where the transition occurs when the phase fraction changes. The identification of topological transitions is doubly more difficult in the case of the physical vapor deposition problem (Fig. 2, panels d and f). In this case, neither the PCA nor the UMAP embedding shows any obvious way to distinguish transitions between vertically oriented and horizontally oriented patterns, aside from noticing that nearby points have similar microstructural patterns (as PCA and UMAP are designed to preserve similarities in feature space). Here, in both examples, we used PCA/UMAP and the ResNet model for illustrative purposes only. The results in Fig. 2 demonstrate that identifying dynamic and second-order transitions using classical dimensionality reduction techniques does not enable us to unequivocally separate different pattern-formation regimes in latent space. To further illustrate this point, as described in the Supplementary Information (Supplementary Note 1), we tested other projection methods (a simple convolutional neural network, ResNet-34, t-SNE) to show that the results are not specific to the particular choice of pre-trained model or dimensionality reduction technique. These supplementary results show that we can obtain similar results using different pre-trained models.
Solving the inverse mapping problem to identify transition regimes. Because of this apparent breakdown of automatic clustering and classification of microstructural patterns for the examples shown in Fig. 2, and inspired by the universality principle for dynamical systems (i.e., near a phase transition the properties of a system become nearly independent of some dynamical details of that system), we instead develop a method that aims at measuring the difficulty of solving an inverse problem: mapping the microstructural patterns back to the original input process parameters. If this task is accurate, then the input process parameters map uniquely to a given microstructural pattern. Conversely, if this mapping is difficult or ambiguous, then a broad range of microstructural patterns can be obtained with similar input process parameters, possibly indicating a topological transition (similar to the changes in sensitivity to some parameters when reaching the critical universality classes in second-order phase transitions 39,40). This inverse-mapping solution is outlined in Fig. 1. Details on the inverse-mapping workflow are provided in Methods. Details on the quality of the training of the neural network are also provided in Methods and in the Supplementary Information, Supplementary Note 5.
We first illustrate how this workflow works, intuitively, for the spinodal decomposition problem because, in this simple case, symmetry considerations of the free-energy functional lead us to expect the transition to occur at the 50% phase fraction. Examples of predictions of input process parameters with high and low errors are listed in Table I.
TABLE I. Prediction of mobility parameters in the case of the spinodal decomposition problem: examples of instances with high and low sensitivity scores. For the microstructure insets, yellow denotes the A phase and purple the B phase. The error ∆_i is defined as ∆_i = |ŷ − y|, where ŷ is the prediction and y the true value, and the sensitivity score is defined as S_i = ∆_i^(−1). Instances 1042, 2106, and 4157 belong to regime A, instances 2868, 4496, and 4545 to regime B, and instances 814, 1543, and 4751 to regime C, as identified in Fig. 4b. The high sensitivity scores (in bold) correspond to different input process parameters depending on the regime of importance. Each regime is separated by horizontal lines.
We present results for the full range of parameters in Fig. 4. To obtain those results, we calculated the average mean absolute error (MAE) of our predictions in intervals (f − 0.025, f + 0.025), where f denotes the phase fraction. The sensitivity score per instance i is defined as the inverse of the MAE for each value of phase fraction, S_i = ∆_i^(−1), with ∆_i = |ŷ − y|, where ŷ is the prediction and y the true value. The mean sensitivity score of the entire dataset in a given interval is defined as the average of the per-instance scores over the N instances falling in that interval, S̄ = (1/N) Σ_i S_i. A high sensitivity score indicates that we are able to predict the input process parameters accurately, and this parameter acts as an indicator for a given class of microstructural pattern. Conversely, a low sensitivity score indicates that the relation between the initial value of the process parameter and the microstructural pattern is weak, and that this process parameter has little influence on the formation of that specific pattern. When the sensitivity score changes from high to low, or vice versa, this indicates that we are near a transition with respect to that input process parameter. In other words, the task of evaluating the change in sensitivity score is analogous to a mutual information analysis 41 from the field of information theory, which consists of evaluating the information gain attributable to feature selection. In this context, the underlying assumption is that, for a given set of process parameters, the microstructural pattern formation is a stochastic process, and each microstructure is one realization of this stochastic process. Mutual information here is calculated between the microstructural patterns and the process parameters and measures the change in uncertainty about the process parameters given a known observed microstructure.
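The per-interval sensitivity score defined above can be computed with a few lines of numpy; the variable names (`f`, `y_true`, `y_pred`) and the grid of interval centers are illustrative assumptions, while the interval half-width of 0.025 follows the text.

```python
import numpy as np

def mean_sensitivity(f, y_true, y_pred, centers, half_width=0.025):
    """Mean sensitivity score per phase-fraction interval (c - hw, c + hw).

    S_i = 1 / |y_pred_i - y_true_i| per instance; the mean score is the
    average of S_i over the instances falling in each interval.
    """
    scores = []
    for c in centers:
        mask = (f > c - half_width) & (f < c + half_width)
        err = np.abs(y_pred[mask] - y_true[mask])              # per-instance error Delta_i
        scores.append(np.mean(1.0 / np.maximum(err, 1e-12)))   # mean S over this interval
    return np.array(scores)

centers = np.arange(0.30, 0.701, 0.05)   # assumed grid of phase fractions
# S_bar = mean_sensitivity(f, mobility_A_true, mobility_A_pred, centers)
```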
We observe from the results listed in Table I that for high phase fractions (i.e., when A is the majority phase, f > 0.5) the outcome is highly sensitive to the choice of initial value of the phase A mobility. Conversely, the outcome does not change in any meaningful way if we vary the value of the phase B mobility (as indicated by low values of the sensitivity score calculated with respect to the phase B mobility). An interesting observation is that, looking at the highest sensitivity scores, we can distinguish three groups of patterns. Instances 1042, 2106, and 4157 display high sensitivity scores for the phase B mobility. Similarly, instances 814, 1543, and 4753 display high sensitivity scores for the phase A mobility. However, instances 2868, 4496, and 4545 show mixed results, with sometimes high sensitivity scores for the phase A or phase B mobilities. While individual sensitivity scores may already be good predictors of when the topological transition may occur, we observe a large variability in those scores. For example, comparing individual sensitivity scores for similar microstructural patterns in Table I, the sensitivity score varies from 1 (instance 1042) to 14 (instance 4157). Therefore, to detect the transitions we will look at the averaged value of the sensitivity score, as discussed next.
Comparing individual scores to identify microstructural transitions could be deceiving (S_i with respect to the phase A mobility for instance 4157 is larger than S_i with respect to the phase B mobility for instance 2106). We can mitigate this issue by evaluating the mean sensitivity scores on the predicted mobilities. The mean sensitivity scores S̄ on the predicted mobilities for phases A and B are presented in Fig. 4a. Variations (i.e., derivatives) of the sensitivity scores with respect to the phase fraction are presented in Fig. 4b. To see whether or not we can identify the transition regime at the 50% phase fraction, we can first look at the left part of Fig. 4a. First, we would expect a strong correlation between the value of the phase B mobility and the size or shape of the phase A domains when B is the majority phase. This correlation translates into a high value of the sensitivity score with respect to the phase B mobility. Conversely, we would also expect a weak relation between the value of the phase B mobility and the dynamics of the A domains when A is the majority phase; in this case the sensitivity score will be low. Indeed, if we look at the data and plot the number of phase A domains (i.e., the number of small spherical domains) as a function of mobility (see Fig. 4 panels c-d), we see that the relation to the phase B mobility is very strong (especially for low values of the phase fraction), while the relation to the phase A mobility is almost non-existent (for any given value of the phase A mobility, the number of domains can be very different). As such, we note that when predicting the mobilities for phases A and B, respectively, in Fig. 4a, the sensitivity of those predictions switches to a relatively low sensitivity score at the 50/50 phase fraction. This is further evidenced if we look at the variation of the sensitivity score as a function of the phase fraction in Fig. 4b. We observe a transition from (i) a regime with a monomodal microstructure of A-rich spherical precipitates entrapped in a B matrix (denoted as encircled A in Fig. 4b), to (ii) an intermediate microstructural configuration (denoted as encircled B), which consists of interconnected A or B domains, to (iii) another regime (denoted as encircled C) with a monomodal microstructure of B-rich spherical precipitates entrapped in an A matrix. This intermediate regime corresponds to the gradual transition in pattern formation between the two other distinct regimes.
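The variation (derivative) of the mean score used in Fig. 4b can be approximated by finite differences; a short numpy sketch, assuming the `S_bar` and `centers` arrays from the previous snippet:

```python
import numpy as np

# dS/df by central finite differences; large |dS/df| flags candidate transition regions.
dS_df = np.gradient(S_bar, centers)
candidate_transitions = centers[np.argsort(np.abs(dS_df))[::-1][:3]]  # three largest |dS/df|
```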
Generalization to more complex pattern-forming processes: physical vapor deposition of binary alloy thin films. We now turn to the physical vapor deposition problem, which presents multiple transition regimes as the deposition rate and bulk mobility change. At low normalized deposition rates (corresponding in the present case to values of log(v_N) ∼ −4), vertical-oriented microstructural patterns form, while horizontal-oriented patterns form at higher normalized deposition rates.
TABLE II. Prediction of mobility parameters in the case of the physical vapor deposition problem: examples of instances with high and low sensitivity scores. Instances 1844 and 6316 belong to regime A, instances 15812 and 7304 to regime B, instances 1492 and 444 to regime C, and instances 1752 and 3772 to regime D, as identified in Fig. 5c. Each regime is separated by horizontal lines.
We show in Fig. 5 how solving the inverse mapping problem for predicting the deposition rate, |v| (Fig. 5 panels a, c), and the average bulk mobility, M_Bulk (Fig. 5 panels b, d), helps us identify the transition regime(s) between vertical- and horizontal-oriented microstructural patterns as a function of the normalized deposition rate v_N. First, we notice in Fig. 5 panels a and b that when the orientation of the microstructural patterns changes from vertically oriented (denoted as regime A) to horizontally oriented (denoted as regime C), the sensitivity score for the deposition rate gradually changes from high to low, and from low to high for the average bulk mobility. We interpret this as a change in the importance of the input process parameter depending on the resulting microstructural pattern considered (horizontal vs. vertical). The piecewise linear fits in Fig. 5 panels a and b provide a visual guide as to when a transition occurs, while the thin blue lines show individual instances and the resulting variability around the averaged sensitivity scores. While the sensitivity scores on the deposition rate and average bulk mobility enable us to detect the major change in the overall orientation of the microstructural patterns, the variations in the level of the sensitivity scores (cf. Fig. 5c) reveal much more subtle changes. Our self-supervised approach detects a second transition and the existence of an intermediate regime (denoted by encircled B) that corresponds to a class of microstructures that are neither vertical nor horizontal. We also note a third transition at log v_N ∼ −1.5, which corresponds to a horizontal-to-multimodal (or horizontal-to-random) transition. When we compare the variation of the sensitivity scores between the two input process parameters in Fig. 5 panels c and d, we confirm the identification of those four regimes. The marginal differences in the normalized deposition rates at which the transitions occur illustrate the relative importance of the specific predicted input process parameters in those regimes. Second, our results reveal the "range" of each transition and the existence of a specific pattern regime. Indeed, we observe, for instance, that the horizontal-oriented pattern regime only exists for a specific range of normalized deposition rates, namely log v_N ∈ [−2.5, −1.5].
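The piecewise linear fits used as a visual guide in Fig. 5 panels a and b can be reproduced with a simple two-segment model; the sketch below uses scipy.optimize.curve_fit with a single assumed breakpoint and is illustrative only, not the authors' fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_segment(x, x0, y0, k1, k2):
    """Continuous piecewise-linear model with one breakpoint at x0."""
    return np.where(x < x0, y0 + k1 * (x - x0), y0 + k2 * (x - x0))

# x = log10 normalized deposition rate, y = mean sensitivity score (assumed arrays).
# The initial guess places the breakpoint at the median of the sampled range.
# popt, _ = curve_fit(two_segment, x, y, p0=[np.median(x), np.mean(y), 0.0, 0.0])
# estimated_transition = popt[0]
```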
When plotting the number of horizontal- and vertical-oriented microstructural domains as a function of the normalized deposition rate in Fig. 5e, we confirm that the microstructure topology indeed switches from one type of pattern to the other at the transitions detected via our self-supervised approach. When plotting these regimes as a function of the connectivity of the pattern in Fig. 5f, we note that the first transition (from A to B) can be associated with a transition from ordered vertical-oriented patterns to disjointed, more complex vertical patterns. The second transition (from B to C) then corresponds to a clear change from disjointed vertical patterns to horizontal layer patterns. The last transition (from C to D) corresponds to a transition from a monomodal horizontal microstructure to a multimodal patterned structure. These results contrast with those obtained via a human-annotated method to map two-phase patterned morphologies 8 for a similar problem, although it is unclear whether this previous work fully captures the nuanced morphology classes.
DISCUSSION
We have demonstrated above that solving the inverse problem of mapping microstructural patterns back to the corresponding input process parameters is a powerful approach to infer and characterize transitions in pattern-forming processes, without any a priori knowledge of where these transitions occur in the input space. This approach therefore allows microstructures to be classified objectively rather than relying on human perception and predefined labelling. In contrast to using clustering as a means to identify transitions, this approach relies on the ability to relate transitions in microstructural patterns to changes in the error of solving the inverse problem of mapping microstructure to input process parameters. This self-supervised approach relaxes the assumption in confusion-based techniques 25, which require two distinctive phases to exist in the considered transition regime. Instead, if there are multiple, intermediate, or hard-to-discern transitions present in the data (including transitions not characterized as first- or second-order transitions, cf. Fig. 5), we can still detect and identify where these transitions occur and establish a hierarchy of transitions via our approach. As seen in the physical vapor deposition example, transitions can happen at different levels of complexity (i.e., monomodal vs. multimodal patterns, as seen in panels c and d of Fig. 5). Therefore, we can use this approach to provide a hierarchy of transitions from the most pronounced (such as pattern orientation) to the more subtle (such as complexity). The CNN model used in this work can detect both simple geometric features (such as the orientation and width of the domains, the size of the subdomains, etc.) as well as more complex patterns (bifurcation of different subdomains). This ability is possible because of the way information is organized in large CNN models, such as the ResNet model used in the present work. Even if the model is initially trained on images from different domains (animals, plants, and various man-made objects such as cars and buildings), the neural network learns to recognize basic patterns such as the orientation of lines, the curvature of edges, etc. Those simple patterns are encoded in the lower part of the network and are easily transferable between domains. This remarkable property of CNNs to generalize 42 from one visual domain to another allowed us to efficiently apply a model that was pre-trained on the ImageNet database to our pattern-forming process datasets. One possible improvement to provide even more granularity and generality in detecting and categorizing transition sub-regimes within these pattern-forming processes would be to look at cross-correlations between sensitivity scores, or to consider the sensitivity scores directly as a multidimensional response surface over the input process parameter space. Local and global minima within this response could then be identified as the different transitions.
In this study, we passed microstructure images to an existing, pre-trained ResNet model. We did not have to fine-tune the ResNet model; we kept this model as is to show how well the framework generalizes from one domain to another (in other words, how robust the framework is to the choice of the raw-data embedding method). To further support this claim, we trained several dedicated models tasked with extracting important features from the raw input data, as detailed in the Supplementary Information, Supplementary Notes 1 and 2. We show that all those different embedding methods produce very close results (see Supplementary Figs. 1 and 2). This means that the methodology presented here is robust to the particular choice of the embedding method used to predict topological transitions. While we used ResNet-50 v2 to demonstrate that an off-the-shelf method works well here, other models could have been used instead, as discussed in the Supplementary Information. Indeed, there is a variety of other models (such as ResNet-152 34, VGG-16 43, AlexNet 44, and EfficientNetV2 45, to name a few) that have been trained on the same ImageNet dataset 46 as the ResNet model used in this study. These models may increase the sensitivity and accuracy of our results by enabling further clustering and disentangling of complex datasets in latent space. As an alternative to neural network models, statistical and spectral-density functions 47-50 could be used, for example, as cost-effective, representative approaches to characterize microstructural patterns via a finite set of characteristic functions and/or features. As pointed out by Aggarwal et al. 51, besides the improvements on these latent embedding approaches, the choice of a particular distance metric may significantly improve the results of these standard algorithms. If the (microstructure) data lie on some smooth manifold and we are interested in preserving the local geometry of the microstructure phase-field data on that manifold, then using different (i.e., non-Euclidean) distance metrics when applying ResNet + PCA/UMAP to our data can shed alternative light on the same data and potentially further improve our classification of the hierarchy of transition regimes.
In addition to the self-taught labelling of topological transitions, and compared to other methods that focus solely on detecting transitions, our self-supervised approach can also provide additional insights into the role of the different input process parameters in the process-microstructure linkage. By interpreting changes in sensitivity scores, we reveal the role of different input process parameters in controlling different pattern-formation regimes. This includes which process parameters are responsible for the largest portion of the variation in microstructural patterns, and therefore which parameter(s) trigger specific classes of microstructural patterns. For instance, our results reveal that the bulk mobility plays a role in controlling the complexity of the patterns seen in the physical vapor deposition problem. Such linkage is important in the context of unraveling the mechanisms needed for fabricating multi-scale, microstructurally precise materials.
In the results presented above, we show that the prediction error is useful not only for determining pattern-class transitions, but also for sensitivity analysis. Namely, when the microstructure morphology changes slowly as a function of a given input process parameter, there will be a high error when mapping the final microstructural pattern back to that process parameter. In contrast, when the morphology changes significantly, there is only a narrow range of that process parameter which yields a particular microstructural pattern, and therefore the error will be low. This observation provides insight into which process parameters need low or high tolerances. In addition, some input process parameters may yield a class of microstructures consistently, while others may be affected by stochasticity in their initial values (see the variability in sensitivity scores in Fig. 5a, for instance). This consistency can be inferred from the error when mapping control parameters to the microstructure morphology; in other words, much like with a typical surrogate model, one can find the final microstructural pattern from the initial conditions, and the error in this task will give insight into which input process parameters consistently, or inconsistently, yield a given microstructure of interest.
Going forward, we note that our framework could be extended to other, more complex pattern-forming processes. In the current work, we assessed topological transitions in binary phase systems, but the same methods could also be applied to, for example, complex multi-component ternary or quaternary phase-separating processes 52,53. Further, our self-supervised-learning model did not have to be built exclusively on simulation-based data. We chose to illustrate our concept via simulation to establish comprehensive datasets to serve as ground truths for model evaluation. However, experimental microscopy data could have been used and combined with our simulation data to explore nuanced effects 54,55 in actual synthesis processes. Another direction would be to increase the efficiency of our approach. As discussed above, interpreting individual sensitivity scores can be misleading; instead, one must average the individual scores on a previously prepared validation set. However, it might be possible to directly measure the uncertainty of the network in solving the inverse problem, for example using concrete dropout 56. This would allow us to measure the score on an individual sample, eliminating the need to average over a large number of samples. Although the approach used in this study could be improved to enhance both sensitivity and accuracy, this work is a step towards automatically detecting critical transitions in evolving systems and opens new ways of identifying, discovering, and understanding unseen or hard-to-discern transition regimes in these systems.
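As a rough illustration of this per-sample uncertainty idea, the sketch below uses plain Monte Carlo dropout as a simpler stand-in for concrete dropout; the model and input names are assumptions, and this is not the procedure used in the paper.

```python
import numpy as np
import tensorflow as tf

def mc_dropout_predict(model, x, n_samples=50):
    """Per-sample predictive mean and standard deviation with dropout kept active.

    Repeated stochastic forward passes approximate the network's uncertainty
    about each individual input; a high spread flags inputs whose process
    parameters are hard to recover (possible transition regions).
    """
    preds = np.stack([model(x, training=True).numpy() for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)

# mean, std = mc_dropout_predict(inverse_model, latent_batch)
```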
METHODS
Spinodal decomposition of a two-phase mixture. Spinodal decomposition of a two-phase mixture is one of the oldest and simplest phase-field problems 57. This model describes the diffusion of solutes within a matrix and serves as the basis for a large number of phase-field models. In this model, a single compositional order parameter c(x, t) is used to describe the atomic fraction of solute in space (x) and time (t).
The evolution of c is described via the Cahn-Hilliard equation 57,
∂c/∂t = ∇·{ M(c) ∇[ ω_c (c³ − c) − κ_c ∇²c ] },
where ω_c is the energy-barrier height between the equilibrium phases and κ_c is the archetypal curvature-energy penalty term. The concentration-dependent Cahn-Hilliard mobility M(c) is taken to be a smooth interpolation between the mobilities of the two phases, M_A and M_B (the switching function and parameter values are given below).
Physical vapor deposition of binary-alloy thin films. The phase-field model used to simulate physical vapor deposition (PVD) is described in detail elsewhere 17. This model uses conserved order parameters, φ and c, to describe the structural and compositional ordering of the system as the thin film grows. The field variable φ distinguishes between the vapor and solid phases and is used to track the growing thin film. The compositional field variable c distinguishes between the two phases of the growing binary alloy thin film. The composition evolution is described via the Cahn-Hilliard equation,
∂c/∂t = ∇·[ M_c ∇(δF/δc) ],
where M_c is the structurally and compositionally dependent mobility function and F is the total free energy of the system. The mobility functional M_c is constructed to describe both bulk and surface mobilities for each phase, M_i^Bulk and M_i^Surf respectively. The complete expression for F is the same as in Stewart and Dingreville 17 without the elasticity contributions. The expression for M_c is provided in Eqs. (8) and (9) of Stewart and Dingreville 17. To describe thin-film growth, a source term is incorporated in the Cahn-Hilliard equation, such that the evolution of the vapor-solid field variable, φ, is given by
∂φ/∂t = ∇·[ M_φ ∇(δF/δφ) ] + S(ρ, v),
where M_φ is the Cahn-Hilliard mobility. The second term, S(ρ, v), couples the thin-film evolution to the incident vapor flux (via a vapor density field ρ and deposition rate v) and acts as the source term for interfacial growth and surface roughening. We defined a normalized deposition rate v_N (whose definition involves the deposition angle α), which is meant to account for the competition between film growth and phase separation in the growing film. In this problem, the choice of the initial concentrations of phases A and B (c_A and c_B), the choice of bulk and surface mobilities for each phase (M_A^i and M_B^i, with i being either Bulk or Surf), and the deposition rate v constitute the input process parameter space and dictate the resulting microstructures.
Data preparation. These two phase-field models have been implemented in Sandia's in-house multi-physics phase-field modeling code, the Mesoscale Multiphysics Phase-Field Simulator 17,58 (MEMPHIS). Examples of simulation results for these two models are provided in the Supplementary Information, Supplementary Note 3.
The two-dimensional (2D), spatio-temporal evolution of the microstructural patterns (as captured through the compositional field variable c in both models) was numerically solved using a finite-difference scheme with second-order central-difference stencils for all spatial derivatives. Numerical temporal integration of the equations was performed using the explicit Euler method. All 2D simulations were performed on a uniform numerical grid of 512 × 512 for the spinodal decomposition model and 256 × 256 for the PVD model, with a spatial discretization of ∆x = ∆y = 1 and a temporal discretization of ∆t = 10⁻⁴ for the spinodal decomposition model and a variable ∆t ∈ [10⁻³, 10⁻²] for the PVD model. The composition field within the simulation domain was initially randomly populated by sampling a truncated Gaussian distribution between −1 and 1 with a standard deviation of 0.35. Each spinodal decomposition simulation was run for 50,000,000 time steps with periodic boundary conditions applied to all sides of the domain. The run time for the PVD simulations varied depending on the deposition rate so as to achieve a given film height.
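For orientation, a toy version of the explicit Euler, central-difference update described above is sketched below for the spinodal decomposition model; it assumes a constant mobility and the standard double-well free energy with minima at c = ±1, and is not the MEMPHIS implementation.

```python
import numpy as np

def laplacian(u):
    """Second-order central-difference Laplacian with periodic boundaries (dx = dy = 1)."""
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)

def cahn_hilliard_step(c, dt=1e-4, M=1.0, omega_c=1.0, kappa_c=1.0):
    """One explicit Euler step of dc/dt = div(M grad(mu)), mu = omega_c(c^3 - c) - kappa_c lap(c)."""
    mu = omega_c * (c**3 - c) - kappa_c * laplacian(c)
    return c + dt * M * laplacian(mu)

rng = np.random.default_rng(0)
c = np.clip(0.35 * rng.standard_normal((512, 512)), -1.0, 1.0)  # truncated-Gaussian-like start
for _ in range(1000):
    c = cahn_hilliard_step(c)
```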
We used a Latin Hypercube Sampling (LHS) approach to generate our training and testing datasets. For the spinodal decomposition model, we varied the phase concentrations and the phase mobilities, which resulted in a total of 4998 simulations; 80% of these (3998 instances) were used to train the model and the remaining 20% (1000 instances) were used for testing. For the PVD model, we selected the final microstructure pattern at the time step that corresponded to a film height of 224 pixels, resulting in a 224 × 224 image. For this model, our dataset had 11775 simulations, of which we used 60% for training and 40% for testing.
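A sketch of the Latin Hypercube Sampling step with scipy's quasi-Monte Carlo module; the parameter ranges follow those quoted later in the Methods, while the log-uniform scaling of the mobilities and the seed are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=0)    # c_A, M_A, M_B for the spinodal problem
u = sampler.random(n=4998)                   # LHS samples in the unit cube

c_A = 0.15 + u[:, 0] * (0.85 - 0.15)         # phase-A concentration in [0.15, 0.85]
M_A = 10.0 ** (-2.0 + u[:, 1] * 4.0)         # mobilities spanning [0.01, 100] (log-uniform assumed)
M_B = 10.0 ** (-2.0 + u[:, 2] * 4.0)
c_B = 1.0 - c_A
```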
Selection of neural network architecture. The outcome of each simulation is determined by both the initial condition (e.g., the value of a random seed) and the values of the process parameters (phase mobility, deposition rate in the case of PVD, etc.). While the local details of microstructural patterns depend on the specific initial conditions, the overall characteristics of those patterns are determined mostly by the values of the input process parameters. When solving the inverse problem, we wish to measure and gauge the role of the process parameters, not of the random initialization of the simulations. We achieved this by first transforming the input using a CNN. This type of network is equivariant to translations of the input and therefore insensitive to relative shifts of the overall characteristics of the microstructural patterns that could be incurred by random initialization of the simulations. We can use CNNs to extract features from the raw data that are related to the overall shape of the patterns, not to their specific positions. See an extended discussion of this topic in the Supplementary Information, Supplementary Note 3.
While there are many different possible convolutional architectures, one natural choice is to use a CNN from the ResNet family 59. ResNet networks are built with what are known as residual blocks. CNNs are networks with convolution layers, in which a small patch of an image is scanned and features of that small patch, such as colors and shades, are fed through a node. There are multiple ways to weigh the features in a patch, which are known as filters. Residual blocks take features from earlier convolution layers and feed them wholesale into a layer that can be many levels away from the initial layer. This allows features not to get "lost in translation" when moving from one layer to the next. Such a network can therefore achieve respectable predictive accuracy with comparatively more layers, thus capturing long-range and non-linear correlations between far-away parts of an image, or, in this case, of a microstructure evolution simulation. In effect, ResNet models can take advantage of deeper architectures while reducing the impact of the infamous vanishing gradient problem 60. We further elaborate on this topic in the Supplementary Information, Supplementary Note 4.
Latent embedding. We embedded microstructural pattern images using ResNet-50 v2 34, a popular model that has been used to automatically classify a variety of images. ResNet-50 v2 is a fifty-layer-deep CNN. In particular, the ResNet-50 v2 model used in this work was trained on images from the ImageNet 46 database but not on the phase-field data. Training of the ResNet-50 v2 model was first done by He et al. 34. The model trained on ImageNet learns to extract a hierarchy of features, from simple geometric shapes (vertical, horizontal, or skew lines, shapes of different sizes, lines of different curvatures, etc.) to more complex, composed patterns. Images present in the ImageNet database depict 1000 classes, such as different types of animals, everyday objects, and various building structures. While most of the training examples are colorful (3-channel) images, there is also a small proportion (about 2%) of monochromatic images. To make the phase-field simulation data compatible with our model, we copied each image three times (creating 3 identical pseudo-color channels). While this creates some redundancy (the network might detect "red", "green", and "blue" edges of the same orientation), it did not affect the ability to produce a consistent embedding for our data. Examples and details of the data used for training the ResNet-50 v2 model are presented in the Supplementary Information, Supplementary Note 4. We took all but the classification layer of this pre-trained model to embed simulation images into a 2048-dimensional latent feature space. This feature space is small enough to reduce the feature dimension from 65536 (256 × 256 pixels, where we treat each pixel as a feature) to something far more manageable (about 3% of the original), yet large enough to capture nuances in microstructural patterns arising from subtle variations in control parameters or initial conditions. Parameter predictions are then made by applying a few dense, all-to-all-connected neural network layers to this set of latent features.
Inverse mapping workflow. Spinodal decomposition:
The original simulations were saved as 4998 × 101 × 512 × 512 (simulation number, time frame, X position, Y position) dimensional numpy arrays, with float16 numbers that range from around −1 to +1. We reduced the dimensions of each frame from 512 × 512 to 256 × 256 (replacing every block of 2 × 2 pixels by its mean value). From the 4998 original simulations, we randomly selected 1000 (20%) to form a validation set and used the remaining 3998 simulations to construct the training set. The input images are expected to have color values in the range [0, 1]; the expected size of the input images is 224 × 224 pixels by default, but other input sizes are possible. Because the ResNet-50 v2 model requires a 3-channel input, we simply tripled our output (each channel contains the same picture). The inverse problem was solved by training a simple feed-forward neural network with two hidden layers (with 512 and 1024 neurons, respectively). The output of the network was a dense layer with two neurons with linear activation: one neuron to predict the mobility of phase A, and another to predict the mobility of phase B. We used dropout layers and ReLU activation functions; schematically, the network architecture was Dense(512) → ReLU → Dropout → Dense(1024) → ReLU → Dropout → Dense(2, linear). We trained the model to predict the correct values of mobility with the mean squared error (MSE) loss function, and applied early stopping with a patience of 4 and a tolerance of 0.0001.
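A minimal Keras sketch of the feed-forward head just described (two hidden layers of 512 and 1024 units, dropout, ReLU, two linear outputs, MSE loss, early stopping with patience 4); the dropout rate and the optimizer are not stated in the text and are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, callbacks

model = tf.keras.Sequential([
    tf.keras.Input(shape=(2048,)),         # ResNet-50 v2 latent vector
    layers.Dense(512, activation="relu"),
    layers.Dropout(0.2),                   # dropout rate assumed
    layers.Dense(1024, activation="relu"),
    layers.Dropout(0.2),
    layers.Dense(2, activation="linear"),  # predicts mobility of phase A and phase B
])
model.compile(optimizer="adam", loss="mse")  # optimizer assumed

early_stop = callbacks.EarlyStopping(patience=4, min_delta=1e-4, restore_best_weights=True)
# model.fit(latent_train, mobilities_train, validation_data=(latent_val, mobilities_val),
#           epochs=200, callbacks=[early_stop])
```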
Physical vapor deposition:
The original simulations were saved as 11775 × 51 × 256 × 256 (simulation number, time frame, X position, Y position) dimensional numpy arrays. For each simulation, we selected the time step that corresponds to a thin-film height of 224 pixels, to avoid capturing any surface-roughness effects. We rejected simulations whose film heights were smaller than 224 pixels (these corresponded to extremely slow deposition rates; only 5 simulations were rejected in the process). We cut four 224 × 224 squares from each original frame, translating the cut origin by 64 pixels each time (applying horizontal periodic boundary conditions). This gave us 11770 × 4 = 47080 individual data points. We truncated values larger than 1 and smaller than −1, and then scaled everything to the range from 0 to 1 (we applied the transformation x → (x + 1)/2). To speed up the next step, we reduced the dimension of each square by a factor of two, from 224 × 224 to 112 × 112 pixels. We used the pre-trained ResNet-50 v2 model to produce 2048-dimensional latent embeddings of each square (to do this, we emulated the 3 color channels by copying the one-channel input three times, just as we did for spinodal decomposition). Since the original simulations were repeated 5 times for each unique set of parameters, the 47080 data points correspond to 2354 unique sets of initial parameters. We reserved 40% of the data for the validation set (18840 input examples, corresponding to 942 unique sets of initial parameters); the remaining 60% was used for training (28240 input examples, corresponding to 1412 unique sets of initial parameters). The inverse problem was solved by training a simple feed-forward neural network with two hidden layers (similar to the architecture used for spinodal decomposition), namely Dense(512) → ReLU → Dropout → Dense(1024) → ReLU → Dropout → Dense(1, linear). The only modification with respect to the previous setup was that we used only a single output neuron. The motivation for this was that the predicted parameters have different physical units; when predicting multiple values at once, the MSE loss function would be sensitive to the particular choice of units for each parameter. To address this issue, we trained a separate network for each control parameter, hence the single output unit in the architecture above. We trained the network with early stopping, with a patience of 4 and a tolerance of 0.001.
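The four horizontally shifted 224 × 224 crops per frame can be generated with np.roll, exploiting the horizontal periodic boundary conditions; a sketch assuming each frame has already been trimmed to the 224-pixel film height, with assumed function and variable names.

```python
import numpy as np

def periodic_crops(frame, size=224, shift=64):
    """Four horizontally shifted square crops from one PVD frame.

    `frame` is (224, 256): film height already trimmed to 224 pixels,
    width 256 with periodic boundaries in the horizontal direction.
    """
    crops = []
    for k in range(4):
        rolled = np.roll(frame, -k * shift, axis=1)   # wrap around the periodic width
        crops.append(rolled[:, :size])
    return np.stack(crops)                            # (4, 224, 224)

def rescale(x):
    """Clip to [-1, 1] and map to [0, 1], as described in the text."""
    return (np.clip(x, -1.0, 1.0) + 1.0) / 2.0
```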
Model training setup for the inverse problem. What matters when solving the inverse problem is the relative performance. This situation is similar to that described in the "Learning phase transitions by confusion" paper by van Nieuwenburg et al. 25. In that work, the proposition was to use a small feed-forward neural network with one hidden layer and 80 neurons. There are two regimes in which neural networks generalize well: one is when the number of training examples is larger than the number of trainable parameters; the second is when the number of trainable parameters is much larger than the number of training examples (the overparameterized regime) 61. Keeping in mind that in practical applications the number of training examples can be limited, we decided to explore the
FIG. 2. Latent representations of various microstructural patterns. Spinodal decomposition of a two-phase mixture representations are displayed in the left panels, physical vapor deposition of a binary alloy thin film representations in the right panels. a-b These panels show the projection using a linear embedding technique (PCA) directly on the microstructure images (defined as dimensionality reduction on the ambient space). c-f The second and third rows show the projection of the ResNet-50 v2 latent vector using PCA (panels c and d) and UMAP (panels e and f). Latent vectors are colored according to relevant input process parameters for both problems, namely the phase fraction (f) for spinodal decomposition and the normalized deposition rate (v_N) for physical vapor deposition. The definition of the normalized deposition rate is provided in Methods. For the microstructure insets across both spinodal and physical vapor deposition problems, yellow denotes the A phase, and purple the B phase.
FIG. 4. Predicting the process parameters for spinodal decomposition of a two-phase mixture. a The mean sensitivity score for predicting mobilities as a function of phase fraction. When changing the fraction from low to high, the dominant role of the phase mobility switches from B to A. The dashed line indicates the expected transition from an 'A'-rich patterns regime to a 'B'-rich patterns regime. Insets depict example simulation realizations for particular values of the phase fraction order parameter. For the microstructure insets, yellow denotes phase A, and purple phase B. b Variation (derivative) of the sensitivity score as a function of phase fraction. c-d Scatterplots of the variability in the number of phase A domains as a function of the phase A and B mobilities, respectively.
FIG. 5. Identifying microstructural topological transitions for the physical vapor deposition of thin films. a Sensitivity score for predicting the deposition rate (|v|) as a function of the normalized deposition rate. b Sensitivity score for predicting the bulk mobility (M_Bulk) as a function of the normalized deposition rate. c Variation (derivative) of the sensitivity score for the deposition rate as a function of the normalized deposition rate. d Variation (derivative) of the sensitivity score for predicting the bulk mobility as a function of the normalized deposition rate. e Number of horizontal and vertical phase domains as a function of the normalized deposition rate. f Topology connectivity (defined as the number of irregular elements of the morphology, e.g., kinks, terminated stripes, etc.) as a function of the normalized deposition rate. Thin blue lines in a and b illustrate particular runs of our procedure and indicate the variability around the averaged sensitivity score.
In the spinodal decomposition model, the concentration-dependent mobility interpolates between the mobilities of the two phases, M(c) = s(c) M_A + [1 − s(c)] M_B, where M_i denotes the mobility of each phase and the switching function s(c) = ¼(2 − c)(1 + c)² is a smooth interpolation function between the mobilities. The values of the energy-barrier height between the equilibrium phases and the gradient-energy coefficient were assumed to be constant, with ω_c = κ_c = 1. The choice of the initial concentrations of phases A and B (c_A and c_B) as well as the choice of mobilities for each phase (M_A and M_B) constitute the input process parameter space and dictate the resulting microstructures.
For the phase concentration parameter, we varied the concentration of phase A, c_A ∈ [0.15, 0.85]; note that the phase concentration of B is simply c_B = 1 − c_A. For the mobility parameters, we independently varied the mobility values over four orders of magnitude, such that M_i ∈ [0.01, 100], with i = A or B. For the PVD model, we kept the phase fraction at 50/50 and sampled the bulk and surface mobilities M_i^Bulk ∈ [0.1, 10] and M_i^Surf ∈ [0.1, 100], as well as the deposition rate |v| ∈ [0.1, 1].
"Physics"
] |
Genome-Wide DNA Methylation and Gene Expression Analyses of Monozygotic Twins Discordant for Intelligence Levels
Human intelligence, as measured by intelligence quotient (IQ) tests, demonstrates one of the highest heritabilities among human quantitative traits. Nevertheless, studies to identify quantitative trait loci responsible for intelligence face challenges because of the small effect sizes of individual genes. Phenotypically discordant monozygotic (MZ) twins provide a feasible way to minimize the effects of irrelevant genetic and environmental factors, and should yield more interpretable results when searching for epigenetic or gene expression differences between twins. Here we conducted array-based genome-wide DNA methylation and gene expression analyses using 17 pairs of healthy MZ twins discordant for intelligence levels. ARHGAP18, related to Rho GTPase, was identified in the pair-wise methylation status analysis and validated via direct bisulfite sequencing and quantitative RT-PCR. In the expression profile analysis, gene set enrichment analysis (GSEA) between the group of twins with higher IQ and their co-twins revealed up-regulated expression of several ribosome-related and DNA replication-related genes in the group with higher IQ. To focus more on individual pairs, we conducted pair-wise GSEA and leading edge analysis, which indicated up-regulated expression of several ion channel-related genes in twins with lower IQ. Our findings imply that these groups of genes may be related to IQ and should shed light on the mechanisms underlying human intelligence.
Introduction
Individual differences in cognitive abilities have long been an intriguing phenomenon to both lay people and scientists. Differences in intelligence, as measured by IQ tests, appear to remain relatively stable from childhood to late life [1,2]. Further, the fact that intelligence in the general population has a normal distribution and long-term constancy allows for the assumption that IQ is, at least partly, hereditary with quantitative trait features. Indeed, the heritability of intelligence is estimated to be somewhere between 30% and over 80% in classical MZ twin versus dizygotic twin studies [3][4][5], marking it as one of the highest among human quantitative traits.
Despite the significant role that genes are supposed to play in deciding an individual's cognitive abilities, progress in identifying intelligence-related genes in healthy adults has not been as promising [6,7], and contrasts with the growing list of some 300 genes associated with mental retardation [8]. One possible explanation for the lack of replicated genetic findings in normal-range intelligence is the small effect size of each gene. The fact that genome-wide association studies on the scale of thousands of subjects identified no specific genetic variants associated with human intelligence implies that very large sample sizes are necessary to detect individual loci [9].
As underpowered studies face challenges in the attempt to identify small-effect quantitative trait loci, twin research might provide an alternative. Twin studies serve as more than a means to estimate the heritability of the aforementioned complex traits; they also present an important resource to evaluate quantitative trait loci. There is accumulating evidence that MZ twins, long thought to be genetically identical, manifest variations in copy number [10] or point mutations limited to one twin [11]; however, these genomic discordances have failed to explain all phenotypic discrepancies [12]. Differences in phenotypes between MZ twins can possibly be attributed to environmental factors, as well as to epigenetic variants, which refer to gene expression-related modifications that occur without altering DNA sequences.
Epigenetic regulatory mechanisms have been reported to be associated with a number of biological phenotypes, including intelligence [13]. Among all known epigenetic mechanisms, DNA methylation has been the most extensively studied. Specifically, such studies have observed patterns of negative correlation between promoter region methylation and gene expression [14]. Additionally, studies based on phenotypically discordant MZ twins, who are as well matched for genetics, gender, age, prenatal influences, and shared environment as nature can provide, are considered to have the potential to detect epigenetic and transcriptomic differences, although research has yet to yield conclusive results [12,15].
In the present study, we recruited 17 healthy MZ twin pairs who manifested discordance for intelligence between co-twins (i.e., a difference of more than 1 standard deviation (SD)). Given that MZ twins are identical in genetic composition, the IQ difference could be associated with environmental factors acting via epigenetic regulatory mechanisms. By analyzing array-based genome-wide DNA methylation and gene expression profiles, a novel list of genes with functions related to protein synthesis, DNA helicase activities, and ion channels was generated. To our knowledge, this is the first study to test for epigenetic and expression differences between phenotypically normal, yet discordant, MZ twins via genome-wide approaches.
General characteristics of participants
This study is a part of the Keio Twin Study project [16]. We have collected 240 MZ twin pairs with IQ scores tested for both siblings (Fig. S1). Seventeen MZ twin pairs (5 male pairs and 12 female pairs) aged 25.1 ± 2.5 years (range, 21-31 years), with IQ scores in the normal range yet manifesting significant between-co-twin differences, formed the present study sample (Table 1). The mean IQ score of all 34 participants was 100.91 ± 13.32 (range, 67-139), while the mean difference between co-twins was 20.76 points. No documented psychological or physiological conditions were noted at the time of recruitment. Since there is no standard definition of discordance in MZ twin IQ scores, we adopted a 15-point difference as the principal inclusion criterion. Fifteen points is not only one standard deviation of IQ scores in general populations, but it is also the average IQ difference for genetically unrelated individuals sharing family environments, whereas identical twins differ by only about 6 IQ points on average [17].
Genes identified by screening for epigenetically regulated candidate loci
We analyzed the methylation profiles of 25,500 human promoters utilizing methylated-DNA-enriched genomic DNA derived from peripheral blood cells. We used the MAT (model-based analysis of tiling arrays) program [18] to identify genes that significantly differed between co-twins in a pair-wise manner. With a significance threshold set to p < 10⁻⁶, a total of 27 genes were recognized in 13 of the 17 twin pairs; however, none was shared across plural pairs (Table S1). In addition, a moderate positive correlation of 0.417 (Spearman's rank correlation coefficient, p = 0.048) was found between the number of genes that epigenetically differed and the pair-wise IQ differences (Fig. S2). On the other hand, after applying the log signal ratio (of the higher IQ twins to the lower IQ co-twins) to a one-class t-test with the same significance threshold across all 17 twin pairs, we identified no locus manifesting a significantly different methylation status.
Validation of ARHGAP18 by bisulfite sequencing and quantitative RT-PCR
We performed a sodium bisulfite analysis to confirm the methylation status at the 27 gene loci putatively identified as being differentially methylated. After sequencing at least 30 clones for each locus, a statistically significant difference in methylation status between co-twins was validated for two genes, ARHGAP18 (Rho GTPase activating protein 18, p = 5.128066 × 10⁻⁸; chi-square test) and OR4D10 (olfactory receptor 4D10, p = 0.0234658; chi-square test) (Fig. 1A and B).
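The per-locus co-twin comparison of bisulfite clones reduces to a contingency-table test on methylated versus unmethylated CpG calls; a sketch with scipy, where the counts shown are placeholders rather than the study's data.

```python
from scipy.stats import chi2_contingency

# Rows: twin with higher IQ vs. co-twin; columns: methylated vs. unmethylated CpG calls.
# These counts are illustrative placeholders, not the counts reported in the paper.
table = [[120, 180],
         [ 60, 240]]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3g}")
```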
After confirming the methylation status, we correlated the data with expression levels by qRT-PCR using total RNA derived from lymphoblast cell lines. The observed reduction in the DNA methylation status of ARHGAP18 was correlated with its increased expression level in the subject with the lower IQ score of the twin pair from which the gene was identified (2.04-fold, p = 0.024767; Mann-Whitney U test) (Fig. 1C, left graph). Increased transcription levels were observed after a 3-day treatment with the demethylating agent 5-aza-2'-deoxycytidine (5-azadC), and the difference between the twins was eliminated, suggesting that the expression is regulated by methylation of the promoter (Fig. 1C, right graph). Meanwhile, no expression of OR4D10 could be detected in the lymphoblast cell lines.
Figure 1. [...] The percentages refer to the ratio of CpG methylation. C: qRT-PCR analysis of ARHGAP18 mRNA relative to GAPDH in twin pair ID 9; ARHGAP18 expression in twin 9A untreated with 5-azadC was normalized to 1. Error bars indicate ± SD. P-values were calculated using the Mann-Whitney U test, with asterisks indicating statistical significance (p < 0.05); ns: not significant. doi:10.1371/journal.pone.0047081.g001
No gene manifesting statistically significant differences between the group of twins with higher IQ and that of their co-twins after the expression array analysis
Apart from the epigenetic approach described above, we used an expression microarray analysis to directly compare the genome-wide gene expression profiles of the higher IQ twins versus their lower IQ co-twins. First, principal component analysis (PCA) was performed to facilitate visualization of the relationships between groups (Fig. S3). As a result, the two groups comprising twins with higher or lower IQ scores could not be readily distinguished. Likewise, the dendrograms produced by hierarchical clustering also failed to demonstrate differences in general expression patterns between these two phenotypic groups (Fig. S4). Next, an ANOVA analysis was applied to identify whether there were differentially expressed genes between these two groups; however, no gene met the criterion of an FDR (false discovery rate)-adjusted p of 0.05. A one-class t-test analysis with multiple-sample correction was then conducted across all log ratios. Similarly, no significantly differentially expressed gene was identified.
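The one-class screen described above amounts to a per-gene test on the log ratios followed by multiple-testing correction; a sketch assuming a Benjamini-Hochberg FDR procedure (the text does not specify which correction was used) and an assumed `log_ratios` array.

```python
import numpy as np
from scipy.stats import ttest_1samp
from statsmodels.stats.multitest import multipletests

def one_class_screen(log_ratios, alpha=0.05):
    """Per-gene one-sample t-test of log2(higher-IQ twin / lower-IQ co-twin) against zero.

    `log_ratios` is an assumed (n_genes, n_pairs) array; p-values are FDR-adjusted
    with Benjamini-Hochberg, and `reject` flags genes passing the adjusted threshold.
    """
    p = np.array([ttest_1samp(row, 0.0).pvalue for row in log_ratios])
    reject, p_adj, _, _ = multipletests(p, alpha=alpha, method="fdr_bh")
    return p_adj, reject
```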
Candidate gene list resulting from direct pair-wise comparison of the expression array data
Next, we applied a direct pair-wise comparison to focus on genes up-regulated with the same tendency. Fold-change values of the expression levels of all genes were first calculated for each twin pair, from which genes with a fold-change value of more than 2 were included (Dataset S1). We then generated a list of candidate genes by selecting those replicated in most twin pairs (Table 2). UCHL1 (ubiquitin carboxyl-terminal esterase L1), along with 7 other genes, was found to have higher expression levels in the higher IQ twins of at least 4 pairs, while 6 genes were up-regulated in some lower IQ twins.
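A sketch of this direct pair-wise screen: compute per-pair fold changes and keep genes that exceed a 2-fold change in the same direction in at least four pairs; the array and variable names are assumptions.

```python
import numpy as np

def pairwise_candidates(expr_high, expr_low, genes, fold=2.0, min_pairs=4):
    """expr_high/expr_low: assumed (n_genes, n_pairs) expression of higher-/lower-IQ twins."""
    ratio = expr_high / expr_low
    up_in_high = (ratio >= fold).sum(axis=1)        # pairs where the higher-IQ twin is >= 2-fold up
    up_in_low = (ratio <= 1.0 / fold).sum(axis=1)   # pairs where the lower-IQ twin is >= 2-fold up
    return ([g for g, n in zip(genes, up_in_high) if n >= min_pairs],
            [g for g, n in zip(genes, up_in_low) if n >= min_pairs])
```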
Identification of 3 genes with borderline significance by grouping samples according to individual gene expression level
To further increase the possibility of identifying candidate genes, we performed an analysis based on individual gene expression levels. After dividing the twin siblings of each pair into higher and lower expression groups according to the expression level of every gene, a paired t-test was carried out to determine whether there was a significant difference between the mean IQ scores of the two groups. Whereas not a single gene reached the corrected cutoff p of 10⁻⁶, 3 genes, RFK (riboflavin kinase), RPL12 (ribosomal protein L12), and RMRP (RNA component of mitochondrial RNA processing endoribonuclease), manifested borderline significance (Fig. 2). Twins manifesting an up-regulated expression level of RFK tended to have lower IQ scores than their co-twins, while RPL12 and RMRP might be more likely to contribute to higher intelligence.
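A sketch of the per-gene grouping test described above: for each gene, assign each pair's siblings to higher- and lower-expression groups and compare the groups' IQ scores with a paired t-test; the array names are assumptions.

```python
import numpy as np
from scipy.stats import ttest_rel

def expression_grouping_test(expr_a, expr_b, iq_a, iq_b):
    """expr_a/expr_b: assumed (n_genes, n_pairs) expression for sibling A/B; iq_a/iq_b: (n_pairs,) IQ."""
    pvals = []
    for g in range(expr_a.shape[0]):
        a_higher = expr_a[g] >= expr_b[g]
        iq_high_expr = np.where(a_higher, iq_a, iq_b)   # IQ of the higher-expression sibling per pair
        iq_low_expr = np.where(a_higher, iq_b, iq_a)
        pvals.append(ttest_rel(iq_high_expr, iq_low_expr).pvalue)
    return np.array(pvals)
```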
Identification of 4 differentially expressed gene sets by GSEA
It remained possible that functionally related genes might show important gene expression changes in a set-wise manner without any individual transcript meeting the criteria of significance. Using GSEA [19] with a cutoff FDR q-value of < 0.25, we denoted 1 and 4 up-regulated gene sets from the KEGG and Gene Ontology (GO) databases, respectively, in the group of higher IQ twins, while 1 gene set each from the KEGG and Reactome pathway databases was found to be up-regulated in the group of lower IQ twins (Table S2 and Fig. 3). From the results employing the GO database, the leading edge analysis revealed 8 genes (MRPS35, MRPL23, MRPL52, MRPL41, MRPL12, MRPS15, MRPS22, and MRPL55) with core enrichment in the gene sets ''Organellar Ribosome,'' ''Ribosomal Subunit,'' and ''Mitochondrial Ribosome,'' whereas the fourth gene set, ''ATP-dependent DNA Helicase Activity,'' comprised 7 other genes (XRCC5, XRCC6, DHX9, PIF1, G3BP1, RUVBL2, and CHD4) with core enrichment. Utilizing the Reactome database, all 6 genes from the gene set ''Reactome CREB phosphorylation through the activation of CAMKII'' were enriched.
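To make the GSEA statistic concrete, the sketch below computes the weighted running-sum enrichment score for a single gene set over a ranked gene list, following the published algorithm [19]; in practice the authors used the GSEA software rather than code like this.

```python
import numpy as np

def enrichment_score(ranked_genes, ranked_scores, gene_set, p=1.0):
    """Running-sum enrichment score (maximum deviation from zero) for one gene set.

    `ranked_genes` and `ranked_scores` are the gene list sorted by the ranking metric;
    genes in the set add weight proportional to |score|^p, genes outside subtract uniformly.
    """
    in_set = np.isin(ranked_genes, list(gene_set))
    hit_w = np.abs(ranked_scores) ** p * in_set
    hit_w = hit_w / max(hit_w.sum(), 1e-12)           # increments for genes in the set
    miss_w = (~in_set) / max((~in_set).sum(), 1)      # decrements for genes outside the set
    running = np.cumsum(hit_w - miss_w)
    return running[np.argmax(np.abs(running))]
```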
A different approach was carried out to the same end. Specifically, we performed a GSEA on each twin pair. Depending on the twin pairs analyzed and the pathway databases used, as many as 284 gene sets were found to be significantly different between the siblings (Datasets S2, S3, S4, S5). Gene sets replicated in most twin pairs were listed. In general, pathways related to DNA replication, ribosomes, and proteases were found in the higher IQ twins of most twin pairs, while cell signaling-associated ones tended to be up-regulated in the lower IQ twins (Table 3). In order to focus on those genes which effectively contributed to the enrichment of each given gene set, we first generated up- and down-regulated leading edge subsets for each twin pair, and then extracted those genes that were nominated most often across plural twin pairs (Table 4). Up-regulation of IGF1 was found in the higher IQ twins of 4 pairs, whereas the potassium channel-coding genes KCNE2 and KCNQ3, along with the acetylcholine receptor-coding gene CHRNA2, manifested higher expression levels in some lower IQ twins. Among all the candidates, IGF1 was selected for further analysis for its important role in growth and development [20]. We performed bisulfite sequencing of the promoter regions for the 4 twin pairs manifesting an up-regulated expression level in the higher IQ siblings. However, no significant differences were noted in the methylation status of the 2 promoter domains (P1 and P2) between the siblings (Table S3).
Discussion
Given that the concept of general cognitive ability, designated as g, has been widely accepted to depict a near-universal positive covariation among diverse measures of cognitive abilities, naming even one genetic locus that is reliably related to normal-range intelligence remains challenging [21]. IQ is easy to quantify and compare among different individuals. Although not conclusive, the substantial g-loading of IQ [22] justifies its use to represent general intelligence levels. Benefiting from extraordinary similarities in genomic constitution and environmental factors, studies based on discordant monozygotic twins, even with limited sample sizes (as small as 20-50 twin pairs), are capable of uncovering phenotype-associated epigenetic changes independent of underlying sequence variance [23]. Having successfully recruited 17 pairs of identical twins discordant for intelligence levels, we should have modest power for the identification of intelligence-related epigenetic differences. To our knowledge, this is the first genome-wide methylation and gene expression study exploiting the characteristics of monozygosity to assess the epigenetic and expression changes for a quantitative trait.
Researchers have reported that patterns of epigenetic modifications in MZ twins diverge as they age [24]. Provided that all 17 twin pairs in this study were in early adulthood, it might not be surprising that only a few loci revealed significant differences in methylation status. Of the 27 candidate loci nominated by the promoter-array-based methylation analyses, bisulfite sequencing successfully validated only 2 genes. One explanation for the discrepancy between these two methods is that we adopted a less stringent p-value criterion (10⁻⁶, instead of a Bonferroni-corrected p-value of 10⁻⁸, considering that the GeneChip Human Promoter 1.0 Array contains 4.6 million probes). As a result, we were able to detect more candidate loci from the practically congruent MZ twin samples, but only at the expense of the precision rate. The technical limitations of MethylMiner collection might also contribute to false positives. After bisulfite sequencing and qRT-PCR validation, we identified one candidate gene, ARHGAP18, which encodes one of the Rho GTPase-activating proteins (GAPs) that modulate cell proliferation, migration, intercellular adhesion, cytokinesis, differentiation, and apoptosis [25]. Mutations in a handful of Rho-linked genes have been documented to be associated with X-linked mental retardation [8], on the basis of which the importance of GAP activity in normal neuronal function was proposed [26]. Notwithstanding the identification of ARHGAP18 in a genome-wide association study for schizophrenia [27], it had not been previously connected to cognitive abilities until the present study.
We were not able to identify a single gene that displayed a significantly different expression level between the group of twins with higher IQ scores and their co-twins. From the list of candidate genes generated by direct pair-wise comparison, UCHL1 encodes a brain-specific de-ubiquitinating enzyme. While its substrates are still unknown, loss of its enzyme activity has been reported in neurological diseases such as Alzheimer's disease and Parkinson's disease [28]. In a different approach, RFK, RPL12, and RMRP showed borderline significance. RFK encodes riboflavin kinase, an essential enzyme for forming flavin mononucleotide, which is important in a wide range of biological metabolic processes [29]. RPL12 encodes a ribosomal protein of the 60S subunit, while RMRP encodes the RNA component of the mitochondrial RNA processing endoribonuclease. Although none of these 3 genes had previously been connected with cognitive functions, it remains possible that their biophysical characteristics might become more pronounced in cells with as high a metabolic rate as neurons.
In the gene-set-based approach, GSEA of between-group and between-co-twin comparisons revealed several mitochondrial ribosomal protein-coding genes. Mitochondria, which supply most of the energy required for cellular metabolism, have their own translation system for the 13 proteins essential for oxidative phosphorylation in mammals. All 78 human mitochondrial ribosomal proteins are translation products of nuclear genes, some of which have been identified as candidate genes for congenital diseases [30]. No firm conclusion about the connection between mitochondrial ribosomal function and cognitive ability can be drawn before further validation. Nevertheless, we hypothesized that, for the highly differentiated and energy-demanding central nervous system, proteins essential for mitoribosome function might play a role in the maintenance of neuronal biological processes. Apart from mitochondrial ribosomal protein-related gene sets, ''ATP-dependent DNA Helicase Activity'' from the GO database was also found. DNA helicases are molecular motor proteins that use nucleoside 5'-triphosphate hydrolysis as an energy source to open energetically stable duplex DNA into single strands. As such, they are essential in almost all aspects of cellular DNA machinery, including DNA replication, repair, recombination, and transcription [31]. Among these, XRCC5 and XRCC6 encode the two subunits of the Ku protein, which plays an important role in the repair of double-stranded DNA breaks and in telomere protection [32]. In neurodegenerative diseases such as Alzheimer's disease, where cellular damage due to oxidative stress is proposed to contribute to pathophysiology, reduced Ku protein expression and DNA-binding activity have been implicated [33]. Of the remaining five helicases identified, G3BP1 was demonstrated to play an essential part in proper embryonic growth and neonatal survival [34]. Although direct associations between these seven genes and cognitive abilities have not been described, mutations in a list of DNA repair-related genes have already been reported to cause mental retardation [8]. One hypothesis we propose here is therefore that the up-regulated expression of these helicases might provide better protection from oxidative damage and thus improve neuronal function and survival, which could bring forth higher (or, in other words, less compromised) intelligence as a phenotype.
[Figure 3 caption: Heat map of the 37 genes with core enrichment from up-regulated gene sets. GSEA was carried out to identify pre-defined gene sets showing different expression levels between the group of higher-IQ twins and the group of their lower-IQ co-twins. The gene set databases BioCarta, KEGG, Reactome, and Gene Ontology were applied separately. Gene sets meeting the cutoff FDR q-value of 0.25 were subjected to leading-edge analysis to determine the genes with core enrichment, yielding a list of 37 genes from which the heat map was generated. Red and blue cells signify genes that were up- or down-regulated, respectively, when the expression levels of the twins with higher IQ scores were compared to those of their co-twins. The scale represents fold changes in log2 values, according to the color map at the bottom of the figure. A general tendency of higher expression in the higher-IQ twins from SHMT1 to CHD4, and lower expression from PLA2G2A to CAMK2B, is visible. doi:10.1371/journal.pone.0047081.g003]
On the other hand, the pair-wise GSEA, along with leading-edge analysis, identified CHRNA2, which encodes the α2 subunit of nicotinic acetylcholine receptors (nAChRs). Initially related to nicotine dependence, the role of nAChRs in cognitive performance has gained attention because nicotine, acting through nAChRs [36], is considered a powerful enhancer of cognitive capabilities [35].
Additionally, two potassium voltage-gated channel-coding genes, KCNE2 and KCNQ3, were identified. Voltage-gated ion channels have diverse functions, including the regulation of neurotransmitter release, heart rate, insulin secretion, neuronal excitability, epithelial electrolyte transport, and smooth muscle contraction. By assembling with KCNQ2 or KCNQ5, KCNQ3 forms the M channel, a slowly activating and deactivating potassium channel that plays a critical role in regulating neuronal excitability [37]. Besides being identified as one cause of a dominantly inherited form of human generalized epilepsy, benign familial neonatal convulsions, KCNQ/M channels are important in controlling the intrinsic firing patterns of principal hippocampal neurons, thereby modulating hippocampal learning and memory [38]. Of note, rats treated with linopirdine, an M channel-specific inhibitor, demonstrated improved performance in various tests of learning and memory [39]. Identified using the BioCarta pathway database, IGF1 was up-regulated in the higher-IQ twins. The insulin-like growth factor (IGF) system is important in growth and development. While the exact mechanism remains unknown, the growth hormone (GH)/IGF-1 axis has been reported to play a role in the decline of cognitive function in the aging population and in patients with GH deficiency [20]. The methylation status of the IGF1 promoter regions was examined, yet no difference was discerned between the twins. It is possible that IGF1 is under some other epigenetic regulation, considering that the mechanisms responsible for the cell-type-specific expression patterns of this gene remain to be elucidated [40].
Since intelligence is a complex trait associated with many genes of small effect [41,42], it was not surprising that we failed to identify a single gene showing consistent expression changes across all 17 twin pairs. We also noticed that none of the candidate genes identified by the methylation analyses appeared among the microarray expression results. One reason for the discrepancy between these two methods may be the lack of comprehensive access to all known epigenetic regulation. DNA methylation at CpG sites across promoter regions has been deeply studied, but a number of other epigenetic regulatory mechanisms also modulate gene expression [22]. Furthermore, ever since genome-wide, single-base human DNA methylome mapping became possible, the correlation between the methylation status of gene bodies and expression levels has been gaining attention [43]. It has been documented that DNA methylation of gene bodies is associated with gene activity. It is therefore possible that the intelligence-related expression profiles were subject to this more recently described layer of epigenetic regulation.
The present study has conceptual and technical limitations. Conceptually, we hypothesized that changes in methylation status and expression levels could be captured in genomic DNA and total RNA extracted from whole blood and derivative lymphoblastoid cell lines, respectively. Given the inability to probe DNA methylation status or gene expression in the human brain, except in postmortem studies, human blood is commonly used in transcriptional studies of various diseases, including psychiatric disorders [44]. Although there is still no consensus on blood-based gene expression profiles as good surrogates for neuroscientific research, the moderate correlation between transcripts in whole blood and the central nervous system makes blood an accessible alternative [45,46]. Several studies using similar strategies to examine twin pairs discordant for psychiatric disorders, through comparisons of CpG island methylation in peripheral blood cells or lymphoblastoid cell lines, have detected a number of disease-associated epigenetic changes [47][48][49]. Moreover, it has been documented that methylation changes in large-scale domains are linked to cell-specific differentiation [50]. Several functional gene sets we found (i.e., ''cation channel activity'', ''cell-cell signaling'', ''extracellular region part'', ''G-protein coupled receptor protein signaling pathway'', and ''gated channel activity'') fall within neuronal highly methylated domains. The association between the expression differences observed in lymphoblastoid cell lines and neuron-specific methylation patterns implies that these candidate gene sets are more likely to reflect true differences in the brain. Technical limitations of this study include the low fold-change differences in expression levels between co-twins. We did not carry out qRT-PCR for the genes identified by GSEA, considering that differences of less than 1.5-fold are thought to be beyond the limit of reproducibility [51]. Reverse causality should also be considered in epigenetic studies, given that all known epigenetic marks are influenced by environmental exposures including diet, smoking, alcohol consumption, stress, and physical activity [52]. It is plausible that the changes we observed resulted from divergent lifestyle choices made by subjects with different levels of intelligence, rather than these epigenetic changes causing the twins to differ.
Here, we presented the first study to use genome-wide epigenetic and transcriptomic profiling to identify epigenetic changes related to discordance between MZ twins with normal-range intelligence. A list of new candidate genes possibly related to cognitive abilities was generated, while further replication and functional analyses remain necessary.
Ethics statement
This study was conducted according to the principles expressed in the Declaration of Helsinki. Attendance was voluntary, and signed informed consent including information on genetic analyses was obtained from all participants. The Ethical Committees of Kobe University Graduate School of Medicine and Keio University Faculty of Letters approved study protocols.
Samples
A sub-sample of 326 twin pairs from the Keio Twin Project was invited to Keio University, where the Kyodai NX15, one of the most widely used group intelligence tests in Japan, was administered. The zygosity of participants was determined using 15 polymorphic STR loci (AmpFlSTR Identifiler kit, Applied Biosystems). Among the 240 pairs confirmed to be monozygotic, 34 MZ twin pairs showed differences in IQ score of more than 15 points between co-twins. One pair was excluded for a lower-than-normal IQ score (52 points). From the 17 of the remaining 33 twin pairs who agreed to participate in the study, peripheral blood was drawn and B-lymphoblastoid cell lines were established. For 5-azadC treatment (WAKO), a daily aliquot of 5 mM stock solution was added to the flasks and thoroughly resuspended (final concentration of 1 mM). Cells were harvested 3 days after the start of treatment.
Nucleic acids extraction
For human promoter microarrays and bisulfite genomic sequencing, genomic DNA was extracted from the blood via established methods. For gene expression microarrays and quantitative RT-PCR, total RNA was isolated using RNeasy Plus Mini kit (QIAGEN) from B-lymphoblastoid cell lines.
DNA methylation profiling
One microgram of genomic DNA was sonicated and subjected to methylated DNA enrichment using the MethylMiner methylated DNA enrichment kit (Invitrogen) as per the manufacturer's instructions. The methylated DNA fragments, amplified with the GenomePlex WGA reamplification kit (WGA3, SIGMA) and supplemented with dUTP, were further purified using the QIAquick PCR purification kit (QIAGEN).
Following Affymetrix's chromatin immunoprecipitation assay protocol, the enriched methylated DNA was hybridized to GeneChip Human Promoter 1.0R arrays (Affymetrix), which cover over 25,500 human promoter regions.
AGCC (Affymetrix GeneChip Command Console)-format CEL files were first created and then converted to GCOS (GeneChip Operating Software, Affymetrix)-format CEL files. For the pair-wise analyses, paired CEL files were imported into MAT software to identify candidate regions (approximately 600 base pairs in length) with significantly different probe intensities between co-twins (p < 10^-6).
To detect candidate loci across all 17 twin pairs, we used Partek Genomic Suite 6.5 software (Partek) to import the CEL files and convert the data to log2 values after normalization with the RMA (Robust Multichip Averaging) algorithm. After the signal from each probe for the higher-IQ sibling was subtracted from that of the lower-IQ co-twin across all probes, a one-class t-test with the significance threshold set at p < 10^-6 was carried out to detect significant regions.
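For illustration, a minimal sketch of this pair-wise differencing and one-class (one-sample) t-test, assuming RMA-normalized log2 probe intensities are already in memory; the matrices below are randomly generated stand-ins, and the probe count is reduced:

```python
import numpy as np
from scipy import stats

# Log2, RMA-normalized probe intensities: rows are twin pairs, columns
# are probes (randomly generated here purely for illustration).
rng = np.random.default_rng(0)
n_pairs, n_probes = 17, 10_000          # the real array has ~4.6M probes
higher_iq = rng.normal(8.0, 1.0, size=(n_pairs, n_probes))
lower_iq = rng.normal(8.0, 1.0, size=(n_pairs, n_probes))

# Per-probe difference: higher-IQ sibling subtracted from the lower-IQ co-twin.
diff = lower_iq - higher_iq

# One-class t-test per probe; H0: the mean difference equals 0.
t_stat, p_val = stats.ttest_1samp(diff, popmean=0.0, axis=0)

# Retain probes passing the study's significance threshold of p < 1e-6.
candidates = np.flatnonzero(p_val < 1e-6)
print(f"{candidates.size} candidate probes at p < 1e-6")
```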
Bisulfite sequencing
Genomic DNA was bisulfite-treated using the MethylCode bisulfite conversion kit (Invitrogen) as per the manufacturer's instructions. Amplification was performed with Takara LA Taq polymerase, with converted-DNA-specific primers designed using MethPrimer. The amplicons were cloned into vectors using a TOPO TA cloning kit (Invitrogen). We performed direct sequencing of the plasmid DNA, which was isolated using a PI-200 auto-plasmid-isolator (KURABO), on the ABI 3730xl sequencing system (Applied Biosystems).
Quantitative RT-PCR
Two micrograms of total RNA were subjected to reverse transcription using the SuperScript III first-strand synthesis system for RT-PCR (Invitrogen). Gene target amplifications, using Takara SYBR Premix Ex Taq, were performed in triplicate with a serial 10-fold dilution. The housekeeping gene GAPDH served as the internal control. The Mann-Whitney U test was performed to compare the relative expression levels between co-twins.
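A minimal sketch of such a co-twin comparison with the Mann-Whitney U test (SciPy), using made-up triplicate values rather than study data:

```python
from scipy.stats import mannwhitneyu

# Triplicate relative expression values of a target gene normalized to
# GAPDH; the numbers are illustrative stand-ins, not study data.
twin_a = [1.00, 1.08, 0.95]    # e.g., one twin used as reference
twin_b = [0.41, 0.47, 0.38]    # the co-twin

u_stat, p_value = mannwhitneyu(twin_a, twin_b, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")   # threshold used: p < 0.05
```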
Gene expression profiling
We processed 300 nanograms of total RNA using the Ambion WT Expression kit and the Affymetrix GeneChip WT Terminal Labeling kit according to the manufacturers' recommended methods. Hybridization and scanning of GeneChip Human Gene 1.0 ST arrays (Affymetrix), which comprise more than 28,000 gene-level probe sets, were performed as per the manufacturer's instructions. Partek Genomic Suite 6.5 software was used to import AGCC-format CEL files and normalize the data with the RMA algorithm. ANOVA with an FDR-adjusted p of 0.05 was used to determine probe sets that differed significantly between the group of twins with higher IQ and their lower-IQ co-twins. A one-class t-test with multiple-sample correction was conducted across all log2 ratios (higher-IQ twin/lower-IQ co-twin) for all 17 twin pairs. We also carried out a pair-wise comparison of the expression array data and retained genes with a fold-change value greater than 2. Genes replicated with the same tendency (up-regulated in the higher-IQ twins, or up-regulated in the lower-IQ twins) in most pairs were listed.
In another approach, the twins of each pair were categorized into higher and lower expression groups according to the expression level of each individual gene. A paired t-test was carried out to compare the mean IQ scores of the two groups. A corrected p of 10^-6 was applied as the cutoff for a positive result.
GSEA was performed for functionally related genes across the C2 curated gene sets, including the BioCarta (217 gene sets), KEGG (186 gene sets), and Reactome (430 gene sets) collections, and the C5 GO gene sets (1,454 gene sets), separately. Pre-ranked gene lists were constructed for the analyses: lists of up-/down-regulated genes in the group of twins with higher IQ scores, sorted according to the p-value from a between-group ANOVA, and lists for each twin pair with genes sorted by between-sibling fold-change values. Gene sets with an FDR q-value < 0.25 after 1,000 permutation cycles were considered significantly enriched. Lists of leading-edge subset genes, the cores of gene sets that account for the enrichment signal, were then generated. To create the list of enriched genes shared by multiple twin pairs, the top 100 enriched genes of each pair were included in the test.
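A minimal sketch of how such a pre-ranked list can be assembled, assuming a signed -log10(p) ranking score, which is one common convention rather than one prescribed by the study; gene names and values are illustrative:

```python
import numpy as np

# Build a pre-ranked gene list for GSEA from per-gene ANOVA p-values and
# the direction of regulation in the higher-IQ group (illustrative data).
genes = np.array(["SHMT1", "CHD4", "PLA2G2A", "CAMK2B"])
p_anova = np.array([1e-4, 3e-4, 2e-4, 5e-5])   # between-group ANOVA p
log2_fc = np.array([0.4, 0.3, -0.5, -0.6])     # higher-IQ vs. co-twin

# Signed score: -log10(p) carrying the direction of change.
score = -np.log10(p_anova) * np.sign(log2_fc)
order = np.argsort(score)[::-1]
for g, s in zip(genes[order], score[order]):
    print(f"{g}\t{s:+.2f}")    # .rnk-style two-column output
```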
Supporting Information
Dataset S1 Pair-wise comparison of expression array data. Genes with a fold-change value > 2 were included. A positive fold-change value designates up-regulation in the higher-IQ twin; a negative value designates up-regulation in the lower-IQ twin. n/a indicates no gene matched the fold-change cutoff value.
(XLS)
Dataset S2 Pair-wise GSEA results using the BioCarta database. Gene sets with an FDR q-value < 0.25 were included. n/a indicates no gene set matched the FDR cutoff value.
(XLS)
Dataset S3 Pair-wise GSEA results using the KEGG pathway database. Gene sets with an FDR q-value < 0.25 were included. n/a indicates no gene set matched the FDR cutoff value.
(XLS)
Dataset S4 Pair-wise GSEA results using the Reactome database. Gene sets with an FDR q-value < 0.25 were included. n/a indicates no gene set matched the FDR cutoff value.
(XLS)
Dataset S5 Pair-wise GSEA results using the GO database. Gene sets with an FDR q-value < 0.25 were included. n/a indicates no gene set matched the FDR cutoff value. (XLS) Figure S1 Scatterplot of the IQ scores of the 240 MZ twin pairs. This diagram provides an overview of the IQ distribution for all 240 MZ twin pairs from the Keio Twin Study. Each circle identifies one twin pair, with its x- and y-coordinates representing the IQ scores of Twin A and Twin B, respectively. With the black line standing for the regression, the correlation coefficient of 0.72 indicates the similarity between twins. Circles located outside the space between the two blue lines indicate twin pairs with between-sibling IQ differences larger than 15 points, which were considered for recruitment; the red arrowhead points to the one pair excluded for a lower-than-normal IQ score. (TIF) Figure S2 Positive correlation between IQ score differences and the number of loci differing in methylation status. The positive correlation between the number of loci with significant differences in methylation patterns and the IQ score differences of the twin pairs is visualized. Twin pairs with larger differences in IQ scores tended to have more loci identified in the screen for epigenetically regulated genes. (TIF) Figure S3 PCA results for the expression profiles of the 17 twin pairs. Principal components are ranked by variance and named PC1 and PC2 accordingly. The PCA projection maps these two components to 2 dimensions for visualization. In the scatter plots, each point represents a sample. The color of the symbol represents the relative IQ score, with red as higher and blue as lower. The number on each symbol indicates the twin pair ID. In contrast to the similarity between co-twins within each pair, no apparent clustering by relative IQ score could be recognized. (TIF) Figure S4 Clustering analysis for the expression profiles of the 17 twin pairs. Clustering analysis was performed on the list of 644 genes showing more than a 1.1-fold change between the groups of twins with higher IQ scores and their co-twins under ANOVA. The number below each symbol indicates the twin pair ID. Red and blue symbols indicate the higher-IQ and lower-IQ twin of each pair, respectively. The scale represents fold changes, according to the color map at the bottom of the figure. No apparent clustering could be recognized. (TIF) | 7,859.6 | 2012-10-17T00:00:00.000 | [
"Biology",
"Psychology"
] |
Multiple Frequency Shifting and Its Application to Accurate Multi-Scale Modeling of Induction Machine
The use of the shift frequency as a simulation parameter has been widely acknowledged as an enabler for multi-scale modeling of electrical power systems. So far, the frequency shifting concept has been applied to modeling electrical components considering one shift frequency. This letter aims to complement the previous work and present a multiple-frequency shifting modeling methodology. The methodology is applied to the high-slip induction machine. Two major achievements are introduced: 1) application of the frequency shifting concept for components considering multiple carriers, where the Fourier spectra of machine stator and rotor quantities are shifted by different shift frequencies; 2) modeling of components with time-varying shift frequencies. The frequency shifting modeling methodology is thus extended to a wider application area. Case studies are included to demonstrate the effectiveness of the proposed method and the developed machine model.
Multiple Frequency Shifting and Its Application to Accurate Multi-Scale Modeling of Induction Machine
Yue Xia, Member, IEEE, Peng Zhao, Kai Strunz, Senior Member, IEEE, Ying Chen, Senior Member, IEEE, and Yufei Jin
Index Terms—Induction machine, multiple shift frequencies, frequency shifting, time-varying shift frequency.
I. INTRODUCTION
The electromagnetic transient program (EMTP) based on Dommel's algorithm is widely used for simulating electromagnetic transient phenomena in power systems. In EMTP, small time steps are required for tracking the ac carrier. This results in a considerable increase in computational cost when simulating electromechanical transients, whose frequencies are of the order of 1 Hz and which modulate the carrier at 50 or 60 Hz. The modeling technique based on frequency shifting was proposed to achieve computational efficiency in the simulation of such low-frequency transients [1], [2]. The shift frequency is introduced as a novel simulation parameter in addition to the time step size. When the shift frequency is set equal to the ac carrier frequency, the ac carrier is eliminated and larger time step sizes can be chosen. The technique was proven valuable in [3], [4] for the modeling of ac machines.
In the literature on frequency shifting-based component models, one shift frequency rather than multiple shift frequencies is considered. Moreover, it is common practice to associate the shift frequency with the ac grid frequency in the simulation of low-frequency transients. However, this approach should be extended in cases where multiple carriers appear in different areas. As a typical example, the induction machine, consisting of stator and rotor circuits, is considered hereafter. The frequency of the quantities in the stator circuit is around the ac grid frequency, while the frequency of the quantities in the rotor circuit, associated with the slip speed of the rotor, is time-varying. It has been common to simulate induction machines in the phase domain [3], [4] by considering frequency shifting for the stator circuit. However, with frequency shifting for the rotor circuit neglected, the time step size may be limited due to the presence of the time-varying carrier. The accuracy of those models would benefit from also applying frequency shifting on the rotor side.
This letter addresses this issue. It is shown how multiple-frequency shifting is applied to the induction machine model. The setting of the shift frequencies is now two-dimensional instead of one-dimensional. The Fourier spectrum of the machine stator quantities is shifted by the ac grid frequency. The Fourier spectrum of the rotor quantities is shifted by the time-varying slip frequency. Case studies are performed to confirm the benefits for simulation accuracy and efficiency.
II. MULTIPLE FREQUENCY SHIFTING IN SIMULATION
In frequency shifting-based modeling techniques, all ac system quantities are represented through analytic signals instead of real signals, based on the Hilbert transform:

$$\underline{\mathbf{s}}(t) = \mathbf{s}(t) + j\,\mathcal{H}\{\mathbf{s}(t)\}$$

with

$$\mathbf{s}(t) = [s_1(t),\, s_2(t),\, \ldots,\, s_n(t)]^{\mathrm{T}}$$

where s_1, s_2, ..., s_n are the system quantities, s is the n-row vector representing the system quantities, H denotes the Hilbert transform, and the underscore indicates an analytic signal. The Fourier spectra of quantities in the system may differ, as shown on the left of Fig. 1. The original real signals s(t) have negative frequency components. This is not the case for the corresponding analytic signals, as shown in the middle of Fig. 1.
Frequency shifting of the vector s̲(t) is performed as follows:

$$\mathbf{S}(t) = e^{-j\boldsymbol{\omega} t}\,\underline{\mathbf{s}}(t)$$

with

$$\boldsymbol{\omega} = \mathrm{diag}\left(2\pi f_{s1},\, 2\pi f_{s2},\, \ldots,\, 2\pi f_{sn}\right)$$

where ω is referred to as the frequency shifting matrix, which represents the multiple shift operations. Multi-dimensional shift frequencies can thus be used in the simulation. The variables f_s1, f_s2, ..., f_sn are the shift frequencies corresponding to the quantities s_1, s_2, ..., s_n. It should be noted that a shift frequency can be either constant or time-varying, depending on the operating condition. Through the operation in (3), the spectra of the system quantities are shifted by the frequency shifting matrix, as shown on the right of Fig. 1. The maximum frequency of the shifted signal is lower than that of the original real bandpass signal. Thus, larger time step sizes can be chosen according to the Nyquist criterion. The modeling technique is distinguished by the introduction of multiple and time-varying shift frequencies.
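A minimal numerical sketch of these two steps, assuming NumPy/SciPy and illustrative signal parameters (a 60 Hz "stator-like" carrier and a 3 Hz "rotor-like" carrier sharing a slow envelope); `hilbert` forms the analytic signals and a per-row complex exponential performs the multiple shift:

```python
import numpy as np
from scipy.signal import hilbert

fs = 10_000                        # sampling rate for the sketch (Hz)
t = np.arange(0, 0.5, 1 / fs)

# Two real quantities with different carriers and a common slow 1 Hz envelope.
envelope = 1 + 0.2 * np.sin(2 * np.pi * 1.0 * t)
s = np.vstack([envelope * np.cos(2 * np.pi * 60 * t),   # "stator" quantity
               envelope * np.cos(2 * np.pi * 3 * t)])   # "rotor" quantity

# Analytic signals via the Hilbert transform (row-wise).
s_analytic = hilbert(s, axis=1)

# Multiple frequency shifting: each row gets its own shift frequency.
f_shift = np.array([60.0, 3.0])                          # f_s1, f_s2
shift = np.exp(-1j * 2 * np.pi * f_shift[:, None] * t)
s_shifted = s_analytic * shift

# After shifting, only the slow envelope remains; its bandwidth, not the
# carrier, now dictates the admissible time step (Nyquist criterion).
print(np.abs(s_shifted).max(axis=1))   # ~1.2 for both rows
```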
III. INDUCTION MACHINE MODELING BASED ON MULTIPLE FREQUENCY SHIFTING
In frequency shifting-based models, the quantities are represented by analytic signals for the shift operations as presented in Section II. Using analytic signals, the voltage equations describing the behavior of the three-phase symmetrical induction machine are expressed in the phase domain as follows [3]:

$$\underline{\mathbf{v}}_{abcs} = \mathbf{R}_s\,\underline{\mathbf{i}}_{abcs} + \frac{d\underline{\boldsymbol{\lambda}}_{abcs}}{dt}, \qquad \underline{\mathbf{v}}_{abcr} = \mathbf{R}_r\,\underline{\mathbf{i}}_{abcr} + \frac{d\underline{\boldsymbol{\lambda}}_{abcr}}{dt}$$

$$\underline{\boldsymbol{\lambda}}_{abcs} = \mathbf{L}_{ss}\,\underline{\mathbf{i}}_{abcs} + \mathbf{L}_{sr}(\theta_r)\,\underline{\mathbf{i}}_{abcr}, \qquad \underline{\boldsymbol{\lambda}}_{abcr} = \mathbf{L}_{rs}(\theta_r)\,\underline{\mathbf{i}}_{abcs} + \mathbf{L}_{rr}\,\underline{\mathbf{i}}_{abcr}$$

where v_abcs and v_abcr are the stator and rotor voltages, respectively; i_abcs and i_abcr are the stator and rotor currents, respectively; λ_abcs and λ_abcr are the stator and rotor flux linkages, respectively; R_s and R_r are the stator and rotor resistances, respectively; L_ss and L_rr are the stator and rotor inductance matrices; L_sr and L_rs are the mutual inductance matrices; and θ_r is the rotor position.
Multiplying both sides of (5) by the frequency shifting matrix and introducing the notation of (3) gives (7), where ω_mac is the frequency shifting matrix of the machine model; f_s1 is the stator shift frequency, typically either 50 Hz or 60 Hz when tracking electromechanical transients; and f_s2 is the rotor shift frequency, equal to the slip frequency and hence dependent on the rotor speed. Application of the trapezoidal integration method to (7) gives a difference equation (11), where τ is the time step size and k is the time step counter. By insertion of the analytic signals and rearrangement of (11), the expressions for v_abcs and i_abcr in (12) are obtained, together with their associated coefficient and history terms. From (6), the stator flux linkage may be expressed as in (16).
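A generic sketch of the trapezoidal rule underlying this discretization, applied to a simple linear test system rather than the full machine model; the decay rate and step size are illustrative:

```python
import numpy as np

# Generic sketch of the trapezoidal rule used to turn the continuous-time
# shifted equations into difference equations: for dx/dt = f(x),
#   x[k] = x[k-1] + (tau/2) * (f(x[k]) + f(x[k-1])).
# For the linear test system dx/dt = -a*x the implicit update has the
# closed form x[k] = x[k-1] * (1 - tau*a/2) / (1 + tau*a/2).
a, tau = 2.0, 0.01            # decay rate and time step size
x, t = 1.0, 0.0
for _ in range(100):
    x = x * (1 - 0.5 * tau * a) / (1 + 0.5 * tau * a)
    t += tau
print(x, np.exp(-a * t))      # ~0.1353 for both at t = 1 s
```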
TABLE I MACHINE PARAMETERS
Substitution of (13) and (16) into (12) gives the Thevenin equivalent (17), where R_eq is the equivalent resistance matrix and e_h is the total history term. In the presented model, there are two shift frequencies: f_s1 and f_s2. The shift frequency f_s1 can be set to the ac grid frequency to eliminate the ac carrier on the stator. The shift frequency f_s2 is a function of the rotor speed and thus time-varying. The purpose of introducing f_s2 is to eliminate the carrier on the rotor side. This feature distinguishes the model from [3] and [4], where no frequency shifting is applied on the rotor side; there, the shift frequency f_s2 is not available, which corresponds to having f_s2 = 0 Hz in the presented model. Moreover, the shift frequencies f_s1 and f_s2 may be set to zero when fault studies are of interest.
IV. CASE STUDIES
A test system consisting of a high-slip induction machine connected directly to an ideal voltage source is used to examine the results provided by the proposed machine model. The focus of this test system is on the investigation of the numerical properties of the individual machine model. The parameters of the type-1 tested machine from [5] are given in Table I.
It is assumed that the machine is initially running at rated speed in the steady state. The rotor speed is ramped down by 10% of the rated value within 0.5 s to 1.5 s and ramped up by 5% of the rated value within 2.5 s to 3 s. In this test, low-frequency transients are present. To allow for accurate and efficient simulation of low-frequency transients, multiple shift frequencies are used for the proposed model. The shift frequency f_s1 is set equal to the grid frequency of 60 Hz. The shift frequency f_s2 is set adaptively to the slip frequency. The time step size is set to 20 ms. For the purpose of comparison, the case of non-adaptive constant shift frequency settings is considered, as performed e.g. in [3] or [4]. The time step size for this case is also set to 20 ms. In accordance with [3], the reference solution was obtained using the dq model implemented in MATLAB/Simulink and solved with the 4th-order Runge-Kutta method at a time step size of 1 μs.
The results of the comparison of the phase-a stator currents are shown in Fig. 2. The frequency shifting-based models support the tracking of envelope waveforms. As shown in Fig. 2(a), the envelope produced by the proposed model accurately touches the instantaneous amplitudes observed in the reference solution. To focus on details, zoomed-in views of the stator currents are displayed in Fig. 2(b). The real parts of the shifted analytic signals represent the time-domain instantaneous values, shown using circles. The latter may be used to further examine the accuracy of the envelope waveforms produced by different shift frequency settings. From Fig. 2(b), it can be seen that the results obtained with the multiple and adaptive shift frequency setting are in good agreement with the reference curve. For constant shift frequency settings, the deviations of the simulation results from the reference are visibly larger.
The 2-norm cumulative relative errors are used to further investigate the accuracy of the various shift frequency settings. The 2-norm errors of the stator current and rotor current with the adaptive shift frequency are 0.0409% and 0.0474%, respectively. The errors of the stator current and rotor current are 14.8287% and 16.5746%, respectively, for the constant shift frequency setting f_s1 = 60 Hz. The improved accuracy is attributed to the operation of multiple and adaptive frequency shifting. The errors for the adaptive setting of the shift frequency are close to zero and remain nearly constant even at rising time step sizes, as shown in Table II. For the constant shift frequency setting, the errors rise as the time step size increases. At a time step size of 1 ms, the errors of the stator current and rotor current for the constant shift frequency setting are 0.0419% and 0.0434%, respectively. To achieve similar accuracy, the time step size for the model with multiple and adaptive shift frequencies may be increased to 20 ms.
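The exact error formula was lost in extraction; a minimal sketch of one common form consistent with the reported percentages, the 2-norm of the deviation relative to the 2-norm of the reference waveform, is:

```python
import numpy as np

def two_norm_relative_error(x_sim, x_ref):
    """Cumulative 2-norm relative error over the whole waveform, in percent.
    One common definition, assumed here since the original formula was
    lost in extraction."""
    return 100.0 * np.linalg.norm(x_sim - x_ref) / np.linalg.norm(x_ref)

# Illustrative check against a reference sine with a small perturbation.
t = np.linspace(0.0, 1.0, 1001)
ref = np.sin(2 * np.pi * 60 * t)
sim = ref + 1e-4 * np.random.default_rng(1).standard_normal(t.size)
print(f"{two_norm_relative_error(sim, ref):.4f} %")
```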
To evaluate the computational efficiency, the proposed model and the model with constant shift frequency are implemented in standard C. The models are executed on a personal computer with an Intel Core i5-10400F 2.90-GHz processor and 16 GB RAM. The CPU times per time step required by the proposed model and the model with constant shift frequency are 0.396 μs and 0.262 μs, respectively. Taking into account both the computation time per step and the usable time step size, the simulation efficiency with multiple and adaptive shift frequencies is improved by a factor of (20 ms × 0.262 μs)/(1 ms × 0.396 μs) ≈ 13.2 compared to that with constant shift frequency.
To further illustrate the value of the proposed method, an example of multi-scale simulation of a multi-machine system subject to a three-phase fault is considered. The test system is shown in Fig. 3. A distribution network consisting of 6 induction machines with rated load torque is connected to the WSCC 9-bus system at bus 7. The parameters of the test system and machines can be found in [3] and [5]. At t = 0.1 s, a three-phase-to-ground fault occurs at bus 6. The fault is cleared after four cycles. The test system is also modeled in PSCAD with a small time step size of 10 μs to provide reference solutions. For purposes of comparison, the model that considers frequency shifting only on the stator side, as in [3], is also included in the simulation. The simulation results are depicted in Fig. 4 and Fig. 5(a).
At the beginning of the simulation, the system is running in the steady state. The envelope waveforms are tracked at τ = 20 ms. Multiple shift frequencies are used for the proposed machine model. The shift frequencies f_s1,M1, ..., f_s1,M6 are set equal to 60 Hz to eliminate the ac carriers on the stator. The shift frequencies f_s2,M1, ..., f_s2,M6 are set equal to the slip frequencies of the different machines M1, ..., M6 to eliminate the carriers on the rotor, as shown in Fig. 5(b). From t = 0.1 s, electromagnetic transients appear due to the occurrence of the fault. Then, no frequency shifting is applied and all shift frequencies are set to zero. Natural waveforms are tracked at τ = 10 μs. As the electromagnetic transients damp out, multiple frequency shifting resumes at t = 0.27 s. The envelope is tracked with a time step size of 20 ms. As observed in Fig. 4 and Fig. 5(a), the multi-scale simulation results obtained with the proposed model closely match the reference solution and are more accurate than those obtained with the model that neglects frequency shifting on the rotor side.
V. CONCLUSION
In this letter, the multiple frequency shifting theory is developed, implemented, and validated. The presented work is distinguished by three contributions. First, the frequency shifting modeling methodology is extended to accommodate components containing multiple carriers. Through the multi-dimensional and time-varying setting of shift frequencies, ac carriers are eliminated, which enables accurate and efficient simulation of low-frequency transients. Second, it was shown how the method can be applied to the modeling of the induction machine. The Fourier spectra of the stator quantities are shifted by the ac grid frequency, while the Fourier spectra of the rotor quantities are shifted adaptively by the time-varying slip frequency. Third, the performance is validated through case studies. The results show that the use of multiple and adaptive shift frequency settings is beneficial in the simulation of the induction machine and provides more accurate solutions than constant shift frequency settings at large time step sizes.
Fig. 4. Phase b current of i_T; (a) reference solution in PSCAD; (b) multi-scale simulation results obtained with the proposed model.
Fig. 5. Phase b current of i_T and shift frequency f_s2; (a) zoomed-in view of i_T,b during the period of electromechanical transients; solid light: reference solution; circles and dashed bold: natural waveform and envelope waveform obtained with the proposed model; triangles and solid bold: natural waveform and envelope waveform obtained with the model considering frequency shifting only on the stator side; (b) time-varying setting of shift frequency f_s2 of the different machines.
TABLE II: TWO-NORM ERROR OF STATOR AND ROTOR CURRENT UNDER VARIOUS TIME STEP SIZES | 3,627.4 | 2024-01-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
TensAIR: Real-Time Training of Neural Networks from Data-streams
Online learning (OL) from data streams is an emerging area of research that encompasses numerous challenges from stream processing, machine learning, and networking. Stream-processing platforms, such as Apache Kafka and Flink, have basic extensions for the training of Artificial Neural Networks (ANNs) in a stream-processing pipeline. However, these extensions were not designed to train ANNs in real time, and they suffer from performance and scalability issues when doing so. This paper presents TensAIR, the first OL system for training ANNs in real time. TensAIR achieves remarkable performance and scalability by using a decentralized and asynchronous architecture to train ANN models (either freshly initialized or pre-trained) via DASGD (decentralized and asynchronous stochastic gradient descent). We empirically demonstrate that TensAIR achieves a nearly linear scale-out performance in terms of (1) the number of worker nodes deployed in the network, and (2) the throughput at which the data batches arrive at the dataflow operators. We depict the versatility of TensAIR by investigating both sparse (word embedding) and dense (image classification) use cases, for which TensAIR achieved from 6 to 116 times higher sustainable throughput rates than state-of-the-art systems for training ANNs in a stream-processing pipeline.
INTRODUCTION
Online learning (OL) is a branch of Machine Learning (ML) that studies solutions to time-sensitive problems demanding real-time answers based on fractions of data received in the form of data streams [14]. A common characteristic of data streams is the presence of concept drifts [16], i.e., changes in the statistical properties of the incoming data objects over time [23]. Consequently, pre-trained ML models tend to be inadequate in OL scenarios, as their performance usually decreases after concept drifts [23]. In contrast, OL models mitigate the negative effects of such concept drifts by being ready to instantly update themselves with any new data from the data stream [14].
Due to the intrinsic time-sensitiveness of OL, it is not feasible to depend on solutions that spend an undue amount of time on retraining. Thus, if concept drifts are frequent, complex OL problems currently cannot rely on robust solutions common to other ML problems, such as those involving Artificial Neural Networks (ANNs) [12], due to their long training times [14].
Therefore, how to solve complex OL problems, such as those involving data streams of audio, video, or even text, remains an open research question, especially when they are affected by frequent concept drifts. Currently, most OL researchers focus on how to improve the quality of the input data and how to adapt to concept drifts [6,15,23,27,31], and they do not address the issue of solving such complex problems. Our intuition is that current hardware is already capable of training a large class of ANN models in real time if the training is distributed across multiple nodes efficiently. In this case, problems that today are deemed too complex to be effectively solved by standard OL learners could be solved using ANNs.
Nowadays, instead of retraining ANNs in real time, state-of-the-art extensions [2,3] for Apache Flink [8] and Kafka [19] adapt/retrain their models using datasets created from buffered samples of the data stream. Thus, these approaches enable the usage of ANNs in OL by giving up real-time adaptation of the ANN models, which can only be updated after buffering a dataset large enough to be used for retraining. If adapted to train in real time, these approaches suffer from performance and scalability issues (cf. Section 4). Thus, they cannot sustain throughput high enough for many real-world problems.
Consequently, when real-time adaptation is not available, one can expect lower prediction/inference performance of models between the instant a concept drift occurs and the moment the model is updated. Considering that non-trivial ANN models demand a high number of training examples before convergence, one can expect low-quality predictions/inferences for an extended amount of time (until the training dataset is buffered and the model is retrained). This makes it unfeasible to apply this approach to real-world problems that suffer from frequent concept drifts.
To mitigate the decrease in prediction/inference performance, we argue that it is necessary to adapt the ANN models in real time. However, the real-time adaptation of ANN models in an OL scenario is not straightforward. We therefore highlight the following two challenges: (1) real-time data management: not all training data is available from the beginning; thus, it is necessary to incrementally update the model with fractions of data at each step, unlike with commonly used pre-defined training datasets.
(2) backpressure: the model must process a higher number of data samples per second (for training and for inference/prediction) than the data stream produces. This avoids a sudden surge in latency or even a system crash.
In this paper, we present the architecture of TensAIR, the first OL framework for training ANN models (either freshly initialized or pre-trained) in real time. TensAIR leverages the fact that stochastic gradient descent (SGD) is an iterative method that can update a model based only on a fraction of the training data per iteration. Thus, instead of using pre-defined or buffered datasets for training, TensAIR models are updated as each data sample (or data batch) is made available by the data stream. In addition, TensAIR achieves remarkable scale-out performance by using a fully decentralized and asynchronous architecture throughout its whole dataflow, thus leveraging DASGD (decentralized and asynchronous SGD) to update the ANN models.
To assess TensAIR, we performed experiments on sparse (word embedding) and dense (image classification) models. In our experiments, TensAIR achieved nearly linear scale-out performance in terms of (1) the number of worker nodes deployed in the network, and (2) the throughput at which the data batches arrive at the dataflow operators. Moreover, we observed the same convergence rate in the distributed models independently of the number of worker nodes, which shows that the usage of DASGD did not negatively impact the models' convergence. Compared to the state of the art, TensAIR's sustainable throughput in the real-time OL setting was from 6 to 175 times higher than the Apache Kafka extension [3] and from 6 to 120 times higher than the Apache Flink extension [2]. We additionally compared TensAIR to Horovod [30], the distributed ANN framework developed by Uber, and achieved from 4 to 335 times higher sustainable throughput in the same real-time OL setting.
Below, we summarize the main contributions of this paper; together, they enable real-time OL tasks that would not be feasible with standard OL approaches.
BACKGROUND
Considering that the real-time training of ANNs in an OL scenario involves multiple areas of research, we give in the following subsections a short summary of the most important concepts and techniques used in this paper.
Online Learning
Online learning (OL) has gained visibility due to the increase in the velocity and volume of available data sources compared to the past decade [11]. OL algorithms are trained using data streams as input, which differs from traditional ML algorithms that have a pre-defined training dataset.
Streams & Batches. Formally, a data stream S consists of ordered events e_i with timestamps t_i, i.e., S = (e_1, t_1), ..., (e_∞, t_∞), where the t_i denote the processing times at which the corresponding events are ingested into the system. These events are usually analysed in batches B_j of fixed size b, as follows:

$$B_j = (e_{jb+1},\, t_{jb+1}),\, \ldots,\, (e_{(j+1)b},\, t_{(j+1)b})$$
Batches are analyzed individually. Thus, if processed in an asynchronous stream-processing scenario, the batches (and in particular the included events) can become out-of-order as they are handled within the system, even if the initial events were ordered. In common stream-processing architectures, such as Apache Flink [8], Spark [38], and Samza [25], batches are distinguished into sliding windows, tumbling windows, and (per-user) sessions [5].
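A minimal sketch of count-based tumbling-window batching, with illustrative names; real engines also support time-based windows:

```python
from typing import Iterable, Iterator, List, Tuple

Event = Tuple[object, float]   # (payload e_i, processing timestamp t_i)

def tumbling_batches(stream: Iterable[Event], b: int) -> Iterator[List[Event]]:
    """Group an (unbounded) stream into consecutive, non-overlapping
    batches of fixed size b -- tumbling-window semantics by count."""
    batch: List[Event] = []
    for event in stream:
        batch.append(event)
        if len(batch) == b:
            yield batch
            batch = []            # windows do not overlap

# Illustrative usage with a finite stand-in for a stream.
events = [(f"e{i}", float(i)) for i in range(10)]
for batch in tumbling_batches(events, b=4):
    print([e for e, _ in batch])  # ['e0'..'e3'], ['e4'..'e7']; tail buffered
```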
Latency vs. Throughput. When analyzing systems that process data streams, one typically benchmarks them by their latency and throughput [18]. Formally, latency is the time it takes for a system to process an event, from the moment it is ingested to the moment it is used to produce a desired output. Throughput, on the other hand, is the number of events that a system can receive and process per time unit. The sustainable throughput is the maximum throughput at which a system can handle a stream over a sustained period of time (i.e., without exhibiting a sudden surge in latency, then called "backpressure" [21], or even a crash).
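A minimal single-worker queueing sketch of this notion, with an illustrative 1 ms service time: below the sustainable rate the latency stays flat, above it the latency surges (backpressure):

```python
# Events arrive every 1/rate seconds and take `service_s` seconds each;
# once the rate exceeds 1/service_s, queueing delay grows without bound.
def mean_latency(rate_hz: float, service_s: float, n_events: int = 10_000) -> float:
    interval = 1.0 / rate_hz
    free_at = 0.0                        # time the single worker becomes free
    total = 0.0
    for i in range(n_events):
        arrival = i * interval
        start = max(arrival, free_at)    # wait if the worker is busy
        free_at = start + service_s
        total += free_at - arrival       # ingestion-to-completion latency
    return total / n_events

service = 1e-3                           # 1 ms of work per event
for rate in (500, 900, 990, 1100):       # events per second
    print(rate, f"{mean_latency(rate, service):.4f} s")
# Latency stays ~1 ms up to ~1000 ev/s, then surges: ~1000 ev/s is the
# sustainable throughput of this toy pipeline.
```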
Passive & Active Drift Adaptation. To adapt to concept drifts, one may rely on either passive or active adaptation strategies [13]. The passive strategy updates the trained model indefinitely, with no regard to the actual presence of concept drifts. Active drift adaptation strategies, on the other hand, only adapt the model when a concept drift has been explicitly identified.
Artificial Neural Networks
ANNs denote a family of supervised ML algorithms designed to be trained on a pre-defined dataset [12]. A training dataset is composed of multiple (x, y) pairs, in which x is a training example and y is its corresponding label. ANNs are usually trained using mini-batches B, which are sets of (x, y) pairs of fixed size b that are iteratively (randomly) sampled from the training dataset, thus B = (x_i, y_i), ..., (x_{i+b}, y_{i+b}).
An ANN model is represented by the weights and biases of the network, described together by w, and it is usually trained with variants of stochastic gradient descent (SGD) [29]. SGD updates w by considering ∇L(B, w), the gradient of a pre-defined loss function L with respect to w when taking B as input. Thus, we can represent the update rule of w as in Equation 1, in which t is the iteration of SGD and η is a pre-defined learning rate:

$$w_{t+1} = w_t - \eta\,\nabla L(B_t, w_t) \qquad (1)$$
Based on Equation 1, w_{t+1} is defined by two terms. The second term is the more computationally expensive one to calculate, which we refer to as the gradient calculation (GC). The remainder of the equation we call the gradient application (GA), which consists of the subtraction between the two terms and the assignment of the result to w_{t+1}.
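A minimal sketch separating these two steps, using a squared loss on a linear model as an illustrative (not prescribed) choice:

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(3)                              # model parameters w_t
eta = 0.1                                    # learning rate

def gradient_calculation(w, X, y):
    """GC: the expensive term grad L(B, w) -- here for squared loss on a
    linear model, an illustrative choice of loss function."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def gradient_application(w, grad, eta):
    """GA: the cheap update w_{t+1} = w_t - eta * grad (Equation 1)."""
    return w - eta * grad

w_true = np.array([1.0, -2.0, 0.5])
for _ in range(200):                         # one mini-batch per iteration
    X = rng.normal(size=(32, 3))             # mini-batch B of size 32
    y = X @ w_true
    w = gradient_application(w, gradient_calculation(w, X, y), eta)
print(np.round(w, 3))                        # ~ [1.0, -2.0, 0.5]
```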
Distributed Artificial Neural Networks. Over the last years, ANN models have substantially grown in size and complexity. Consequently, traditional centralized architectures have become unfeasible for training complex models due to the high amount of time they spend until convergence [30]. Researchers have been studying how to distribute ANN training to mitigate this. Distributed ANNs reduce the time it takes to train a complex ANN model by distributing its computation across multiple compute nodes. This distribution can follow different parallelization methods, system architectures, and synchronisation settings [24].
The most common form of distributing ANNs, which we also use in this work, is referred to as data parallelism [26], in which workers are initialized with replicas of the same initial model and trained with disjoint splits of the training data. Moreover, the synchronisation among the workers' parameters in a data-parallel ANN setting is either centralised or decentralised [24]. In a centralised architecture [26], workers systematically send their parameter updates to one or multiple parameter servers. These servers aggregate the updates of all workers and apply them to a centralised model [24]. By relying on parameter servers to aggregate updates, the parameter servers may become the bottleneck of such an architecture [9]. In a decentralised architecture [26], on the other hand, the workers synchronize themselves using a broadcast-like form of communication [26]. This broadcast eliminates the bottleneck of the parameter servers but requires direct communication among worker nodes.
The parameter updates in a data-parallel ANN system can be synchronous or asynchronous. In a synchronous setting [26], workers have to synchronize themselves after each mini-batch iteration. This synchronization barrier wastes computational resources through idle times (i.e., when workers have to wait for others to resume their computation) [24]. In an asynchronous SGD (ASGD) setting [26], workers are allowed to perform their gradient computations on stale model parameters. This behaviour minimizes idle times but makes it harder to mathematically prove SGD convergence. Recent developments on ASGD [17,32,39], however, have tackled exactly this issue under different assumptions. Zhang et al. [39] recently proved an O(1/√T) convergence rate for unbounded non-convex problems using ASGD under a centralised parameter server setup (where T denotes the iteration among the ASGD updates). Additionally, [7,22,34] proved the convergence of ASGD on decentralized networks under distinct assumptions and network topologies.
TENSAIR
We now introduce the architecture of TensAIR, the first framework for training and predicting with ANN models in real time. TensAIR was designed to work in association with stream-processing engines that allow asynchronous and decentralized communication among dataflow operators.
TensAIR introduces the data-parallel, decentralized, asynchronous ANN operator Model, with train and predict as two new OL functions. This means that TensAIR can scale out both the training and prediction tasks of an ANN model to multiple compute nodes, either with or without GPUs attached to them. The TensAIR dataflow can be visualized as a graph (see Figure 1). Note that, throughout this paper, we use the terms prediction and inference interchangeably.
Model Consistency
Despite TensAIR's asynchronous nature, it is necessary to keep the models consistent among themselves during training in order to guarantee that they remain aligned and, therefore, eventually converge to the same common model. In TensAIR, this is achieved by the exchange of gradients among the various Model instances.
Due to our asynchronous computation and application of the gradients on the distributed model instances, Model_j receives gradients calculated by Model_i (with i ≠ j) on model states that are similar but not necessarily equal to its own. This occurs whenever Model_i, which has already applied to itself a set G_i = {∇L, ∇L, ..., ∇L} of gradients, calculates a new gradient ∇L and sends it to Model_j, such that G_j ≠ G_i at the time when Model_j applies ∇L. The difference |G_i ∪ G_j| − |G_i ∩ G_j| between these two models is defined as staleness [33]. This staleness_{i,j}(∇L) metric is the size of the symmetric difference between G_i and G_j with respect to the times at which the new gradient ∇L was computed by model i and applied by model j, respectively. We illustrate this phenomenon and the staleness metric in Figure 2.
Figure 2 illustrates the timeline of messages (containing both mini-batches and gradients) exchanged among TensAIR models, considering maxGradBuffer = 1. Assume the UDF distributes 5 mini-batches to 3 models. After receiving their first mini-batch, each Model calculates a corresponding gradient. Note that, when applied locally, the staleness of any gradient is 0 because it is computed and immediately applied by the same model. While computing or applying a local gradient, each Model may asynchronously receive more gradients to calculate and/or apply from either the UDF or other models. In our protocol, the models first finish their current gradient computation, apply it locally, then buffer and send maxGradBuffer many locally computed gradients to the other models, and wait for their next update.
As an illustration, consider Model 2 in Figure 2. While computing its local gradient, it receives the yellow mini-batch from the Mini Batch Generator, which it starts processing immediately after it finishes the blue one that it had already started when the yellow mini-batch arrived. During this computation, Model 2 receives a remote gradient to apply, which it does promptly after finishing its own. Note that when Model 3 and Model 1 computed their first gradients, they had not yet applied a single gradient to their local models; thus |G_1| = |G_3| = 0. However, before applying the gradient received from Model 3, Model 2 has already applied two gradients, so |G_2| = 2 and staleness_{3,2} = 2. Along the same lines, before applying the gradient received from Model 1, |G_2| = 3 and staleness_{1,2} = 3.
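A minimal bookkeeping sketch of the staleness metric as the size of the symmetric difference between applied-gradient sets; the gradient IDs are illustrative and mirror the Figure 2 walk-through:

```python
# Staleness as the size of the symmetric difference between the sets of
# gradients two model replicas had applied: once when a gradient was
# computed by the sender, once when it is applied by the receiver.
def staleness(applied_by_sender: set, applied_by_receiver: set) -> int:
    return len(applied_by_sender ^ applied_by_receiver)

G3 = set()               # Model 3 had applied nothing when computing its gradient
G1 = set()               # likewise for Model 1
G2 = {"g_a", "g_b"}      # Model 2 has applied two gradients in the meantime

print(staleness(G3, G2))   # 2 -> staleness_{3,2} = 2
G2.add("g_c")              # Model 2 applies one more gradient
print(staleness(G1, G2))   # 3 -> staleness_{1,2} = 3
```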
Model Convergence
Since TensAIR operates on data streams and is both asynchronous and fully decentralized (i.e., it has no centralized parameter server), it exhibits characteristics that most SGD proofs of convergence [17,32,39] do not cover. Therefore, we next discuss under which circumstances TensAIR is guaranteed to converge.
First, we consider that training is performed between significant concept drifts. Therefore, we assume that the data distribution between two subsequent concept drifts does not change. Thus, if a concept drift occurs during training, the model will not converge until the concept drift ends. Under this assumption, the data stream between two concept drifts behaves like a fixed dataset. In this case, given enough training examples, as seen in [12], each of the local model instances will eventually converge.
Second, considering TensAIR's decentralized and asynchronous SGD (DASGD), model updates can be stale. Nevertheless, as proven by Tosi and Theobald [34], DASGD still converges under bounded staleness, with the number of iterations to convergence depending on Ŝ and S̄, the maximum and average staleness, the latter calculated using an additional recursive factor.
Implementation
TensAIR was implemented on top of the Asynchronous Iterative Routing (AIR) [36,37] dataflow engine. AIR is a native stream-processing engine that processes complex dataflows in an asynchronous and decentralized manner. TensAIR dataflow operators extend a basic Vertex superclass in AIR. Vertex implements AIR's asynchronous MPI protocol via multi-threaded queues of incoming and outgoing messages, which are exchanged among all nodes (aka "ranks") in the network asynchronously. This is crucial to guarantee that worker nodes do not stay idle while waiting to send or receive messages during training. The number of instances of each Vertex subclass and the number of input data streams can be configured beforehand, as seen in Figure 1.
TensAIR is implemented entirely in C++. It includes the TensorFlow 2.8 native C API to load, save, train, and predict with ANN models. Therefore, it is possible to develop a TensorFlow/Keras model in Python, save the model to a file, and load it directly into TensAIR. TensAIR is completely open-source and available from our GitHub repository 1.
EXPERIMENTS & DISCUSSION
To assess TensAIR, we performed experiments measuring its performance on prototypical ML problems such as Word2Vec (word embeddings) and CIFAR-10 (image classification). We empirically validate TensAIR's model convergence by comparing its training loss curves at increasing levels of distribution across both CPUs and GPUs. Our results confirm that TensAIR's DASGD updates achieve similar convergence on Word2Vec and CIFAR-10 as a synchronous SGD propagation. At the same time, we achieve a nearly linear reduction in training time on both problems. Due to this reduction, TensAIR significantly outperforms not just the current OL extensions of Apache Kafka and Flink (based on both the standard and distributed TensorFlow APIs), but also Horovod, which is a long-standing effort to scale out ANN training. Finally, by providing an in-depth analysis of a sentiment analysis (SA) use case on Twitter, we demonstrate the importance of OL in the presence of concept drifts (i.e., COVID-19-related tweets with changing sentiments). In particular, the SA use case is an example of a task that would be deemed too complex to be adapted in real time (at a throughput rate of up to 6,000 tweets per second) when using other OL frameworks.
HPC Setup. We carried out the experiments described in this section using the HPC facilities of the University of Luxembourg [35]. We distributed the ANN training using up to 4 Nvidia Tesla V100 GPUs in a node with 768 GB RAM. We also deployed up to 16 regular nodes, with 28 CPU cores and 128 GB RAM each, for the CPU-based (i.e., without GPU acceleration) settings. Event Generation. We trained both sparse (word embeddings2) and dense (image classification3) models based on English Wikipedia articles and images from CIFAR-10 [20], respectively. Instead of connecting to actual streams, we chose these static datasets to facilitate a consistent analysis of the results and to ensure reproducibility. Moreover, to simulate a streaming scenario, we implemented the MiniBatchGenerator as an entry-point Vertex operator (compare Figure 1), which generates events with timestamps, groups them into mini-batches using tumbling-window semantics, and sends these mini-batches to the subsequent operators in the dataflow. Furthermore, this allows us to simulate streams of unbounded size by iterating over the datasets multiple times (in analogy to training over a fixed dataset for multiple epochs). Sparse vs. Dense Models. We chose Word2Vec and CIFAR-10 because they represent prototypical ML problems with sparse and dense model updates, respectively. Sparse updates mean that only a small portion of the neural network variables actually become updated per mini-batch [28]. Hence, sparseness should assist the models' convergence when using DASGD, as observed also in Hogwild! [28]. We trained by sampling 1% of English Wikipedia, which corresponds to 11.7M training examples (i.e., word pairs). On the other hand, we chose CIFAR-10 for being dense. Thus, we could analyze how this characteristic possibly hinders convergence when models are distributed and updated asynchronously. We train on all 50,000 labeled images of the CIFAR-10 dataset.
Convergence Analysis
We first explored TensAIR's ability to converge by determining if and how DASGD might degrade the quality of the trained model (Figure 3). We compared the training loss curves of Word2Vec and CIFAR-10 by distributing TensAIR models from 1 to 4 GPUs using 1 TensAIR rank per GPU (Figures 3b & 3d). We additionally explored the models' convergence when trained with distributed CPU nodes (Figures 3a & 3c). In this second scenario, we trained up to 64 ranks on 16 nodes simultaneously without GPUs. Note that, when using a single TensAIR rank, TensAIR's gradient updates behave as in a synchronous SGD implementation.
The extremely low variance among all loss curves shown in Figures 3a and 3b demonstrates that our asynchronous and distributed SGD updates do not negatively affect the convergence of the Word2Vec models. We attribute this to (1) the sparseness of Word2Vec, and (2) a low staleness of the gradients (which are relatively inexpensive to compute and apply for Word2Vec). The low staleness indicates a fast exchange of gradients among models.
In Figure 3c, however, we observe a remarkable degradation of the loss when distributing CIFAR-10 across multiple nodes. This is due to using the same fixed learning rate in all settings. Distributing dense models over n ranks without adapting the learning rate is well known to result in a degradation of the loss curve (even in synchronous settings). This degradation occurs because training n models with mini-batches of size b behaves similarly to training 1 model with mini-batches of size n · b.
[Figure 3: training loss vs. epochs; panels include W2V with GPUs (512 batch size).]
To mitigate this issue, Horovod increases the learning rate by the number of ranks n used to distribute the model [1], i.e., η′ = n · η. Accordingly, in Figure 3d, we again do not see any degradation of the loss when distributing CIFAR-10 because we use a maximum of 4 GPUs.
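A hedged sketch of the linear scaling rule mentioned above: with n ranks, the base learning rate is multiplied by n. This is our own illustration of the rule attributed to Horovod [1], not code from either system.

```python
# Linear learning-rate scaling: n ranks each consuming mini-batches of size b
# yield an effective batch of n * b, so the base rate is multiplied by n.
def scaled_learning_rate(base_lr: float, num_ranks: int) -> float:
    return base_lr * num_ranks

assert scaled_learning_rate(0.01, 4) == 0.04  # e.g. 4 GPUs -> 4x base rate
```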
Speed-up Analysis
Next, we explore the performance of TensAIR under increasing levels of distribution and with respect to varying mini-batch sizes over both Word2Vec and CIFAR-10. This experiment is also deployed on up to 64 ranks (16 nodes) and up to 4 GPUs (1 node). We observe in Figure 4 that TensAIR achieves a nearly-linear scale-out under most of our settings. In most cases, TensAIR achieves a better speedup when training with smaller mini-batches. This difference arises because, unlike the gradient calculation, the gradient application is not distributed and, with smaller mini-batches, more gradients are applied per epoch. Thus, models with expensive gradient computations will have a better scale-out performance. Nevertheless, when the gradient calculation is not the bottleneck of the dataflow, one can reduce the computational impact of the gradient applications and the network impact of their broadcasts by simply increasing maxBuffer. For instance, by increasing maxBuffer by a factor of n, the network complexity and the computational impact of the gradient applications are also expected to be reduced by a factor of n.
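The effect of maxBuffer can be sketched as local gradient accumulation: gradients are summed locally and only broadcast once a fixed number of them have been collected. The class below is a simplified stand-in for TensAIR's buffer management; the broadcast callback abstracts the asynchronous MPI send.

```python
import numpy as np

# Simplified gradient accumulation: summing max_buffer gradients locally and
# sending one message instead of max_buffer messages cuts both the broadcast
# traffic and the number of remote gradient applications by that factor.
class GradientBuffer:
    def __init__(self, max_buffer, broadcast):
        self.max_buffer = max_buffer
        self.broadcast = broadcast  # stand-in for an asynchronous MPI send
        self.acc = None
        self.count = 0

    def add(self, grad):
        self.acc = grad.copy() if self.acc is None else self.acc + grad
        self.count += 1
        if self.count == self.max_buffer:
            self.broadcast(self.acc)
            self.acc, self.count = None, 0

buf = GradientBuffer(max_buffer=4, broadcast=lambda g: print("broadcast", g))
for _ in range(8):
    buf.add(np.ones(3))  # broadcasts twice: after the 4th and 8th gradient
```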
Baseline Comparison
Apart from TensAIR, it is also possible to train ANNs by using Apache Kafka and Flink as message brokers to generate data streams of varying throughputs. Kafka is already included in the standard TensorFlow I/O library (tensorflow_io), which however allows no actual distribution in the training phase [3]. Flink, on the other hand, employs the distributed TensorFlow API (tensorflow.distribute). However, we were not able to run the provided dl-on-flink use-case [2] even after various attempts on our HPC setup. We therefore report the direct deployment of our Word2Vec and CIFAR-10 use-cases (Figures 5a & 5b) on both the standard and distributed TensorFlow APIs (the latter using the MirroredStrategy option of tensorflow.distribute). We thereby simulate a streaming scenario by feeding one mini-batch per training iteration into TensorFlow, which yields a very optimistic upper bound for the maximum throughput that Kafka and Flink could achieve. In a similar manner, we also determined the maximum throughput of Horovod [30], which is however not a streaming engine by default.
In Figures 5a and 5b, we see that TensAIR clearly surpasses both the standard and distributed TensorFlow setups as well as Horovod. This is because, as opposed to TensAIR, their architectures were not developed to train on batches arriving from data streams. Thus, in a streaming scenario, the overhead of transferring the training data to the worker nodes grows with the number of training steps. TensAIR, on the other hand, was designed to train ANN models from high-throughput data streams in real-time. The overhead of transferring training data is mitigated by the asynchronous protocol adopted, and the training is sped up by DASGD. This allows TensAIR to (1) reduce both computational resources and idle times while the data is being transferred, and (2) maintain an optimized buffer management for incoming mini-batches and outgoing gradients, respectively.
In our experiments, we could sustain a maximum training rate of 285,560 training examples per second on Word2Vec and 200,000 images per second on CIFAR-10, which corresponds to sustainable throughputs of 14.16 MB/s and 585 MB/s, respectively. We reached these values by training with 3 GPUs on Word2Vec and 4 GPUs on CIFAR-10. Note that, while using more than 3 GPUs simultaneously, TensAIR did not achieve better sustainable throughput in the W2V use-case due to the relatively low complexity of the gradient calculations. In this scenario, the training bottleneck, typically associated with gradient calculation, shifted to the gradient application when using more than 3 GPUs, as the latter is not distributed. Nevertheless, this issue can be mitigated by simply increasing the maxBuffer variable (as explained in Section 4.2). This adjustment delays the communication among distributed models while reducing the number of locally applied gradients by the same factor.
Sentiment Analysis of COVID19
Here, we exemplify the benefits of training an ANN in real-time from streaming data. To this end, we analyze the impact of concept drifts on a sentiment analysis setting, specifically drifts that occurred during and due to the COVID-19 pandemic. First, we trained a large Word2Vec model using 20% of the Sentiment140 dataset [10]. Then, we trained an LSTM model [4] using the Sentiment140 dataset together with the word embeddings we trained previously. After three epochs, we reached 78% accuracy on the training and the test set. However, language is always evolving. Thus, this model may not sustain its accuracy for long if deployed to analyze streaming data in real-time. We exemplify this by fine-tuning the word embeddings with 2M additional tweets published from November 1st, 2019 to October 10th, 2021 containing the following keywords: covid19, corona, coronavirus, pandemic, quarantine, lockdown, sarscov2. Then, we compared the previously trained word embeddings and the fine-tuned ones and found an average cosine difference of only 2%. However, despite being small, this difference is concentrated on specific keywords.
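The comparison above amounts to computing, per keyword, the cosine difference (one minus cosine similarity) between its pre-trained and fine-tuned embedding vectors. The sketch below uses random stand-in vectors; real embeddings would come from the two Word2Vec models.

```python
import numpy as np

# Cosine difference between two embedding vectors of the same word:
# 0% means identical direction, larger values indicate concept drift.
def cosine_difference(u, v):
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return 1.0 - cos

rng = np.random.default_rng(0)
pre = rng.normal(size=100)                 # stand-in: pre-trained embedding
fine = pre + 0.05 * rng.normal(size=100)   # stand-in: fine-tuned embedding
print(f"cosine difference: {cosine_difference(pre, fine):.2%}")
```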
As shown in Table 1, keywords related to the COVID-19 pandemic are the ones that suffered most from a concept drift. Take as examples pandemic, booster and corona, which had over 62% cosine difference before and after the Word2Vec models were updated. Due to the concept drift, the sentiment over specific terms and, consequently, entire tweets also changed. One observes this change by comparing the output of our LSTM model when: (1) inputting tweets embedded with the pre-trained word embeddings; (2) inputting tweets embedded with the fine-tuned word embeddings. Take as an example the sentence "I got corona.", which had a sentiment of +2.0463 when predicted with the pre-trained embeddings, and −2.4873 when predicted with the fine-tuned embeddings. Considering that the higher the sentiment value, the more positive the tweet, we can observe that corona (also representing a brand of beer) was once seen as positive and is now related to a very negative sentiment.
To tackle concept drifts in this use-case, we argue that TensAIR with its OL components (as depicted in Figure 6) could be readily deployed. A real-time pipeline with Twitter would allow us to constantly update the word embeddings (our sustainable throughput would be more than sufficient compared to the estimated throughput of Twitter). Consequently, the sentiment analysis algorithm would always be up-to-date with respect to such concept drifts.
Figure 6 depicts the dataflow for a Sentiment Analysis (SA) use-case on a Twitter data stream. This dataflow predicts the sentiments of live tweets using a pre-trained ANN model (Model 1). However, it does not rely on pre-defined word embeddings. The dataflow constantly improves its embeddings on a second Word2Vec (W2V) ANN model (Model 2), which it trains using the same input stream as used for the predictions. By following a passive concept-drift adaptation strategy, it can adapt its sentiment predictions in real-time based on changing word distributions among the input tweets. Moreover, it does not require any sentiment labels for newly streamed tweets at Model 1, since only Model 2 is re-trained in a self-supervised manner by generating mini-batches of word pairs (w, c) directly from the input tweets.
Our SA dataflow starts with Map, which receives tweets from a Twitter input stream (implemented via cURL or a file interface) and tokenizes the tweets based on the same word dictionary also used by Model 2 and Model 1. Split then identifies whether the tokenized tweets shall be used for re-training the word embeddings, for sentiment prediction, or for both. If the tokenized tweets are selected for training, they are turned into mini-batches via the UDF operator. The (w, c) word pairs in each mini-batch are sharded across the Model 2 instances (Model 2 1 , . . ., Model 2 n ) with a standard hash-partitioner using words as keys; a sketch of this sharding step follows below. Model 2 implements a default skip-gram model. If the tokenized tweets are selected for prediction, a tweet is vectorized by using the word embeddings obtained from any of the Model 2 instances and sent to the pre-trained Model 1, which then predicts the tweets' sentiments.
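A minimal sketch of the sharding step, assuming (word, context) pairs and a fixed number of Model 2 instances; the hash-partitioner here is Python's built-in hash, used purely for illustration (it is salted per process, but stable within a run, which suffices for sharding).

```python
# Shard (word, context) training pairs across n Model_2 instances with a
# hash-partitioner keyed on the word, so all pairs for a given word land
# on the same instance. Pair data is illustrative.
def shard(pairs, num_instances):
    shards = [[] for _ in range(num_instances)]
    for word, context in pairs:
        shards[hash(word) % num_instances].append((word, context))
    return shards

pairs = [("corona", "got"), ("pandemic", "lockdown"), ("corona", "beer")]
for i, s in enumerate(shard(pairs, num_instances=2)):
    print(f"Model_2 instance {i}: {s}")
```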
CONCLUSIONS
OL is an emerging area of research which still has not extensively explored the real-time training of ANNs. In this paper, we introduced TensAIR, a novel system for real-time training of ANNs from data streams. It uses the asynchronous iterative routing (AIR) protocol to train and predict ANNs in a decentralized manner. The two main features of TensAIR are: (1) leveraging the iterative nature of SGD by updating the ANN model with fresh samples from the data stream instead of relying on buffered or pre-defined datasets; (2) its fully asynchronous and decentralized architecture used to update the ANN models using decentralized and asynchronous SGD (DASGD). Due to those two features, TensAIR achieves a nearly linear scale-out performance in terms of sustainable throughput and with respect to its number of worker nodes. Moreover, it was implemented using TensorFlow, which facilitates the deployment of diverse use-cases. Therefore, we highlight the following capabilities of TensAIR: (1) processing multiple data streams simultaneously; (2) training models using either CPUs, GPUs, or both; (3) training ANNs in an asynchronous and distributed manner; and (4) incorporating user-defined dataflow pipelines. We empirically demonstrate that, in a real-time streaming scenario, TensAIR sustains from 4 to 120 times higher throughput than Horovod and both the standard and distributed TensorFlow APIs (representing upper bounds for Apache Kafka and Flink extensions).
As future work, we believe that TensAIR may also enable novel online learning use-cases which were previously considered too complex but now become feasible due to its very good sustainable throughput. Specifically, we intend to study similar learning tasks over audio/video streams, which we see as the main target domain for stream processing and OL. To reduce the computational cost of training an ANN indefinitely, we shall also investigate how different active concept-drift detection algorithms behave under an OL setting with ANNs.
Our contributions are as follows: (1) Design and implementation of TensAIR, the first framework for real-time training and prediction in ANN models. (2) Creation and usage of our Decentralized and Asynchronous SGD (DASGD) algorithm. (3) Experimental evaluation of TensAIR showing almost linear training-time speed-up in terms of nodes deployed. (4) Sustainable throughput comparison between TensAIR and state-of-the-art systems, with TensAIR achieving from 4 to 120 times higher sustainable throughput than the baselines. (5) Depiction of a real-time Sentiment Analysis use-case that demonstrates the importance of OL in the presence of concept drifts.
Figure 6: TensAIR dataflow with distributed Model 2 instances and a single instance of Map, Split, UDF and Model 1.
B1 = (x1, y1), . . ., (xb, yb); B2 = (xb+1, yb+1), . . ., (x2b, y2b). Stream Processing. As shown in Algorithm 1, a TensAIR Model operator has two new OL functions, train and predict, which can asynchronously send and receive messages to and from other operators. During train, Model receives either encoded mini-batches B or gradients ∇ as messages. Each message encoding a gradient that was computed by another model instance is immediately used to update the local model accordingly. Each mini-batch first invokes a local gradient computation and is then used to update the local model. Each such resulting gradient is also locally summed until a desired number of gradients (maxGradBuffer) is reached, upon which the buffer is broadcast to all other Model instances. [Algorithm 1: TensAIR Model class; 1: Constructor Model(tfModel, maxBuffer).] The model will converge in this setting in up to O(τ²) + O(τ) iterations to an ε-small error, considering τ the average staleness observed during training and a constant that bounds the gradient size. | 7,660.8 | 2022-11-18T00:00:00.000 | [
"Computer Science"
] |
Crystal structure of 3-(4-bromophenylsulfonyl)-2,5,6-trimethyl-1-benzofuran
In 3-(4-bromophenylsulfonyl)-2,5,6-trimethyl-1-benzofuran, molecules are linked into a chain along the b-axis direction by C—H⋯π hydrogen bonds and C—Br⋯π interactions.
Structural commentary
In the title molecule (Fig. 1), the benzofuran unit (O1/C1–C8) is essentially planar, with a mean deviation of 0.015 (2) Å from the mean plane defined by the nine constituent atoms. The 4-bromophenyl ring (C12–C17) is inclined to the benzofuran ring by 89.29 (6)°. The title compound crystallized in the non-centrosymmetric space group Pc in spite of having no asymmetric C atoms.
Supramolecular features
In the crystal, molecules are linked into a chain along the b-axis direction by C—H⋯π hydrogen bonds and C—Br⋯π interactions (Fig. 2).
Cg1 is the centroid of the C2-C7 benzene ring.
Figure 1
The molecular structure of the title compound with the atom-numbering scheme. Displacement ellipsoids are drawn at the 50% probability level. H atoms are shown as small spheres of arbitrary radius.
Refinement
Crystal data, data collection and structure refinement details are summarized in Table 2. All H atoms were positioned geometrically and refined using a riding model, with C—H = 0.95 Å for aryl and 0.98 Å for methyl H atoms, and with Uiso(H) = 1.2Ueq(C) for aryl and 1.5Ueq(C) for methyl H atoms. Program(s) used to solve structure: SHELXS97 (Sheldrick, 2008); program(s) used to refine structure: SHELXL97 (Sheldrick, 2008); molecular graphics: ORTEP-3 for Windows (Farrugia, 2012) and DIAMOND (Brandenburg, 1998); software used to prepare material for publication: SHELXL97 (Sheldrick, 2008).
3-(4-Bromophenylsulfonyl)-2,5,6-trimethyl-1-benzofuran
Crystal data. Geometry. All esds (except the esd in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell esds are taken into account individually in the estimation of esds in distances, angles and torsion angles; correlations between esds in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell esds is used for estimating esds involving l.s. planes. Refinement. Refinement of F² against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F², conventional R-factors R are based on F, with F set to zero for negative F². The threshold expression of F² > 2σ(F²) is used only for calculating R-factors(gt) etc. and is not relevant to the choice of reflections for refinement. R-factors based on F² are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger.
"Chemistry"
] |
AM and DSE colonization of invasive plants in urban habitat: a study of Upper Silesia (southern Poland)
Interactions between invasive plants and root endophytes may contribute to the exploration of plant invasion causes. Twenty plant species of alien origin differing in invasiveness were studied in terms of status and typical structures of arbuscular mycorrhizal fungi and dark septate endophytes (DSE) in urban habitats in the Silesia Upland (southern Poland). We observed that 75 % of investigated plant species were mycorrhizal. The arbuscular mycorrhiza (AM) of most plant species was of the Arum morphology. The nearly 100 % mycorrhizal frequency, high intensity of AM colonization within the root cortex and the presence of arbuscules in all mycorrhizal plant species indicate that the investigated species are able to establish AM associations in the secondary range and urban habitats. DSE were present in all mycorrhizal and non-mycorrhizal species. The frequency of DSE was significantly lower in the non-mycorrhizal group of plants; however, sclerotia of DSE were found mainly in the roots of non-mycorrhizal plant species. The group of species native to North America, including three Solidago congeners, had the highest values of all AM mycorrhization and DSE indices. Moreover, we observed that most mycorrhizal invasive species belonged to the family Asteraceae. In turn, representatives of Poaceae had the lowest values of AM mycorrhization. Nevertheless, quite high values of DSE frequency were also encountered in roots of Poaceae species. The high invasiveness of the representatives of the Asteraceae family from North America supports the theory that both the taxonomic pattern and the fact of root endophyte colonization contribute to invasion success. In turn, the taxa of Reynoutria also represent successful invaders, but they are of Asiatic origin, non-mycorrhizal and weakly colonized by DSE fungi. Electronic supplementary material The online version of this article (doi:10.1007/s10265-016-0802-7) contains supplementary material, which is available to authorized users.
Introduction
Arbuscular mycorrhiza (AM) is the most ancestral and commonest type of mycorrhizal symbiosis (Brundrett 2002), in which the fungal hyphae penetrate the cortical cell wall of the host plant's root. It is characterized by the arbuscules and vesicles formed by the aseptate, obligately symbiotic fungi of the phylum Glomeromycota (Schüßler et al. 2001). In this association the host plant provides the fungus with assimilates i.e. soluble carbon sources, whereas the fungus provides the host plant with an increased capacity to absorb water and nutrients from the soil. It has been discovered that invasive alien plants take advantage from mycorrhizas (Klironomos 2002; Smith and Read 2008). The feedback between alien plants and soil fungal communities may strongly contribute to species invasiveness, affecting the ability of a plant to grow, establish, invade and persist in a local habitat (Bray et al. 2003; Chmura and Gucwa-Przepióra 2012). There are many case studies demonstrating that the arbuscular mycorrhiza-invasive plants feedback can be positive rather than negative, when AMF also benefit and increase their abundance (Levine et al. 2006; Stampe and Daehler 2003; Zhang et al. 2010). It is important to determine the role of AM in species invasion. It is possible that invasive alien species benefit from arbuscular mycorrhiza or, conversely, they are not encouraged by arbuscular mycorrhizal fungi (AMF) and other factors influence their invasiveness (Shah et al. 2009).
Dark-septate root endophytes (DSE) are an artificial assemblage of fungi that have darkly pigmented, septate hyphae and are frequent intracellular root associates of plants (Piercey et al. 2004). They colonize the cortical cells and intercellular regions of roots and form densely septated intracellular structures called microsclerotia (Jumpponen and Trappe 1998). In contrast to the wide knowledge of arbuscular mycorrhizal fungi, the role of DSE in the ecosystem is not clearly understood. The relationships between host plants and DSE range from symbiotic to parasitic associations (Newsham 2011). At the beginning, the association of DSE with plant roots was described as parasitic (Melin 1922; Wilcox and Wang 1987), while later studies demonstrated commensal to beneficial effects on the host plant (Addy et al. 2005; Likar and Regvar 2013). Only a few studies concern DSE colonization in invasive plant species (Knapp et al. 2012). Similarly to AMF, it is possible that DSE colonization plays an important role in improving alien plant health, especially in those plants which are non-mycorrhizal.
The invasion of alien plants alters the local biological community structure, leading to biodiversity loss (Pimentel et al. 2001; Vitousek et al. 1996). Many previous studies concerning invasion by non-native plant species were focused on aboveground features, with little attention given to belowground soil organisms (Levine et al. 2004; Pyšek and Jarošík 2005; Sala et al. 2000; Vitousek et al. 1996). A few recent studies demonstrated the role of AM in plant invasion; however, most of them focused on greenhouse, pot or microcosm experiments, not on field studies (Koske and Gemma 2006; Richardson et al. 2000; Štajerová et al. 2009; Stampe and Daehler 2003). Root endophytes like AMF and DSE are common colonizers of plant roots across a wide range of habitats (Kauppinen et al. 2014; Mandyam and Jumpponen 2005; Smith and Read 2008). In several studies, AMF and DSE have been found to enhance plant growth, photosynthetic activity and phosphorus content, act antagonistically towards soil-borne fungal pathogens, and modify the concentration of plant metabolites (Toussaint 2007; Wu et al. 2010). As an important component of soil microorganisms in terrestrial ecosystems, AMF and DSE could be key factors in the plant invasion process, not only by facilitating local adaptation or reducing environmental stress but also through their effects on plant competition (Fumanal et al. 2006; Richardson et al. 2000; Wilson et al. 2012). On the other hand, invasive plants can affect the function of these fungi (Callaway et al. 2008). For instance, an increase of AMF was observed in Ageratina adenophora, whereas the European Alliaria petiolata, in its invasive range (North America), reduced native AMF (Shah et al. 2009 and literature cited therein). Some plant invaders produce allelopathic compounds which disrupt the belowground competitive outcome between plants and mycorrhizal fungi. The reduction of mycorrhizal colonization caused by allelopathic invasive alien plants can indirectly have a negative impact on the native plants which benefit from mycorrhiza (Bothe et al. 2010 and the literature cited therein). The plant species which negatively affect soil mycobiota are often weakly dependent on AMF or are non-mycorrhizal. Thus revitalization of some habitats, after removal of invasive species, requires the introduction of native plants which promote AMF (Ruckli et al. 2014; Tanner and Gange 2013). Our knowledge of DSE fungi diversity, their function in ecosystems and their interactions with vascular plants is limited. Thus, the impact of invasive alien species on them is also unknown (Knapp et al. 2012).
The general objective of this study was to answer the question whether the colonization of AM and/or DSE enhances plant invasion. Due to limitations of the study, we address this indirectly by defining the mycorrhizal status, features and the degree of colonization of arbuscular mycorrhizal fungi and dark septate endophytes of twenty invasive and alien plant species in the Polish flora, and by comparing and relating the obtained results to the species' invasiveness. We hypothesize that the study of interactions between invasive plants and root endophytes may contribute to the exploration of plant invasion causes. In the literature there are few studies which try to relate plant traits and invasion status with AM and DSE status (Majewska et al. 2015).
The second hypothesis assumes that non-mycorrhizal species should have a higher frequency of DSE, and that within mycorrhizal species there is competition between AMF and DSE that should be revealed by negative relationships. The third hypothesis states as follows: differences in functional diversity, i.e. various plant traits among plant species, can be a key factor explaining vulnerability to fungal colonization. The specific goals were as follows: to examine AMF and DSE types and structures in plant species differing in invasiveness and occurring in disturbed habitats within the urban zone; to analyse associations between frequency of DSE and AMF colonization, both between non-mycorrhizal and mycorrhizal species and within mycorrhizal species; to relate plant traits, habitat requirements and parameters of species invasiveness with AM and DSE colonization.
Plant material and field sampling
The material was collected from Katowice city, which is situated in the centre of the Upper Silesian Industrial Region (19°00′E, 50°15′N). We selected species which are quite common and are invasive in the study area. The majority of them are neophytes (=kenophytes) sensu Tokarska-Guzik (2005), i.e. alien species introduced after the year 1500. Two species are exceptions: Sonchus oleraceus, an archaeophyte and post-invasive plant, and Avena fatua, a synanthropic species but native to Eurasia (Table 1). Amongst neophytes there are some of the most invasive taxa in Poland and Europe: Reynoutria spp., highly invasive Solidago canadensis, S. gigantea, Impatiens parviflora, I. glandulifera, and also weakly invasive species such as Cardaria draba and Eragrostis minor (Tokarska-Guzik et al. 2012). With respect to invasion status (Richardson et al. 2000) in the study region (Silesian Upland), ten species are considered transformers, i.e. a subset of invasive species that have a clear ecosystem impact, and five species are weeds, i.e. plants which grow in sites where they are not wanted, for instance arable fields. Five species are classified as not-harmful or non-invasive (Tokarska-Guzik et al. 2010). In total, 20 plant species were collected during the flowering period in 2012. The range of a species (scale: 0-5) and abundance of population (scale: 1-5) was given after Zarzycki et al. 2002, whereas tendency (1-4) and invasiveness (scale: 1-21) was adopted after Tokarska-Guzik et al. 2010. The nomenclature of vascular plant species follows Mirek et al. 2002. All plants were collected from urban and suburban habitats, i.e. wastelands, roadsides, disturbed managed forests. In total 20 sites were chosen. At each site four repetition samples were taken. In order to avoid pseudoreplication, repetition samples were gathered from places away from each other. To evaluate fungal root endophyte colonization, root samples of at least five flowering plants of each species were collected from a depth of 0-20 cm for one sample. Sampling was carried out in the peak of the flowering period for each taxon separately. Only well-developed, undamaged individuals were taken. The list of species with some characteristics is given in Table 1, whereas detailed information about GPS coordinates and types of habitat is in the attachment (Table S1, Supplementary Material). Plants were excavated in their entirety and manually cleaned of soil.
Assessment of AMF and DSE colonization
For the estimation of mycorrhizal development, the roots were prepared according to a modified method of Phillips and Hayman (1970). After careful washing in tap water, the roots were softened in 7 % KOH for 24 h and then rinsed in a few changes of water. The material was acidified in 5 % lactic acid for 24 h and then stained with 0.01 % aniline blue in lactic acid for 24 h. The entire procedure was carried out at room temperature. Root fragments approximately 1 cm long, 30 fragments per repetition sample, were mounted on slides in glycerol:lactic acid (1:1) and pressed using cover slides. In total, 120 fragments were taken for each species. AMF colonization and AM morphology were identified on the basis of aseptate hyphae growing intracellularly, forming arbuscules terminally in the cortical cells (the Arum-type AM morphology); intracellularly with arbuscules developed on coils in the cortical cells (the Paris-type); or forming intermediate types (Dickson 2004).
The following parameters describing the intensity and effectiveness of the mycorrhization were recorded: mycorrhizal frequency (F %), the ratio between root fragments colonized by AMF mycelium and the total number of root fragments analyzed; relative mycorrhizal root length (M %), an estimate of the amount of root cortex that is mycorrhizal relative to the whole root system; intensity of colonization within individual mycorrhizal roots (m %); relative arbuscular richness (A %), arbuscule richness in the whole root system; and arbuscule richness in root fragments where the arbuscules were present (a %) (Trouvelot et al. 1986). DSE colonization was identified on the basis of regularly septate hyphae, usually dark pigmented, with facultatively occurring sclerotia (Jumpponen 2001). The mycelium does not stain with aniline blue and remains brownish. The frequency of DSE mycelium (hyphae and sclerotia) occurrence in roots (FDSE %) was estimated similarly as the mycorrhizal frequency (Zubek and Błaszkowski 2009).
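As a simple illustration of the frequency indices defined above, the sketch below computes F % and FDSE % as the percentage of colonized root fragments; the counts are invented, and the full Trouvelot method additionally grades colonization intensity per fragment.

```python
# Frequency index: percentage of scored root fragments showing colonization,
# applied to AMF mycelium (F %) and to DSE mycelium (FDSE %).
def frequency_percent(colonized_flags):
    return 100.0 * sum(colonized_flags) / len(colonized_flags)

amf_flags = [True] * 112 + [False] * 8   # 112 of 120 fragments with AMF
dse_flags = [True] * 36 + [False] * 84   # 36 of 120 fragments with DSE
print(f"F%    = {frequency_percent(amf_flags):.1f}")  # 93.3
print(f"FDSE% = {frequency_percent(dse_flags):.1f}")  # 30.0
```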
Statistical analysis
In order to compare species, cluster analysis was done on the basis of mean values of the mycorrhization indices, i.e., F %, M %, m %, A %, a % and FDSE %. Euclidean distance and the Ward method were applied (a short computational sketch of this clustering step follows below). To do this, arithmetic means of mycorrhization indices per species were calculated. The obtained clusters of species were analyzed in terms of particular AM and DSE indices. The significance of differences in FDSE % between distinguished groups was tested by the Kruskal-Wallis test followed by the Conover test for multiple comparisons, whereas AM colonization indices were tested using the Wilcoxon rank sum test only within AM species. The relationship between DSE and mycorrhization by arbuscular fungi was assessed by Spearman rank correlation analyses between FDSE % and the mycorrhization indices. All samples were subjected to this analysis except for non-mycorrhizal plants (all indices of AM equal zero). To estimate whether plant traits, habitat requirements and invasiveness have an influence on AM colonization and the frequency of DSE, two statistical approaches were employed. For the purpose of these analyses the following plant traits were used: Grime strategy (C/CSR competitive/intermediate strategy, CR competitive ruderal, R ruderal, R/CR ruderal/competitive ruderal, SR stress ruderal), mean height of stem, and type of seed bank (no seed bank, short-term, long-term bank). As a measure of habitat associations, Ellenberg indicator values (EIVs) for moisture F, soil reaction R and trophy N were adopted. Finally, data about the invasiveness of species was included. As a measure of species invasiveness the following data was included: range (5° scale); population size (5° scale); the type of habitats colonized (3° scale); dynamic tendency (5° scale), i.e. tendency in spread; and residence time (time since putative date of introduction till 2005) (Tokarska-Guzik 2005, 2012; Zarzycki et al. 2002). Since we did not have abundance data of species, we used multidimensional functional diversity (FD), which does not require abundance and presence/absence data (Mouillot et al. 2013). We treated groups of species revealed by the cluster analysis as a "community", in the sense of species which respond in a similar way to AM and DSE colonization. We computed distance-based functional diversity indices using the R library FD: functional richness (FRic), functional evenness (FEve), and functional divergence (FDiv) (Villéger et al. 2008), as well as functional dispersion (FDis; Laliberté and Legendre 2010), Rao's quadratic entropy (Q) (Botta-Dukát 2005) and the community-level weighted means of trait values (CWM; e.g. Lavorel et al. 2008). Since FD does not provide a formal statistical test for the significance of differences among communities, we applied an ordination technique, Redundancy Analysis (RDA), with a permutation test. To that end, three redundancy analyses were performed based on two matrices. In all RDAs, the first matrix contained data on F %, M %, m %, A %, a % (including non-mycorrhizal plants, with values of AM colonization equal zero) and FDSE %. Contrary to the cluster analysis, raw data from repetitions, instead of means, was included. In the first RDA, data on plant traits was employed as the second matrix. The second RDA was done with habitat associations and the third one with invasiveness features.
In total, 999 permutations were computed to assess statistical significance of variables in the model. All statistical analyses were performed using R software (R Core Team 2015).
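The clustering step described above can be sketched as follows (the authors worked in R; this Python version with SciPy is only an illustration, and the index values are invented):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Species described by mean colonization indices (F%, M%, m%, A%, a%, FDSE%),
# grouped by Ward's method on Euclidean distances; invented example values.
species = ["Solidago canadensis", "Galinsoga ciliata", "Reynoutria japonica"]
indices = np.array([
    [100.0, 55.0, 58.0, 48.0, 88.0, 25.0],
    [ 95.0, 30.0, 33.0, 22.0, 70.0, 35.0],
    [  0.0,  0.0,  0.0,  0.0,  0.0, 10.0],  # non-mycorrhizal
])
Z = linkage(indices, method="ward", metric="euclidean")
labels = fcluster(Z, t=3, criterion="maxclust")
print(dict(zip(species, labels)))
```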
Mycorrhizal studies
In this work, we present a detailed report on the mycorrhizal status, AMF colonization rate and AM morphology of 20 alien plant species in the Polish flora. Arbuscular mycorrhiza was found in 15 out of 20 investigated plant species, the exceptions being the roots of Cardaria draba, Diplotaxis muralis and all Reynoutria species (Table 2). The AM of 13 plant species was of the Arum morphology. Hyphae were observed mainly in the intercellular spaces of the root cortex, forming arbuscules terminally in cortical cells. Only one species, Bidens frondosa, was characterized by Paris-type colonization, in which neighbouring cortical cells contained hyphal coils, without hyphae in the intercellular spaces. The intermediate AM colonization was found only in Erigeron annuus roots (Table 2). The mycorrhizal structures of all investigated plant species found to host AMF comprised arbuscules and vesicles, with the exception of Avena fatua and two Impatiens species, in which the mycorrhizal roots did not contain vesicles. Coils were encountered in only two species belonging to the Asteraceae family (Table 2). In roots of all mycorrhizal plants only coarse AMF (hyphae diameter above 2 μm) were found. The fine AM endophyte (Glomus tenue) was not observed at all.
Analysis of the mycorrhizal status of the investigated invasive alien plant species showed that 75 % of them were associated with AMF of the phylum Glomeromycota. In the majority of species investigated in this study, the AM status has already been known. However, previous studies were based on reviews by Harley and Harley (1987) or Wang and Qiu (2006), therefore most of those plants were analyzed as native species in their natural habitat range, e.g. Solidago (Table 2). However, the finding of AM in Eragrostis minor by the authors of this paper is the first report of the mycorrhizal status of this plant. Also, the mycorrhizal status of some neophytes evaluated in our research has already been given, but only in the Czech Republic so far (Štajerová et al. 2009) (Table 2). However, they did not give the AM morphotype and level of colonization in the root cortex of those plants.
The dominance of the Arum-type among the plant species we studied is comparable with a previous report, where this AM morphotype was also the most common in non-native plants from India (Shah et al. 2009). Plant species identity plays a major role in determining the pattern of AMF development in roots, although the AM type may depend on fungal identity and environmental conditions (Smith and Read 2008). The dominance of the Arum-type among the investigated invasive plant species may therefore not be incidental.
The richness of mycorrhizal structures in roots varied among different species. However, the average frequencies (F %) of all mycorrhizal species were very high and ranged from 86 to 100 %. Intensity of AMF colonization in the root system (M %) and within individual mycorrhizal roots (m %) was between 3 and 62 %. Root colonization (M %, m %) of many AM plant species was high and reached over 40 % (Table 3). All of them belong to the Asteraceae family. In contrast, low M % and m % values were observed only in the two investigated grass species. The high level of AM frequency and root cortex colonization within mycorrhizal species indicates that the investigated plant species are able to establish AMF associations in their new urban and suburban habitats in the Silesia Upland. Another indicator of well-functioning mycorrhiza of the investigated alien plants is the presence of arbuscules in all plant species recognized to associate with AMF and the high arbuscular richness of most species. Arbuscules are the structural and functional criterion of this kind of mycorrhizal symbiosis. Both measures of root arbuscule occurrence (A % and a %) followed the same pattern as the mycorrhizal colonization indices (M %, m %). The highest mycorrhizal parameters were observed in Erigeron annuus roots and other plant species of the Asteraceae family (Table 3). The highest value of arbuscule abundance in the whole root system (A %) was about 50 %, whereas arbuscule richness of the colonized root sections (a %) was above 88 %. The highest values of arbuscular richness were observed in plant species of the Asteraceae family (Table 3). The lowest arbuscule occurrence was found in Avena fatua and Eragrostis minor, representatives of the Poaceae family (Table 3). The latter species is a typical urban plant, which is frequently found in many cities (Brandes 1995) and occurs even in harsh conditions, e.g. tramlines (Sudnik-Wójcikowska and Galera 2005). In such habitats there are no favourable conditions for AMF development. Although in greenhouse cultivation the presence of AMF enhances growth of the species through increasing the weight of seedlings, it is treated as a species non-dependent on AMF (Wurst et al. 2011).
Generally, AMF are known to be ubiquitous in grass roots in different habitats, even the harsh ones (Gucwa-Przepióra et al. 2007; Gucwa-Przepióra and Błaszkowski 2007; Kauppinen et al. 2014). Also, previous research showed well-functioning AM and a high level of mycorrhizal colonization in the exotic grass Miscanthus × giganteus from sites contaminated by heavy metals in the Silesia Upland in Poland (Gucwa-Przepióra et al. 2010). There was a considerable difference in the mycorrhizal colonization and arbuscule abundance between native and invasive grass species in Hungary: lower degrees of AMF colonization parameters were observed for invasive grasses than for native residents in the Hungarian semiarid grassland community (Endresz et al. 2013).
On the other hand, important plant families to which many invasive plant species belong are often considered non-mycorrhizal, e.g. Brassicaceae, Polygonaceae, Chenopodiaceae, Caryophyllaceae (Harley and Smith 1983). Our research confirmed that the group of Reynoutria species (Polygonaceae), although non-mycorrhizal, was very successful in the invasion process. This result supports the hypothesis of Pringle et al. (2009) that an invasive plant is likely to be non-mycorrhizal or a facultative symbiont. Also, non-mycorrhizal plant species prefer disturbed sites such as the early stages of industrial heaps (Gucwa-Przepióra and Turnau 2001; Janos 1980) and ruderal sites (Gange et al. 1990).
It is believed that there is a taxonomic pattern in plant invasiveness. As Pyšek (1998) demonstrated in a worldwide review study, the largest families (Poaceae, Asteraceae, Brassicaceae) contribute most to the total number of alien species in local floras. These families are also the most species-rich taxa. However, when it comes to the pool of potentially invasive species, Polygonaceae and Poaceae are prevalent (Pyšek 1998); this was also true for representatives of those families in our study. The most successful species in terms of invasiveness derive from these taxa. The most successful families possess some properties that could be attributed to their invasiveness, but these are rather complex and can hardly be related to the invasiveness of a particular family (Pyšek 1998). Perhaps independence from arbuscular mycorrhiza, or weak dependence expressed by lower values of AM colonization, may be considered a trait that makes some plants more invasive. That means that some species can thrive in sites with soils free from AMF. They do not need root endophytes to establish, persist and finally initiate further spread and become invasive.
DSE colonization
DSE were found in all investigated plant species, both mycorrhizal and non-mycorrhizal (Table 3). In the case of AM species, DSE were observed in the cortex together with AMF, but mainly in root fragments where arbuscules were absent. The regularly septated hyphae, accompanied sporadically by sclerotia, were found in the rhizodermis and outer cortical cells. The frequency of DSE occurrence in the roots of most species was below 50 % or even 20 %. The exception was Avena fatua roots, where DSE colonization was observed to be more frequent (FDSE % >70 %). In contrast to the wide knowledge of arbuscular mycorrhizal and ectomycorrhizal fungi (Smith and Read 2008), relatively little is known about the DSE fungi and their functions, although various reports of positive impacts of DSE colonization on their plant hosts have supported the view that DSE do indeed have a beneficial role for plant growth and survival (Fernando and Currah 1996; Mullen et al. 1998). Among the fungal endophytes that colonize roots, DSE are often frequent colonists of plants even when they are growing under extreme conditions, like drought (Barrow 2003), high salinity (Sonjak et al. 2009), and metal-enriched soils (Deram et al. 2008; Likar and Regvar 2013). Some authors have suggested that DSE may assume the role of AMF, especially in the case of taxa which are rarely or not colonized by AMF, like Carex species (Haselwandter and Read 1982), Atriplex canescens (Barrow et al. 1997), and Saponaria officinalis (Zubek and Błaszkowski 2009). We believe that this is unlikely because in our study DSE were present in all mycorrhizal and non-mycorrhizal species, and in the latter group DSE were less frequent. Thus, DSE are rather not an alternative to AMF in terms of all aspects of positive symbiosis, but this requires further detailed ecophysiological research.
Functional analysis of AM and DSE colonization
On the basis of the AM and DSE colonization indices, cluster analysis revealed three groups of plants (Fig. 1). The first group comprises non-mycorrhizal plants. The remaining two groups contained 5 and 10 species, respectively: the Aster novi-belgii group and the Galinsoga ciliata group. For instance, congeneric species were found in the same group, i.e. three taxa of the Solidago genus (Aster novi-belgii group), as well as Galinsoga ciliata and G. parviflora and both Impatiens species in the second group. FDSE % varied significantly between the three mentioned groups (Kruskal-Wallis, Chi-squared = 16.50, p < 0.001), and it was lowest in non-mycorrhizal plants (Table 4). As far as only AM+ species are concerned, the Aster novi-belgii group has significantly higher values of the M %, m %, A % and a % indices (Table 4). Moreover, in mycorrhizal species there is a weak negative correlation between AM colonization indices and the frequency of DSE; in the case of F % it turned out to be significant (rs = −0.47, p < 0.0001). Taking into account all samples, the frequency of hyphae vs. sclerotia among clusters of plant species varied significantly; hyphae were significantly more frequent (Chi-squared = 8.702, p = 0.013) in AM plants, i.e. the Aster novi-belgii group followed by the Galinsoga ciliata group (Fig. 2).
The permutation tests based on the RDAs yielded several significant variables which explained differences in AM and DSE colonization (Table 5). This analysis was almost congruent with the comparison of community-level weighted means of trait values among groups. The functional analysis showed that the non-mycorrhizal group is the most functionally rich and is characterized by the lowest functional diversity. In turn, the Aster novi-belgii group, comprising species with higher values of AM and DSE colonization, is the most functionally diverse, followed by the second mycorrhizal group, the Galinsoga ciliata group, in which values of the functional analysis were a little lower (Table 5). This is an interesting result which demonstrates that species which are colonized by fungi are not homogeneous. Analysis of particular variables using RDA can shed more light on the question of which variables make species more vulnerable to fungal colonization. Competitive ability, height and no seed bank were features associated with non-mycorrhizal plants (Table 5). This reflects the influence of the Reynoutria taxa, which can grow up to 3 meters during the growing season and do not form a seed bank. Non-mycorrhizal species showed relatively high values of the Ellenberg moisture index. Once again, the taxa which are confined to moist habitats are the Reynoutria taxa, which are very invasive in river valleys (Gerber et al. 2008). Total invasiveness and the number of habitats invaded, as well as the invaded range in the country, turned out to be significant. The species from the Aster novi-belgii group which are most widely distributed also penetrate more types of habitats (Table 5). A higher mean residence time was found in the Galinsoga ciliata group.
Conclusions
To summarise, we noticed that Asteraceae representatives, native to America, were characterized by the highest values of both AM and DSE colonization. In most of the studied species, taxonomic pattern and AM colonization are significant factors in invasiveness, and the taxa of the Asteraceae family are examples confirming this theory.
On the other hand, taxa from the Polygonaceae family are also indicated as invasive but usually are non-mycorrhizal. Thus it can be inferred that taxonomic pattern better predicts species invasiveness than the presence of AM. Moreover, many neophytes (including invasive species) in Central Europe originated from the temperate forest biome of eastern North America or eastern Asia (Chytrý et al. 2005). The taxa of Reynoutria represent the contrary example of highly invasive plants compared to Asteraceae members. They are of Asiatic origin, non-mycorrhizal and weakly colonized by DSE fungi. It is not known if DSE colonization can enhance the invasiveness of alien plant species because DSE were present in all studied species and in all samples.
To conclude, root endophytes can determine the success of non-native plant species in the process of plant invasion. However, it must be emphasized that for an alien plant species the distribution can be determined by a combination of certain abiotic and biotic variables, and because the group of plants in question is very heterogeneous, it is unlikely that a single hypothesis could explain their invasion success.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. | 6,955.2 | 2016-02-19T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
Reactivity of a triamidoamine terminal uranium(vi)-nitride with 3d-transition metal metallocenes
Reactions between [(TrenTIPS)UVI≡N] (1, TrenTIPS = {N(CH2CH2NSiPri3)3}3−) and [MII(η5-C5R5)2] (M/R = Cr/H, Mn/H, Fe/H, Ni/H) were intractable, but M/R = Co/H or Co/Me afforded [(TrenTIPS)UV=N-(η1:η4-C5H5)CoI(η5-C5H5)] (2) and [(TrenTIPS)UIV–NH2] (3), respectively. For M/R = V/H, [(TrenTIPS)UIV–N=VIV(η5-C5H5)2] (4) was isolated. Complexes 2–4 evidence one-/two-electron uranium reductions, nucleophilic nitrides, and partial N-atom transfer.
area detectors using Cu Kα radiation (λ = 1.54184 Å). Intensities were integrated from data recorded on narrow (1.0°) frames by ω rotation. Cell parameters were refined from the observed positions of all strong reflections in each data set. Gaussian grid face-indexed absorption corrections with a beam profile correction were applied. The structures were solved by direct methods and all non-hydrogen atoms were refined by full-matrix least-squares on all unique F² values with anisotropic displacement parameters, with exceptions noted in the respective cif files. Except where noted, hydrogen atoms were refined with constrained geometries and riding thermal parameters. CrysAlisPro 2 was used for control and integration, SHELXS 3,4 was used for structure solution, and SHELXL 5 and Olex2 6 were employed for structure refinement. ORTEP-3 7 and POV-Ray 8 were employed for molecular graphics. 1H, 13C{1H}, and 29Si{1H} spectra were recorded on a Bruker 400 spectrometer operating at 400.1, 125.8, and 79.5 MHz, respectively; chemical shifts are quoted in ppm and are relative to tetramethylsilane (1H, 13C, 29Si). FTIR spectra were recorded on a Bruker Alpha spectrometer with a Platinum-ATR module in the glovebox. UV/Vis/NIR spectra were recorded on a Perkin Elmer Lambda 750 spectrometer, where data were collected in 1 mm path length cuvettes and were run versus the appropriate reference solvent. Variable-temperature magnetic moment data were recorded in an applied direct current (DC) field of 0.1 or 0.5 Tesla on a Quantum Design MPMS3 superconducting quantum interference device magnetometer using recrystallised powdered samples. Samples were carefully checked for purity and data reproducibility between independently prepared batches.
Samples were crushed with a mortar and pestle under an argon atmosphere and immobilised in an eicosane matrix within a borosilicate glass NMR tube to prevent sample reorientation during measurements. The tube was flame-sealed under dynamic vacuum (1 × 10−3 mbar) to a length of approximately 3 cm and mounted in the centre of a drinking straw, with the straw fixed to the end of an MPMS3 sample rod. Care was taken to ensure complete thermalisation of the sample before each data point was measured by employing delays at each temperature point, and the sample was held at 1.8 K for 60 minutes before isothermal magnetisation measurements to account for slow thermal equilibration of the sample. Diamagnetic corrections were applied using tabulated Pascal constants and measurements were corrected for the effect of the blank sample holders (flame-sealed Wilmad NMR tube and straw) and the eicosane matrix. Elemental microanalyses were carried out by Mr Martin Jennings at the Micro Analytical Laboratory, Department of Chemistry, University of Manchester.
Attempted Reactions of 1 with [MII(η5-C5H5)2] (M = Cr, Mn, Fe, and Ni)
Cr: To a Schlenk flask was added a solid mixture of 1 (0.35 g, 0.40 mmol) and freshly sublimed [CrII(η5-C5H5)2] (0.073 g, 0.40 mmol). At −78 °C, toluene (20 mL) was added, and the solution was allowed to warm to room temperature before being stirred for 16 hours. Volatiles were removed in vacuo to afford a yellow-brown solid. Despite exhaustive attempts, an isolable product could not be obtained. 1H NMR spectroscopic analysis of the solid (C6D6, 298 K) exhibited multiple resonances across the range δ +32 to −14 ppm.
Mn: To a Schlenk flask was added a solid mixture of 1 (0.35 g, 0.40 mmol) and freshly sublimed [MnII(η5-C5H5)2] (0.074 g, 0.40 mmol). At −78 °C, toluene (20 mL) was added, and the solution was allowed to warm to room temperature before being stirred for 16 hours. Volatiles were removed in vacuo to afford a brown solid. Despite exhaustive attempts, an isolable product could not be obtained. 1H NMR spectroscopic analysis of the solid (C6D6, 298 K) exhibited multiple resonances across the range δ +28 to −6 ppm.
Fe: To a Schlenk flask was added a solid mixture of 1 (0.35 g, 0.40 mmol) and freshly sublimed [FeII(η5-C5H5)2] (0.074 g, 0.40 mmol). At −78 °C, toluene (20 mL) was added, and the solution was allowed to warm to room temperature before being stirred for 16 hours. Volatiles were removed in vacuo to afford a brown solid. Despite exhaustive attempts, an isolable product could not be obtained.

Ni: To a Schlenk flask was added a solid mixture of 1 (0.35 g, 0.40 mmol) and freshly sublimed [NiII(η5-C5H5)2] (0.076 g, 0.40 mmol). At −78 °C, toluene (20 mL) was added, and the solution was allowed to warm to room temperature before being stirred for 16 hours. Volatiles were removed in vacuo to afford a brown solid. Despite exhaustive attempts, an isolable product could not be obtained. 1H NMR spectroscopic analysis of the solid (C6D6, 298 K) exhibited multiple resonances across the range δ +28 to −38 ppm.
Preparation of [(TrenTIPS)UV=N-(η1:η4-C5H5)CoI(η5-C5H5)] (2)
To a Schlenk flask was added a solid mixture of 1 (0.35 g, 0.40 mmol) and freshly sublimed [CoII(η5-C5H5)2] (0.151 g, 0.80 mmol). At −78 °C, toluene (20 mL) was added, and the solution was allowed to warm to room temperature before being stirred for 16 hours. Volatiles were removed in vacuo to afford a dark red solid. Soluble residues were then extracted into hexanes (2 × 10 mL), and the resultant dark red solution was concentrated to approximately 5 mL and stored at 5 °C for 24 hours to afford 2 as red crystals, along with both crystalline 1 and [CoII(η5-C5H5)2]. Despite exhaustive efforts, it was not possible to isolate 2 cleanly, with 1H NMR spectroscopic analysis of the solid mixture revealing the three complexes present (1, 2, and [CoII(η5-C5H5)2]) in varying ratios. Further analysis was conducted on this mixed-component solid.
Reaction of 1 with [CoII(η5-C5Me5)2] and Isolation of [(TrenTIPS)UIV-NH2] (3)
To a Schlenk flask was added a solid mixture of 1 (0.35 g, 0.40 mmol) and freshly sublimed [CoII(η5-C5Me5)2] (0.132 g, 0.40 mmol). At −78 °C, toluene (20 mL) was added, and the solution was allowed to warm to room temperature before being stirred for 16 hours. Volatiles were removed in vacuo to afford a turquoise-blue solid. Soluble residues were then extracted into hexanes (2 × 10 mL), and the resultant solution was concentrated to approximately 5 mL and stored at 5 °C for 24 hours to afford 3 as emerald-green crystals. Characterisation data matched that previously reported. 9
Density Functional Theory Calculations
Complex 2 was geometry optimised without constraints, and a single-point energy calculation was then conducted on the optimised coordinates. Attempts to geometry optimise 4 resulted in the central N-atom of the U-N-V unit moving such that the U-N and V-N distances were both ~1.95 Å. Given the poor agreement between geometry-optimised 4 and the experimentally observed crystal structure metrics, we used the crystal structure coordinates, froze the heavy-atom positions, and geometry optimised the H-atom positions. Single-point energy calculations were then performed on the resulting coordinates.
Complex 2 was computed with a doublet (1 unpaired electron) formulation, and given the results of the state-averaged complete active space self-consistent field (SA-CASSCF) calculations below we computed 4 with doublet (1 unpaired electron, 4') and quartet (3 unpaired electrons, 4'') spin states to probe any possible U-V antiferromagnetic coupling. The calculations were performed using the Amsterdam Density Functional (ADF) suite version 2017 with standard convergence criteria [10,11]. The DFT calculations employed Slater-type orbital (STO) triple-ζ-plus-polarisation all-electron basis sets (from the Dirac and ZORA/TZP database of the ADF suite). Scalar relativistic approaches (spin-orbit neglected) within the ZORA Hamiltonian [12-14] were used for the inclusion of relativistic effects, and the local density approximation (LDA) with the correlation potential due to Vosko et al. was used in all of the calculations [15]. Generalised gradient approximation (GGA) corrections were performed using the functionals of Becke and Perdew [16,17]. Natural Bond Order (NBO) and Natural Localised Molecular Orbital (NLMO) analyses were carried out with NBO 6.0.19 [18]. The Quantum Theory of Atoms in Molecules analysis [19,20] was carried out within the ADF program. We quote Nalewajski-Mrozek bond orders since they reproduce expected bond multiplicities reliably in polar heavy-atom structures, whereas Mayer bond orders for polar bonds often do not conform with chemical intuition [21]. The ADF-GUI (ADFview) was used to prepare the three-dimensional plots of the electron density.
State-Averaged Complete Active Space Self-Consistent Field Calculations
SA-CASSCF calculations were performed with OpenMolcas v21.06 [22]. Basis sets were exclusively of the ANO-RCC type, with VTZP quality for U, V and the bridging N atoms, VDZ quality for all other non-H atoms, and MB quality for all H atoms [23-25]. The DKH-2 relativistic Hamiltonian [26] and Cholesky decomposition of the two-electron integrals at a threshold of 10^-8 were employed. The starting materials contain U(VI) (5f0) and V(II) (3d3), but the distribution of oxidation states in 4 is unknown a priori. Preliminary SA-CASSCF calculations with an active space of 3 electrons in 12 orbitals (3d and 5f), examining high-spin (Stot = 3/2) and low-spin (Stot = 1/2) multiplicities, suggest that the ground state is dominated by U(IV) (5f2) and V(IV) (3d1) configurations. Hence, it appears that [V(Cp)2] has doubly reduced [U(N)(TrenTIPS)] in this case. The 3d1 configuration for V(IV) defines five SV = 1/2 roots in the basis of configuration state functions (CSFs), while the 5f2 configuration for U(IV) defines 21 SU = 1 and 28 SU = 0 roots; this would lead to 105 Stot = 3/2 and 245 Stot = 1/2 roots excluding charge transfer (CT) excitations. However, it is likely that this excitation space would be polluted with ligand-to-metal CT, metal-to-ligand CT or intervalence CT states. Hence, we performed a SA-CASSCF calculation restricted to the subspace defined by the product of the 2D term of V(IV) and the lowest-lying 3H term of U(IV), comprising 55 Stot = 3/2 and 55 Stot = 1/2 roots, and then mixed the resulting states with spin-orbit (SO) coupling. We note, however, that due to covalency and crystal field splitting the assignment of these roots to the 2D ⊗ 3H space is only approximate. Indeed, the resulting states should include mK, mI, mH, mG and mF terms (where m = 4 for Stot = 3/2 and m = 2 for Stot = 1/2); however, upon projection of the wavefunction onto an angular momentum basis using irreducible spherical tensor methods [27], only the mH, mG and mF terms were found. Hence, we then performed a CAS configuration interaction (CI) calculation for 105 Stot = 3/2 and 175 Stot = 1/2 roots (followed by SO coupling), using the optimised orbitals, to attempt to capture the 3H, 3F, 3P, 1G and 1D terms of U(IV). Projection of the resulting wavefunctions yielded more components of the desired spectrum (Table S4); however, even then the maximal angular momentum of the mK terms (Ltot = LV + LU = 2 + 5 = 7) is not realised. This indicates that the significant covalency and crystal field splitting of the 3d and 5f orbitals quenches the total orbital angular momentum of the complex. Nonetheless, we can project the SO states onto an angular momentum basis to inspect the low-lying spectrum of 4 (Table S4). The ground Kramers doublet is dominated by Stot = 1/2 states, suggesting that the interaction between V(IV) and U(IV) is antiferromagnetic. Clearly, though, the states are very mixed in terms of angular momentum projections, which arises from a non-trivial interplay between exchange coupling, crystal field splitting (covalency) and SO coupling effects. The ground Kramers doublet has easy-axis anisotropy parallel to the U-V vector (Figure S33), and while some other doublets share this axis as one of their principal g-vectors, many do not and many show significant rhombicity. Hence, we have also reported the effective g-value along the U-V vector (gzz in Table S5).
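The root counts quoted above follow from straightforward angular-momentum bookkeeping. The sketch below, an illustrative aid rather than part of the published workflow, reproduces the 105/245, 55/55 and 105/175 figures from the stated term degeneracies.

```python
from collections import defaultdict

def couple_spins(u_manifolds, s_v):
    """Count product roots per total spin for U terms coupled to one V spin.

    u_manifolds: list of (S_U, n_spatial_roots); s_v: spin of the V centre.
    Spatial degeneracies ride along unchanged; only the spins are coupled.
    """
    totals = defaultdict(int)
    for s_u, n_u in u_manifolds:
        s = abs(s_u - s_v)
        while s <= s_u + s_v:  # Clebsch-Gordan series |S_U - S_V| ... S_U + S_V
            totals[s] += n_u
            s += 1.0
    return dict(totals)

n_v = 5  # five 3d^1 (S_V = 1/2) spatial roots of V(IV)

# Full 5f^2 manifold of U(IV): 21 triplet (S_U = 1) and 28 singlet spatial roots.
print(couple_spins([(1.0, 21 * n_v), (0.0, 28 * n_v)], 0.5))
# -> {0.5: 245, 1.5: 105}: 105 S_tot = 3/2 and 245 S_tot = 1/2 roots

# 2D (x) 3H subspace: 5 x 11 = 55 spatial products, all built on S_U = 1.
print(couple_spins([(1.0, 5 * 11)], 0.5))
# -> {0.5: 55, 1.5: 55}

# CAS-CI space: 3H + 3F + 3P triplets (11 + 7 + 3 = 21 spatial roots) plus
# 1G + 1D singlets (9 + 5 = 14), each multiplied by the five V roots.
print(couple_spins([(1.0, 21 * n_v), (0.0, 14 * n_v)], 0.5))
# -> {0.5: 175, 1.5: 105}
```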
Figure S16. UV/Vis/NIR spectrum of 4 (black line) and of 2 obtained by a subtraction method (red line).
Figure S17. UV/Vis/NIR spectrum of 4 (black line) and of 2 obtained by a subtraction method (red line).
Figure S22. X-band (9.38 GHz) EPR spectrum of a powdered sample of 4 at 15 K (black line).
Figure S24. X-band (9.38 GHz) EPR spectrum of a powdered sample of 4 at 7 K (black line).
Figure S25. X-band (9.38 GHz) EPR spectrum of a powdered sample of 4 at 7 K (black line).
Figure S32. Illustration of the easy axis anisotropy of the Kramers doublet of 4.
"Chemistry"
] |
CTCF mediates chromatin looping via N-terminal domain-dependent cohesin retention
Significance The DNA-binding protein CCCTC-binding factor (CTCF) and the cohesin complex function together to establish chromatin loops and regulate gene expression in mammalian cells. It has been proposed that the cohesin complex moving bidirectionally along DNA extrudes the chromatin fiber and generates chromatin loops when it pauses at CTCF binding sites. To date, the mechanisms by which cohesin localizes at CTCF binding sites remain unclear. In the present study we define two short segments within the CTCF protein that are essential for localization of cohesin complexes at CTCF binding sites. Based on our data, we propose that the N-terminus of CTCF and 3D geometry of the CTCF–DNA complex act as a roadblock constraining cohesin movement and establishing long-range chromatin loops.
Fig. S1. Overlap of CTCF and cohesin occupancy in multiple cell lines. (A)
Venn diagram representing the overlap of CTCF and RAD21 ChIP-seq binding regions mapped in MCF7 cells. (B) Heatmaps of CTCF (red), RAD21 (pink) and IgG (black) occupancy at genomic regions bound either by CTCF or RAD21 or both in MCF7 cells demonstrate that CTCF and RAD21 peaks not overlapping with each other still show some enrichment of RAD21 and CTCF occupancy, respectively. The heatmaps correspond to the overlapping CTCF and RAD21 binding sites in panel (A), with the connection between the two panels shown by black arrows. (C) Average profiles of CTCF (blue), RAD21 (pink), and IgG (green) occupancy at the binding sites determined in panel A confirm the enrichment of RAD21 and CTCF occupancy at CTCF and RAD21 peaks not overlapping with each other, respectively. The connection between the two panels (B and C) is shown by black brackets. (D) Based on the enrichment of CTCF and RAD21 occupancies mapped in human (MCF7 and HEPG2) and mouse (mES and CH12) cells, three classes of CTCF and RAD21 sites were identified (labelled on the left). Venn diagrams illustrate the overlap of these three classes between the two human and the two mouse cell types. The percentages show how many of the sites from the cell type with the smaller number of sites overlap with the cell type with the larger number of sites. CTCF-and-RAD21 sites were the most reproducible, followed by CTCF sites depleted of RAD21 enrichment, while CNC sites were more cell type-specific.
Fig. S2. The cohesin loading factor NIPBL is sufficient to explain CTCF-independent cohesin occupancy in different mouse and human cell lines, while the tissue-specific transcription factors ESR1 (MCF7), CEBPA (HepG2), OCT4 (mESC) do not generally overlap with cohesin (RAD21). (A)
Heatmaps of ESR1 (purple), CTCF (red), RAD21 (pink) and NIPBL (green) occupancy at 51,395 ESR1 binding sites mapped in MCF7 cells. (B) Genome browser view of CTCF, RAD21 and ESR1 ChIP-seq data in MCF7 cells confirms that ESR1 binding sites generally do not coincide with cohesin occupancy. (C) Heatmaps of CEBPA (black), CTCF (red), RAD21 (pink), and NIPBL (green) occupancy at 89,721 CEBPA binding sites mapped in HepG2 cells demonstrate that CEBPA binding sites generally do not coincide with CTCF-depleted cohesin binding sites. (D) Genome browser view of CTCF, RAD21, NIPBL, and CEBPA occupancy in HepG2 cells shows that RAD21 sites depleted of CTCF correspond better to NIPBL binding sites than to CEBPA sites, highlighted by red arrows. (E) Heatmaps of OCT4 (blue), CTCF (red), RAD21 (pink) and NIPBL (green) occupancy at 32,338 OCT4 binding sites mapped in mESCs.

Fig. S5. Comparison of the 5K lost CTCF sites with all CTCF sites mapped in CH12 cells with respect to their genomic distribution and their association with epigenetic marks and transcription factors. (A) Genomic distribution of the remaining (56K) and the lost (5K) CTCF ChIP-seq peaks in mut CH12 cells in comparison with the genomic distribution of all CTCF peaks (61K) mapped in wt CH12 cells (left). The lost 5K sites and the remaining 56K CTCF sites showed a similar distribution with respect to genomic context. (B) A heatmap showing row z-scores of overlapping ChIP-seq data for multiple transcription factors and histone modifications (labeled at the right of the heatmap) with the 5K lost CTCF sites (Lost 5K), with 5K sites randomly selected from the total of 61K CTCF sites mapped in wt CH12 cells (Random 5K), and with all 61K CTCF sites mapped in wt CH12 cells (Total 61K). Venn diagrams represent the overlap of CTCF ChIP-seq data with RNA-seq data in wt and mut CH12 cells: the genes that significantly changed expression upon deletion of ZFs 9-11 in CTCF overlapped with the CTCF peaks lost in mut CH12 cells. Gene coordinates were extended 100 kb up- and downstream of their transcription start and end sites. The majority (60%) of deregulated genes had a lost CTCF peak within 100 kb, suggesting that they might be direct targets of CTCF. (C) The major pathways that are significantly deregulated in mut CH12 cells compared to wt CH12 cells.

Fig. S8. Ectopically expressed CTCF constructs restore CTCF occupancy at the majority of CTCF sites lost in mut CH12 cells. (A) Heatmaps demonstrating that V5-tagged full-length (FL) CTCF restores CTCF occupancy at the 5K lost sites in mut CH12 cells. FL-CTCF was mapped by ChIP-seq with both CTCF and V5-tag Abs, shown at the top of the heatmap in comparison with CTCF occupancy in both wt and mut CH12 cells. (B) Heatmaps showing 188 lost CTCF sites that do not restore occupancy upon ectopic expression of CTCF in mut CH12 cells. (C) Heatmaps demonstrating that the binding pattern of FL-CTCF and truncated mutants in mut CH12 cells generally reproduced that of full-length CTCF, including the occupancy at the 5K lost CTCF sites. (D) Genome browser view of CTCF, V5, and RAD21 ChIP-seq data mapped in wt and mut CH12 cells. The App promoter, residing in a CpG island (green track), contains one of the 188 "permanently" lost CTCF sites (B) (shown by red arrows).

Heatmaps of RAD21 (pink) and Chimera2 (purple) occupancy at the 5K lost CTCF sites demonstrate an overall gain of cohesin occupancy following the gain of Chimera2 occupancy, albeit to a lower extent than with FL-CTCF stably expressed in mut CH12 cells. K-means ranked clustering of ChIP-seq data along the 5K lost CTCF sites shows that only some of them were enriched with cohesin, reflecting Chimera2 occupancy, while the majority of the lost CTCF sites were occupied by cohesin following FL-CTCF occupancy. The clusters shown on the right side of the heatmap explain the observed patterns.

No co-immunoprecipitation of CTCF with any cohesin subunit was detected when DNA-assisted protein interactions were inhibited by ethidium bromide. V5-tag and cMYC Abs were used as negative controls; YY1 and PARP1 Abs were used as positive controls for CTCF. All four cohesin subunits (RAD21, SMC1, SMC3, and SA2) were co-immunoprecipitated together. The asterisk shows a nonspecific band not corresponding to the molecular weight of the SMC3 protein.

EMSA was performed with CTCF-cohesin overlapping nuclear extract fractions to see whether the proteins form a stable complex that can be supershifted with both CTCF and RAD21 Abs. The labelled DNA-protein complexes could be supershifted with antibodies against CTCF and BORIS, a known interacting partner of CTCF, but not with antibodies against the cohesin subunit RAD21, thus confirming the absence of CTCF-cohesin complexes in the nuclear extracts, consistent with our co-IP results (Fig. S23). The 32P-labelled p53 promoter probe, described in (PMID: 26268681), was used in the EMSA assay. The black arrows show the supershift with both CTCF and BORIS Abs, but not with RAD21 Abs (red arrows).

Aa sequences highlighted in red, blue, green, black and purple belong to CTCF, BORIS, AZF, flexible-linker and V5-tag peptides, respectively. CTCF and BORIS ZFs are underlined; the N-terminus of both proteins is in bold. In sequence #16, the amino acids shown to be poly(ADP)ribosylated are replaced by alanine (A, highlighted in black and underlined).

Selection of chromatin loops for the Hi-C analysis in Fig. 4: First, we selected 2,529 CTCF-anchored chromatin loops by overlapping CTCF ChIP-seq data with a deeply sequenced Hi-C dataset in which 3,331 chromatin loops were identified in wt CH12 cells (PMID: 25497547). Second, we selected 344 loops that overlapped with the 5K lost CTCF sites at one or both anchors. Third, we sorted out 70 loops for Hi-C analysis by removing the short-range loops (those spanning less than 300 kb).
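A minimal sketch of this three-step loop selection is shown below, assuming simple (chrom, start, end) site tuples, loop records carrying both anchors, the ≥1 bp overlap rule used elsewhere in these methods, and a midpoint-to-midpoint span; this is an illustration, not the authors' actual code.

```python
# Illustrative reimplementation of the three loop-selection steps above;
# tuple layouts and the span definition are assumptions.

def overlaps(a_start, a_end, b_start, b_end):
    """True if two intervals share at least 1 bp."""
    return a_start < b_end and b_start < a_end

def anchor_hits(chrom, anchors, sites):
    """Does any loop anchor overlap any site on the same chromosome?"""
    return any(overlaps(s, e, ps, pe)
               for (s, e) in anchors
               for (pc, ps, pe) in sites if pc == chrom)

def select_loops(loops, ctcf_peaks, lost_sites, min_span=300_000):
    """loops: (chrom, x_start, x_end, y_start, y_end); sites: (chrom, start, end)."""
    step1 = [lp for lp in loops
             if anchor_hits(lp[0], [(lp[1], lp[2]), (lp[3], lp[4])], ctcf_peaks)]
    step2 = [lp for lp in step1
             if anchor_hits(lp[0], [(lp[1], lp[2]), (lp[3], lp[4])], lost_sites)]
    step3 = [lp for lp in step2
             if (lp[3] + lp[4]) / 2 - (lp[1] + lp[2]) / 2 >= min_span]
    return step1, step2, step3  # expected sizes here: 2529, 344, 70
```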
Bioinformatic analysis of ChIP-seq data
Single-end sequences (36-60 bp reads) generated on the Illumina Genome Analyzer were aligned against either the human (build hg19) or mouse (build mm9) genome using the Bowtie program with default parameters (8), except that sequence tags mapping to more than one location in the genome were excluded from the analysis using the -m1 option. Peaks were called using Model-based Analysis for ChIP-seq (MACS2) with default parameters (https://github.com/taoliu/MACS). The ChIP-seq data were visualized using the Integrative Genomics Viewer (IGV) (9). Peak overlaps between ChIP-seq data sets were determined with the BedTools suite (10). We defined peaks as overlapping if at least 1 bp of each peak overlapped. The normalized tag density profiles were generated using the coverage option of the BedTools suite (10), normalized to the number of mapped reads, and plotted in Microsoft Excel. The heatmaps and the average profiles of ChIP-seq tag densities for different clusters were generated using the seqMINER 1.3.3 platform (11). We used the k-means ranked method for clustering normalization. Position weight matrices were calculated using Multiple EM for Motif Elicitation (MEME) software (12). The sequences under the summits of ChIP-seq peaks were extended 100 bp upstream and downstream for motif discovery. We ran MEME with the parameters (-mod oops -revcomp -w 40 or -w 20) to identify the long and short CTCF motifs, considering both DNA strands. The genomic distribution of CTCF ChIP-seq peaks relative to reference genes was determined using the Cis-regulatory Element Annotation System (CEAS) (13). To call the genomic regions bound either by CTCF or RAD21 or by both proteins in the four cell lines (Fig. 1C-D, F), we calculated CTCF and RAD21 ChIP-seq tag densities at each binding region. For this, we combined the CTCF and RAD21 binding sites into a composite set, extended the summit of each peak to 300 bp, and calculated either the CTCF or the RAD21 normalized ChIP-seq tag density at each binding region using the BedTools coverage option. We classified the sites as "Cohesin-Non-CTCF" or "CTCF depleted of RAD21" if the difference in tag density between the two factors was more than 3-fold at the binding region. To calculate the percent of cohesin (RAD21) occupancy at the lost CTCF sites in Fig. 7, we calculated RAD21 ChIP-seq tag densities (normalized to the number of mapped reads) mediated by the ectopic expression of either the empty vector, FL-CTCF, or the chimeric or mutant constructs in mut CH12 cells. The RAD21 ChIP-seq tag density at the lost CTCF sites following ectopic expression of the empty vector was taken as 0% cohesin occupancy, and that following expression of FL-CTCF as 100%. The percent of cohesin occupancy at the lost CTCF sites for the chimeric and mutant proteins was calculated on the scale between 0% and 100%. In the case of the chimeric proteins (Fig. 7B), we calculated the RAD21 ChIP-seq tag density only at the lost CTCF sites that have a similar occupancy (ChIP-seq tag density) for FL-CTCF and the corresponding chimeric protein. All ChIP-seq data have been deposited in the Gene Expression Omnibus (GEO) repository under accession number GSE137216.
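A minimal sketch (not the authors' code) of two calculations described above follows: the 3-fold tag-density rule used to classify composite binding regions, and the linear 0-100% scaling of RAD21 occupancy between the empty-vector and FL-CTCF baselines; all numbers are illustrative.

```python
# Illustrative reimplementation of the site classification and the
# percent-occupancy scaling described in the methods above.

def classify_site(ctcf_density, rad21_density, fold=3.0):
    """Classify a composite binding region from normalized tag densities."""
    if ctcf_density > fold * rad21_density:
        return "CTCF depleted of RAD21"
    if rad21_density > fold * ctcf_density:
        return "Cohesin-Non-CTCF (CNC)"
    return "CTCF-and-RAD21"

def percent_cohesin_occupancy(density, empty_vector, fl_ctcf):
    """Scale a RAD21 tag density so empty vector = 0% and FL-CTCF = 100%."""
    return 100.0 * (density - empty_vector) / (fl_ctcf - empty_vector)

print(classify_site(ctcf_density=45.0, rad21_density=9.0))
# -> CTCF depleted of RAD21
print(percent_cohesin_occupancy(7.5, empty_vector=2.0, fl_ctcf=12.0))
# -> 55.0 (% cohesin occupancy for, e.g., a chimeric construct)
```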
RNA-seq
The RNA sequencing library preparation and sequencing procedures were carried out according to Illumina protocols. FASTQ files were mapped to the UCSC mouse reference (mm9) using TopHat2 (14) with the default parameter setting of 20 alignments per read and up to two mismatches per alignment. The aligned reads (BAM files) were analyzed with Cufflinks 2.0 to estimate relative transcript abundance using the UCSC reference annotated transcripts (mm9). The expression of each transcript was quantified as the number of reads mapping to the transcript, divided by the transcript length in kilobases and the total number of mapped reads in millions (FPKM). Cuffdiff was applied to obtain the list of deregulated genes. Transcripts with more than 2-fold changes in expression and a p-value of less than 0.005 were used for further analysis. RNA-seq data have been deposited in the GEO repository under accession number GSE137216.
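The FPKM definition above translates directly into code; the following minimal sketch (with illustrative numbers) makes the normalization explicit.

```python
# FPKM as defined above: reads mapping to a transcript, divided by transcript
# length in kilobases and total mapped reads in millions.

def fpkm(transcript_reads, transcript_length_bp, total_mapped_reads):
    kilobases = transcript_length_bp / 1_000
    millions = total_mapped_reads / 1_000_000
    return transcript_reads / (kilobases * millions)

# e.g., 500 reads on a 2 kb transcript in a 25 M-read library:
print(fpkm(500, 2_000, 25_000_000))  # -> 10.0
```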
Hi-C
In situ Hi-C experiments were performed as previously described, using the MboI restriction enzyme (15). The crosslinked pellets (1.5 million cells) were incubated and washed with 200 μL of lysis buffer (10 mM Tris-HCl pH 8.0, 10 mM NaCl, 0.2% Igepal CA630, 33 μL Protease Inhibitor (Sigma, P8340)) on ice, and then incubated in 50 μL of 0.5% SDS for 10 min at 62°C. After heating, 170 μL of 1.47% Triton X-100 was added and the samples were incubated for 15 min at 37°C. To digest chromatin, 100 U MboI and 25 μL of 10X NEBuffer2 were added, followed by overnight incubation at 37°C. The digested ends were filled in and labeled with biotin by adding 37.5 μL of 0.4 mM biotin-14-dATP (Life Tech), 1.5 μL each of 10 mM dCTP, 10 mM dTTP, and 10 mM dGTP, and 8 μL of 5 U/μL Klenow (New England Biolabs), and incubating at 23°C for 60 minutes with shaking at 500 rpm on a thermomixer. The samples were then mixed with 1x T4 DNA ligase buffer (New England Biolabs), 0.83% Triton X-100, 0.1 mg/mL BSA, and 2000 U T4 DNA Ligase (New England Biolabs, M0202), and incubated at 23°C for 4 hours to ligate the ends. After the ligation reaction, samples were resuspended in 550 μL 10 mM Tris-HCl, pH 8.0. To reverse the crosslinks, 50 μL of 20 mg/mL Proteinase K (New England Biolabs) and 57 μL of 10% SDS were mixed with the samples and incubated at 55°C for 30 minutes; then 67 μL of 5 M NaCl was added, followed by overnight incubation at 68°C. After cooling to room temperature, a 0.8X Ampure (Beckman-Coulter) purification was performed, and the samples were sonicated to a mean fragment length of 400 bp using a Covaris M220. Two rounds of Ampure (Beckman-Coulter) bead purification were performed for size selection. Biotin-labeled DNA was purified using Dynabeads MyOne T1 Streptavidin beads (Invitrogen). The beads were washed with 400 μL of 1x Tween Wash Buffer (5 mM Tris-HCl pH 7.5, 0.5 mM EDTA, 1 M NaCl, 0.05% Tween-20) and resuspended in 300 μL of 2x Binding Buffer (10 mM Tris-HCl pH 7.5, 1 mM EDTA, 2 M NaCl). The beads were added to the samples and incubated for 15 minutes at room temperature. The beads were then washed twice by adding 600 μL of 1x Tween Wash Buffer, and equilibrated once in 100 μL 1x NEB T4 DNA ligase buffer (New England Biolabs), followed by removal of the supernatant using a magnetic rack. To repair the fragmented ends, the beads were resuspended in 100 μL of the following: 88 μL 1X NEB T4 DNA ligase buffer (New England Biolabs, B0202), 2 μL of 25 mM dNTP mix, 5 μL of 10 U/μL T4 PNK (New England Biolabs), 4 μL of 3 U/μL NEB T4 DNA Polymerase (New England Biolabs), and 1 μL of 5 U/μL Klenow (New England Biolabs). The beads were incubated for 30 minutes at room temperature and then washed twice by adding 600 μL of 1x Tween Wash Buffer. To add a dA-tail, the beads were resuspended in 100 μL of the following: 90 μL of 1X NEBuffer2, 5 μL of 10 mM dATP, and 5 μL of 5 U/μL Klenow (exo-) (New England Biolabs). The beads were incubated for 30 minutes at 37°C and washed twice by adding 600 μL of 1x Tween Wash Buffer. Following the washes, the beads were equilibrated once in 100 μL 1x NEB Quick Ligation Reaction Buffer (New England Biolabs) and the supernatants were removed using a magnetic rack. The beads were then resuspended in 50 μL 1x NEB Quick Ligation Reaction Buffer. To ligate adapters, 2 μL of NEB DNA Quick Ligase (New England Biolabs) and 3 μL of an Illumina indexed adapter were added to the beads and incubated for 15 minutes at room temperature.
The supernatant was removed and the beads were washed twice by adding 600 μL of 1x Tween Wash Buffer. The beads were then resuspended in 100 μL 10 mM Tris-HCl, pH 8.0, followed by removal of the supernatant and resuspension in 50 μL 10 mM Tris-HCl, pH 8.0. After determining the optimal PCR cycle number using the KAPA DNA Quantification kit (Kapa Biosystems), 6 cycles of PCR amplification were performed with the following reaction mixture: 10 μL Phusion HF Buffer (New England Biolabs), 3.125 μL 10 μM TruSeq Primer 1, 3.125 μL 10 μM TruSeq Primer 2, 1 μL 10 mM dNTPs, 0.5 μL Phusion HotStart II, 20.75 μL ddH2O, and 11.5 μL bead-bound Hi-C library. PCR products were subjected to a final purification using AMPure beads (Beckman-Coulter) and eluted in 30 μL 10 mM Tris-HCl, pH 8.0. Libraries were sequenced on the Illumina HiSeq 4000 platform. Hi-C data have been deposited in the GEO repository under accession number GSE136122.
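For bookkeeping, the PCR reaction mixture quoted above can be captured as a small configuration table; the sketch below is illustrative only (the 10% overage and the rounding are assumptions, not part of the protocol) and simply checks that the components total 50 μL before scaling a master mix for several libraries.

```python
# The Hi-C library PCR mix quoted above, captured as a config (volumes in uL).

PCR_MIX_UL = {
    "Phusion HF Buffer":        10.0,
    "10 uM TruSeq Primer 1":    3.125,
    "10 uM TruSeq Primer 2":    3.125,
    "10 mM dNTPs":              1.0,
    "Phusion HotStart II":      0.5,
    "ddH2O":                    20.75,
    "Bead-bound Hi-C library":  11.5,
}

def master_mix(n_reactions, overage=1.1):
    """Scale each shared component with 10% overage (assumed); the
    bead-bound library is added per reaction, so it is excluded here."""
    return {k: round(v * n_reactions * overage, 2)
            for k, v in PCR_MIX_UL.items() if k != "Bead-bound Hi-C library"}

print(sum(PCR_MIX_UL.values()))  # -> 50.0 uL total reaction volume
print(master_mix(8))             # shared components for eight libraries
```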
Hi-C data analysis
Hi-C reads (paired-end, 50 bases) were aligned against the mm9 genome using BWA-MEM (16). PCR duplicate reads were removed using Picard MarkDuplicates. We used Juicebox (17) to create .hic files with the -q 30 and -f options and to visualize the Hi-C data. Aggregate analysis of chromatin loops was performed using APA (17) with default parameters at 10 kb resolution. The list of chromatin loops identified in wild-type CH12 cells was downloaded from (15).
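A minimal sketch of the read filtering described above (duplicate removal plus the MAPQ ≥ 30 threshold applied when building the .hic file) follows; the original analysis used Picard and Juicebox, so this pysam version, with hypothetical file names, is purely illustrative.

```python
# Illustrative pysam filter mirroring Picard's duplicate flags and the
# -q 30 mapping-quality threshold; file names are hypothetical.

import pysam

def filter_hic_reads(in_bam="aligned.markdup.bam",
                     out_bam="filtered.bam", min_mapq=30):
    with pysam.AlignmentFile(in_bam, "rb") as src, \
         pysam.AlignmentFile(out_bam, "wb", template=src) as dst:
        kept = 0
        for read in src:
            # drop PCR duplicates flagged by MarkDuplicates and
            # low-confidence alignments below the MAPQ cutoff
            if read.is_duplicate or read.mapping_quality < min_mapq:
                continue
            dst.write(read)
            kept += 1
    return kept
```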
Published next-generation experiments used in this study
ChIP-seq data for CTCF and RAD21 in the K562 and HEPG2 cell lines used in the study: GSE32465 (18), GSE38163 (19), GSE30263 (20), GSE25021 (21), GSE36030 (19).

Co-immunoprecipitation

Antibody-coupled DiaMag protein G-coated magnetic beads were blocked in PBS supplemented with 0.5% bovine serum albumin (BSA) for 2 h at room temperature under constant rotation. After 2 h of incubation, the beads were washed three times with PBS + 0.5% BSA. Protein extracts of wt CH12 cells were prepared with RIPA lysis buffer (Millipore) containing 50 mM Tris-HCl, pH 7.4, 1% Nonidet P-40, 0.25% sodium deoxycholate, 500 mM NaCl, 1 mM EDTA, and 1× protease inhibitor cocktail (Roche Applied Science). Next, the antibody-bound beads were incubated with 1.5 mg of protein extract in the presence of ethidium bromide (100 μg/μL) overnight at 4°C with constant rotation. Of note, the protein extracts were pre-cleared with 30 µL of DiaMag protein G-coated magnetic beads for 2 h under constant rotation at 4°C. The immunoprecipitates were collected using a magnetic rack, washed five times with PBS + 0.5% BSA, dissolved in 1X LDS sample buffer (Invitrogen) supplemented with DTT (50 mM final concentration), and boiled for 5 min at 90°C. Immunoprecipitated samples were resolved by SDS-PAGE, transferred to a PVDF membrane, and incubated with the indicated antibodies. Detection was performed using ECL reagents.
"Biology"
] |
Prevalence and Antibiotic Resistance Pattern of Acinetobacter Isolated from Patients Admitted to ICUs in Mazandaran, Northern Iran
Background and Purpose: The antibiotic resistance rate is increasing in Acinetobacter species, especially in Acinetobacter baumannii, the most important hospital- and ICU-acquired pathogen. This research aimed to evaluate the antibiotic resistance rate of Acinetobacter spp. isolated from patients admitted to ICUs in educational hospitals affiliated with Mazandaran University of Medical Sciences. Methods: In this cross-sectional descriptive study, 50 Acinetobacter isolates were collected during 2013-2014. After confirming the Acinetobacter species, antibacterial sensitivity testing was performed using the disc diffusion method, and the minimum inhibitory concentration (MIC) was evaluated by E-test in all isolates. Results: The disc diffusion method revealed that 100% of isolates were resistant to Amikacin and Cefepime, and 96% were resistant to both Meropenem and Ciprofloxacin; 6% were sensitive, 18% intermediate, and 76% resistant to Imipenem. Also, 84% of isolates were sensitive and 16% resistant to Colistin. In the E-test method, 92% of isolates were sensitive and 8% resistant to Colistin. Moreover, one isolate was sensitive, one was intermediate, and the remaining isolates were resistant to Ciprofloxacin, and 100% of isolates were resistant to the other antibiotics in the E-test. Over 96% of Acinetobacter isolates were resistant to the antibiotics frequently used in the ICU (Ciprofloxacin, Meropenem, Amikacin, and Cefepime). Colistin was found to be the only appropriate antibiotic that could be used for patients in the ICU. Conclusion: We hope these results can change the attitude of physicians toward using antibiotics in ICUs and encourage them to follow antibiotic stewardship as the only effective strategy to somewhat control antibiotic resistance.
Introduction
Acinetobacter is a group of ever-evolving opportunistic pathogens that affects various groups of people, especially patients hospitalized in the ICU (Ramphal & Ambrose, 2006). It is a non-fermentative, nonmotile, gram-negative bacterium commonly found in water and soil. Acinetobacter lives as normal flora in the oropharynx of healthy people and in recent years has been reported as a major factor in hospital infection (Hanlon, 2005; Ku et al., 2000). The most important species of this organism is Acinetobacter baumannii, which causes different types of infections, including respiratory system infection, urinary tract infection, and blood and wound infection, particularly in the ICU. Moreover, this organism forms biofilms on abiotic surfaces, a phenotype that may underlie its ability to survive in nosocomial environments and to cause device-associated infections in immunocompromised cases (He et al., 2015). In recent years, different species of Acinetobacter have become even more resistant (Coelho et al., 2004).
Acinetobacter has acquired resistance to most available antimicrobial agents, including Aminoglycosides, Quinolones, and extended-spectrum beta-lactams. The high rate of antimicrobial resistance of these bacteria, along with their endurance and survival ability, has made them a serious threat to hospital environments in developed countries and elsewhere. Most of the species are resistant to Cephalosporins, and resistance to Carbapenems is steadily growing as well (Shakibaie et al., 2011).
Acinetobacter baumannii is the most widespread opportunistic pathogen responsible for hospital infection; it frequently causes severe infections in high-risk populations, including elderly people, premature infants, newborns, surgical patients, individuals undergoing peritoneal dialysis, patients with tracheostomy tubes, severely burned patients, those with tracheal intubation, mechanical ventilation, or intravenous catheters, and people who are treated with extended-spectrum antibiotics or immunosuppressives (Zheng et al., 2013).
Given the increasing reports of Acinetobacter species isolated from patients, especially in the ICU, the growing resistance of Acinetobacter strains to new and available antibiotics, and the high cost of antibiotic use, recognizing the resistance patterns is necessary in every center. On the other hand, isolating Acinetobacter species from clinical samples does not necessarily prove the presence of infection and should be evaluated according to the clinical condition of the patient. In this situation, choosing the right method of treatment requires knowing the periodic pattern of resistance. Therefore, in addition to studying the clinical importance of the isolated species of Acinetobacter and determining the risk factors associated with clinical infection, in this study we aimed to investigate the resistance rate of species isolated from patients in four ICUs of educational hospitals of Mazandaran University of Medical Sciences.
Materials and Methods
In this study, 50 Acinetobacter isolates were collected from ICUs in educational hospitals affiliated with Mazandaran University of Medical Sciences (Razi, Imam Khomeini, Fatemeh Zahra, and Bu Ali Sina). After transfer of the clinical samples to the lab (Department of Microbiology, Sari Medical School), they were cultured on blood agar and eosin methylene blue (EMB) media. Plates were then incubated under aerobic conditions at 37°C for 24 hours. Subsequently, the cultures were examined, and if any growth was seen, identification tests were performed immediately. Gram staining was carried out, gram-negative bacilli were identified, and the oxidase test was performed. Then, using biochemical tests including the citrate, urea, indole, motility, and oxidative-fermentation tests, as well as growth at 42-44°C, the presence of Acinetobacter was confirmed. Antibacterial sensitivity was then determined using the disc diffusion method on Mueller-Hinton agar according to the CLSI standard. For this purpose, a microbial suspension matching the turbidity of the 0.5 McFarland standard was prepared and spread on the aforementioned medium. For the isolated Acinetobacter, the antibiogram test was conducted using antibiotic discs of Colistin (110 µg), Cefepime (30 µg), Amikacin (30 µg), Imipenem (30 µg), Meropenem (30 µg), and Ciprofloxacin (30 µg). The E-test method was used to determine the MIC in isolates that exhibited high resistance in the disc diffusion method. E-test strips were obtained from HiMedia (India) and Liofilchem (Italy). All patients were followed for 3 to 6 weeks, and the outcome, including death or recovery, was recorded. If patients were discharged, their condition was followed up via phone contact.
Statistical Analysis
Data were collected, and qualitative data were analyzed in SPSS (ver. 17), while quantitative data were analyzed using ANOVA.
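For transparency, the percentages reported for the disc diffusion results (n = 50) can be reproduced from the underlying counts. The snippet below is purely illustrative: the sensitive/intermediate/resistant counts are back-calculated from the percentages quoted in this paper rather than taken from the raw data, and the split of the 4% non-resistant Meropenem isolates is an assumption.

```python
# Back-calculated tabulation of the disc diffusion results for n = 50 isolates.

N = 50
disc_counts = {                      # (sensitive, intermediate, resistant)
    "Amikacin":      (0, 0, 50),     # 100% resistant
    "Cefepime":      (0, 0, 50),     # 100% resistant
    "Meropenem":     (2, 0, 48),     # 96% resistant; S/I split assumed
    "Ciprofloxacin": (2, 0, 48),     # 96% resistant, 2 sensitive
    "Imipenem":      (3, 9, 38),     # 6% S, 18% I, 76% R
    "Colistin":      (42, 0, 8),     # 84% S, 16% R
}

for drug, (s, i, r) in disc_counts.items():
    assert s + i + r == N            # sanity check: counts cover all isolates
    print(f"{drug:14s} S {100*s/N:5.1f}%  I {100*i/N:5.1f}%  R {100*r/N:5.1f}%")
```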
Discussion
This study aimed to evaluate the prevalence of antibiotic resistance of Acinetobacter species isolated from 50 patients hospitalized in the ICUs of four university-affiliated hospitals. Environmental flexibility and a broad spectrum of resistance determinants have made Acinetobacter a very dangerous nosocomial pathogen (Nordmann, 2004). There are many accounts of multidrug-resistant (MDR) Acinetobacter baumannii in the hospitals of Europe, North America, Argentina, Brazil, China, Taiwan, Hong Kong, Japan, Korea, and even remote regions such as Haiti and the South Pole (Barbolla et al., 2003; Houang et al., 2001; Levin et al., 1996; Liu et al., 2006; Naas et al., 2005; Nishio et al., 2004). The diffusion of MDR strains often causes epidemics at the level of cities, countries, and continents (Barbolla et al., 2003; Da Silva et al., 2004; Landman et al., 2002). It has been shown that MDR strains are transferred from regions with high antimicrobial resistance rates to regions with lower rates, for example from Spain to Norway (Onarheim et al., 2000).
Our results revealed a very high rate of resistance to Cefepime (100%).
To interpret such a high rate of resistance, we should consider the mechanisms of drug resistance of Acinetobacter baumannii to Cephalosporins. Although it has long been clear that the TEM-1 beta-lactamase occurs in Acinetobacter baumannii, extended-spectrum beta-lactamases (ESBLs) have recently also been noted in this species (Vila et al., 1993). Strains of Acinetobacter baumannii that carry PER-1 (an ESBL) show high resistance to extended-spectrum Cephalosporins and Penicillins; fortunately, however, this does not contribute to the resistance of Acinetobacter baumannii to Carbapenems (Perez et al., 2007). All samples were resistant to Amikacin, which was confirmed by the E-test method. Acinetobacter baumannii resistance to Aminoglycosides is mainly attributed to efflux (ABC) pumps and aminoglycoside-modifying enzymes. These enzymes include aminoglycoside phosphotransferases, aminoglycoside acetyltransferases, and aminoglycoside nucleotidyltransferases (Nemec et al., 2004; Vila et al., 1997).
The current study also showed a high rate of resistance to Ciprofloxacin: only 2 samples (4%) were sensitive to it. In the E-test, one of these samples was identified as sensitive (50%) and the other had intermediate sensitivity (50%) to Ciprofloxacin. Resistance of Acinetobacter baumannii to quinolones is often due to changes in the structure of DNA gyrase, secondary to mutations in the quinolone resistance-determining regions of the gyrA and parC genes (Seward & Towner, 1998; Vila et al., 1997). These changes reduce the affinity of quinolones for the DNA-enzyme complex. Another mechanism of resistance to quinolones is created by efflux systems, which decrease the accumulation of the drug in the intracellular space (Heinemann et al., 2000).
A few studies have reported resistance of Acinetobacter baumannii to Colistin, which is a strong warning (Gales et al., 2001; Urban et al., 2001). Urban et al. (2001) found a case of Acinetobacter baumannii resistance to Polymyxin B. The mechanism of resistance to Colistin is probably related to changes in the Acinetobacter baumannii lipopolysaccharide (e.g., acidification, acylation, or the presence of intermediary antigens affecting binding of the antibiotic to the cell membrane) (Peterson et al., 1987). Although in our study the highest sensitivity of Acinetobacter baumannii was to Colistin, 8 of the 50 samples studied by disc diffusion (16%) were resistant to Colistin; however, by E-test the resistance was confirmed in only half of them (4 isolates, 8%). Resistance rates of Acinetobacter to Carbapenems were as follows: 96% of the samples were resistant to Meropenem and 76% to Imipenem; also, 9 isolates (18%) had intermediate sensitivity to Imipenem. Acinetobacter baumannii resistance to Carbapenems is significantly related to beta-lactamases of groups B and D. Group B contains metallo-beta-lactamases, which hydrolyze Carbapenems and other beta-lactam antibiotics except Aztreonam (Walsh, 2005). In group D beta-lactamases, on the other hand, resistance is secondary to the existence of OXA enzymes, which is very alarming due to their deactivation of Carbapenems.
The first OXA carbapenemase described in Acinetobacter baumannii was OXA-23, which was isolated from a clinical sample in Scotland even before carbapenems had entered widespread clinical use (Marqué et al., 2005).
In the study of Juyal et al. (2013), conducted in a tertiary-level hospital in India, the sensitivity rate of Acinetobacter was 73.61% to Amikacin, 68.06% to Imipenem, 36.11% to Ticarcillin/Clavulanic acid, 29.17% to Piperacillin/Tazobactam, 48.61% to Gentamicin, 31.94% to Cefoperazone/Sulbactam, 26.39% to Cefepime, 16.67% to Piperacillin, 18.06% to Cefoperazone and Ceftazidime, 12.5% to Ciprofloxacin, 8.33% to Aztreonam, and 23.61% to Cotrimoxazole. The sensitivity rates in that study are generally higher than those seen in the current study, especially for Amikacin and Imipenem; antibiotic consumption policies and the time of the study certainly play a major role in these differences. In another study (Shakibaie et al., 2011), performed in ICUs in Iran, resistance to Imipenem was 73.3% and resistance to Cefepime was 93.3%, similar to our findings, while the resistance rates to Ciprofloxacin and Amikacin were 66% and 53.3%, respectively, much lower than the rates observed in our study.
In a systematic review, the antibiotic resistance rate of Acinetobacter baumannii in samples from Iranian patients was evaluated among 3,409 samples collected from 2001 to 2013 (Moradi et al., 2015). The study showed a significant increase in the resistance rate against Imipenem and Meropenem, while resistance to lipopeptides and Aminoglycosides did not change considerably during these years. Resistance to Carbapenems at the beginning of the period (2001) was low (51.1% for Imipenem and 64.3% for Meropenem), but by its end (2013) it had reached 76.5% for Imipenem and 81.5% for Meropenem, resistance rates similar to our results. That study also showed that the prevalence of MDR Acinetobacter baumannii significantly increased during this period. In that review, resistance to Colistin ranged from 1.3% to 19% (the latter figure being similar to our findings).
A comparison of that review with the present study is shown in Table 3.
Nowadays, the effective antibiotics for the treatment of Acinetobacter baumannii infections include Aminoglycosides, Fluoroquinolones, and Carbapenems (Falagas et al., 2005); however, because of the high rate of Acinetobacter baumannii resistance to these antibiotics, they cannot be used as empirical treatment. For example, Carbapenems, which are commonly prescribed for patients with life-threatening Acinetobacter baumannii infections, have the highest rate of resistance compared to other antibiotics. These facts could explain the therapeutic failures observed in life-threatening infections with Carbapenem-resistant strains. Recently, Colistin and Tigecycline have emerged as alternative treatment choices for MDR Acinetobacter infections. However, resistance to these antimicrobial agents has also been reported as a result of their increased usage (Capon et al., 2008).
Although resistance to Colistin is still low, our study showed increasing resistance to this group, which could become problematic in the future. The emergence of resistance has been observed rapidly after its widespread use (Li et al., 2006; Gounden et al., 2009; Park et al., 2009).
Thus, many recent studies have investigated combination therapy with two or more agents for MDR Acinetobacter infections. In particular, combination regimens of Tigecycline or Colistin with other antibiotics have frequently been reported. Park and coworkers found a high rate of in vitro synergy and bactericidal activity, and a lack of antagonism, for the combination of Colistin and Doripenem (Park et al., 2016). Similarly, a recent study revealed superior activity for the combination of Tigecycline with cefoperazone-sulbactam against MDR Acinetobacter (Liu et al., 2014).
Conclusion
Our results clearly showed a very high rate of resistance to most of the antibiotics usually prescribed for the aforementioned infections. There were resistance rates of more than 96% against the antibiotics frequently used in ICU wards (Cefepime, Amikacin, Ciprofloxacin, Imipenem, and Meropenem). These results indicate that drug resistance, especially against Carbapenems, is increasing fast. Monotherapy should be avoided, even with Colistin or Tigecycline, because of the rapid emergence of resistance. Recently, combination regimens of Colistin or Tigecycline with other antibiotics such as cefoperazone-sulbactam or Doripenem have shown promise in the treatment of severe infections, especially those caused by MDR Acinetobacter.
We also hope these results could change the attitude of physicians toward using antibiotics in ICUs and encourage them to follow antibiotic stewardship as the only effective strategy to somewhat control antibiotic resistance in healthcare settings.
Table 1.
Results of the antibiogram of the cultures using disc diffusion (for Acinetobacter)
Table 2.
Results of the antibiogram of the cultures using E-test (for Acinetobacter)
Table 3.
Comparison of average resistance rates (%) between different studies in Iran and the current research.
"Medicine",
"Biology"
] |
The Pygmy Dipole Resonance – experimental studies of its structure and new developments
Several experimental studies using electromagnetic probes have found an enhancement of electric dipole strength between about 5 and 10 MeV. This phenomenon is usually denoted as Pygmy Dipole Resonance (PDR). The detailed structure of these excitations is still under debate. This manuscript will concentrate on the results of complementary experiments using hadronic probes to populate the PDR. These studies allow a first insight into the origin of the PDR. Finally, the manuscript will shortly discuss plans for future experiments.
Introduction
In a macroscopic picture, electric dipole (E1) moments of atomic nuclei are always connected to a breaking of the symmetry between protons and neutrons. The most prominent example is the Giant Dipole Resonance (GDR), which is visualized in a macroscopic picture as an out-of-phase oscillation of the proton and neutron fluids, leading to an electric dipole excitation at energies of about E_x = 31 A^(-1/3) + 21 A^(-1/6) MeV [1,2]. The GDR exhausts about 100% of the Energy Weighted Sum Rule (EWSR) for E1 transitions, which is approximately given by the Thomas-Reiche-Kuhn estimate ∫ σ(E) dE ≈ 60 NZ/A MeV mb. Already in 1971 it was proposed by Mohan, Danos, and Biedenharn that in a three-fluid hydrodynamical model a second collective E1 mode is generated by the oscillation of excess neutrons against an isospin-saturated proton-neutron core [3]. This mode would be located at lower energies and would carry less E1 strength compared to the GDR. First evidence for the existence of such a Pygmy Dipole Resonance (PDR) came from the observation of an enhancement of γ-ray emission after neutron capture [4]. The resulting schematic distribution of E1 strength is depicted in Figure 1, which already shows the difficulty of clearly distinguishing the PDR and GDR due to fragmentation and fine structure. It is currently under debate whether precise knowledge of the properties of the PDR, or of the complete distribution of E1 strength, has implications for the symmetry parameter in the equation of state [5-9]. The advent of bremsstrahlung photon sources with endpoint energies up to the particle thresholds from cw electron accelerators and the improvement of γ detector setups in the last two decades have allowed high-precision studies of the photoresponse of stable atomic nuclei in Nuclear Resonance Fluorescence (NRF) (γ,γ') experiments [10,11]. In these experiments, mainly dipole transitions are induced from the ground state of the nucleus and the subsequent γ decay is observed with high-resolution HPGe detectors. The final result is the detailed dipole strength distribution up to the particle threshold. Quasi-monoenergetic γ-ray beams from Laser Compton backscattering facilities can yield additional observables such as parities and branching ratios of the excited states (see the contribution of Deniz Savran to these proceedings). More recently, the exchange of virtual photons between stable targets and a proton beam at energies of about 300 MeV has been used to study the dipole response below and above the particle threshold in a single experiment. This was achieved by high-resolution spectroscopy of the scattered protons using the Grand Raiden spectrometer at RCNP Osaka [12].
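As a quick numerical aid (not part of the original manuscript), the empirical GDR centroid formula above and the Thomas-Reiche-Kuhn (TRK) estimate of the E1 EWSR can be evaluated for a given nucleus; the snippet below applies both to 208Pb.

```python
# Back-of-envelope evaluation of the two estimates quoted above.

def gdr_centroid_mev(a):
    """E_x = 31 A^(-1/3) + 21 A^(-1/6) MeV (empirical GDR systematics)."""
    return 31.0 * a ** (-1 / 3) + 21.0 * a ** (-1 / 6)

def trk_ewsr_mev_mb(z, n):
    """TRK sum rule: integral of sigma(E) dE ~ 60 NZ/A MeV mb."""
    a = z + n
    return 60.0 * n * z / a

print(round(gdr_centroid_mev(208), 1))  # -> ~13.9 MeV for 208Pb
print(round(trk_ewsr_mev_mb(82, 126)))  # -> ~2980 MeV mb for 208Pb
```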
In very neutron-rich, unstable isotopes, one expects an enhancement of PDR-like excitations due to the larger neutron skin. A pioneering study has been performed at a setup at GSI Darmstadt. Radioactive 130,132Sn nuclei were selected by the Fragment Separator (FRS) and exchanged virtual photons with a high-Z target. A magnet (ALADIN) behind the target chamber selected the ion of interest. Neutrons stemming from the (γ,xn) reaction were analyzed by the Large Area Neutron Detector (LAND) in coincidence with γ rays detected by the Crystal Ball scintillators surrounding the target chamber. These experiments showed an enhancement of E1 strength in the low-energy tail of the GDR at about 10 MeV, which exhausts several percent of the EWSR [13]. In a different approach, the radioactive neutron-rich nucleus 68Ni has been studied at the FRS by detecting the γ decays with the HECTOR and RISING arrays [14]. Again, an enhancement of the E1 strength at energies of about 11 MeV is observed.
In summary, the studies using electromagnetic interaction observe an enhancement of E1 excitations at energies well below the GDR or on top of the tail of the GDR [15]. In stable nuclei, this strength exhausts about 1% of the isovector EWSR and is found below the neutron separation energy. In the few cases where radioactive nuclei have been investigated, the strength is found at higher energies and exhausts up to 5% of the isovector EWSR. Whether both observations belong to the same excitation mode, i.e., the PDR, is unclear. Obviously, more systematic studies on stable and radioactive species are mandatory to get a better understanding.
Hadronic probes
One main problem in the interpretation of the data is how to distinguish between excitations belonging to the GDR (having wave functions representing the typical out-of-phase oscillation of protons and neutrons) and excitations belonging to a different type of excitation, i.e., the PDR. One way to answer this question is to use hadronic probes as an alternative excitation mechanism for the dipole modes. Table 1 compares the two experimental approaches. One big advantage of electromagnetic probes is the selective excitation of the lowest-spin modes and, if one can use HPGe detectors for the γ-decay channel, the excellent energy resolution. These two advantages are lost in typical hadron scattering experiments. However, in 1992, Poelhekken, Harakeh et al. complemented the QMG/2 particle spectrometer at KVI Groningen with an array of NaI detectors to detect the γ rays from the decay in coincidence with the scattered α particles [16]. This coincidence requirement restores the selectivity to low-spin excitations, which decay to the ground state via E1, M1, or E2 transitions. However, due to the limited energy resolution of the NaI detectors, this setup allowed the study of only a few selected nuclei with rather low level density. This limitation was overcome a decade later when we replaced the NaI detectors by an array of HPGe detectors, which provided excellent energy resolution [17]. In these experiments, the scattered α particles were analyzed by the new Big Bite Spectrometer. The coincidence data can be sorted into a matrix where one axis represents the energy loss of the α particle during the scattering process (which yields the excitation energy) and the other axis represents the energy of the γ ray measured in coincidence, see Fig. 2.

Figure 2: α-γ coincidence matrix of the (α,α'γ) experiment on 124Sn (adapted from [18]).

Such experiments allow a detailed state-by-state analysis of the α scattering cross section even in nuclei where the level density is rather high. The result was quite surprising: whereas below about 7 MeV all E1 excitations seen in (γ,γ') are also observed in (α,α'), this is not the case for the E1 excitations at slightly higher energies [19-22], see Fig. 3. Referring to Table 1, this points to a different underlying nuclear structure: the E1 strength splits into a surface mode with a strong isoscalar component at lower energies and a more isovector mode at higher energies.
Very recently, a group around Angela Bracco in Milano performed 17O scattering experiments at energies of about 20 MeV/A on 208Pb and 124Sn [23,24] at the Tandem-ALPI accelerator complex at INFN Legnaro. Again, the dominantly isoscalar hadronic interaction can populate the states of interest. These studies reproduce the observations we made earlier in the (α,α'γ) experiments, i.e., a splitting of the E1 strength into a lower- and a higher-lying part. These experimental observations are supported by various theoretical calculations, which predict distinctly different transition charge densities for typical 1- states at lower and higher energies: whereas the former resemble the macroscopic picture of surface neutrons oscillating against an isoscalar core, the latter have charge densities more typical for states belonging to the GDR, see, e.g., Refs. [18,25,26].
Future experiments
Various theoretical models (including the simplified geometrical picture of a neutron skin oscillating against an isospin-saturated core) predict a strong enhancement of the PDR with neutron excess. Therefore, it is interesting to study the isospin character of E1 strength in exotic neutron-rich nuclei as well. One way to do this is to perform α scattering experiments in inverse kinematics using a radioactive ion beam. A European-Japanese collaboration is currently investigating the Sn isotopic chain from the stable, proton-magic 124Sn to the doubly-magic radioactive nucleus 132Sn in experiments at RIKEN. The large-acceptance radioactive ion separator BigRIPS is used to select the isotopes of interest. The beam interacts with a liquid He target, and the scattered heavy ions are analyzed by the ZeroDegree spectrometer, which is set to collect ions without charge and mass change (i.e., elastic α scattering). The trigger is determined by the detection of coincident γ rays with NaI detectors (DALI2) and additional LaBr detectors in the forward direction (see Fig. 4).

Figure 4: Setup at RIKEN to study α scattering on neutron-rich nuclei. The DALI2 array plus additional LaBr detectors measure the γ decays emitted after the beam hits the liquid He target. The scattered beam particles are measured in the ZeroDegree particle spectrometer.
Further experiments using α or proton scattering as an experimental tool to study the PDR in stable nuclei are planned at iThemba LABS Cape Town and at RCNP Osaka; studies on unstable nuclei will be continued or are planned at GSI/FAIR Darmstadt and at RIKEN.
"Physics"
] |
Self‐Reinforced Bimetallic Mito‐Jammer for Ca2+ Overload‐Mediated Cascade Mitochondrial Damage for Cancer Cuproptosis Sensitization
Abstract Overproduction of reactive oxygen species (ROS), metal ion accumulation, and tricarboxylic acid cycle collapse are crucial factors in mitochondria-mediated cell death. However, the highly adaptive nature and damage-repair capabilities of malignant tumors strongly limit the efficacy of treatments based on a single treatment mode. To address this challenge, a self-reinforced bimetallic Mito-Jammer is developed by incorporating doxorubicin (DOX) and calcium peroxide (CaO2) into hyaluronic acid (HA)-modified metal-organic frameworks (MOF). After cellular uptake, the Mito-Jammer dissociates into CaO2 and Cu2+ in the tumor microenvironment. The exposed CaO2 further yields hydrogen peroxide (H2O2) and Ca2+ in a weakly acidic environment to strengthen the Cu2+-based Fenton-like reaction. Furthermore, the combination of chemodynamic therapy and Ca2+ overload exacerbates ROS storms and mitochondrial damage, resulting in the downregulation of intracellular adenosine triphosphate (ATP) levels and blocking of Cu-ATPase to sensitize cuproptosis. This multilevel interaction strategy also activates robust immunogenic cell death and suppresses tumor metastasis simultaneously. This study presents a multivariate model for revolutionizing mitochondrial damage, relying on the continuous retention of bimetallic ions to boost cuproptosis/immunotherapy in cancer.
Introduction
The survival and proliferation of cancer cells rely heavily on the homeostasis of subcellular organelles [1,2]. These organelles participate in signaling pathways that drive malignant behavior, including heightened metabolism [3,4] and anti-oxidative stress responses [5,6]. Current approaches, such as chemotherapy [7-9], gene therapy [10,11], and reactive oxygen species (ROS)-based therapies [12-15], are commonly used to regulate cellular organelles because their sites of action lie within the organelles. However, the nonspecific diffusion of drugs can disrupt normal cellular ecology, necessitating the development of personalized strategies for the precise intracellular regulation of subcellular organelles [16,17]. In recent years, numerous nanomedicines have been designed to target specific subcellular organelles such as mitochondria, lysosomes, and the endoplasmic reticulum [18,19]. However, these nanotechnologies, which predominantly rely on a single treatment mode, such as photothermal therapy (PTT) or sonodynamic therapy (SDT), often fail to induce sustained organelle injury [20]. This limited efficacy can be attributed to the remarkable adaptive properties and damage-repair capabilities of malignant tumors [21].

Scheme 1. A) Synthesis of the HA-CD@MOF NPs. B) Schematic illustration of a self-reinforced bimetallic Mito-Jammer for precise antitumor therapy based on ROS-enhanced, Ca2+ overload-promoted cascade mitochondrial dysfunction and cuproptosis.

Mitochondria, which serve as the primary intracellular energy source, play critical roles in regulating redox balance, the cellular tricarboxylic acid (TCA) cycle, and apoptosis in eukaryotic cells [22,23]. Furthermore, mitochondria act as the central hub for metal ion metabolism, and their destruction exacerbates ion oxidative stress, leading to cell death [24,25]. Consequently, a multidimensional approach that targets mitochondria to amplify interference with ion regulation represents an ideal strategy for antitumor interventions [26]. Chemodynamic therapy (CDT) is an effective antitumor strategy in which Fenton/Fenton-like ions, such as Cu2+, Fe3+, and Mn2+, can significantly enhance ROS accumulation in tumor cells, thereby interfering with mitochondrial physiological activity [27,28]. In addition, a promising copper-dependent death pathway, termed "cuproptosis," has recently been identified [29]. In contrast to other known forms of cell death, cuproptosis involves an abnormal TCA cycle, which ultimately leads to proteotoxic stress [30,31]. Thus, the exogenous introduction of copper is a good candidate for exerting CDT and cuproptosis simultaneously, and such cuproptosis-based synergistic cancer therapy has become a main focus of current nanomedicine research. Small-molecule drugs, ultrasound, PTT, and other technologies have been used to enhance cuproptosis [32,33]. However, the high expression of the copper transporter family in tumor cells, including copper transporters and the copper-transporting phosphorylating ATPase (Cu-ATPase), may prevent cuproptosis, as these transporters move copper into the extracellular space through ATP hydrolysis. Therefore, reducing copper efflux is a formidable challenge in sensitizing tumors to cuproptosis [34]. Calcium, another essential trace element, plays an irreplaceable role in maintaining physiological stability, contributing to electrical signal transmission and nerve conduction [35]. Nonetheless, excessive Ca2+ accumulation can lead to mitochondrial dysfunction, such as attenuated oxidative phosphorylation and adenosine triphosphate (ATP) production [36,37], which cuts off the energy supply of the efflux channels, thereby increasing intracellular copper ion levels.
Herein, we propose a Mito-Jammer that induces a cascade of mitochondrial damage by bimetallic ions to simultaneously boost cuproptosis and immunogenic cell death (Scheme 1B). To construct the Mito-Jammer (also termed HA-CD@MOF), a one-pot method was employed to integrate the metal-organic framework (MOF-199), doxorubicin (DOX), and calcium peroxide (CaO 2 ). With hyaluronic acid (HA) modified on its surface, the Mito-Jammer can precisely target tumor cells, whose persistence depends on mitochondrial aerobic respiration, through specific recognition of the CD44 protein. [38,39] Subsequently, the collapse of the Mito-Jammer in the tumor microenvironment (TME), where glutathione (GSH) and hyaluronidase (HAD) are overexpressed, releases CaO 2 , DOX, and Cu 2+ , [40,41] and the exposed CaO 2 and DOX further yield hydrogen peroxide (H 2 O 2 ) and Ca 2+ in a weakly acidic environment, strengthening the Cu 2+ -based Fenton-like reaction. [42] Moreover, the Ca 2+ overload-exacerbated ROS storm and mitochondrial damage result in the downregulation of intracellular ATP levels and the blocking of Cu-ATPase. Combined bimetallic ion therapy significantly enhanced protein lipoylation and copper content to facilitate the cuproptosis of tumor cells. [45] Finally, Cu 2+ can serve as the contrast species for both photoacoustic imaging (PAI) and T1-weighted magnetic resonance imaging (MRI), which meets the needs of an "all in one" strategy for tumor theranostics. [46,47] Our work presents a model for ion-triggered cell death through a cascade of mitochondrial damage based on Ca 2+ overload-sensitized cuproptosis, holding significant potential for advancing the field of targeted cancer therapies.
Preparation and Characterization of Mito-Jammer
To prepare the bimetallic Mito-Jammer, HA-CD@MOF nanoparticles (NPs) were synthesized using a simple one-pot method, as illustrated in Scheme 1A. Transmission electron microscopy (TEM) images revealed that HA-CD@MOF exhibited a regular morphology with uniform size (Figure 1A). As evidenced by the dynamic light scattering examination, the average particle size of HA-CD@MOF was 263 ± 2.138 nm (PDI: 0.097), and the zeta potential results demonstrated that the surface charge of the NPs varied according to the successful loading of DOX, CaO 2 , and HA (Figure 1B and Figure S2, Supporting Information). Modification of HA onto the surface of the Mito-Jammer was confirmed, and the relative amount of HA was calculated as 5.39% after thermogravimetric analysis, as shown in Figure S3, Supporting Information. Additionally, the color changes of the different NP aqueous solutions confirmed the encapsulation of DOX and CaO 2 (Figure S4, Supporting Information). The stability of HA-CD@MOF was investigated in various media including water, phosphate-buffered saline, fetal bovine serum, and RPMI 1640. The size distributions in all of the above media were well maintained within 13 days of incubation, indicating the exceptional stability of HA-CD@MOF (Figure S5, Supporting Information). Furthermore, the corresponding elements in HA-CD@MOF, such as nitrogen, oxygen, calcium, and copper, were all present in the elemental mapping images (Figure 1C). Similarly, the results of X-ray photoelectron spectroscopy also demonstrated the existence of Ca 2+ and Cu 2+ in HA-CD@MOF (Figure 1D and Figure S6, Supporting Information). Finally, the entrapment efficiencies of DOX and CaO 2 within HA-CD@MOF were determined via UV-vis and inductively coupled plasma-mass spectrometry (ICP-MS) to be ≈88.18% and 86.38%, respectively. Altogether, by showing the efficient loading of DOX and CaO 2 , as well as the successful HA modification, the aforementioned results not only demonstrated the successful assembly of the bimetallic Mito-Jammer but also laid a foundation for its subsequent TME response.
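The entrapment-efficiency calculation itself is not shown in the text; a minimal sketch of the usual definition, assuming the encapsulated amount is compared with the total amount fed during synthesis (the numbers below are illustrative only, not the study's data):

def entrapment_efficiency(amount_loaded_mg, amount_fed_mg):
    """Entrapment efficiency (%) = encapsulated amount / total amount fed x 100."""
    return 100.0 * amount_loaded_mg / amount_fed_mg

# Illustrative amounts that would reproduce the reported ~88.18% for DOX.
print(entrapment_efficiency(8.818, 10.0))  # -> 88.18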
Research has shown that HAD overexpressed in tumor cells can hydrolyze HA, and that high GSH levels spontaneously interact with Cu-MOF. [48,49] Therefore, we speculated that the Mito-Jammer could undergo autogenic degradation in the TME. Consequently, we first investigated the morphological changes in HA-CD@MOF dispersed in simulated TME solutions at different time points. TEM images showed that the basic morphology of HA-CD@MOF disappeared after immersion in the GSH/HAD (10 mM, 100 U mL −1 ) solution for 24 h, indicating that the simulated TME triggered the decomposition of the Mito-Jammer structure (Figure 1E). Subsequently, the tumor-specific release profiles of the chemotherapy drug and bimetallic ions were examined in vitro. Within 48 h, the cumulative release rates of DOX, Ca 2+ , and Cu 2+ reached 83.66%, 82.78%, and 26.68%, respectively, under GSH/HAD stimulation, which were significantly higher than those in the other groups (Figure 1F,G and Figure S7, Supporting Information). Similarly, the gradual orange color change of the solution also confirmed that the dual responses accelerated drug release, consistent with the above results (Figure S8, Supporting Information). The drug release results also revealed that HAD has a limited effect on the decomposition of MOF-199. Only in the TME, where GSH and HAD are both overexpressed, can HA-CD@MOF be degraded, thus avoiding premature drug release. Therefore, we hypothesized that TME-triggered Mito-Jammer decomposition exacerbates the in situ aggregation of bimetallic ions for cascade mitochondrial damage, sensitizing tumor cells to cuproptosis.
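For context, cumulative release percentages such as those above are typically computed from timed aliquots of the release medium; a minimal sketch, assuming withdrawn aliquots are replaced with fresh medium (all values below are placeholders, not the study's data):

def cumulative_release(concentrations, v_total_mL, v_sample_mL, loaded_mg):
    """Cumulative release (%) at each time point, correcting for the drug
    removed in previously withdrawn aliquots."""
    released, removed = [], 0.0
    for c in concentrations:               # mg/mL measured in the medium
        amount = c * v_total_mL + removed
        released.append(100.0 * amount / loaded_mg)
        removed += c * v_sample_mL
    return released

# Placeholder series: five time points, 10 mL medium, 1 mL aliquots, 1 mg loaded.
print(cumulative_release([0.01, 0.03, 0.05, 0.07, 0.08], 10.0, 1.0, 1.0))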
Simulated TME-Enhanced CDT and Targeting Abilities of the Mito-Jammer
Recent studies revealed that the Cu 2+ -based Fenton-like reaction initiates mitochondrial Ca 2+ buffer dysfunction, indicating that establishing a robust CDT system for the Mito-Jammer is crucial for implementing bimetallic ion interference in tumors. However, the limited levels of H 2 O 2 within tumor cells restrict Fenton-like reactions. CaO 2 , which is known for its excellent H 2 O 2 -supplying ability, can react with H 2 O to produce H 2 O 2 , particularly under acidic conditions. Therefore, we evaluated the self-supplied H 2 O 2 performance of the Mito-Jammer in a slightly acidic environment. The H 2 O 2 concentration in HA-CD@MOF + GSH solutions was 5.95 and 11.45 mM within 1 and 6 h, respectively. In contrast, the H 2 O 2 production rate increased by 4.01 and 2.13 times after 1 and 6 h of additional HAD treatment, respectively (Figure 2A). This phenomenon implies that the Mito-Jammer achieves a specific self-supply of H 2 O 2 in tumor cells to promote Cu 2+ -based Fenton-like reactions. Encouraged by this H 2 O 2 production ability, we next assessed the •OH-generating activity of the Mito-Jammer via methylene blue (MB). As shown in Figure 2B,C, GSH, H 2 O 2 , or Cu-MOF alone exhibited minimal effects on MB degradation, whereas CD-MOF led to a remarkable •OH-generating effect, which was attributed to the self-supplied H 2 O 2 capacity of CaO 2 . However, because of the presence of the HA shell, the degree of MB degradation was suppressed in HA-CD@MOF without HAD, whereas the HA-CD@MOF + HAD group presented a downward trend similar to that of the CD-MOF group, indicating that HA modification can alleviate premature drug leakage from HA-CD@MOF. In addition, the coexistence of H 2 O 2 and CaO 2 led to the most significant MB degradation via a Fenton-like reaction mediated by Cu 2+ (Figure S9, Supporting Information). Apart from the MB experiment, electron spin resonance spectroscopy was also employed to ascertain the •OH generation of the Mito-Jammer. The activated Mito-Jammer (GSH + HAD) showed more intense •OH signals than the untreated group (Figure 2D). Altogether, it can be concluded that the Mito-Jammer specifically exerts its CDT function under GSH and HAD activation. Since GSH is involved in Fenton-like reactions and serves as the primary intracellular reductive species, its depletion might reinforce CDT and boost therapeutic efficacy. Accordingly, the GSH-depletion ability of HA-CD@MOF was determined via a 5,5′-dithiobis-(2-nitrobenzoic acid) (DTNB) indicator. The GSH content decreased as the Mito-Jammer concentration increased (Figure 2E), indicating that GSH can be depleted. These data corroborate that the Mito-Jammer can activate CDT in tumor cells in situ in a self-reinforced manner.
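The MB degradation rates in Figure 2B,C are conventionally obtained from the absorbance of methylene blue near 664 nm; a minimal sketch, assuming a simple (A0 − At)/A0 normalization (the absorbance values are illustrative, not measured data):

def mb_degradation_percent(a0, at):
    """Methylene blue degradation (%) from initial and final absorbance (~664 nm)."""
    return 100.0 * (a0 - at) / a0

print(mb_degradation_percent(1.00, 0.35))  # -> 65.0 (illustrative)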
Highly enriched NPs within the tumor cells are a prerequisite for CDT. HA can bind specifically to CD44 receptors overexpressed on the surface of tumors, endowing the Mito-Jammer with effective tumor endocytosis. Thus, the cellular uptake of the Mito-Jammer was assessed using confocal laser scanning microscopy (CLSM) and flow cytometry (FCM). Briefly, the red fluorescence from DOX in HA-CD@MOF was amplified with the extension of culture time in 4T1 cells, while the fluorescence intensity of CD@MOF was significantly weaker. Moreover, faint red fluorescence was observed in human umbilical vein endothelial cells (HUVECs) treated with HA-CD@MOF for 4 h compared to 4T1 cells (Figure 2F), indicating that HA promoted the endocytosis of the Mito-Jammer in tumor cells. In addition, similar cellular uptake performance was verified via FCM (Figure 2G). Furthermore, the lysosomal compartments were stained with a LysoTracker probe to show the intracellular distribution of free DOX and HA-CD@MOF in 4T1 cells. The red fluorescence emitted from free DOX outlined the shape of the nuclei but barely overlapped with the lysosomes (green fluorescence), whereas a large portion of the red fluorescence was co-localized with the lysosomes in the HA-CD@MOF group, signifying that the Mito-Jammer was endocytosed and distributed in the lysosomes (Figure S10, Supporting Information). In addition, biological transmission electron microscopy (bio-TEM) was used to observe the uptake and degradation of HA-CD@MOF by 4T1 cells. As shown in Figure S11, Supporting Information, HA-CD@MOF with a regular morphology was observed in the 4T1 cells, indicating that HA-CD@MOF was endocytosed by the 4T1 cells after 1 h of incubation. In contrast, at 2 h, HA-CD@MOF was observed in the cytoplasm or in incomplete vesicles with ambiguous boundaries, and the morphology of the NPs became less compact. In addition, HA-CD@MOF was fully degraded by the 4T1 cells at 12 h. Thus, we hypothesized that the presence of HAD and GSH within the tumor cells triggered the decomposition of the Mito-Jammer. Collectively, these findings certify that the Mito-Jammer can efficiently accumulate within tumor cells to exert TME-enhanced CDT, laying a foundation for the subsequent cascade of mitochondrial damage to sensitize cuproptosis.
Self-Reinforced CDT Using Mito-Jammer in the TME
Encouraged by the simulated TME-enhanced •OH generation efficiency of the Mito-Jammer, we next explored its antitumor effects at the cellular level. First, the biocompatibility and cytotoxicity of the Mito-Jammer were measured using the CCK-8 assay. HUVECs were incubated with HA-CD@MOF at various concentrations for 12 h; as a result, even when the concentration of the Mito-Jammer increased to 100 μg mL −1 , it still presented negligible toxicity toward HUVECs (Figure 3A), showing good biosafety.
The viability of 4T1 cells sharply decreased (from 71.36% to 16.44%) as the concentration of NPs increased, which was ascribed to the high tumor affinity combined with the TME-enhanced toxicity. Furthermore, to objectively analyze the roles of the different components in the Mito-Jammer, we synthesized Cu-MOF, DOX@MOF, and CD@MOF to investigate the biological functions of DOX, CaO 2 , and HA. The absence of any constituent resulted in an attenuated antitumor effect (Figure 3B), validating the rational assembly of the Mito-Jammer to collaboratively inhibit tumor growth. Apoptosis was further detected via FCM. At the same concentration, the highest apoptotic rate of 4T1 cells was observed in the HA-CD@MOF group (up to 91.71%), which was 10.76-fold, 1.75-fold, and 1.45-fold higher than that in the Cu-MOF, DOX@MOF, and CD@MOF groups, respectively (Figure 3C). To visually estimate the therapeutic efficacy of the NPs, we performed calcein-AM and propidium iodide staining of 4T1 cells to differentiate between live and dead cells. The staining images showed that the Mito-Jammer increased the number of red spots (dead cells), whereas the other groups mainly consisted of surviving green cells (Figure 3D). Overall, these findings illustrate that the Mito-Jammer fully exploited the inherent characteristics of the TME to amplify the superiority of bimetallic ions for tumor elimination.
Our in vitro experiments revealed that CaO 2 is an effective agent for enhancing the Fenton-like reaction. DOX can also activate nicotinamide adenine dinucleotide phosphate oxidases (NOXs) to convert oxygen (O 2 ) into •O 2 − , which is further converted into endogenous H 2 O 2 via superoxide dismutase-catalyzed disproportionation, thus strengthening the CDT capacity of the Mito-Jammer. [50,51] To determine the detailed mechanism of the TME-oriented self-reinforced bimetallic antitumor effect, we investigated intracellular ROS levels using the green fluorescence of the 2,7-dichlorodihydrofluorescein diacetate (DCFH-DA) probe. As shown in Figure 3E, there was no visible green fluorescence in the control group treated with 1640 culture medium or in the Cu-MOF group, while enhanced green fluorescence was observed in the DOX@MOF and CD@MOF groups, suggesting that the H 2 O 2 provided by DOX and CaO 2 markedly promoted •OH generation. As expected, the 4T1 cells treated with HA-CD@MOF exhibited brilliant green fluorescence. Because Ca 2+ overload-mediated mitochondrial damage can upregulate intracellular ROS levels, 4T1 cells were treated with BAPTA-AM, a Ca 2+ chelator, together with the Mito-Jammer to assess ROS production. Compared to the Mito-Jammer group, the introduction of BAPTA-AM downregulated green fluorescence expression, suggesting that CDT and Ca 2+ overload triggered by bimetallic ions jointly led to the ROS explosion. Furthermore, the FCM results for DCFH-DA matched well with the CLSM images (Figure 3F and Figure S12, Supporting Information). Next, to show that intracellular H 2 O 2 levels are closely related to ROS synthesis, the concentration of H 2 O 2 in 4T1 cells treated with the various formulations was determined. As shown in Figure 3G, the Cu-MOF group dramatically reduced the intracellular H 2 O 2 content, because the Fenton-like reaction requires the consumption of H 2 O 2 . However, benefitting from the robust H 2 O 2 self-supplying capacity of DOX and CaO 2 , the H 2 O 2 level in 4T1 cells treated with DOX@MOF, CD@MOF, and HA-CD@MOF was elevated, exhibiting the self-reinforced CDT potency of the Mito-Jammer. Accordingly, the above data indicate that the Mito-Jammer can dynamically regulate TME components to realize self-reinforced CDT.
Mito-Jammer-Mediated Cascade Mitochondrial Damage to Sensitize Cuproptosis
Recent studies have shown that cuproptosis is a copper-dependent cell death driven by mitochondrial damage and subsequent mitochondrial protein stress. [52] However, high expression of GSH and the presence of the copper transporter family in tumor cells impede the occurrence of cuproptosis. [53,54] Considering that GSH can be converted to oxidized glutathione in a Fenton-like reaction, the GSH depletion capability of the Mito-Jammer may contribute to cuproptosis. [55] Thus, intracellular GSH levels were examined using the CheKine Reduced Glutathione Colorimetric Assay Kit. As shown in Figure 4A, all NPs exhibited concentration-dependent GSH depletion in 4T1 cells. Notably, HA-CD@MOF presented the most significant reduction in GSH levels among these groups, thus overcoming the first obstacle to cuproptosis. In addition, the Cu-ATPase transfers copper ions to the extracellular space to prevent their accumulation through ATPase hydrolysis. [56,57] Mitochondria are the main organelles of ATP production, and damage to these organelles can reduce the efflux ability of Cu-ATPase. Although TME-reinforced CDT elevated •OH generation, mitochondrial destruction was still limited. A persistent increase in intracellular Ca 2+ (identified as Ca 2+ overload) could reportedly trigger the opening of cyclophilin D-dependent mitochondrial membrane pores, [58] indicating impaired mitochondrial biological function. Hence, we estimated the intracellular Ca 2+ concentration using a Ca 2+ probe (Fluo-8 AM). The CLSM images demonstrated that the green fluorescence (representing free Ca 2+ ) within the 4T1 cells treated with CD@MOF and HA-CD@MOF was the strongest, evidently higher than that of the other groups (Figure 4B). Interestingly, this trend aligned with the DCFH-DA results, implying that Ca 2+ accumulation facilitated ROS production. Additionally, FCM analysis revealed a positive correlation between intracellular Ca 2+ levels and the incubation time of the NPs (Figure 4C and Figure S13, Supporting Information). To demonstrate the performance of the Mito-Jammer in boosting Ca 2+ overload, Alizarin Red staining was performed to observe cellular calcification. The red calcification area in 4T1 cells increased in a time-dependent manner (Figure 4D), further confirming that the mitochondrial damage mechanism depends on Ca 2+ metabolic dysfunction and CDT.
Based on the above findings, we assessed the extent of mitochondrial damage in 4T1 cells using the 5,5′,6,6′-tetrachloro-1,1′,3,3′-tetraethylbenzimidazolylcarbocyanine iodide (JC-1) probe. A transition of JC-1 fluorescence from red to green represents a decreased mitochondrial membrane potential (MMP). Apparent red fluorescence was observed in the control and Cu-MOF-treated groups. In contrast, the cells treated with HA-CD@MOF emitted bright green and faint red fluorescence (Figure 4E). Similarly, treatment with the Mito-Jammer resulted in a considerable loss of cellular MMP, as evidenced by the 61.58% negative potential in FCM (Figure S14, Supporting Information), signifying that bimetallic therapy resulted in remarkable mitochondrial damage. Cytochrome C (Cyt-C) is a protein embedded in the inner membrane of the mitochondria, and its extravasation is a hallmark of early mitochondrial damage. Thus, we used immunofluorescence staining to monitor the release of Cyt-C from the mitochondria. As shown in Figure 4F, the CD@MOF and HA-CD@MOF treatments induced strong green fluorescence (representing Cyt-C), suggesting that CD@MOF coordinated with Ca 2+ overload-initiated mitochondrial apoptosis. To further visualize the morphological changes in the mitochondria, 4T1 cells subjected to the Mito-Jammer were inspected using bio-TEM. Notably, a swollen morphology and an expanded cavity were observed in the mitochondria of 4T1 cells treated with the Mito-Jammer, clearly distinct from those observed in the control sample (Figure 4G). After verifying the mitochondrial damage, we evaluated its impact on ATP production. The intracellular ATP concentration was inhibited in the Mito-Jammer group, and the relative ATP level was reduced to 49% of that in the control group (Figure S15, Supporting Information), indicating the possibility of cutting off the energy source of Cu-ATPase to weaken copper efflux. Accordingly, the Cu-ATPase activity of the tumor cells was investigated using a Cu-ATPase assay kit. As shown in Figure 4H, the ATPase activity on the cell membrane was slightly reduced after incubation with Cu-MOF. However, the Mito-Jammer treatment caused a substantial decline in activity, to 66.95% of that of the control group. The blockade of the copper transporter channel inevitably led to an intracellular copper ion metabolic disorder, and the intracellular Cu 2+ concentration after the various treatments was therefore measured. The ICP-MS results showed that, compared with the Cu-MOF group (362.91 ng mL −1 ), 4T1 cells treated with the Mito-Jammer exhibited a much higher Cu 2+ level of 816.90 ng mL −1 , which was 170.54 times that of the control group (4.79 ng mL −1 , Figure 4I). In addition, a Cu 2+ probe was used to determine changes in intracellular copper levels (Figure 4J). As expected, the Mito-Jammer group strongly initiated Cu 2+ accumulation, consistent with the ICP-MS results, demonstrating that the Mito-Jammer could block copper efflux by cutting the energy supply of ATPase via mitochondrial dysfunction. These data reveal that the dual obstacles of GSH enrichment and copper ion homeostasis were overcome by the TME-activatable Mito-Jammer for sensitizing cuproptosis.
Dihydrolipoamide S-acetyltransferase (DLAT) and ferredoxin 1 (FDX1) are the principal cuproptosis-related genes. The former combines with copper ions to form oligomers, while the latter is an upstream regulator of protein lipoylation in the mitochondria; together, they can cause proteotoxic stress, ultimately inducing cuproptosis. First, we studied DLAT oligomerization using CLSM (Figure 4K). DLAT aggregation was already observed in cells treated with Cu-MOF, whereas cells treated with the Mito-Jammer exhibited a much higher density of DLAT foci. Furthermore, western blotting was performed to evaluate changes in FDX1. The gray value of the FDX1 protein decreased by ≈11.86% and 30.46% with the Cu-MOF and HA-CD@MOF treatments, respectively (Figure 4L,M). These results emphasize the superiority and practicality of the Mito-Jammer in enhancing cuproptosis in a Cu 2+ /Ca 2+ collaborative manner.
Targeting, Biodistribution, and Bioimaging of Mito-Jammer In Vivo
Inspired by the excellent tumor cell targeting, outstanding mitochondrial damage, and significant cuproptosis-sensitizing capabilities of the Mito-Jammer, we further tested its performance in vivo. Prior to the in vivo experiments, the biocompatibility and biosafety of HA-CD@MOF were evaluated. Healthy BALB/c mice were intravenously injected with the Mito-Jammer on days 1, 3, 7, 14, and 28, while the control group was administered saline. Blood and major organs of the mice were collected for analysis. Biochemical and routine blood test analyses indicated no evident differences between mice at the various time points (Figure 5A,B). Additionally, hematoxylin-eosin (H&E) staining of the major organs showed almost no histopathological changes (Figure S16, Supporting Information), demonstrating the high biosafety of the Mito-Jammer for subsequent in vivo treatment.
To study the biodistribution and tumor accumulation of the Mito-Jammer in live mice in real time, fluorescence imaging (FLI) analysis was conducted. Notably, HA prolonged the retention time and boosted the accumulation of HA-CD@MOF in tumor tissues, and a fluorescence peak was observed at 48 h (Figure 5C,D). In ex vivo bioimaging, HA-CD@MOF showed a ≈3.05-fold increase in the fluorescence signal within the tumor compared to that of CD@MOF, and the tumor region became the main enrichment site in the body after the liver (Figure 5E). Based on these results, we concluded that the Mito-Jammer possesses superior tumor accumulation efficiency owing to the specific binding of CD44 overexpressed on the tumor cell membrane with HA.
Considering the unique degradation of the Mito-Jammer in the GSH/HAD-overexpressing TME to release Cu 2+ , its TME-activatable imaging potential in both PAI and T 1 -weighted MRI was investigated. First, the concentration-dependent linear relationship of the Mito-Jammer with both the PAI value (R 2 = 0.9989, Figure S17, Supporting Information) and the r 1 value in vitro (R 2 = 0.9911, Figure S18, Supporting Information) was calculated. HA-CD@MOF and CD@MOF were intravenously injected into 4T1 tumor-bearing BALB/c mice to study the imaging signal discrepancy within the tumor tissue. The captured photoacoustic images and statistical analysis revealed that HA-CD@MOF presented the strongest photoacoustic signal in tumor tissue at 3 h, and HA equipped it with better tumor-targeting potency than CD@MOF at each time point (Figure 6A,C). Furthermore, the r 1 value of Cu 2+ released from HA-CD@MOF upon GSH and HAD treatment was 1.269 s −1 ·mM −1 , confirming its potential as a TME-activatable T 1 -weighted MRI agent (Figure S18, Supporting Information). Moreover, the in vivo images showed an enhanced T 1 -weighted MRI signal in the HA-CD@MOF group, which peaked at 3 h (Figure 6B,D). Based on these imaging results, the HA-modified Mito-Jammer may enable tumor monitoring of copper-based nanoagents with high sensitivity and specificity.
Figure 4. A) […], b) DOX@MOF, c) CD@MOF, d) HA-CD@MOF, n = 3. B) CLSM images of intracellular uptake of Ca 2+ in 4T1 cells after different treatments (scale bar: 50 μm). C) FCM analysis of Ca 2+ levels in 4T1 cells after incubation with HA-CD@MOF NPs at various time points. D) Identification of exocytosis products of tumor cells via Alizarin Red staining (scale bar: 500 μm). E) CLSM images of JC-1-stained 4T1 cells under different conditions (scale bars: 50 μm). F) CLSM images of Cyt-C in 4T1 cells treated with control, Cu-MOF, DOX@MOF, CD@MOF, and HA-CD@MOF NPs (scale bars: 50 μm). G) Bio-TEM of mitochondria in a) untreated 4T1 cells and b) HA-CD@MOF NP-treated 4T1 cells. H) Cu-ATPase activity after different treatments; n = 3. I) Intracellular Cu 2+ concentration after different treatments; n = 3. J) CLSM images of intracellular Cu 2+ accumulation after treatment with the control, Cu-MOF, and HA-CD@MOF NPs in 4T1 cells (scale bars: 25 μm). K) CLSM images of DLAT aggregation (red arrows) in 4T1 cells after treatment with control, Cu-MOF, and HA-CD@MOF NPs (scale bar: 50 μm). L) The corresponding quantitative analysis of FDX1 protein expression based on western blotting results, n = 3. M) Western blotting results of cuproptosis markers in 4T1 cells treated with control, Cu-MOF, and HA-CD@MOF NPs. Results are presented as means ± SD. *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001.
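The r 1 value quoted above is, by convention, the slope of the longitudinal relaxation rate 1/T1 versus contrast-agent concentration; a minimal sketch of that fit, assuming T1 has been measured at a few Cu 2+ concentrations (the numbers are illustrative, not the study's data):

import numpy as np

def relaxivity_r1(concentrations_mM, t1_values_s):
    """Fit 1/T1 = 1/T1,0 + r1 * C and return the slope r1 (s^-1 mM^-1)."""
    rates = 1.0 / np.asarray(t1_values_s)
    slope, intercept = np.polyfit(np.asarray(concentrations_mM), rates, 1)
    return slope

# Illustrative data: four Cu2+ concentrations and their measured T1 values.
print(relaxivity_r1([0.1, 0.2, 0.4, 0.8], [2.30, 2.00, 1.60, 1.15]))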
Antitumoral Effects of Mito-Jammer In Vivo
The promising in vitro therapeutic performance and biocompatibility of the Mito-Jammer encouraged us to investigate its antitumor performance in vivo in 4T1 tumor-bearing mouse models (Figure 7A), which were randomly divided into five groups (n = 5) for the various therapies: control, Cu-MOF, DOX@MOF, CD@MOF, and HA-CD@MOF. As shown in Figure 7B, only a slight discrepancy in tumor volume was observed between the control and Cu-MOF groups, indicating that introducing excessive Cu 2+ was not sufficiently effective in inhibiting tumor growth. However, benefitting from the enhanced Fenton-like reaction and bimetallic ion therapy, the DOX@MOF and CD@MOF groups showed an enhanced advantage in tumor growth inhibition (TGI) over the Cu-MOF treatment. In addition, the HA-CD@MOF group exhibited a considerable therapeutic effect in tumor suppression. After sacrifice at the end of the experiment, the excised tumors were collected for weighing and photography. The tumor weight in the control group was ≈0.72 g, which was 3.42 and 5.14 times higher than those in the CD@MOF and HA-CD@MOF groups, respectively (Figure 7D). The ex vivo tumor photographs confirmed these therapeutic effects (Figure 7F). In addition, TGI rates were calculated based on tumor volumes to quantitatively confirm the antineoplastic performance of the Mito-Jammer. The HA-CD@MOF group exhibited the highest TGI rate (72.45%) after the 14-day treatment, outperforming the CD@MOF (62.27%), DOX@MOF (49.48%), and Cu-MOF (26.05%) groups. Similarly, the TGI rates based on tumor weight showed a corresponding trend (Figure S19, Supporting Information). We also noted that all groups showed minimal variation in mouse weight during treatment (Figure 7C). Moreover, we carried out 44 days of monitoring to investigate the effect of the NPs on prolonging the survival time of the mice. The animal survival curves showed that 60% of the mice in the HA-CD@MOF group were still alive after 44 days. In contrast, all the mice in the control group had died by day 30, and only 20% of the mice in the CD@MOF group lived to 44 days (Figure 7E). Furthermore, as shown in Figure 7G, immunofluorescence and H&E staining of the tumor tissues revealed that mice treated with the Mito-Jammer showed the highest accumulation of ROS (red) and the most serious nuclear damage. Terminal deoxynucleotidyl transferase dUTP nick-end labeling and proliferating cell nuclear antigen immunofluorescence staining suggested that HA-CD@MOF induced the highest apoptotic rate (green spots) and the lowest proliferative rate. Moreover, H&E staining of the main organs indicated that no abnormalities were observed in any group (Figure S20, Supporting Information), validating that all treatment strategies were biocompatible and well tolerated. Consequently, the in vivo treatment results strongly corroborate that TME-activatable CDT and cascade mitochondrial damage-mediated cuproptosis by the Mito-Jammer inhibit tumor growth.
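The text does not spell out how the TGI rate is computed; a common volume-based definition, assuming the mean tumor volumes of treated and control groups at the end of treatment are compared, is

TGI (%) = (1 − V_treated / V_control) × 100.

Under this reading, the reported 72.45% for HA-CD@MOF would correspond to treated tumors reaching roughly 28% of the mean control volume after the 14-day treatment.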
Immunotherapy and Metastasis Suppression Performance via Mito-Jammer-Mediated Cuproptosis
Owing to the outstanding antitumor effects of the Mito-Jammer and its cuproptosis-mediated immune arousal capacity, [59,60] its immune response was further evaluated in vitro and in vivo. The translocation of calreticulin (CRT) to the cell membrane surface and the secretion of high-mobility group box protein B1 (HMGB1) out of tumor cells are hallmarks of immunogenic cell death (ICD). Immunofluorescence staining revealed that the Cu-MOF and DOX@MOF treatments did not produce visible CRT on the cell membrane, whereas apparent green fluorescence spots were captured in 4T1 cells treated with CD@MOF and HA-CD@MOF, suggesting that Ca 2+ overload-mediated cuproptosis triggered CRT translocation (Figure 8A). Similarly, a large amount of HMGB1 was released from the 4T1 cells in the HA-CD@MOF group compared with the other treatments (Figure 8B). Moreover, a transwell model was constructed to verify the performance of the Mito-Jammer in inducing dendritic cell (DC) maturation in vitro. FCM results showed enhanced DC maturation in the DOX@MOF (14.80%) and CD@MOF (19.90%) groups, which may be attributed to a cascade-amplified cuproptosis response. As HA improved NP enrichment in tumor cells, HA-CD@MOF treatment resulted in the most pronounced DC maturation (66.70%) (Figure 8C and Figure S21, Supporting Information). Subsequently, enzyme-linked immunosorbent assays (ELISAs) were conducted to assess the secretion levels of pro-inflammatory factors including IL-6, TNF-α, and IFN-γ. Remarkably, 4T1 cells co-incubated with HA-CD@MOF promoted the secretion of these cytokines to varying degrees (Figure S23, Supporting Information). These data imply that the Mito-Jammer can evoke immune efficacy by effectively inducing the release of damage-associated molecular patterns (DAMPs) from 4T1 tumor cells and facilitating DC maturation. The corresponding ICD indices were then investigated in vivo. Immunofluorescence staining of CRT in tumor slices demonstrated increased positive red fluorescence in the CD@MOF and HA-CD@MOF groups, indicating the occurrence of cuproptosis-elicited ICD in vivo (Figure 8D). Tumor FCM analysis revealed a significant increase in the proportion of DCs within the tumors of mice treated with HA-CD@MOF; the detected increase was 7.6-fold within the tumor compared to that in the control group (6.42%) (Figure 8E and Figure S22, Supporting Information). In addition, the ELISA results revealed that the IL-6, TNF-α, and IFN-γ cytokine levels in the peripheral blood were the highest after the HA-CD@MOF treatment (Figure S23, Supporting Information), suggesting that the cascade-stimulated cuproptosis upregulated the immune response in vivo.
Satisfied with the immune arousal induced by Mito-Jammer-mediated cuproptosis, [63] we further investigated the tumor metastasis inhibition performance. First, a wound-healing assay was performed to evaluate the inhibitory effect of the Mito-Jammer on cell invasion and migration. The control group exhibited the highest wound healing rate (74.28%), and the rates in the Cu-MOF (63.10%), DOX@MOF (46.48%), and CD@MOF (38.82%) groups declined owing to their enhanced ROS-generating abilities. Under the synergistic effects of the tumor-specific ROS storm and the Ca 2+ overload-sensitized cuproptosis, the HA-CD@MOF group had the lowest wound healing rate (12.36%). Second, a transwell assay was conducted to further observe the migration inhibitory effect of the Mito-Jammer on tumor cells. The images and quantitative analysis revealed that HA-CD@MOF could efficiently suppress tumor cell migration, with fewer cells observed per field, 3.25 times fewer than in the control group (Figure S25, Supporting Information). Then, a mouse model of lung metastasis was established for the different treatments, and the lungs were collected to observe the antimetastatic effect of the Mito-Jammer in vivo (Figure 8F). Bouin's staining of the lung tissue visually indicated that the control and Cu-MOF groups were still filled with metastatic nodules. Consistent with the immune response results, HA-CD@MOF elicited evident metastatic tumor inhibition with the fewest pulmonary nodules (Figure 8G). Accordingly, H&E-stained mouse lung sections further verified the antimetastatic capacity of the cuproptosis-sensitizing bimetallic ion strategy. Moreover, the H&E staining results of the major organs again confirmed the biosafety of the Mito-Jammer, showing no abnormalities in the treated mice from the DC maturation and lung metastasis experiments (Figures S25 and S26, Supporting Information). In summary, these findings suggest that the Mito-Jammer can effectively induce an immune response and establish an immune advantage for suppressing tumor metastasis through bimetallic ion cascade mitochondrial dysfunction-enhanced apoptosis.
Conclusion
In conclusion, we successfully fabricated a self-reinforced bimetallic Mito-Jammer to sensitize cuproptosis and cuproptosis-related immunotherapy. The presence of an HA shell endowed the Mito-Jammer with excellent tumor-targeting capability and minimal side effects. Upon decomposition within the GSH/HAD-overexpressing TME, the Mito-Jammer released CaO 2 and Cu 2+ . The exposed CaO 2 further yielded H 2 O 2 and Ca 2+ in a weakly acidic environment to strengthen the Cu 2+ -based Fenton-like reaction. Furthermore, the combination of CDT and Ca 2+ overload initiated a ROS storm and cascade mitochondrial damage, resulting in the downregulation of intracellular ATP levels and the subsequent blocking of Cu-ATPase to trigger cuproptosis. Apart from selectively improving ion aggregation within tumor cells, we described a promising approach for sensitizing tumor cuproptosis by using Ca 2+ overload to block Cu 2+ efflux through cutting off the energy supply to Cu-ATPase. Beyond the ultimate antitumor therapeutic efficacy of the Mito-Jammer, we also explored its potential in immune arousal and metastasis inhibition. Therefore, this study presents a promising approach to bimetallic ion interference for boosting tumor cuproptosis and immunotherapy. Since multiple studies have revealed the correlation between mitochondrial dysfunction and the tumoral immune response, and cuproptosis is a mitochondrial respiration-dependent cell death pathway, this enhanced cuproptosis strategy could be further investigated in future research on immunotherapy and mitochondrial regulation.
Figure 2 .
Figure 2. CDT capacity and targeting abilities of HA-CD@MOF NPs. A) H 2 O 2 -generating ability of the HA-CD@MOF NPs with or without HAD in solutions with GSH (10 mM); n = 3. B) The effect of GSH (10 mM) and H 2 O 2 (10 mM) on MB degradation. C) MB degradation rate under different conditions in solutions with GSH (10 mM) and H 2 O 2 (10 mM). D) Electron spin resonance spectra of DMPO mixed with HA-CD@MOF NPs under different conditions (i, without GSH and HAD; ii, with GSH and HAD). E) The GSH-depleting ability of HA-CD@MOF NPs at different concentrations, n = 3. F) CLSM images of HUVECs and 4T1 cells after coincubation with CD@MOF NPs or HA-CD@MOF NPs for various times (scale bars: 50 μm). G) FCM analysis of the intracellular uptake of CD@MOF NPs or HA-CD@MOF NPs in 4T1 cells and HUVECs for 0.5, 1, 2, and 4 h. Results are presented as means ± SD. *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001.
Figure 3 .
Figure 3. Self-reinforced chemodynamic potency of the Mito-Jammer in the TME. A) Biosafety of HUVECs after coincubation with different concentrations of HA-CD@MOF NPs for 12 h, n = 3. B) Cell viability of 4T1 cells after treatment with a) Cu-MOF, b) DOX@MOF, c) CD@MOF, and d) HA-CD@MOF NPs at different concentrations for 6 h, n = 3. C) FCM measurement of 4T1 cell apoptosis ratios in the control, Cu-MOF, DOX@MOF, CD@MOF, and HA-CD@MOF NP groups. D) CLSM images of 4T1 cells after live/dead staining. Red fluorescence refers to dead cells, and green fluorescence refers to live cells (scale bar: 50 μm). E) CLSM images of DCFH-DA assays of 4T1 cells under different treatments (scale bars: 50 μm). F) Quantitative analysis of DCF fluorescence intensity within 4T1 cells measured via FCM. G) H 2 O 2 levels within 4T1 cells under various conditions, n = 3. Results are presented as means ± SD. *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001.
Figure 5 .
Figure 5. Biosafety and tumor-targeting assay of HA-CD@MOF in vivo. A) Routine blood (left) and B) blood biochemistry (right) analysis of mice sacrificed on certain days after HA-CD@MOF NP treatment; n = 5. C) Fluorescence images of mice treated with CD@MOF NPs and HA-CD@MOF NPs at different time points in vivo, and ex vivo fluorescence images of the main organs and tumors harvested from mice at 72 h. D) Quantitative analysis of fluorescence intensity at the tumor site in vivo; n = 3. E) Quantitative analysis of fluorescence intensity in the main organs and harvested tumors ex vivo; n = 3. The results are presented as means ± SD. *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001.
Figure 6 .
Figure 6. PAI/MRI ability of HA-CD@MOF NPs. A) PAI images of the tumor after treatment with CD@MOF NPs and HA-CD@MOF NPs. B) T 1 -weighted images of the tumor at various time points after intravenous injection of CD@MOF NPs and HA-CD@MOF NPs. C) Quantitative analysis of the PAI intensity of the tumor; n = 3. D) T 1 -weighted MRI signal intensity of the tumor; n = 3. Results are presented as means ± SD. *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001.
Figure 8 .
Figure 8. Immunotherapy and metastasis suppression performance of HA-CD@MOF NPs. CLSM images of A) CRT expression and B) HMGB1 release profiles of 4T1 cells (scale bars: 50 μm). C) FCM analysis of DC maturation stimulated by HA-CD@MOF NPs in vitro. D) Immunofluorescence staining of CRT expressed in tumor tissues from mice (scale bars: 100 μm). E) DC maturation in vivo measured via FCM. F) Schematic illustration of the lung metastasis suppression experimental process. G) Bouin's tri-chrome-fixed lung tissue of mice after sacrifice and the corresponding H&E staining images (scale bars: 200 μm). | 8,811.6 | 2024-02-11T00:00:00.000 | [
"Medicine",
"Materials Science"
] |
UHPLC-HRMS and GC-MS Screening of a Selection of Synthetic Cannabinoids and Metabolites in Urine of Consumers
Background and Objectives: The use of synthetic cannabinoids has increased around the world. As a result, the implementation of accurate analysis in human biological matrices is relevant and fundamental. Two different analytical technologies, ultra-high-performance liquid chromatography-high-resolution mass spectrometry (UHPLC-HRMS) and high-sensitivity gas chromatography-mass spectrometry (GC-MS), were used for the determination of three synthetic cannabinoids, JWH-122, JWH-210 and UR-144, and their metabolites in the urine of consumers. Materials and Methods: Sample preparation included an initial hydrolysis with β-glucuronidase and liquid-liquid extraction. The UHPLC-HRMS method included a Kinetex 2.6 u Biphenyl 100A (100 × 2.1 mm, 2.6 μm) (Phenomenex, Italy) column with a gradient mobile phase consisting of mobile phase A (ammonium formate 2 mM in water, 0.1% formic acid) and mobile phase B (ammonium formate 2 mM in methanol/acetonitrile 50:50 (v/v), 0.1% formic acid), and a full-scan data-dependent MS2 (ddMS2) mode was used (mass range 100–1000 m/z). The GC-MS method employed an Ultra-Inert Intuvo GC column (HP-5MS UI, 30 m × 250 µm i.d., film thickness 0.25 µm; Agilent Technologies, Santa Clara, CA, USA), and electron-impact (EI) mass spectra were recorded in total ion monitoring mode (scan range 40–550 m/z). Results: Both methods have been successfully used for the screening of parent synthetic cannabinoids and their metabolites in urine samples of consumers. Conclusions: The screening method applied to JWH-122, JWH-210, UR-144 and their metabolites in the urine of consumers can be applied to other compounds of the JWH family.
Introduction
Over the last few years, synthetic cannabinoids, also called synthetic cannabinoid receptor agonists, have been introduced on the illicit market to evade psychotropic drug legislation and to hasten the onset and prolong the duration of action of cannabis-like effects [1,2].
The rapid synthesis of such compounds and their rising popularity in illegal markets have become a challenge for clinical and forensic laboratories, since the development of analytical methods cannot keep up with the rapid change of chemical structures. Indeed, 14 chemical families of synthetic cannabinoid receptor agonists are presently recognized and not all are included in current banning laws [1,3]. Similar to the other new psychoactive substances, the analytical detection of synthetic cannabinoids presents several limitations, the most important being the unavailability of adequate reference standards for parent drugs and metabolites and the lack of analytical methods to readily detect these substances in cases of intoxication and fatalities [4]. Differently from cannabis products, synthetic cannabinoids can cause severe toxicity in active consumers and the offspring of pregnant or breastfeeding mothers [5,6], causing outbreaks of intoxication and fatalities [7,8]. Therefore, the analytical challenge involves not only the large range of different compounds and/or metabolites to identify, but also the variety of biological matrices to investigate including non-conventional matrices, which have gained major interest in recent years for information provided, detection window and minimal sample collection invasiveness [9].
A variety of rapid test kits have been developed and marketed over the past few decades, most of which are only intended for the rapid and presumptive identification of traditional drugs of abuse. However, the specificity of these methods depends on the affinity and the cross-reactivity of the antibodies used for the parent drug, its analogues and its metabolites. Initial screening of synthetic cannabinoids and their metabolites in biological matrices is difficult since the few onsite or immunochemical tests available on the market are specific for single molecules and often only for parent drugs [10]. The rapid and continuous release of novel molecules and the need to detect not only parent drugs but also, and especially, drug metabolites require more specific and selective methodologies such as gas or liquid chromatography coupled to mass spectrometry or tandem mass spectrometry [11][12][13][14][15][16]. We hereby propose a screening method for urinalysis of synthetic cannabinoids and principal metabolites using a fast sample extraction and two different analytical techniques: ultra-high-performance liquid chromatography-high-resolution mass spectrometry (UHPLC-HRMS) and high-sensitivity gas chromatography-mass spectrometry (GC-MS). The developed methodology has been used for the rapid screening of three synthetic cannabinoids, JWH-122, JWH-210 and UR-144, and their respective metabolites JWH-122 N-(4-hydroxypentyl), JWH-122 N-(5-hydroxypentyl), JWH-210 N-(4-hydroxypentyl), JWH-210 N-(5-hydroxypentyl), UR-144 N-(4-hydroxypentyl) and UR-144 N-(5-hydroxypentyl) in the urine of Spanish consumers. We focused on these particular substances due to their recent spread on the Spanish illegal market, as reported by the consumers themselves in web fora and to a Spanish non-governmental organization dealing with drug risk reduction.
Sample Preparation
Urine samples collected between 29 March 2019 and 10 May 2019 were donated by synthetic cannabinoid consumers who attended a private club. Each participant self-administered a cigarette containing the synthetic cannabinoid they selected, which was obtained from an unknown source but analysed for content by a Drug Checking Service run by a Spanish NGO (Energy Control). At the time of the study, none of these synthetic cannabinoids were illegal in Spain, and personal use was allowed in private clubs for cannabis and synthetic cannabinoid users. Sample collection was authorized by the local Human Research Ethics Committee (CEI-HUGTiP ref. PI-18-267, Badalona, Spain). Samples were stored at −20 °C until analysis.
Analytes were retrieved by liquid-liquid extraction. Since it has been shown that synthetic cannabinoid hydroxymetabolites are mainly present as glucuronides in urine samples, urine hydrolysis was performed as reported in previous studies [12,14,15].
After cooling, samples were extracted twice with 6 mL hexane/ethyl acetate (9:1). After centrifugation, the organic layer was divided into two aliquots of 3 mL each.
The first aliquot was evaporated to dryness at 40 °C under a nitrogen stream and derivatized with 25 µL Bis(trimethylsilyl)trifluoroacetamide (BSTFA) containing 1% trimethylchlorosilane (TMCS) at 70 °C for 30 min. A volume of 1 µL was injected into the GC-MS system.
The second aliquot was evaporated to dryness under a nitrogen stream and then dissolved in a 50 µL mixture of mobile phase A (ammonium formate 2 mM in water, 0.1% formic acid) and B (ammonium formate 2 mM in methanol/acetonitrile 50/50, 0.1% formic acid) (50:50, v/v). A volume of 10 µL was injected into the UHPLC-HRMS system.
Gas Chromatography-Mass Spectrometry (GC-MS) Instrumentation
The GC-MS instrument consisted of an Intuvo 9000 GC System coupled with a 5977B MSD (Agilent Technologies, Palo Alto, CA, USA). The Ultra-Inert Intuvo GC column (HP-5MS UI, 30 m × 250 µm i.d., film thickness 0.25 µm; Agilent Technologies) was used for separation. The GC-MS conditions for the screening procedures were as follows: splitless injection mode; helium (purity 99%) carrier gas flow 1.2 mL/min; injection port, ion source, quadrupole, and transfer line temperatures were 260, 230, 150 and 320 °C, respectively; column temperature was 70 °C for 2 min, increased to 190 °C at 30 °C/min, and then increased to 290 °C at 5 °C/min and held for 10 min. Subsequently, the temperature was increased to 340 °C at 40 °C/min to eliminate impurities from the column. The electron-impact (EI) mass spectra were recorded in total ion monitoring mode (scan range 40-550 m/z).
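As a cross-check of the oven program, the segment durations and total run time implied by the stated set points can be computed with a small helper; this reads the "5 °C/min for 10 min" step as a ramp to 290 °C followed by a 10-min hold, which is an interpretation, and the function itself is not part of the published method:

def gc_runtime_min(initial_hold, start_temp, segments):
    """Total oven program time (min): initial hold plus ramp and hold times for
    each (rate_C_per_min, target_C, hold_min) segment."""
    total, temp = initial_hold, start_temp
    for rate, target, hold in segments:
        total += (target - temp) / rate + hold
        temp = target
    return total

# 70 C (hold 2 min) -> 190 C at 30 C/min -> 290 C at 5 C/min (hold 10 min)
# -> 340 C at 40 C/min (column bake-out).
print(gc_runtime_min(2, 70, [(30, 190, 0), (5, 290, 10), (40, 340, 0)]))  # -> 37.25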
Full scan data files were processed by Agilent MassHunter Workstation-Unknowns Analysis (Agilent Technologies). The international mass spectral libraries used for peak identification were the NIST Research Library (National Institute of Standards and Technology) and the SWGDRUG Library version 3.6 (Scientific Working Group for the Analysis of Seized Drugs, website swgdrug.org).
Ultra-High-Performance Liquid Chromatography-High-Resolution Accurate Mass Spectrometry (UHPLC-HRMS) Instrumentation
The UHPLC/ESI Q-Orbitrap system consisted of an Ultimate 3000 LC pump and an Ultimate 3000 autosampler coupled with a Q Exactive Focus mass spectrometer equipped with a heated electrospray ionization (HESI) probe operating in positive ionization mode, and the system was controlled by TraceFinder 4.0 software (ThermoFisher Scientific, Bremen, Germany).
Separation was performed on a Kinetex Biphenyl 100A column (100 × 2.1 mm, 2.6 µm) (Phenomenex, Italy). The run time was 18 min with a gradient mobile phase composed of ammonium formate 2 mM in water with 0.1% formic acid (mobile phase A) and ammonium formate 2 mM in methanol/acetonitrile 50:50 (v/v) with 0.1% formic acid (mobile phase B) at a flow rate of 0.6 mL/min. Initial conditions were 20% B, held for 2 min, increased to 81.4% B within 9 min, increased to 100% B within 0.2 min, held for 4.3 min, returned to 20% B within 0.1 min, and then held for 2.4 min. LC flow was directed to waste for the first 4.5 min and after 13.5 min. Autosampler and column oven temperatures were 4 °C and 40 °C, respectively.
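Reading the gradient steps as consecutive (so the times below are cumulative run times, which is an interpretation of the "within X min" phrasing), the %B profile can be tabulated and interpolated as follows; this sketch is only illustrative and not part of the published method:

# (time_min, %B) breakpoints implied by the description, read cumulatively:
gradient = [(0.0, 20), (2.0, 20), (11.0, 81.4), (11.2, 100),
            (15.5, 100), (15.6, 20), (18.0, 20)]

def percent_b(t, table=gradient):
    """Linear interpolation of %B at run time t (min)."""
    for (t0, b0), (t1, b1) in zip(table, table[1:]):
        if t0 <= t <= t1:
            return b0 + (b1 - b0) * (t - t0) / (t1 - t0)
    raise ValueError("time outside gradient")

print(percent_b(6.5))  # %B midway through the 2-11 min ramp -> 50.7

The last breakpoint falling at 18.0 min matches the stated 18-min run time, which supports this cumulative reading.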
MS parameters were as follows: ionization voltage was 3.0 kV; sheath gas and auxiliary gas were 35 and 15 arbitrary units, respectively; S-lens radio frequency (RF) level was 60; vaporizer temperature and capillary temperature were both 320 °C. Nitrogen was used for spray stabilization, for collision-induced dissociation experiments in the higher-energy collisional dissociation (HCD) cell, and as the damping gas in the C-trap. The instrument was calibrated in positive and negative modes every week.
Data were acquired in full-scan data-dependent MS2 (ddMS2) mode with an inclusion list containing the exact masses of over 1400 compounds including parent compounds and their metabolites.
Full scan data acquisition was conducted as follows: resolution of 70,000, micro-scans of 1, maximum injection time of 120 ms and a scan range of 100-1000 m/z.
The following settings for the dd-MS 2 mode were used: resolution of 17,500, isolation window of 1.0 and HCD cell with stepped normalized collision energy of 17.5, 35.0, 52.5 V.
The MS and fragmentation data acquired were processed by Thermo Scientific TraceFinder software. This specific software performs a thorough interrogation of the database by making use of the built-in database and mass spectral library of over 1400 compounds, retention times, isotope pattern matching, and elemental composition determinations to identify drugs and metabolites.
Short Methods Validation
Although a complete method validation was not carried out, since the proposed methodologies were only intended for the initial screening of urine samples, relevant validation parameters were determined following the most recent criteria for method development and validation in analytical toxicology [17,18]. Limits of Detection (LODs) and Limits of Quantification (LOQs) were estimated by analyzing a pool of blank urine samples spiked with decreasing concentrations of the analytes and thereafter calculating the signal-to-noise ratio. The LOD was defined as the lowest concentration with good chromatography that yielded a signal-to-noise ratio higher than 3, and the LOQ as the lowest concentration with a signal-to-noise ratio higher than 10. Carry-over and selectivity were also evaluated, as these are essential parameters for a screening methodology.
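A minimal sketch of the signal-to-noise criterion described above, assuming peak heights and baseline noise have already been extracted from the chromatograms (the concentration series and signal values are illustrative, not the study's data):

def estimate_lod_loq(series):
    """series: list of (concentration, peak_height, baseline_noise), low to high.
    Returns (LOD, LOQ) as the lowest concentrations with S/N > 3 and > 10."""
    lod = loq = None
    for conc, height, noise in series:
        sn = height / noise
        if lod is None and sn > 3:
            lod = conc
        if loq is None and sn > 10:
            loq = conc
    return lod, loq

# Illustrative spiked series (ng/mL, arbitrary signal units):
print(estimate_lod_loq([(0.5, 2.1, 1.0), (1.0, 4.5, 1.0),
                        (2.5, 12.0, 1.0), (5.0, 26.0, 1.0)]))  # -> (1.0, 2.5)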
After sample pretreatment, one aliquot of the extract is immediately injected into the UHPLC-HRMS system for a 20-min run, while the other aliquot undergoes a 30-min derivatization followed by a 20-min GC-MS run. Extracted ion chromatograms from the extraction of 1 mL drug-free urine spiked with 100 ng UR-144, JWH-122, JWH-210 and their metabolites, and three positive urine samples screened with the two different instruments, are shown in Figures 1 and 2.
Retention times, monitored m/z ions, and limits of detection (LOD) and quantification (LOQ) of the analytes under investigation for the two different methodologies are reported in Table 1.
Although a full method validation was not carried out, essential parameters were evaluated. The LOD and LOQ obtained for all the analytes under investigation were fit for the purpose of the study. No additional peaks due to endogenous substances, and no carryover interfering with the analytes, were detected. Furthermore, even though the total analysis time was not short, this methodology was not set up only to screen the compounds reported in this study, which serve only as an example. Indeed, preliminary experiments showed that the same methodology could be applied to the whole JWH family (e.g., JWH-018, JWH-073, JWH-200, JWH-250), tested as pure standards at the time of publication. In this regard, it should be noted that two different analytical methodologies with different features were applied to screen the reported compounds. On the one hand, UHPLC-HRMS provided a faster run time and can also be applied in cases where pure standards of the parent substances and metabolites under investigation are not available, since exact mass measurement of the molecular ion and an extended spectral library allowed good recognition of several synthetic cannabinoids and principal metabolites. On the other hand, a latest-generation GC-MS assay provided the same sensitivity (in terms of LOD and LOQ) and specificity as UHPLC-HRMS, showing that a more traditional, cheaper, simpler and more widespread methodology can also be applied to screen these new psychoactive substances and metabolites in biological fluids.
This evidence is important in clinical and forensic cases involving intoxication or fatalities, especially when the consumer is unaware of the substances consumed due to surreptitious product substitution or adulteration [19][20][21][22].
Conclusions
The limited availability of screening tests for the detection of synthetic cannabinoids and/or metabolites in the urine of consumers [9,14] prompted us to propose a screening method for urinalysis of JWH-122, JWH-210, UR-144 and their metabolites. The method, which successfully coupled UHPLC-HRMS and GC-MS assays, can be applied to other compounds of the JWH family.
Author Contributions: Conceptualization, M.P., E.M. and S.Z.; methodology, E.P., M.F. and E.M.; data curation, all authors; writing-original draft preparation, S.Z. and S.P.; writing-review and editing, all authors. All authors have read and agreed to the published version of the manuscript. | 3,047.8 | 2020-08-01T00:00:00.000 | [
"Medicine",
"Chemistry"
] |
Hints of a new leptophilic Higgs sector?
We show that a new leptophilic Higgs sector can resolve some intriguing anomalies in current experimental data across multiple energy ranges. Motivated by the recent CMS excess in the resonant $e\mu$ channel at 146 GeV, we focus on a leptophilic two-Higgs-doublet model, and propose a resonant production mechanism for the neutral components of the second Higgs doublet at the LHC using the lepton content of the proton. Interestingly, the same Yukawa coupling $Y_{e\mu}\sim 0.6-0.8$ that explains the CMS excess also addresses the muon $(g-2)$ anomaly. Moreover, the new Higgs doublet also resolves the recent CDF $W$-boson mass anomaly. The relevant model parameter space will be completely probed by future LHC data.
I. INTRODUCTION
Using the Higgs boson as the keystone for new physics searches is well-motivated [1], as an extended Higgs sector could potentially address some of the pressing issues plaguing the Standard Model (SM), including the gauge hierarchy problem, stability of the electroweak vacuum, mechanism of electroweak symmetry breaking, origin of the fermion masses and mixing, matter-antimatter asymmetry, and the nature of dark matter. Therefore, even though the measured properties of the 125-GeV Higgs boson discovered at the LHC [2,3] are thus far consistent with the SM expectations [4,5], further precision Higgs studies, as well as direct searches for additional Higgs bosons, must continue.
An interesting aspect of beyond-the-SM (BSM) physics is lepton flavor violation (LFV), which is forbidden in the SM by an accidental global symmetry. In fact, the observation of neutrino oscillations [6][7][8][9][10] necessarily implies LFV. However, despite intense experimental efforts, no corresponding LFV in the charged lepton sector has been observed [11]. Therefore, alternative searches for LFV involving exotic Higgs decays (h → eµ, eτ, µτ ) could be powerful probes of BSM physics [12][13][14][15][16][17][18]. Both the ATLAS and CMS Collaborations have performed such LFV Higgs searches with the √s = 13 TeV LHC Run-2 data [19][20][21][22][23]. Although no evidence for LFV decays of the 125 GeV Higgs boson was found, CMS has reported an intriguing 3.8σ local (2.8σ global) excess in the resonant eµ search around 146 GeV, with a preferred cross section of σ(pp → H → eµ) = 3.89 +1.25 −1.13 fb [23]. If confirmed, this would be a clear sign of BSM physics. In this letter, we take the CMS eµ excess at face value and provide the simplest possible interpretation in terms of leptophilic neutral scalars within a two-Higgs-doublet model (2HDM). In this context, we propose a novel resonant production channel for the leptophilic neutral (pseudo)scalars at the LHC using the lepton parton distribution function (PDF) of the proton [24][25][26][27]; see Fig. 1. We show that this scenario can explain the CMS excess with a Yukawa coupling Y eµ ∼ 0.55−0.81, while being consistent with all existing constraints. Another interesting feature of our solution is its intimate connection to two other outstanding anomalies in current experimental data, namely, the (g − 2) µ anomaly [28][29][30] and the CDF W-mass anomaly [31]. We emphasize that the prospect of probing a leptophilic light Higgs sector at the energy and intensity frontiers is a worthwhile study in its own right, irrespective of the future status of these anomalies.
II. MODEL SETUP
Here we propose an economical scenario with a leptophilic 2HDM to explain the CMS excess. We work in the Higgs basis [32], where only one neutral Higgs acquires a nonzero vacuum expectation value, $v$. In this basis, the scalar fields can be parameterized as
$$H_1 = \begin{pmatrix} G^+ \\ \frac{1}{\sqrt{2}}\left(v + H_1^0 + i G^0\right) \end{pmatrix}, \qquad H_2 = \begin{pmatrix} H^+ \\ \frac{1}{\sqrt{2}}\left(H_2^0 + i A\right) \end{pmatrix},$$
where $(G^+, G^0)$ are the Goldstone modes, eaten up by the $W$ and $Z$ after electroweak symmetry breaking, $(H_1^0, H_2^0)$ and $A$ are the neutral CP-even and CP-odd scalars, respectively, and $H^+$ is a charged scalar field. In the alignment/decoupling limit [33][34][35][36], we identify $H_1^0$ with the observed 125 GeV SM-like Higgs boson, whereas the $H_2$-sector does not couple to the SM gauge bosons. This is in agreement with the LHC data [37][38][39]. We assume the mixing angle $\theta$ between the CP-even scalar $H_2^0 \equiv H$ and the SM Higgs boson is small, and the only relevant production mechanism for $H$ (and $A$) at colliders is via its leptonic Yukawa interactions:
$$-\mathcal{L}_Y \supset Y_{\alpha\beta}\, \bar{L}_\alpha H_2 \ell_{R\beta} + \text{H.c.} \qquad (1)$$
For either $Y_{e\mu} \neq 0$ or $Y_{\mu e} \neq 0$, with all other $Y_{\alpha\beta}$ involving electrons or muons assumed to be small, the dominant contribution to the $pp \to H/A \to e\mu$ signal comes from the s-channel Feynman diagram shown in Fig. 1, where the $H/A$ is produced resonantly using the lepton PDF of the proton, and then decays to $e^\mp\mu^\pm$ final states with a branching ratio (BR) determined by the structure of the Yukawa coupling matrix $Y$ in Eq. (1). There is a sub-dominant contribution to the same final state from a t-channel exchange of $H/A$, not shown in Fig. 1, but included in our calculation. We estimate the signal cross section numerically using MadGraph5_aMC@NLO [40] at leading order (LO) parton level with the LUXlep-NNPDF31 PDF (82400) [25,[41][42][43]. The default MadGraph5 cuts are applied at parton level, and the default LO dynamical scale is used, which is the transverse mass calculated by a $k_t$-clustering of the final-state partons [44]. The cross section result including both $H$ and $A$ contributions is shown by the blue curve in the left panel of Fig. 2 as a function of $|Y_{e\mu}|$ (also applicable for $|Y_{\mu e}|$) for $m_{H/A} = 146$ GeV and assuming BR$(H/A \to e\mu) = 70\%$ (explained below), where the thickness of the band accounts for the theory uncertainty due to scale ($^{+39.4\%}_{-30.3\%}$) and PDF ($\pm 4.5\%$) variation. The horizontal green (yellow) shaded region explains the CMS excess at 1σ (2σ). The corresponding ATLAS search [19] is not directly comparable with the CMS analysis, but a back-of-the-envelope calculation from the sideband data mildly disfavors a narrow-width excess at 146 GeV, and a rough scaling of the background gives a ballpark upper limit of about 3.0 fb on the cross section [45], as shown by the horizontal dashed line in Fig. 2. We find that $Y_{e\mu} \sim 0.55-0.81$ can explain the CMS excess at 2σ. For such values of the leptonic Yukawa coupling, any quark Yukawa couplings of the second Higgs doublet $H_2$ must be small; otherwise, the model would be ruled out by chirality-enhanced meson decays, such as $\pi^+ \to e^+\nu$. Thus our proposal differs from other scalar interpretations of the CMS excess [46,47], which used quark couplings to enhance the production cross section.
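To illustrate how the quoted coupling range follows from the quadratic dependence of the resonant cross section on the Yukawa coupling, the sketch below inverts $\sigma(Y) \propto |Y_{e\mu}|^2$ against the CMS-preferred band. The reference normalization (SIGMA_REF at Y_REF) is a hypothetical placeholder rather than the MadGraph output of the paper; only the CMS central value, its uncertainties, and the quoted scale variation are taken from the text.

```python
# Sketch: invert the quadratic coupling dependence of the resonant
# cross section against the CMS-preferred band (assumed normalization).

CMS_XSEC, ERR_UP, ERR_DN = 3.89, 1.25, 1.13   # fb, CMS excess values

# Hypothetical reference point: sigma_ref at Y_ref (NOT from the paper).
Y_REF, SIGMA_REF = 0.7, 4.5                   # fb, placeholder normalization
SCALE_UP, SCALE_DN = 0.394, 0.303             # quoted scale uncertainty

def yukawa_for(sigma_target, sigma_ref=SIGMA_REF, y_ref=Y_REF):
    """Solve sigma_ref * (Y / Y_ref)^2 = sigma_target for Y."""
    return y_ref * (sigma_target / sigma_ref) ** 0.5

# 2-sigma band of the CMS excess:
lo, hi = CMS_XSEC - 2 * ERR_DN, CMS_XSEC + 2 * ERR_UP
# Most permissive coupling range once the theory band is folded in:
y_min = yukawa_for(lo, SIGMA_REF * (1 + SCALE_UP))
y_max = yukawa_for(hi, SIGMA_REF * (1 - SCALE_DN))
print(f"Y in [{y_min:.2f}, {y_max:.2f}] for the assumed normalization")
```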
III. CONSTRAINTS
The large $Y_{e\mu/\mu e}$ couplings of the neutral components, as well as the charged component, of the leptophilic Higgs doublet are subject to a number of other constraints, and also give rise to other interesting phenomena, as discussed below.
The same $Y_{e\mu(\mu e)}$ coupling gives an additional contribution to the $e^+e^- \to \mu^+\mu^-$ cross section via t-channel $H/A$ exchange, and therefore is constrained by LEP measurements, which are in good agreement with the SM prediction [51,52]. Naively, the contact interaction bounds from LEP data would kill the parameter space for $\mathcal{O}(1)$ Yukawa couplings [48]. However, this bound is not directly applicable if the neutral scalars are lighter than the LEP center-of-mass energy $\sqrt{s} = 209$ GeV. A dedicated analysis [53] comparing the 2HDM cross section, which includes the interference between the $H/A$-mediated diagrams and the SM processes, against the LEP dimuon data imposes the constraint $Y_{e\mu} < 0.8$, thus ruling out the parameter space shown by the brown-shaded region in Fig. 2. The same bounds are also applicable to the $Y_{\mu e}$ coupling; see Fig. 4 for different masses. The LEP limit can be significantly improved at future lepton colliders, such as the $\sqrt{s} = 1$ TeV ILC [54] with integrated luminosity $\mathcal{L} = 500$ fb$^{-1}$ (cf. the dashed curve in Fig. 4), which can probe $Y_{e\mu}$ (or $Y_{\mu e}$) down to 0.1 [53,55,56].
As for the hadron collider constraints on light neutral scalars, most of the Tevatron/LHC searches are done in the context of either the MSSM or a general 2HDM, and rely on the gluon fusion or vector boson fusion production mechanisms. None of these searches are applicable here, because the leptophilic $H/A$ does not directly couple to the quarks, and in the alignment limit ($\theta \to 0$) also does not couple to the $W/Z$ bosons. This also suppresses other production channels, such as pair production of $HA$.
The MACS experiment at PSI puts an upper bound on the muonium-antimuonium oscillation probability, $P(M_\mu \leftrightarrow \bar{M}_\mu) < 8.2 \times 10^{-11}$ at 90% CL [61], while a sensitivity at the level of $\mathcal{O}(10^{-14})$ is expected at the proposed MACE experiment [62]. In our 2HDM setup, the oscillation probability gets contributions from both $H$ and $A$ [60,63]; see Appendix A. If $H$ and $A$ are highly non-degenerate, i.e., only either $H$ or $A$ dominantly contributes, the MACS bound requires $Y_{e\mu} < 0.18$ for $m_{H/A} = 146$ GeV, as shown (for illustration only) by the vertical purple line in the left panel of Fig. 2, which rules out the LFV coupling needed to explain the CMS excess with a single scalar/pseudoscalar. However, for $m_H \simeq m_A$, there is a cancellation in the $M_\mu \leftrightarrow \bar{M}_\mu$ amplitude which allows for either $Y_{e\mu}$ or $Y_{\mu e}$ to be large, but not both. This is depicted by the gray-shaded region in the right panel of Fig. 2 for $m_H \simeq m_A = 146$ GeV. In this limit, even the future MACE sensitivity cannot rule out the CMS excess region.
Thus far, it seems either the $Y_{e\mu}$ or the $Y_{\mu e}$ coupling can be taken to be large to explain the CMS excess, while being consistent with the current constraints. However, as discussed below, a combination of the LHC charged Higgs constraints and the global fit to non-standard neutrino interactions (NSI) precludes the possibility of a large $Y_{\mu e}$ coupling, as shown by the horizontal purple-shaded region in the right panel of Fig. 2. Therefore, the only viable possibility is to have a large $Y_{e\mu}$ coupling and a small $Y_{\mu e}$ coupling (the lower right band of the CMS excess region in the right panel of Fig. 2).
B. Charged sector
At LEP, $H^\pm$ can be pair produced through either the s-channel Drell-Yan process via $\gamma/Z$, or the t-channel via light neutrino exchange. It can also be singly produced, either in association with a $W$ boson or through the Drell-Yan channel in association with leptons [48]. Once produced, the charged scalar decays into $\nu_\alpha \ell_{\beta,R}$ through the Yukawa coupling $Y_{\alpha\beta}$, which has the same signature as the right-handed slepton decay into a lepton plus a massless neutralino in SUSY models. We can therefore reinterpret the LEP slepton searches [64][65][66][67][68] to derive a bound on light charged scalars. Depending on the branching ratio BR$(H^+ \to \ell^+\nu)$, the LEP limit on the charged scalar mass varies from 80 to 100 GeV [48].
Similarly, at the LHC, a pair of charged scalars can be produced through the s-channel Drell-Yan process via $\gamma/Z$, followed by decays into $\nu_\alpha \ell_{\beta,R}$. By reinterpreting the LHC searches for right-handed sleptons, one can therefore put bounds on the charged scalar mass as a function of BR in the massless neutralino limit. From an ATLAS analysis of the LHC Run-2 data [69], we obtain a lower bound of $m_{H^+} > 425$ GeV at 90% CL for BR$(H^+ \to \mu^+\nu_e) = 1$. As we will see below, for $m_H = m_A = 146$ GeV, the charged Higgs boson cannot be too much heavier due to the electroweak precision data (EWPD) constraints. Therefore, we would need additional decay channels in order to make BR$(H^+ \to \mu^+\nu_e) < 1$ and relax the LHC constraints.
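As a rough sketch of the reinterpretation logic described above, the snippet below scales a pair-production cross section by BR² and compares it against a tabulated upper limit; all numbers in the arrays are illustrative placeholders, not the actual ATLAS auxiliary-material values.

```python
import numpy as np

# Sketch: reinterpret a slepton-search cross-section limit for the charged
# Higgs. All tabulated numbers below are illustrative placeholders.
masses      = np.array([100., 200., 300., 400., 500.])   # GeV
sigma_pair  = np.array([200., 20., 4., 1.2, 0.4])        # fb, DY H+H- (placeholder)
sigma_limit = np.array([50., 8., 2.5, 1.1, 0.6])         # fb, 90% CL limit (placeholder)

def excluded(br_munu):
    """Mass points excluded for a given BR(H+ -> mu nu)."""
    visible = sigma_pair * br_munu**2   # both charged scalars must decay visibly
    return masses[visible > sigma_limit]

print(excluded(1.0))   # widest excluded range at BR = 1
print(excluded(0.7))   # a reduced BR relaxes the bound, as argued in the text
```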
IV. RESOLVING THE W-BOSON MASS ANOMALY
The mass splitting between the neutral and charged components of the $SU(2)_L$ doublet $H_2$ breaks the custodial symmetry of the SM at the loop level. The resulting change in the relationship between the $W$ and $Z$ boson masses can be used to accommodate the recent CDF $W$-mass anomaly, which currently stands at 7σ [31]. This effect can be parameterized by the oblique parameters $S$ and $T$ [70,71], which modify the $W$-mass prediction as [72]
$$m_W^2 = (m_W^{\rm SM})^2 + \frac{\alpha \cos^2\theta_w\, m_Z^2}{\cos^2\theta_w - \sin^2\theta_w}\left(-\frac{S}{2} + \cos^2\theta_w\, T\right),$$
where $\theta_w$ is the electroweak mixing angle. We incorporate the global electroweak fit [73] with the new CDF data to show the allowed ranges for the scalar masses $(m_A, m_{H^+})$ with the choice of $m_H = 146$ GeV in Fig. 3 (blue band). While explaining the CDF $W$-mass shift, the model remains mildly consistent with the PDG global fit [74], as can be seen from the red region in Fig. 3. We find that the CDF anomaly prefers a significant splitting between $m_A$ and $m_{H^+}$. For $m_H = m_A = 146$ GeV, we require $m_{H^+} \simeq 228-234$ GeV to explain the CDF anomaly at 2σ. To reconcile the CDF-preferred $m_{H^+}$ region with the LHC constraint $m_{H^+} > 425$ GeV, we reinterpret the slepton search limit as a function of the charged Higgs mass and BR$(H^+ \to \mu^+\nu_e)$, using the publicly available cross section limits given as a function of the slepton mass in the auxiliary material of Ref. [69], as well as in an earlier ATLAS analysis [75]. We find that to lower the $m_{H^+}$ bound to $\sim 230$ GeV, as required by the CDF anomaly, we need BR$(H^+ \to \mu^+\nu_e) < 0.7$ (0.95) according to the cross section limits reported in Ref. [75] ([69]). We therefore fix BR$(H^+ \to \mu^+\nu_e) = 0.7$ for our analysis of the CMS excess in Fig. 2.
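A minimal numerical sketch of this chain, assuming the standard one-loop expression for the $T$ parameter of an extra doublet in the alignment limit and the textbook oblique-correction relation for the $W$-mass shift (with $S \approx 0$ and $U = 0$); the specific fit inputs of Refs. [73,74] are not reproduced here.

```python
import numpy as np

ALPHA, V = 1 / 137.036, 246.22      # fine-structure constant, Higgs vev (GeV)
MW, MZ = 80.357, 91.1876            # SM-fit W mass, Z mass (GeV)
SW2 = 0.2315; CW2 = 1 - SW2         # sin^2 / cos^2 of the weak mixing angle

def F(x, y):
    """Standard mass-splitting function entering the T parameter."""
    return (x + y) / 2 - x * y / (x - y) * np.log(x / y) if x != y else 0.0

def T_param(mH, mA, mHp):
    """One-loop T parameter of the extra doublet (alignment limit)."""
    return (F(mHp**2, mA**2) + F(mHp**2, mH**2) - F(mA**2, mH**2)) / (
        16 * np.pi**2 * ALPHA * V**2)

def delta_mW(S, T):
    """W-mass shift from oblique S, T (U = 0), standard relation."""
    return ALPHA * CW2 * MZ**2 / (CW2 - SW2) * (-S / 2 + CW2 * T) / (2 * MW)

T = T_param(146., 146., 230.)
print(f"T = {T:.3f}, delta mW = {1e3 * delta_mW(0., T):.0f} MeV")  # ~55 MeV
```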
For the purpose of our discussion here, we remain agnostic about the detailed structure of the Yukawa coupling matrix, which could account for the remaining 30% BR. Additional nonzero entries in the Yukawa matrix are viable, albeit requiring potential adjustments to suppress LFV. One example texture that fits our branching ratio requirement is $Y_{e\mu} = 0.71$, $Y_{\tau\tau} = 0.46$, with all other Yukawa entries negligible. This choice does not lead to trilepton LFV decays but does induce the radiative LFV decay $\mu \to e\gamma$ via a two-loop Barr-Zee process involving the tau lepton [15,76]. However, it is also important to consider other diagrams, such as the two-loop Barr-Zee diagram from the charged Higgs, which depends on the quartic coupling $\lambda (H_2^\dagger H_2)(H_1^\dagger H_2)$ and, depending on the sign of $\lambda$, can destructively interfere with the tau-loop-induced diagram. We find that the LFV constraints can be satisfied for the above choice of Yukawa couplings for a relatively small quartic coupling of order $\mathcal{O}(10^{-3})$.
We note here that instead of a large $Y_{e\mu}$ coupling, if we had allowed a large $Y_{\mu e}$ coupling, it would imply a coupling of the charged Higgs $H^-$ to electrons and muon neutrinos. This leads to $\nu_\mu$-$e$ coherent scattering in matter via t-channel exchange of the charged Higgs and hence generates an NSI parameter $\varepsilon_{\mu\mu}$ [48]. From a recent global analysis of NSI constraints, we get a 90% CL bound of $\varepsilon_{\mu\mu} < 0.015$ [77]. (This is derived from the bound on $\varepsilon_{\tau\tau} - \varepsilon_{\mu\mu}$ [77] (see also Ref. [78]), which is stronger than the individual bound on $\varepsilon_{\mu\mu}$; in our model, $\varepsilon_{\mu\mu}$ and $\varepsilon_{\tau\tau}$ cannot be simultaneously large due to strong charged LFV constraints, so the bound on $\varepsilon_{\tau\tau} - \varepsilon_{\mu\mu}$ is also applicable to $\varepsilon_{\mu\mu}$.) For $m_{H^+} \sim 230$ GeV, this gives an upper bound of $Y_{\mu e} \lesssim 0.23$, which is shown by the purple-shaded region in the right panel of Fig. 2.
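The quoted number can be checked by inverting the tree-level matching of charged-scalar exchange onto the NSI parameter, $\varepsilon_{\mu\mu} = |Y_{\mu e}|^2 / (4\sqrt{2}\, G_F m_{H^+}^2)$; this is the standard matter-NSI normalization, and it reproduces the 0.23 bound:

```python
import math

G_F = 1.1663787e-5   # Fermi constant, GeV^-2

def y_mue_max(eps_max, m_hplus):
    """Invert eps_mumu = |Y_mue|^2 / (4*sqrt(2)*G_F*m_H+^2) for the coupling."""
    return math.sqrt(eps_max * 4 * math.sqrt(2) * G_F * m_hplus**2)

print(f"Y_mue < {y_mue_max(0.015, 230.):.2f}")   # ~0.23, matching the text
```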
V. MUON ANOMALOUS MAGNETIC MOMENT
The same $Y_{e\mu}$ coupling also contributes to $(g-2)_\mu$ via the neutral and charged Higgs loops [79,80]; see Appendix B. The combined result of the Brookhaven [28] and Fermilab [29] $(g-2)_\mu$ experiments is 4.2σ away from the 2020 global average of the SM prediction [81]: $\Delta a_\mu({\rm WP}) = (251 \pm 59) \times 10^{-11}$. (This was recently updated to $\Delta a_\mu({\rm WP}) = (249 \pm 48) \times 10^{-11}$ [30], with no noticeable change in our results.) The discrepancy is, however, reduced to only 1.5σ if we use the ab-initio lattice calculation from the BMW collaboration [82], which gives $\Delta a_\mu({\rm BMW}) = (107 \pm 70) \times 10^{-11}$ [87]. (Other lattice calculations now agree with the BMW result in the "intermediate distance regime" [83][84][85][86], but a more thorough and complete analysis is ongoing.) The extra contribution from the neutral Higgs sector in our 2HDM scenario can explain the $(g-2)_\mu$ anomaly at 1σ, as shown by the red (orange) shaded region in Fig. 2, using the BMW (WP) value for the SM prediction. We find that the 1σ WP-preferred region is excluded by the LEP constraint on $Y_{e\mu}$ for $m_H \simeq m_A = 146$ GeV, whereas part of the 1σ BMW-preferred region is still allowed, while simultaneously explaining the CMS excess and the CDF $W$-mass anomaly.
Fig. 4 shows the $(g-2)_\mu$ anomaly-preferred region at 1σ in the neutral Higgs mass-coupling plane. For comparison, the green bar at 146 GeV shows the CMS excess region, whereas the purple shaded region around it is the exclusion region derived from CMS data [23]. The gray-shaded region shows the LEP exclusion from $e^+e^- \to \mu^+\mu^-$ data [53]. The magenta region is excluded at 2σ from the precision $Z$-width measurements [74], because for $m_{H/A} < m_Z$, an additional $Z$ decay mode involving $H/A$ opens up. Also shown is the indirect lower bound on the neutral Higgs mass, derived using a combination of the electroweak precision constraint on the mass splitting between the neutral and charged Higgs sectors, using the CDF (PDG) value of $m_W$, and the LEP lower limit of $\sim 100$ GeV on the charged Higgs mass. From Fig. 4, we find that if we use the WP value for $g-2$, only a narrow band around $m_{H/A} \simeq 25$ GeV can explain the $g-2$ anomaly at 1σ. On the other hand, if we use the BMW value, most of the parameter space for $m_{H/A} > 25$ GeV is currently allowed. Future sensitivity projections from the HL-LHC [88] and ILC [54] can cover most of the remaining allowed parameter space, irrespective of the status of the CMS excess. In general, a dedicated neutral scalar search in the LFV dilepton channels beyond 160 GeV could completely probe the $(g-2)_\mu$-allowed region.
VI. DISCUSSION AND CONCLUSION
Both the ATLAS and CMS collaborations have searched for new bosons decaying into opposite-sign, different-flavor light leptons ($e^\pm\mu^\mp$) [19,23]. In the CMS analysis, machine-learning techniques are used to enhance the sensitivity where an excess is observed. ATLAS, on the other hand, did not perform such a dedicated, BDT-optimized resonance search, and did not interpret the results for masses different from the SM value of $\sim 125$ GeV. Therefore, naively, it could be that the CMS analysis is sensitive to a signal hypothesis that was not reachable by ATLAS. Although a similar excess at 146 GeV is disfavored by ATLAS at 1σ (as shown in our Fig. 2) [45], this is only a ballpark estimate and not entirely conclusive; a dedicated interpretation of the ATLAS results is required.
Both analyses generated signal samples with two mechanisms: gluon fusion (ggH) and vector boson fusion (VBF). The contribution of the ggH mechanism to the total cross section is significantly higher [23], and therefore it dominates the results. To validate the use of these results through a simple cross-section comparison, we compared the kinematic distributions of the leptons between the ggH mechanism and direct production from the lepton content of the proton, and found good agreement.
In conclusion, the leptophilic 2HDM provides the simplest explanation of the CMS $e\mu$ excess at 146 GeV. It also simultaneously resolves the CDF $W$-mass and $(g-2)_\mu$ anomalies. A minimal extension of this 2HDM by a singlet charged scalar leads to the Zee model of radiative neutrino mass generation [96]. Should the CMS excess be confirmed, a detailed neutrino oscillation fit (similar to what was done in Ref. [48]) with a large $Y_{e\mu}$ entry could be performed, which might also lead to concrete predictions in the neutrino sector, including NSI, as well as for charged LFV decays.
Appendix A: Muonium-antimuonium oscillation. In the muonium system, the oscillation probability is controlled by the muon lifetime $\tau_\mu$ and the Wilson coefficient $G_{M\bar{M}}$, which in our 2HDM scenario receives contributions from $H$ and $A$ exchange (Eq. (A2)), with coefficients evaluated in the alignment limit. We find that for $m_H \simeq m_A$, there is a cancellation in the $G_{45}$ amplitude (at the level of 6%), while the $G_3$ amplitude vanishes if we consider only $Y_{e\mu}$ (or $Y_{\mu e}$).
Appendix B: Lepton anomalous magnetic moment. The one-loop contribution of the neutral and charged scalars to $(g-2)_\mu$ contains terms proportional to $m_e m_\mu$ and terms proportional to $m_\mu^2$. In the limit of $m_H \simeq m_A$, the terms proportional to $m_e m_\mu$ cancel. These terms also vanish in the limit $Y_{\mu e} \to 0$, or if the Yukawa couplings are real. For complex Yukawa couplings, there will be additional strong constraints from the electron electric dipole moment [97]. For our scenario with small $Y_{\mu e}$, the contribution reduces to the simple expression
$$\Delta a_\mu \simeq \frac{m_\mu^2\, |Y_{e\mu}|^2}{96\pi^2}\left(\frac{1}{m_H^2} + \frac{1}{m_A^2} - \frac{1}{m_{H^+}^2}\right). \qquad ({\rm B2})$$
The same Yukawa coupling $Y_{e\mu}$ also contributes to $(g-2)_e$, and $\Delta a_e$ is given by Eq. (B2) with the replacement $m_\mu \leftrightarrow m_e$. Due to the $m_e^2$ suppression, the corresponding bound on $Y_{e\mu}$ is much weaker. Moreover, it is not clear whether the $(g-2)_e$ result is anomalous. Although the experimental value of $a_e$ has been measured very precisely [98], the SM prediction [99] relies on the measurement of the fine-structure constant, and currently there is a 5.5σ discrepancy between the Paris Rb determination of $\alpha$ [100] and the Berkeley Cs determination [101]. The recent Northwestern result sits in between [98]. Until the discrepant $\alpha$ measurements are resolved, we cannot draw any meaningful constraints from $(g-2)_e$.
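Taking the simplified expression reconstructed above as Eq. (B2) at face value, a few lines suffice to see which coupling the $(g-2)_\mu$ central values prefer; the formula itself is an assumption of this reconstruction, not a verbatim equation from the source.

```python
import math

M_MU = 0.1056584   # muon mass, GeV

def delta_a_mu(y_emu, m_h, m_a, m_hplus):
    """Simplified one-loop (g-2)_mu shift for m_e -> 0 and Y_mue -> 0
    (the form reconstructed above as Eq. (B2))."""
    pref = (M_MU * y_emu) ** 2 / (96 * math.pi ** 2)
    return pref * (1 / m_h**2 + 1 / m_a**2 - 1 / m_hplus**2)

# Coupling needed for the BMW-preferred central value:
target = 107e-11
y = math.sqrt(target / delta_a_mu(1.0, 146., 146., 230.))
print(f"Y_emu ~ {y:.2f} for the BMW central value")  # O(1), near the LEP bound
```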
FIG. 1. A representative Feynman diagram for resonant production of leptophilic scalar fields at hadron colliders through the lepton PDF.
FIG. 2. Left: Total $e\mu$ production cross section from $H/A$ (blue band) at the $\sqrt{s} = 13$ TeV LHC as a function of the Yukawa coupling $Y_{e\mu}$ (or $Y_{\mu e}$) in our leptophilic 2HDM with $m_H \simeq m_A = 146$ GeV. Right: Same as the left panel, but in the $Y_{e\mu}$-$Y_{\mu e}$ plane. See text for details.
FIG. 4. The CMS excess at 1σ (green) and the 95% CL exclusion (purple) in the mass-coupling plane, contrasted with the 1σ regions preferred by $(g-2)_\mu$. Also shown are the constraints from LEP dilepton data, $Z \to 4\ell$, EWPD, and the future ILC and HL-LHC sensitivities. | 5,334 | 2023-05-30T00:00:00.000 | [
"Physics"
] |
Serious Game to Train Focus in Children with Attention Deficit Hyperactivity Disorder: "Tanji Adventure to the Diamond Temple"
-Attention Deficit Hyperactivity Disorder (ADHD) affects the academic performance of children. Children with ADHD struggle to remain focused during learning due to decreased attention and concentration. They are highly active and have trouble remembering teachers' instructions. Attention difficulties, focus disorders, and hyperactivity can hinder learning. This work observes the impact of a serious game of the platformer-puzzle genre, titled "Tanji Adventure to the Diamond Temple," on the learning activities of children with ADHD. The goal is to create a fun and engaging learning environment that boosts the motivation and focus of children with ADHD. The game development process uses iterative prototyping: each iteration yields a prototype that is refined in the next iteration. The game was tested on children with ADHD by examining their behavior before and after playing, to evaluate whether new game mechanics were necessary. The review procedure includes observing the children, interviewing teachers, and involving specialists to evaluate the game's content. The study confirms that the concentration of children with ADHD increases after playing the game. The game incorporates elements that help children with ADHD concentrate and increase their attention span.
Introduction
ADHD (attention deficit hyperactivity disorder) is one of the most common behavioral disorders in childhood [1], [2]. ADHD makes sufferers very active; other complaints experienced by people with ADHD are restlessness, an inability to stay still, and a lack of attention to the activities being carried out [3], [4]. ADHD may fall into one of three subtypes: subtype 1 is an attention disorder without hyperactivity and impulsivity; subtype 2 is a hyperactivity and impulsivity disorder without attention deficit; and subtype 3 is a combination of attention deficit, hyperactivity, and impulsivity.
The prevalence of ADHD in children globally ranges from 2% to 7%. Based on the DSM-IV diagnosis, 5% to 7% of children have ADHD [5]. According to [6], ADHD is not rare among children: it affects up to 6% to 9% of children, and in 60% of cases, significant ADHD symptoms persist into adulthood. There is no accurate data regarding the prevalence of ADHD in Indonesian children. Based on research [7] conducted in 2016 in Yogyakarta City and Sleman Regency, approximately 8.09% of school-age children had ADHD.
ADHD reduces children's learning performance because it also impacts their memory abilities. For example, children with ADHD find it challenging to sit still while studying, are very active, and find it difficult to remember instructions from their teachers because of a lack of attention [8], [9]. During the learning process, substantial attention is needed to capture and digest the information provided by the teacher [9]. According to [10], the learning ability of children with ADHD lags far behind that of their peers due to attention and concentration disorders and the hyperactivity that can hinder their learning process.
From the problems raised, the idea emerged to develop a serious game as learning media for children with ADHD, with the platformer genre accompanied by puzzles, named "Tanji Adventure to The Diamond Temple." The game utilizes the platformer and puzzle genres because the puzzle genre has designs and playing rules proven to improve the player's cognitive ability [11]. The platformer and puzzle genres also have mechanisms that can help children with ADHD learn to understand and remember shapes, colors, and concepts. In addition, children with ADHD who have trouble focusing on one task at a time may benefit from playing puzzle games, because their bright colors and exciting forms are more likely to hold their attention [12], [13].
The purpose of developing the serious game "Tanji Adventure to The Diamond Temple" for children with ADHD is to provide a fun and engaging new way of learning that increases their motivation and attention [14], [15]. Practicing focus is essential because children with ADHD are too active and cannot stay still [16], which makes it difficult for them to concentrate on a subject. Therefore, the game is designed to train the attention skills of children with ADHD, because attention is the most important skill to improve so that they can study well in school. In addition, video games can be a powerful tool to train children with ADHD to focus on daily activities [17], [18]. With this serious game, it is hoped that children with ADHD will experience exciting, fun, and enjoyable learning, and that this can be a solution for making learning more effective, so that children with ADHD can focus better on learning at school and on their daily activities.
The development of learning games for children with special needs is a current trend in education. Besides helping teachers educate children with special needs, this type of learning media effectively attracts students' attention. The success of several video games in supporting the learning process of children with special needs has led many developers to build game applications as educational support. Each piece of teaching media for children with special needs is made to fit the lessons taught in special schools. Therefore, in addition to creating new applications, developers also work on improving existing teaching media.
A previous work on developing serious games to increase the focus of children with ADHD is COMAC, a platformer game designed to improve their focusing ability [19]. In the development of COMAC, six design strategies were applied: 1) clear instructions to players, 2) positive feedback, 3) specific goals, 4) encouraging clear thinking, 5) displaying player statistics on the screen, and 6) encouraging organized behavior in children. The design strategy is implemented in COMAC by providing clear instructions, which is considered essential for improving working memory (WM), visual memory (VM), auditory memory (AM), and visual-spatial memory (VSM) [18]. COMAC teaches players to focus on the character being played in order to pick up bombs according to the orders given. Examples of commands include "Take the bomb with the number 27"; the player loses one life if they take the wrong bomb.
A platformer game was also developed in [15]. That study developed a game that aims to improve academic ability by applying successive goals and subgoals: repetitive activities, completing levels, collecting as many points as possible, and completing tasks to get rewards. Another serious game developed to improve focus in children with ADHD is Plan-it Commander [17]. In this game, players take on the role of the captain of a spaceship whose job is to collect rare minerals across the universe. Players are given various missions with different criteria, each with a specific learning objective; for example, on a mission to collect minerals, players must stay focused on collecting minerals and avoid the distractions that arise. In addition to training focus, the game also trains the player's time management, because missions must be completed within the allotted time. Children with ADHD have difficulty organizing the work they must do, so practicing time management can help encourage organized behavior [19].
The previous studies that developed serious games to improve focus in ADHD used puzzle and platformer gameplay. Games in these genres allow players to explore game stages to complete tasks. Players can design the right strategy and take their time to achieve goals; by designing the right strategy, the game helps players practice structured thinking [19] and trains them to stay focused on their goals. Each game has a story and missions that must be completed to advance to the next level, and the game can only continue once the mission is done. If the player makes a mistake, the player must repeat the game from the beginning. In addition to being given missions, players are also trained to control themselves and not be distracted by game objects they must avoid. Another element included in serious games that train the focus of children with ADHD is narrative: the narrative element can make the game more visually appealing so that children are interested in completing it [20].
Based on previous studies that developed serious games to train the focus of children with ADHD, several features must be present in the game. Firstly, the game must provide clear instructions to the player. Instructions can be written or spoken; players can improve their organization and attention skills while listening to instructions [18]. Secondly, the game must have clear goals for reaching a specific level. For example, to complete level 1, the player must get 100 points and defeat two enemies. Thirdly, the game must show statistics on the player's abilities, such as level, points, and scores, so players know how they are doing in the game [19]. Children with ADHD tend to feel insecure when they must complete a job, so displaying the player's level, speed, points, and score can increase their confidence [21].
The game developed here applies these three principles and adds unique mechanics that train the player's focus by combining eye and hand focus: each game mechanic involves the player's eye-hand coordination to complete the mission. In addition, the feedback given is positive, to increase players' enthusiasm and confidence. Players are also given time to recall what has been achieved and to plan strategies for the next level. The game also takes into account that children with ADHD can feel pressured if they find it challenging to complete tasks; players are therefore allowed to retry the levels in the game and learn from their mistakes so that they can finish the game well.
Methods
The development method used in making this game is the prototype method. The stages of game development consist of needs assessment, design, development, playtesting, and evaluation. The development process is iterative: after reaching the playtesting stage, it is possible to return to the design process. Each iteration generates a prototype, which is improved in the subsequent iteration.
Requirement Analysis
The development of a learning game to train the focus of children with ADHD begins with a requirement analysis to discover the general characteristics of children with ADHD and adapt to their learning styles, starting with a literature study. The literature study reviewed previous work on games used as teaching media for children with ADHD and articles discussing the characteristics of children with ADHD. The literature shows that the platformer genre is in great demand among children with ADHD [22]. In addition to the literature study, the needs analysis also relied on interviews. Interviews were conducted with experts to obtain the learning characteristics of children with ADHD. The interview process was carried out with two experts through remote discussions using the Zoom application (remote interviews were conducted because face-to-face meetings were impossible during the pandemic). The interviews involved an expert on Education for Children with Special Needs (ABK) and an Inclusive Media Development expert. These interviews explored the fundamental needs for producing learning games for children with ADHD, as well as the broader scope of ADHD, such as general characteristics, habits, and several theories about handling children with ADHD.
Observation was also carried out to get a complete picture of how children with ADHD learn, covering the school atmosphere and the children's learning styles. Direct, limited observations of children with ADHD were carried out in two public and inclusive schools in West Java and Jakarta, covering 7 children with ADHD across the two schools. During this field observation, interviews were also conducted with teacher-educators regarding the obstacles they face, to get a complete picture of the behavior and habits of children with ADHD in learning. Observations lasted one week in each school. Based on the needs analysis results, the primary references for developing this game are: a) Ensure that children with ADHD do not have other congenital special needs such as autism or other barriers to academic ability. b) Children with ADHD quickly lose their attention, so avoid using a too-flashy interface. c) Do not use noisy music, because it can interfere with focus. d) Provide positive feedback to motivate. e) Use language that is easy to understand. f) Instructions for completing a level should be clear. g) Give variations in giving instructions, for example, using voice instructions.
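For concreteness, the guidelines above can be captured in a configuration object of the following kind; all keys and values here are hypothetical illustrations, not settings taken from the actual game project.

```python
# Illustrative sketch only: the requirement list above expressed as a game
# configuration. Names and values are hypothetical, not from the actual game.
GAME_CONFIG = {
    "ui_theme": "calm",                              # (b) avoid a too-flashy interface
    "music": {"style": "soft", "tempo": "slow"},     # (c) no noisy music
    "feedback": {
        "on_success": "Great job! Keep going!",      # (d) positive feedback
        "on_failure": "Almost! Try again, you can do it!",
    },
    "language_level": "simple",                      # (e) easy-to-understand language
    "instructions": {
        "clarity": "explicit_per_level",             # (f) clear level goals
        "modes": ["text", "voice"],                  # (g) varied instruction modes
    },
}
```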
Design
The design stage is the process of producing designs in the form of concepts, content, and storyboards that describe the game's flow, and it plans the game's implementation procedures; the designs accumulate from the results of the needs analysis. Figure 1 shows a high-level outline of the storyboard. The game begins with a cutscene in which Tanji discovers his grandfather's treasure map. The scene then changes to Tanji attempting to locate a secret shrine. Several challenges await within the temple; these obstacles aim to help children with ADHD develop their capacity to focus. A boss with various skills defends each level.
Finally, Tanji discovers the Darkenstone prize at the end of the level. Table 1 shows a summary of the resulting design process.
Development
The development stage is where primary assets, such as game objects and scripts, are added to the game. Figure 2 shows the game object Tanji (the main character to be played) and the temple game object. In the game "Tanji Adventure to The Diamond Temple," various scripts control player movements, level changes, enemy movements, object movements, boss movements, and so on. After the primary development stage is completed, improvements are made to various aspects of the game, from visuals such as the background, game objects, and enemy AI, to added audio.
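The paper does not specify the engine or the script contents, but a movement script of the kind described might look like the following minimal pygame-style sketch (class and constant names are illustrative):

```python
import pygame

# Minimal sketch of a platformer movement script of the kind described
# (the actual engine and script names in the game are not specified).
GRAVITY, MOVE_SPEED, JUMP_SPEED = 0.5, 4, -10

class Player(pygame.sprite.Sprite):
    def __init__(self, x, y):
        super().__init__()
        self.rect = pygame.Rect(x, y, 32, 48)  # simple bounding box
        self.vel_y = 0.0
        self.on_ground = False

    def update(self, keys):
        dx = 0
        if keys[pygame.K_LEFT]:
            dx = -MOVE_SPEED
        if keys[pygame.K_RIGHT]:
            dx = MOVE_SPEED
        if keys[pygame.K_SPACE] and self.on_ground:
            self.vel_y = JUMP_SPEED            # jump only from the ground
            self.on_ground = False
        self.vel_y += GRAVITY                  # simple gravity integration
        self.rect.move_ip(dx, int(self.vel_y))
```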
Playtesting
The playtesting stage is the testing and implementation stage, carried out directly by the children. At this stage, children with ADHD are asked to play the game produced at the development stage, so that observations can be made regarding the game's mechanics and whether it is necessary to add other mechanics or change existing ones to suit the children's needs. If needs for improvement are found at this stage, the process returns to the design stage. A more detailed explanation of the playtesting stage is found in chapter 4.
Table 1. Summary of the resulting design process.
- Game idea: A serious game that teaches children with ADHD to be more focused on the tasks they are given.
- How the story relates to targeted behavior change: During their missions, players are confronted with assignments that require focus and remembering the environment to solve the mission problems.
- Player's game goal/objective(s): The players aim to collect diamonds and emblems hidden inside treasure boxes. Each mission asks the players to find specific items to complete the mission and continue to the next level. In addition, two mini-games have specific goals: "Dodge the Trap" (motor skill) and "Catch the Diamond" (eye coordination).
- Genre: Adventure, educational, platformer.
- Story: The player takes the role of Tanji, an adventurer who continues his grandfather's journey to find the legendary jewel Darkenstone in the mysterious Temple of Diamond. The player is assigned various missions throughout the epic journey.
- Mechanics: Multiple levels, visual guidance to complete a task, audio guidance to complete a task, searching treasure boxes, collecting items, and various challenging enemies (artificial intelligence/AI).
- Setting: A natural atmosphere; a game world with a beautiful mountain background and a mysterious temple to be explored.
- Avatar: An adventurer, male, nicknamed Tanji, who explores the game world, opens treasure boxes, and collects items.
Evaluation
The evaluation stage measures the results after the children have played the developed game. It involves observing the children, interviewing teachers, and having experts evaluate the game content. A more detailed explanation of the evaluation stage is found in chapter 4.
Result
In this section, we discuss the completed game, covering the essential factors for training focus in children with ADHD. As shown in Figure 3(a), the player must complete a mission to finish a level. Next is Tanji's gameplay on an adventure to find the eagle emblem hidden in a chest. As the game progresses, the player learns to distinguish the right chest from a chest containing traps. Players must focus on finding the appropriate emblem, because taking the wrong emblem causes the cave to collapse. Once the correct emblem is found, the player must return to open the door and proceed to the next level. The gameplay can be seen in Figure 3(b). Figure 3(c) shows a scene where the player has completed part of a level; as visible there, the game provides positive feedback to increase the players' enthusiasm and confidence. "Tanji Adventure to Diamond Temple" applies a trial-and-error system, where players must learn from their previous mistakes to complete the game's levels. If the player loses focus and takes an item other than the one requested, a page appears that does not drop the player's morale and encourages them to keep trying; an example can be seen in Figure 3(d). The game becomes increasingly difficult as the player advances through its levels. For example, at levels 6 and 7, players are given commands via voice; at level 6, a command sounds like "Find the Red Gem." Such commands are repeated, with a pause of approximately 1-2 minutes, until the player finds the requested item, as shown in Figure 3(e); a sketch of this mechanic follows below. At levels 8, 9, and 10, players get a command only once and must retrieve the appropriate item to win the level. In Figure 3(f), the player must find two emblems matching those on the temple door to complete the level. There is also a mechanism different from the primary game mechanics that aims to train the focus of children with ADHD.
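The repeated voice-command mechanic described above can be sketched as a simple timer loop; `play_voice_clip` and `item_collected` are hypothetical stand-ins for the game's actual audio and inventory calls:

```python
import time

# Sketch of the repeated voice-instruction mechanic: the command is
# re-announced every 1-2 minutes until the requested item is found.
REPEAT_INTERVAL = 90.0   # seconds, within the 1-2 minute window

def run_instruction_loop(command, play_voice_clip, item_collected):
    play_voice_clip(command)               # initial announcement
    last = time.monotonic()
    while not item_collected(command):
        if time.monotonic() - last >= REPEAT_INTERVAL:
            play_voice_clip(command)       # gentle reminder, not a penalty
            last = time.monotonic()
        time.sleep(0.1)                    # yield to the rest of the game loop
```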
Table 2. Respondent profiles.
- Fr2d0: Focus is very easily distracted; very often not paying attention; instructions on a given task need to be repeated many times. Super active, running after his friends; cannot queue when assignments are checked; cannot keep hands and feet still when sitting; likes to scream.
- Ra6f4: Male, 11. Can understand orders/instructions; can read and write. Focus is easily distracted; sometimes needs to be reminded to stay focused; when given a question, tends to answer at random. Unable to stay still when sitting; teases his friends; when punished, walks over to sit next to the teacher's desk or stands in front for 1 minute.
(Names were withheld to protect respondents' privacy.) The game "Tanji Adventure to Diamond Temple" also has minigames that train hand-eye coordination, in which players must avoid traps and catch items that fall from above. The gameplay of the minigame mechanism can be seen in Figures 3(g) and (h). In addition, the game has a feature that gives players time to breathe and remember what they have done during the game. In Figure 3(i), the player goes through a long dark alley with no obstacles, traps, or enemies. The long dark path without barriers aims to make the player feel relaxed, free of new information, so that the player can reflect on the focus training carried out at the previous levels. Figure 3(j) is a screenshot of the boss battle gameplay. The bosses in this game require players to focus on incoming attacks and train eye-hand coordination: players must respond quickly to avoid boss attacks and counterattack to defeat the boss.
Discussion
The playtesting stage of the educational game "Tanji Adventure to Diamond Temple" for training focus in children with ADHD was conducted in inclusive elementary schools in West Java that accept children with special needs such as ADHD and autism. During the Covid-19 pandemic, everyone at the school had to wear masks, wash their hands, and comply with other regulations to avoid the spread of Covid-19. The respondents (children with ADHD) in this study were identified and selected by the principal. The participants and their parents received an information letter containing their consent to participate in this study. The inclusion criteria were (a) having a DSM-V clinical diagnosis of ADHD defined by a certified health professional/institution and (b) being between 6 and 12 years old. The exclusion criterion was having physical or cognitive disabilities (i.e., deafness, blindness), since these were predicted to make playing the game very difficult. Playtesting of "Tanji Adventure to Diamond Temple" was carried out by three children with ADHD (hyperactivity) indications. The profiles of the respondents can be seen in Table 2.
The children played the game at school three times over 2 weeks; the school allowed only one child per day to participate in a playtesting session. The first and second playtesting sessions assessed whether any mechanics were too tricky for the children to understand. The third playtesting session included the observation and evaluation process. The children were asked to play the game for 30 to 45 minutes in each session. The game has three main bosses, each with a different difficulty level; however, playtesting only went up to the second boss because the game was not yet finished. Improvements noted in the first and second playtesting sessions include lighting fixes, fixing instructions that were not understood, sound enhancements, and adjusting the difficulty level of each boss (such as the number of boss attacks and the interval between attacks). The first and second playtesting sessions were recorded to observe the respondents' interest in playing. Before and after playing, respondents had to clean their hands using the hand sanitizer provided. Despite the masks, the enthusiasm and interest of respondents Ad1l0 and Fr2d0 were clearly visible (Figure 4).
Table 3. Respondents' performance per level (number of mistakes, with observations).
- Level 5 (visual commands): 1x and 0x; the respondents were able to pay attention to the visual commands given.
- Level 6 (audio instructions during the level, approximately 1-2 minutes apart): 6x and 5x; when the order changed, these respondents had difficulty listening to the command and forgot the order. 2x; this respondent had difficulty listening when orders changed.
- Level 7 (audio instructions only at the beginning of the level): 2x; the respondent learned from previous mistakes and paid more attention to the voice commands. 2x; the respondent learned from previous mistakes but rushed and took the wrong object. 1x; the respondent was able to learn from previous mistakes.
- Level 8 (visual instructions only at the beginning of the level): 5x, 9x, and 4x; when the order changed, the respondents had difficulty because the instruction was given only once.
- Level 9 (visual instructions only at the beginning of the level): 5x, 3x, and 2x; the same difficulty with changed orders.
- Level 10 (visual instructions only at the beginning of the level): 1x, 1x, and 0x; the respondents learned from previous mistakes and were better able to focus on remembering the instructions given.
One recorded questionnaire comment: "The game is pretty fun, but FreeFire is more fun." (SA). Legend: SD = Strongly Disagree; D = Disagree; N = Neutral; A = Agree; SA = Strongly Agree.
The behavior of respondent Ra6f4 was notably distinct from that of the other respondents. Ra6f4, shown in Figure 5 (left image), lacked the same passion as Ad1l0 and Fr2d0 because he believed the game's challenges were too easy; he underestimated the game (playing with one hand) and was unmotivated to play. However, Ra6f4's disinterest was brief. When he encountered the first boss, RockGollem, his posture (hood down, sitting upright, not leaning back, moving closer to the screen) and the focus of his gaze began to change, because this first boss requires precise timing and concentration to avoid the stone-throwing strikes of the RockGollem boss (Figure 5, right).
Evaluation of respondents' performance while playing
All game sessions in the third playtesting were recorded, and the children's performance during play was noted. Each respondent had a different playing performance, style, and speed. Table 3 shows the respondents' performance for each level in the game "Tanji Adventure to Diamond Temple." It shows that each respondent was able to learn from the mistakes made at a level instead of dwelling on the previous one. This ability to adapt at the next level shows that the game's mechanics can motivate children to focus more on instructions.
Evaluation of respondents' experience while playing
The perceived playing experience varied from one respondent to another. However, all three respondents showed a sense of pleasure and satisfaction after playing this game. Respondent Ad1l0 had a very expressive reaction indicating that he was happy; this can be seen in Figure 6(a), where Ad1l0 smiled and showed joy. Respondent Fr2d0 looked satisfied and happy, as seen in Figure 6(b), where Fr2d0 smiles behind his mask. Finally, respondent Ra6f4 had a delighted reaction with a satisfied smile, as seen in Figure 6(c), where Ra6f4 smiled while using hand sanitizer and watching his friend try to play the game. In addition to evaluating the respondents' reactions after playing, an assessment was also carried out using a questionnaire to objectively assess the "Tanji Adventure to Diamond Temple" game; Table 4 shows the questionnaire results for the respondents selected in this study. The outcomes differed, but overall, Ad1l0, Fr2d0, and Ra6f4 developed positively.
Ad1l0's development. The development of respondent Ad1l0 can be seen in how he responds to instructions more quickly, although he still needs to be reminded several times by the teacher. Ad1l0 can now focus on completing assignments at school without having to take them home. Table 5 shows the difference in Ad1l0's ability to focus before and after playing the game at least three times. The following documents Ad1l0's activities at school before playtesting: in Figure 7 (left), Ad1l0 is running around the ablution area even though he has been reminded many times; in Figure 7 (right), Ad1l0 is not focused while doing a task, walking towards his friend even while being given a cursive-writing exercise.
Table 5. Ad1l0's focus before and after playing the game.
1. Before: Focus is easily distracted, hindering work on tasks. After: Even though still not fully focused, he can complete his assignments at school.
2. Before: Often does not pay attention; keeps to himself. After: He still likes to be alone but can answer the teacher's questions appropriately.
3. Before: Instructions on a given task need to be repeated several times. After: Ad1l0 can already do the task without having to be reminded continuously.
The development visible after playing "Tanji Adventure to Diamond Temple" is that the habit of focusing that Ad1l0 built in the video game carried over, unconsciously, into focusing in the real world. This can be seen in the documentation of Ad1l0's activities at school after the test. The picture on the left shows Ad1l0 (batik shirt, red pants) neatly performing the Duha prayer without being reminded many times; Ad1l0 is even tidier than his friends, who are still arranging their prayer rugs. Figure 8 (right) shows an increase in Ad1l0's ability to focus: the respondent pays close attention to the ethics material given by his teacher.
Of course, Ad1l0 cannot stay focused all the time, like children in general. For example, in Figure 9, Ad1l0 loses focus and plays with his study table. However, when reminded, Ad1l0 usually calms down and attends to the lesson. His periods of focus on the given material have increased by about 5-10 minutes. With this development in his ability to focus, Ad1l0 is expected to improve his academic achievement at school.
Table 6. Fr2d0's focus before and after playing the game.
1. Before: Focus is very easily distracted. After: Still not focused, but has been able to complete assignments at school.
2. Before: Very often not paying attention to the lesson. After: Although sometimes disturbed by small things, Fr2d0 actively asks about the assignments given at school, which indicates that he pays attention to the tasks given by the teacher.
3. Before: Very indifferent to the task; instructions on the given task must be repeated many times before he works on it. After: Fr2d0 already wants to do the task without being persuaded and can do it without being reminded continuously (if newly distracted, he must be reminded again).
Fr2d0's development. The development of respondent Fr2d0 is especially evident because his hyperactivity level is noticeably higher than that of the other two respondents. For example, Fr2d0 often runs around carelessly; several times he knocked over a friend's drinking bottle, and once he knocked over his own study table. When a lesson is underway, Fr2d0 usually does not pay attention, even continuing to eat after class has started. Because of his hyperactive behavior, Fr2d0 was often disciplined by reciting Istighfar and Asmaul Husna, accompanied by a religious teacher in the prayer room. When the teacher helped Fr2d0 do a math task on number patterns, Fr2d0 did not listen or fill in the answers. However, during implementation and testing, Fr2d0 turned out to be enthusiastic about playing the Tanji game. One thing that made Fr2d0 particularly interested was the design of an enemy sprite in the form of a mouse, which Fr2d0 felt resembled his cat at home. Driven by this enthusiasm, Fr2d0 kept trying without giving up, even though he failed many times. From this persistent attitude, Fr2d0 came to understand that he needed to focus on the orders given and could not act arbitrarily to win the game. The development of Fr2d0's focus can be seen in Table 6.
The following documents Fr2d0's activities at school before the test. Figure 10 (left) shows Fr2d0 eating snacks because he did not notice that class had started, for which he received a warning from his class teacher. Figure 10 (right) shows that, during class time, Fr2d0 cannot sit still and stands up, walking around his desk. A significant development documented after respondent Fr2d0 played "Tanji Adventure to Diamond Temple" is the emergence of a commitment to focus on the given task. Fr2d0, usually indifferent to his tasks, concentrated on stringing green beans and coriander seeds to color a turtle figure (Figure 11). This task requires considerable accuracy, focus, and commitment, with a time limit of around 30-45 minutes. It is undeniable that Fr2d0 was still seen throwing mung bean seeds at his friends several times. Nevertheless, Fr2d0's significant development proves that proper training, easy-to-understand instructions, and positive feedback in the educational game "Tanji Adventure to Diamond Temple" can build children's mental toughness and commitment to focus on completing their tasks.
The documentation in Figure 12 shows that Fr2d0 can also focus on answering Islamic religion exam questions. Figure 12 (left) shows Fr2d0 paying attention to the teacher's instructions, while Figure 12 (right) shows Fr2d0 trying his best to answer the questions, although he sometimes walks around his desk area and is warned to sit back down when confused about his task.
Ra6f4's development. The development visible in the last respondent, Ra6f4, after playing "Tanji Adventure to Diamond Temple" is that he is less likely to answer the teacher's questions at random, because he can focus more carefully on what is being asked. Ra6f4 tends to lose focus because his hands cannot stay still, and he teases his friends. His impulsive behavior often causes him to forget to bring a textbook or an assignment. Ra6f4's rashness was directly mirrored in his playing style in the game, rushing without paying attention to the surroundings. While playing Tanji, this impulsive behavior began to fade as Ra6f4 had to repeat the same level several times after picking up the wrong items. Ra6f4 began to show patience, which is the key to success against the first boss: to defeat it, one cannot simply shoot forward but must wait for the right moment. The development of Ra6f4's focus can be seen in Table 7.
Table 7. Ra6f4's focus before and after playing the game.
- After: He was sometimes interrupted by friends or the view outside the window; the distraction lessened when Ra6f4 was moved to sit in front of the teacher's desk.
- Before: Instructions on a given task need to be repeated, because lack of focus causes failure to understand. After: The teacher's repetition of instructions has decreased; the key is to get Ra6f4's attention first, establish eye contact, and give clear instructions.
The following documents Ra6f4's activities at school before testing (Figures 13 and 14). In Figure 13, respondent Ra6f4, wearing a red short-sleeved Muslim shirt, is made to sit beside the teacher's desk for not paying attention and wandering behind the teacher's desk. The development visible after playing the Tanji game is a reduction in unfocused behavior caused by rushing through activities: respondent Ra6f4, seated closest to the teacher, pays close attention to the lesson.
Evaluation by ADHD experts
Game testing was also done in consultation with ADHD experts. The three experts met the following criteria: at least five years of experience handling the education of children with ADHD; an educational background in education for children with special needs; and an understanding of the use of learning technology, especially games, for children with special needs. Testing was done by showing game demos to the experts and interviewing them about the developed game. According to the experts, training the focus of children with ADHD involves two critical points: honing their memory and their listening skills. The ability to remember can be sharpened using visual memory, while listening ability can be sharpened using auditory memory. The developed game can hone listening and remembering skills, because players must recognize the map in the game and listen to instructions that describe the game map. The game can also train the coordination of motor and visual abilities, because completing game missions requires players to avoid objects, which demands eye-hand coordination. The game has also accommodated focus training by having players recognize the objects they must look for to complete missions. Testing was also carried out using a questionnaire with a 10-point Likert scale (1 = "strongly disagree" and 10 = "strongly agree") on five questions; the questions used by the experts to assess the game content follow [23]. Table 8 shows the results of the experts' assessment of the developed game.
Based on the conclusions from the expert testing, this game can be developed further by paying attention to the background music and the visual appearance. The in-game music should use a slow tempo at the beginning and then speed up as the game level increases, so as not to distract the player's focus. In terms of visual appearance, it is better to use contrasting color combinations at the initial levels and simple object images.
Conclusion
We have developed the game "Tanji Adventure to Diamond Temple" and tested it on children with ADHD. It aims to be an educational tool that helps students with ADHD learn to focus more, both on what they study in class and in their daily lives. The game applies essential ideas that can improve a child's ability to concentrate by enhancing their hearing, memory, visual skills, and eye-hand coordination. In addition, the game offers encouraging comments and motivates players to keep playing to the end. Players can also reflect on their achievements and strategize to complete the next mission. The game review involved experts and parents whose children have been diagnosed with ADHD. Testing with several respondents gave acceptable results: each of the three respondents showed better focus than before playing the game. A review by ADHD experts suggests that this game includes elements that help hone children's attention and is suitable as an alternative learning medium for increasing the focus of children with ADHD. This research still has several limitations. The children were exposed to the game only three times, making it difficult to attribute the results to the game alone. The testing environment was a single school, and other schools may have different constraints. These limitations leave gaps for further research to improve the game's effectiveness by observing more children in more locations. | 9,016 | 2023-04-10T00:00:00.000 | [
"Education",
"Psychology",
"Computer Science"
] |
HYDRODYNAMIC ANALYSIS OF BIONIC CHIMERICAL WING PLANFORMS INSPIRED BY MANTA RAY EIDONOMY
In this paper, inspired by the external morphology of a manta ray (Mobula alfredi), four chimerical wing planforms are designed to assess its gliding performance. The planforms possess arbitrary combinations of extra hydrodynamic features, such as tubercles at the leading edge (L.E.) and trailing edge (T.E.), inspired by the humpback whale's flippers and flukes, respectively, as well as longitudinal ridges inspired by the whale shark's eidonomy. In addition, another planform is designed to investigate the possible effects of a manta ray's injuries (geometric deficiencies) caused by predator attacks or boat strikes on its locomotion (gliding) performance. In this regard, the turbulent flow physics involved in the problem is numerically simulated at different angles of attack (AoA) at a high Reynolds number of 10^6, corresponding to the swimming of a juvenile manta ray at an average speed of 1 m/s. The results show that the manta ray-inspired planform with L.E. undulations exhibits superior performance at high AoAs compared with its counterpart variants. In addition, the results demonstrate that injuries on the manta ray's body can noticeably modify the hydrodynamics and, as a result, the corresponding hydrodynamic forces and moments acting on the swimming animal in the gliding phase.
INTRODUCTION
The external morphology of aquatic creatures plays a crucial role in the hydrodynamics of locomotion [1]. As an example, the presence of tubercles on the leading edge of the humpback whale's flippers leads to the formation of streamwise vortices, which ultimately delays separation onset and development [2] [3]. In general, for all marine creatures, the external geometry along with special geometrical features is largely responsible for favorable hydrodynamic characteristics in swimming; examples include denticles on sharkskin [4], ridges on the leatherback turtle's carapace [5], ridges on the whale shark's body [6], and ventral pleats on the belly of a humpback whale [7], to name a few.
The manta ray, Mobula alfredi, is one of the most charismatic swimming animals of oceanic islands and reefs (Fig. 1). These species belong to the Mobulidae family and can typically be found in the Indo-West Pacific, with a confirmed range of distribution along the Indonesian coasts and in the Red Sea, extended to an expected range in the 'Persian Gulf' at the south of Iran [8]. Their mature size reaches 2.7-3.5 meters on average, depending on the male/female category. Their life span is about 40 years, with a weight reaching 0.7 tons [8]. In contrast to fishes, including sharks, these filter-feeding swimmers have developed a brain that is disproportionately large relative to their body weight, similar to mammals, which gives them a higher level of capability in functionality and behavior [8]. In general, the swimming of a manta ray is performed via a combination of two modes [9]: flapping of its left and right pectoral fins [10] [11] and a gliding mode provided by its large and extended planform, or sometimes as a pure gliding mode [12] [13]. As aforementioned, the external geometry of a manta ray, including the wing planform shape and the shape of the hydrofoil ribs, directly affects the swimming performance of the animal in both flapping and gliding modes. For instance, Luo et al. have successfully used an optimization technique to optimize the shape of the hydrofoil sections for a given manta ray-inspired planform in the pure gliding mode [9]. Here, we implement some biomimetic changes in the overall eidonomy (planform shape) of a manta ray-inspired wing to grasp the underlying physics of the problem. In this regard, non-uniform bionic L.E. and T.E. undulations are applied to the manta ray-inspired wing planform to investigate the hydrodynamic consequences, towards an optimum design strategy. Furthermore, the hydrodynamic effects of typical injuries on the manta ray's gliding performance are also assessed by considering the injured body shape of the Maldives' best-known manta ray, so-called 'Babaganoush,' dated May 2019 [14]. In the following sections, details are presented.
CHIMERICAL MANTA RAY BODY MODELS
To construct the chimerical wings based on the manta ray's body planform in the present paper, a set of hydrofoils (i.e., eleven cross-sections of the Mobula alfredi body model) at selected planes perpendicular to the lateral axis is numerically generated at high resolution in MATLAB and imported into the SolidWorks CAD environment [17]. In this regard, an optimized hydrofoil section shape introduced by Luo et al. is adopted here [9]. Then, two guide curves representing the L.E. and T.E. of the manta ray planform (in the midplane defined by y = 0) are fed to the SolidWorks CAD environment. Finally, the 3D variants of the manta ray planform models are constructed by applying a 'lofting process,' as shown in Fig. 2 and Fig. 3. The latter process is performed by stitching successive cross-section curves of the geometry guided by the guide curves. As shown in Fig. 2 and Fig. 3, the wing planforms possess arbitrary combinations of selected hydrodynamic features, such as tubercles at the L.E. and T.E., inspired by the humpback whale's flippers [3] and flukes, respectively, as well as longitudinal ridges inspired by the whale shark's eidonomy [6].
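To make the section-generation step concrete, here is a minimal Python sketch of how cross-section point clouds for such a loft might be produced. It substitutes a symmetric NACA 0012 thickness law for the optimized hydrofoil of Luo et al., and the chord taper and leading-edge sweep functions of span are invented placeholders rather than the paper's digitized guide curves.

```python
import numpy as np

def naca00xx_half_thickness(x, t=0.12):
    # Half-thickness law of a symmetric NACA 4-digit foil; a stand-in
    # for the optimized hydrofoil section of Luo et al. used in the paper.
    return 5.0 * t * (0.2969 * np.sqrt(x) - 0.1260 * x
                      - 0.3516 * x**2 + 0.2843 * x**3 - 0.1015 * x**4)

def section_points(y_station, chord, x_le, n=101):
    # One cross-section at spanwise station y_station; chord and x_le
    # would come from the digitized L.E./T.E. guide curves in practice.
    xc = np.linspace(0.0, 1.0, n)                 # normalized chord positions
    zt = naca00xx_half_thickness(xc) * chord      # thickness scales with chord
    x = x_le + xc * chord                         # physical chordwise coordinate
    upper = np.column_stack([x, np.full(n, y_station), +zt])
    lower = np.column_stack([x, np.full(n, y_station), -zt])
    return upper, lower

# Eleven stations between root and tip, as in the paper; the linear chord
# taper and L.E. sweep used here are hypothetical placeholders.
sections = [section_points(y, chord=1.0 - 0.8 * y, x_le=0.4 * y)
            for y in np.linspace(0.0, 1.0, 11)]
```

In practice each section's point list would be exported as a curve file for the CAD lofting step, with the guide curves constraining the surface between sections.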
Considering the aforementioned specific hydrodynamic features, a total of four bio-inspired wings without a dihedral angle are designed, corresponding to planforms exhibiting smooth LE-smooth TE (i.e., the real Mobula alfredi planform), wavy LE-wavy TE, wavy LE-smooth TE, and smooth LE-wavy TE, as summarized in Table 1 (illustrated in Fig. 3). With the aid of these hydro-planforms, the hydrodynamic effects of these prominent geometric features, such as the evolution of vortical structures, flow separation evolution, and post-stall behavior of these 'swimming wings,' can be assessed, as discussed in the present paper. In general, geometric deficiencies can be present in the manta ray's external morphology (eidonomy) due to injuries or bites on the T.E. of the pectoral fins caused by predator attacks (e.g., sharks/mammals) or boat strikes while the animal feeds or swims close to the ocean surface. As explained in detail by Stevens et al., manta rays exhibit a noticeable capability to quickly heal these body wounds and regenerate large portions of the missing flesh [8]. Many examples of manta rays with these types of injuries have been observed by Stevens et al. over the years in the Maldives and Mozambique manta ray populations [8]. As mentioned earlier, one of the famous examples with a 'geometric deficiency' at the T.E. is a reef manta ray (Mobula alfredi) named 'Babaganoush.' This manta ray was injured by a speedboat strike and was first observed in the Maldives by 'The Manta Trust' research team, with large and deep injuries at the T.E., in November 2018 [14]. However, the manta had healed remarkably by its next sighting in May 2019, as observed by 'The Manta Trust.' In the present paper, the body planform of 'Babaganoush' in May 2019 is considered to assess the hydrodynamic consequences of these kinds of 'geometric deficiencies' (Fig. 3.c).
NUMERICAL METHODOLOGY
To pursue the goals of the present study, a numerical computation campaign consisting of 90 individual flow simulations is planned, including simulations of the designed planforms (Table 1) and validation test cases (section 3.3). For the manta ray simulations, the inflow velocity is set based on a prescribed Reynolds number of 10^6. In general, manta rays are among the relatively slow swimmers in oceans and reefs. For instance, an oceanic manta ray (Mobula birostris) has been recorded with a swimming speed of 0.46-2.51 m/s while turning [18] and about 0.25-0.47 m/s in the 'foraging phase' [19]. On the other hand, giant mantas (Mobula tarapacana) can dive at descending speeds of up to 6 m/s to depths of 2000 m in low-temperature water [20], keeping the brain warm relative to the surrounding tissues by means of a particular 'network of blood vessels' [8]. It was also reported that a reef manta ray (Mobula alfredi) could reach a depth of 432 m [21]. More recently, reef manta rays have reached an even greater depth of 672 m during nighttime [22].
On the other hand, water stream currents in oceans and reefs are another essential factor that should be considered. In general, ocean currents are fastest near the surface, at about 2.5 m/s, and the Gulf Stream speed is about 1.8 m/s [23]. As an example, the merging of Pacific and Indian Ocean waters generates one of the strongest reef currents on the planet around the Indonesian islands, with a speed of about 4 m/s, which makes it difficult for marine creatures like mantas, and also for human divers, to swim and hover (keep a position unchanged in 3D space). As a result, oceanic and reef mantas can typically experience a wide range of AoA (α), generated by the vector summation of their swimming speed and natural stream current speeds. It is worth mentioning that manta rays also face a broad range of AoAs in performing a vast range of maneuvers and feeding strategies, such as 'cyclone' and 'somersault' feedings, to name a few.
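As a rough illustration of how swimming speed and ambient current combine into an effective AoA, consider the sketch below; the geometry (a current arriving at a fixed angle to the body axis) and the speed values are illustrative assumptions only.

```python
import numpy as np

def effective_aoa_deg(swim_speed, current_speed, current_angle_deg):
    # Relative-flow vector: swimming velocity along the body x-axis plus
    # an ambient current arriving at current_angle_deg to that axis.
    th = np.radians(current_angle_deg)
    u = swim_speed + current_speed * np.cos(th)   # along-body component
    w = current_speed * np.sin(th)                # body-normal component
    return np.degrees(np.arctan2(w, u))

# A 1 m/s glide through a 2.5 m/s near-surface current 30 degrees off-axis
print(effective_aoa_deg(1.0, 2.5, 30.0))          # about 21.6 degrees
```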
Here, to perform the planned simulations at different AoAs (α), ranging from 0 to 40 degrees and including the deep-stall region, the model is placed at a fixed position in space, and the relevant AoA is implemented by setting the freestream blowing angle relative to the streamwise x-axis (Fig. 4). For a simulation at a given AoA (α), the inflow velocity field is applied by the following formula:

u∞ = U∞ (cos α, 0, sin α).    (1)

Fig. 4 shows the coordinate system adopted for the manta ray gliding flow simulations.
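A minimal sketch of this boundary-condition setup, assuming the conventional decomposition of the freestream into streamwise and vertical components:

```python
import numpy as np

def inflow_velocity(u_inf, alpha_deg):
    # Freestream vector for a fixed model: the blowing angle alpha is set
    # relative to the streamwise x-axis, per the Fig. 4 convention.
    a = np.radians(alpha_deg)
    return np.array([u_inf * np.cos(a), 0.0, u_inf * np.sin(a)])

# AoA sweep used for the gliding simulations: 0 to 40 degrees
for alpha in range(0, 44, 4):
    ux, uy, uz = inflow_velocity(1.0, alpha)      # 1 m/s reference speed
```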
Computational Domain and Grid Generation
Computational grids are generated in the SolidWorks meshing tool environment [17] [24]. It takes advantage of adaptive local grid generation on a global Cartesian mesh to capture fine details of the bionic wings' geometries and their corresponding boundary layers [24]. After applying grid convergence tests, a well-converged grid with about 2 million elements in its final state is utilized for all upcoming flow simulations over all variants of the manta ray geometry. As an example, Fig. 5 shows the grid generated around the bionic planform with wavy LE-wavy TE. It exhibits a well-resolved body-fitted mesh clustered toward the wall, capable of capturing the geometry's delicate features.
As one can see in the figure, to minimize boundary effects, the computational domain is extended about three times the body length in all directions, except downstream, where it is extended four times the body length to capture wakes and tip vortices more clearly. Fig. 6 depicts top views of the computational grids generated around all variants of the bionic planforms. As shown in Fig. 5 and Fig. 6, by applying three levels of clustering and an adaptive meshing technique, a smooth transition between a relatively coarse mesh in the outer-flow region and the fine mesh in the near-body zone is achieved.
Turbulent Flow Simulations and Settings
Single-phase turbulent flows over juvenile and mature manta rays are governed by the incompressible Navier-Stokes equations. Here, the equations are solved at Re = 10^6 using the Reynolds-Averaged Navier-Stokes (RANS) technique [24]. In this regard, turbulence is treated with the Lam-Bremhorst low-Reynolds-number model (LB-LRN, hereafter) by the SolidWorks Flow Simulation (SFS) solver [24] [25]. The name 'low-Reynolds number' reflects the capability of the method to treat both the low-speed boundary layer (B.L.) and high-speed regions in a single framework without wall-modeling. This is achieved in the SFS solver by applying at least ten nodes in the B.L. and considering wall distances in the turbulence model calculations [24]. In general, the RANS equations can be expressed in tensor notation as [24]:

∂ρ/∂t + ∂(ρu_i)/∂x_i = 0,
∂(ρu_i)/∂t + ∂(ρu_i u_j)/∂x_j = −∂p/∂x_i + ∂(τ_ij + τ_ij^R)/∂x_j + S_i,

where S_i is a volumetric source term, τ_ij is the viscous shear stress tensor, and the Reynolds stress tensor τ_ij^R is modeled through the Boussinesq approximation as

τ_ij^R = μ_t (∂u_i/∂x_j + ∂u_j/∂x_i − (2/3) δ_ij ∂u_k/∂x_k) − (2/3) ρ k δ_ij.

Furthermore, μ_t and k are the turbulent eddy viscosity and turbulent kinetic energy, respectively, while δ_ij denotes the Kronecker delta. In the LB-LRN model, k and the dissipation rate ε are coupled through two transport equations [24,25]:

∂(ρk)/∂t + ∂(ρu_i k)/∂x_i = ∂/∂x_i [(μ + μ_t/σ_k) ∂k/∂x_i] + P_k − ρε,
∂(ρε)/∂t + ∂(ρu_i ε)/∂x_i = ∂/∂x_i [(μ + μ_t/σ_ε) ∂ε/∂x_i] + C_ε1 f_1 (ε/k) P_k − C_ε2 f_2 ρ ε²/k,

where the parameters take their standard values C_μ = 0.09, C_ε1 = 1.44, C_ε2 = 1.92, σ_k = 1, and σ_ε = 1.3. To calculate μ_t, the following formula is adopted [26]:

μ_t = C_μ f_μ ρ k²/ε,

where the damping functions are calculated as

f_μ = (1 − e^(−0.0165 Re_y))² (1 + 20.5/Re_t), f_1 = 1 + (0.05/f_μ)³, f_2 = 1 − e^(−Re_t²),

with Re_t = ρk²/(με) and Re_y = ρ√k y/μ,
where y is the wall distance. Finally, the turbulence time and length scales can be computed as τ_t = k/ε and l_t = k^(3/2)/ε, respectively [26]. In SFS, the fluid flow equations are solved using the finite-volume method by applying operator-splitting, conjugate-gradient, multigrid, and SIMPLE computing techniques [17] [24]. Here, for the turbulent flow simulations over the bionic planforms, the inlet velocity is adjusted with equation (1), and the other boundaries are set as 'outflows.' The inflow turbulence intensity and length are set as 0.1% and 1.2×10^-4 m, respectively. Forces and moments acting on the bionic planforms, including {Fx, Fy, Fz, Mx, My, Mz}, are continuously monitored to ensure the satisfaction of all convergence criteria. The SFS solver automatically applies measures based on the selected global and local goal functions (e.g., defined as velocity, pressure, forces, and moments) to achieve the lowest residual errors [17].
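For reference, a small Python sketch of the textbook Lam-Bremhorst damping functions and eddy viscosity described above; the constants are the standard published values and may differ in detail from those hard-coded in the SFS solver.

```python
import numpy as np

def lam_bremhorst_mu_t(rho, mu, k, eps, y):
    # Eddy viscosity mu_t = C_mu * f_mu * rho * k^2 / eps with the
    # standard Lam-Bremhorst wall damping (textbook constants).
    c_mu = 0.09
    re_t = rho * k**2 / (mu * eps)            # turbulence Reynolds number
    re_y = rho * np.sqrt(k) * y / mu          # wall-distance Reynolds number
    f_mu = (1.0 - np.exp(-0.0165 * re_y))**2 * (1.0 + 20.5 / re_t)
    return c_mu * f_mu * rho * k**2 / eps, f_mu, re_t

def lam_bremhorst_f1_f2(f_mu, re_t):
    # Damping functions entering the production and destruction terms
    # of the epsilon equation in the low-Reynolds-number k-eps model.
    f1 = 1.0 + (0.05 / f_mu)**3
    f2 = 1.0 - np.exp(-re_t**2)
    return f1, f2

# Illustrative near-wall water state (all values are assumptions)
mu_t, f_mu, re_t = lam_bremhorst_mu_t(1000.0, 1e-3, 1e-4, 1e-3, 1e-4)
f1, f2 = lam_bremhorst_f1_f2(f_mu, re_t)
```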
Validation Test Cases
To investigate the performance of the proposed numerical strategy, two preliminary cases are considered before proceeding to the turbulent flow simulations over the manta ray-inspired planforms: a low-aspect-ratio wing (AR = 1) and an initial manta planform.
Low-Aspect Ratio Wings
Low-aspect-ratio wings possess different characteristics compared to high-aspect-ratio wings with long spans. By decreasing the aspect ratio (defined as AR = b/c, where b is the wingspan and c is the averaged chord of the wing), trailing/tip vortices become more influential over a large portion of the wing and modify its aero/hydrodynamic performance. As a result, by decreasing AR, the lift slope decreases [27]. Here, turbulent flows over a low-aspect-ratio wing (AR = 1) with NACA 0012 airfoil sections are simulated at a high Reynolds number, Re = 1.5×10^5 (Fig. 7). In this regard, a well-converged grid with about 1 million elements is utilized. The computational domain is extended to 4 and 10 times the wingspan in the x− and x+ directions, respectively. In addition, the domain is extended 3 times the wingspan in the lateral directions to minimize boundary effects. Fig. 7 (formation of the tip vortices on a low-aspect-ratio wing with NACA 0012 airfoil section, visualized by tracer particle dynamics and vorticity iso-contours at α = 12°) shows the formation of the wingtip vortices over the low-aspect-ratio wing at α = 12°. In this regard, a tracer particle study has been performed. To do so, about 300 spherical water tracer particles with a diameter of 0.0001 m are continually released from the wing surface and convected downstream by the background flow field
(having the same velocity as the local flow field). For the calculations, ideal reflection has been applied for fluid particle-solid interactions, e.g., in the separation zones on the wing. As shown in Fig. 7, the tip vortices extend far downstream with a spiral motion, visualized by tracer particles colored by axial velocity. Intersections of the counter-rotating tip vortices are also visible on a plane positioned at x/b = 4 downstream of the wing, visualized by iso-contours of vorticity, ωx = 100 1/s. Fig. 8 depicts the lift coefficient curve versus AoA for the adopted wing compared with the experimental curve measured by Chen et al. [28] and the theoretical curve by Lowry and Polhamus [29], and exhibits excellent agreement.
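The drop in lift slope with decreasing aspect ratio noted above can be estimated analytically. The sketch below uses Helmbold's low-aspect-ratio approximation; this choice is an assumption for illustration, since the paper itself compares against the Chen et al. measurements and the Lowry-Polhamus theory.

```python
import numpy as np

def helmbold_lift_slope(ar, a0=2.0 * np.pi):
    # Helmbold's approximation for the lift-curve slope (per rad) of a
    # low-aspect-ratio wing; a0 is the 2D section lift slope.
    x = a0 / (np.pi * ar)
    return a0 / (np.sqrt(1.0 + x**2) + x)

for ar in (1.0, 2.0, 4.0, 8.0):
    print(ar, helmbold_lift_slope(ar))   # slope grows toward 2*pi with AR
```

For AR = 1 this gives roughly 1.5 per radian, about a quarter of the 2D section value, consistent with the qualitative trend described above.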
Initial Manta-Ray Planform (Comparative Test Case)
In this section, turbulent flows over a manta ray-inspired 'test case' planform with an optimized airfoil cross-section proposed by Luo et al. [9] are simulated. In this regard, the test case geometry is constructed by the lofting process, providing L.E. and T.E. guide curves and a total of eleven airfoil cross-sections between the left and right wingtips, similar to the strategy described in section (2) for the designed bionic chimerical planforms. Here, to perform the numerical simulations of the turbulent flows, a well-converged grid with a maximum of 1.5 million elements is utilized. Fig. 9 (top) shows the grid generation around the test case geometry. As one can see in the figure, all detailed features of the geometry are captured by the generated body-fitted grid via three levels of refinement. Similar to the settings explained in sub-section (3.3.1), a tracer particle study was performed to capture the tip vortex topology. Fig. 9 (bottom) shows the topology of the counter-rotating tip/trailing vortices at α = 10°. As one can see in the figure, the planform generates a relatively large vorticity region (ωx = 1 1/s) downstream, induced by the tip/trailing vortices at α = 10°. Fig. 10 depicts the lift coefficient versus AoA obtained by the present study, which exhibits good agreement over the full range of AoA, 0° ≤ α ≤ 10°, with the Luo et al. results [9].
RESULTS AND DISCUSSIONS
In this section, the complete results of the turbulent flow simulations for the different variants of the designed chimerical manta ray-inspired planforms (Table 1) are presented and discussed.
Chimerical Planforms (Comparative Study)
As discussed in detail in section (2), in the present study, the main layout of the planforms has been designed based on a reef manta ray (Mobula alfredi) eidonomy. The final geometries are constructed by adding some extra topological features like L.E. and T.E. undulations inspired by humpback whales [2] [3] and longitudinal ridges inspired by whale sharks [6]. In this regard, geometrical features of the related species, for example, the T.E. undulations of the humpback whale's fluke, are digitized, normalized, and finally rescaled and implemented at the T.E. of the designed manta ray-inspired planform (Fig. 2). A similar procedure is also applied for the L.E. undulations inspired by the humpback whale's flippers (Fig. 2). The effects of AoA variations on the evolution of tip/trailing vortices for these planforms are shown in Fig. 11 via tracer particle dynamics. As one can see in the figure, by increasing AoA, the tip vortices become more distorted and tilted upward. Furthermore, iso-contours of the axial vorticity, ωx = 1 1/s, on a plane positioned at x/b = 2 also show that by increasing AoA, the cross-sections of the tip/trailing vortices become larger and more intense.
For example, Fig. 12 shows the formation and evolution of the vortical structures with increasing AoA for the bio-inspired wing planforms with smooth LE-smooth TE (I) compared to wavy LE-wavy TE (II). The vortical structures are captured here by the λ2-criterion (λ2 = −1). As one can see in the figure, a pair of wall-bounded horseshoe vortices with a relatively large scale (comparable to the planform chord) is generated over the wing at α = 20°. By increasing AoA, the vortex pairs become more significant and finally merge at α = 36°, producing a large dominant vortical structure. As is also evident in the figure, the presence of L.E. and T.E. undulations modifies the formation, pattern, and evolution of the minor and major vortical structures at all AoAs. Fig. 13 shows the mean separation zones at different AoAs for all bio-inspired planforms introduced in Table 1. As shown in Fig. 12, L.E. and T.E. undulations with their particular peak-and-trough topology (here based on the humpback whale's flipper and fluke, respectively) considerably affect the fluid dynamic characteristics over the wing in the gliding phase. As a result, the topology of the separation zones is modified, as shown in Fig. 13. It is worth mentioning that an intensive back-flow is induced and embraced by the dominant horseshoe vortex pairs, which leads to the formation of two significant disconnected (not merged) separation zones on the left and right sides of the bio-inspired swimming wings. For example, in the planform with smooth LE-smooth TE at α = 24°, two disconnected separation zones are generated at the centers of the horseshoe vortex pairs, visible in Fig. 12 and Fig. 13. By comparing the pattern and topology of the separation zones generated on the bio-inspired planforms at all AoAs in Fig. 13, one can conclude that the smallest separation zone is generated in the planform with wavy LE-smooth TE (Fig. 14).
An Injured Manta Ray (Babaganoush on May 2019)
In section (2), Babaganoush, a well-known injured reef manta ray, was introduced. In this section, the hydrodynamic consequences of its injury are assessed by CFD simulations. Fig. 15-a shows the injured body of Babaganoush in May 2019. As one can see in the figure, the aft part of the body near the centerline was lost in the boat strike accident in 2018. In the present calculations, a 3D model of the body is constructed by digitizing the aft-body pattern and exporting it to the SolidWorks CAD environment. Fig. 15-b shows an isometric view of the 3D model adopted for the upcoming simulations. Fig. 15-c also shows the mesh generation around the body. Fine details of the 'geometric deficiency' are captured by three appropriate levels of refinement, as before, which balances the accuracy of the computations against the computational cost. Fig. 16 depicts streamlines over the body of Babaganoush at α = 12°. As one can see in the figure, a recirculation zone exhibiting chaotic dynamics is generated right after the injured part of the body, similar to objects with a blunt aft-body experiencing dominant pressure drag, in addition to the separation zones forming on the wing planform at this AoA. This feature translates to higher drag coefficients at almost all AoAs compared with the healthy Babaganoush (base), as shown in Fig. 17. Lift coefficient variations with AoA for the injured and healthy Babaganoush are also depicted in Fig. 17. As shown in Fig. 17, the lift coefficients of the injured body are modified and exhibit lower/higher values over specific intervals of AoA. Fig. 18 shows intersections of the trailing/tip vortices on a plane positioned at x/b = 2, visualized by iso-contours of vorticity, ωx = ±1 1/s, at different AoAs, viewed from behind the manta. At α = 24°, the asymmetry of the counter-rotating vortices is more pronounced. With increasing AoA, the cross-sections of the merging tip and trailing vortices become larger and more intense (Fig. 18). The final important topic is the non-equilibrium state, corresponding to an unbalanced generation of forces and moments by the swimming animal. An injured manta, such as Babaganoush, experiences unbalanced forces and moments exerted on the body, induced by the 'geometric deficiency.' This makes it more difficult for the animal to maintain a so-called 'trimmed swimming,' similar to the concept of a 'trimmed flight' for an airplane. For example, Fig. 19 shows the absolute values of the rolling moments versus AoA exerted on the injured Babaganoush body compared to a healthy one. As one can see, the injured animal experiences higher levels of rolling moments at all AoAs. Therefore, the injured manta should perform asymmetric flapping and hold asymmetric curvatures on the left and right pectoral fins, e.g., dihedral angle, in the gliding and maneuvering phases to maintain the 'trimmed' swimming state.
CONCLUSION
The present paper assessed the effects of L.E. and T.E. geometric undulations on a manta ray-inspired planform. In this regard, four chimerical wings without dihedral angles were designed with optional combinations of tubercles at the L.E. and T.E., extracted from the humpback whale's flipper and fluke, respectively. The results showed that the designed wing planform with wavy LE-smooth TE exhibits superior separation control and hydrodynamic performance (lift generation) in the gliding phase, especially in the post-stall region. The results also confirmed that the relative positions of peaks and troughs in the L.E. and T.E. curves should be designed and optimized in a single framework (not individually) to achieve maximum performance. It is worth mentioning that the main benefits of implementing undulations at the T.E. are achieved in the 'flapping' mode rather than the 'gliding' mode, as observed in the oscillatory motion of the humpback whale's flukes in the vertical direction (up/down oscillatory motions). Numerical turbulent flow simulations of an injured reef manta ray, Babaganoush, showed that the 'geometric deficiency' leads to a higher drag coefficient and brings a higher level of disequilibrium of forces and moments, making it more difficult for the animal to maintain the so-called 'trimmed swimming.'
ACKNOWLEDGEMENT
The author would like to sincerely acknowledge every effort by institutes, organizations, and individuals to protect 'Manta Rays' worldwide. | 5,612.6 | 2021-09-08T00:00:00.000 | [
"Physics"
] |
Physics mechanisms underlying the optimization of coherent heat transfer across width-modulated nanowaveguides with calculations and machine learning
Optimization of heat transfer at the nanoscale is necessary for efficient modern technology applications in nanoelectronics, energy conversion, and quantum technologies. In such applications, phonons dominate thermal transport and optimal performance requires minimum phonon conduction. Coherent phonon conduction is minimized by maximum disorder in the aperiodic modulation profile of width-modulated nanowaveguides, according to a physics rule. It is minimized for moderate disorder against physics intuition in composite nanostructures. Such counter behaviors call for a better understanding of the optimization of phonon transport in non-uniform nanostructures. We have explored mechanisms underlying the optimization of width-modulated nanowaveguides with calculations and machine learning, and we report on generic behavior. We show that the distribution of the thermal conductance among the aperiodic width-modulation configurations is controlled by the modulation degree irrespective of choices of constituent material, width-modulation-geometry, and composition constraints. The efficiency of Bayesian optimization is evaluated against increasing temperature and sample size. It is found that it decreases with increasing temperature due to thermal broadening of the thermal conductance distribution. It shows weak dependence on temperature in samples with high discreteness in the distribution spectrum. Our work provides new physics insight and indicates research pathways to optimize heat transfer in non-uniform nanostructures.
Introduction
Lattice heat conduction is the major contribution to the parasitic heat flow across nanostructures of semiconductors. It must be controlled in many applications, such as high-density nanoelectronics circuits, sensing, thermal insulation, and thermoelectric energy conversion. Thermoelectric generators (TEGs) are devices that convert thermal energy into electric power with no moving parts; they are highly reliable and have long lifetimes. Techno-economic analysis showed that TEGs have the full potential to compete with conventional power sources and power the internet of things [1]. They can also serve as thermoelectric coolers (TECs) and cope with the problem of overheating of nanoelectronic circuits. Micro-TEGs and micro-TECs ideally serve these purposes because they are compatible and can be integrated on microelectronic boards. Thermoelectric metamaterials such as width-modulated nanowaveguides (nWVGs) would be suitable for commercial thermoelectric applications if optimized for minimum thermal conduction and maximum thermoelectric efficiency [2]. Width modulation has recently been optimized for minimum thermal conduction [3]. This work aims to shed light on the physics mechanisms underlying the optimization and to indicate design pathways to control heat transfer across nWVGs.
Quantum confinement and nanoscale periodicity modify phonon dispersion and affect the material's thermal, optical, electrical, and mechanical properties [4][5][6][7][8][9]. These effects drastically limit parasitic coherent heat conduction and remarkably enhance thermoelectric energy conversion efficiency [10][11][12][13] in metamaterials with high-quality interfaces and boundaries and characteristic dimensions shorter than the dominant phonon wavelength [14][15][16]. Recent studies on thermal conduction in metamaterials such as superlattices, nanomeshes, holey nanobeams, and width-modulated nWVGs confirm the dominance of coherent phonon transport at low temperatures [17,18] and at elevated temperatures [19][20][21][22][23][24][25][26][27][28]. Coherent phonon transport can be geometrically tuned in width-modulated nWVGs by designing aperiodicity in the modulation profile [29][30][31]. Optimization of geometrical aperiodicity can be addressed with the aid of machine learning (ML) because of the large number of aperiodic configurations and the high degree of complexity. ML has been rapidly established as the 'fourth paradigm for scientific research' that could complement the first three paradigms: theory, experiment, and simulation [32]. It can be ideally combined with physics models and calculations to accelerate the understanding, selection, and design of materials and their structures [33][34][35][36][37][38][39]. This is particularly important in dealing with problems with a high degree of complexity, competing mechanisms, and interdependencies. Such problems are very demanding and, often, cannot be addressed at an affordable cost by the other paradigms. Thermal conduction in the presence of geometrical nanoscale non-uniformity is a representative example. In recent years, there has been an explosion of machine-learning-assisted research activity in thermal science. This involves material selection [40][41][42][43] and nanostructure design for optimal thermal properties [44][45][46]. Recently, efficient Bayesian optimization of geometrical aperiodicity for minimum coherent thermal conduction was demonstrated in width-modulated nWVGs [3]. Calculations and ML showed that aperiodic width modulation is optimized for minimum thermal conductance by maximum disorder in the modulation profile of nWVGs, according to a physics rule. On the other hand, optimal aperiodicity was found for moderate disorder in the case of heterostructures, against physics intuition [44,[47][48][49][50][51][52]. Such counterevidence indicates the non-trivial role of disorder in minimizing lattice heat conduction and the need for further understanding of the physical processes underlying the optimization of non-uniform nanostructures; this need stimulated the present work.
The main objective of this paper is to provide physics insight into the role of disorder in decreasing thermal conductance and elucidate why thermal conductance is minimized for maximum disorder in width-modulated nWVGs. For this, calculations on full sets of nWVGs are analyzed to interpret the distribution of the thermal conductance among the aperiodic configurations. It is revealed that the thermal conductance values are ordered in zones with an increasing degree of disorder in the modulation profile of the corresponding configurations. Additional objectives are to explore the impact of composition, modulation geometry, and sample size on the thermal conductance distribution and the optimization efficiency. Calculations for nWVGs of different constituent materials and width-modulated geometries of interest [2] show that the dominant mechanism determining the distribution of thermal conductance is the degree of modulation and support the general validity of the physics optimization rule. The present work also explores the effect of thermal broadening on the distribution of thermal conductance, the Bayesian optimization efficiency, and the minimization of thermal conductance in connection with the sample size and the discreteness of the distribution spectrum. The system and the methodology are detailed in section 2. The results are presented and discussed in section 3. Section 4 is devoted to remarks and conclusions.
Methodology
The structures of interest are three-dimensional nWVGs with two-dimensional confinement and one propagation direction. The width of the nWVGs is modulated along the propagation direction (figure 1).
The modulated nWVGs consist of arrays of layers of the same material with different widths and lengths. The arrays can be periodic or aperiodic. Layers are either wide ('openings') or thin ('constrictions'). Wide leads are assumed at the two endings of the nWVGs. The optimization objective is to find the optimal array of layers, the one with minimum thermal conductance. This is referred to as the 'optimal modulation profile'. To perform the optimization, we need to fix the material and the number N of layers. Calculations and optimization are discussed for two reference materials, GaAs and Si, and two width-modulation geometries (figure 1(b)). Thermal conductance was calculated within elastic wave transmission theory [53,54] (table 1), as detailed in [30,31,55,56]. The dimensions and material properties are continuous parameters. Calculations are shown for nWVGs with a constriction layer width of 10 nm and an opening layer width of 100 nm. The selected dimensions are suitable for experimental realization and applications of technological interest.
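To illustrate how a thermal conductance value follows from a phonon transmission spectrum in the elastic-wave (Landauer-style) picture, here is a minimal numerical sketch; the frequency window and the perfect single-channel transmission are illustrative placeholders, not the spectra computed in the paper.

```python
import numpy as np

HBAR = 1.054571817e-34  # J s
KB = 1.380649e-23       # J / K

def thermal_conductance(omega, transmission, temperature):
    # Landauer-style G(T) = (1/2pi) * int hbar*w * Xi(w) * dn/dT dw,
    # with n the Bose-Einstein occupation; Xi(w) would come from the
    # elastic wave transmission calculation across the modulated nWVG.
    x = HBAR * omega / (KB * temperature)
    dn_dT = (x / temperature) * np.exp(x) / np.expm1(x)**2
    integrand = HBAR * omega * transmission * dn_dT / (2.0 * np.pi)
    return np.trapz(integrand, omega)

omega = np.linspace(1e9, 5e12, 20000)      # rad/s, illustrative window
xi = np.ones_like(omega)                   # one perfectly transmitting channel
print(thermal_conductance(omega, xi, 2.0)) # ~ one thermal conductance quantum at 2 K
```

For a single perfect channel this saturates at the universal thermal conductance quantum, which sets the natural scale against which the modulated configurations are compared.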
The ML optimization was performed using the open-source Bayesian optimization library COMBO [57,58] (table 1), which has been tested for optimizations in various energy transport problems [45][46][47]. The objective function is the value of the thermal conductance. The ML algorithm is integrated with the property calculator in a closed-loop iterative optimization, as illustrated in figure 2.
Optimization requires a digital representation of the nWVGs. For this, binary flag numbers are chosen as suitable descriptors to represent the arrays of layers. In the binary representation, the binary flag '0' denotes constriction layers and the binary flag '1' denotes opening layers (figure 2). A quantum dot (QD) modulation unit is defined by a sequence of n opening layers between two constriction endings. Thus, the digital representation of a QD is a sequence of '1's surrounded by '0' layers at opposite endings. In this definition, QDs of different sizes are formed by sequences of '1' layers of different lengths, i.e. different numbers of layers. Subsequent '0' layers of different numbers represent constrictions of different lengths. In this representation, the descriptor of a periodic SL with N = 12 layers is 010101010101 and the descriptor for an aperiodic SL with 12 layers is 011101111100.
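A small sketch of this descriptor convention, showing how QDs can be read off a binary string; the parsing rule is our own minimal implementation of the definition above.

```python
def quantum_dots(descriptor):
    # A QD is a maximal run of '1' (opening) layers bounded by '0'
    # (constriction) layers; returns the QD sizes in layer counts.
    runs = [run for run in descriptor.strip("0").split("0") if run]
    return [len(run) for run in runs]

print(quantum_dots("01010101"))        # p-SL, N = 8: four identical QDs
print(quantum_dots("00110100101110"))  # optimal N = 14: QDs of 2, 1, 1, 3 layers
```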
As illustrated in the flowchart of figure 2, the thermal conductances of n randomly selected candidates are calculated first. These values of thermal conductance and the corresponding n descriptors are then used to train a Bayesian regression function. Next, the thermal conductance values for each of the remaining candidates are estimated by a Bayesian posterior probability distribution derived with a Gaussian process. The acquisition function uses the expected improvement criterion to indicate the best candidate to proceed with. The process continues with the calculation of the thermal conductance of this candidate and the inclusion of its value in the training set. The loop is repeated until the process converges to the candidate with minimum thermal conductance, referred to as the optimal width-modulated nWVG. The optimization is the more efficient the fewer training candidates are needed until convergence is achieved.
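The loop can be sketched generically as below. This is not the COMBO API: it substitutes a scikit-learn Gaussian process and a hand-written expected-improvement rule, and the objective is a fabricated toy function standing in for the thermal conductance calculator.

```python
import numpy as np
from itertools import combinations
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def toy_conductance(bits):
    # Fabricated objective: rewards many distinct QD sizes, loosely
    # mimicking the trend reported in the paper. NOT a physical model.
    dots = [len(r) for r in "".join(map(str, bits)).strip("0").split("0") if r]
    return 1.0 / (1.0 + len(set(dots)) + 0.1 * len(dots))

N = 14
pool = []
for ones in combinations(range(N), N // 2):       # 50% composition set
    bits = np.zeros(N, dtype=int)
    bits[list(ones)] = 1
    pool.append(bits)
X = np.array(pool, dtype=float)                   # 3432 candidates

rng = np.random.default_rng(0)
train = list(rng.choice(len(X), size=20, replace=False))
for step in range(50):
    y = np.array([toy_conductance(X[i].astype(int)) for i in train])
    gp = GaussianProcessRegressor(normalize_y=True).fit(X[train], y)
    mu, sd = gp.predict(X, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sd, 1e-12)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
    ei[train] = -np.inf                                # skip evaluated ones
    train.append(int(np.argmax(ei)))                   # next candidate
```

In the paper's workflow, the call to the toy objective would be replaced by the elastic wave transmission calculation, which is the expensive step the Bayesian loop is designed to economize.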
Decrease of thermal conductance and aperiodicity/disorder
The thermal conductance of width-modulated nWVGs depends on the shape of the modulation profile and the presence of order or disorder [30,31]. It is smaller than that of the corresponding uniform nWVG. Even a single constriction significantly decreases the thermal conductance of coherent phonons. The addition of more constrictions decreases it further, down to the periodic superlattice (p-SL) value when the modulation units are identical and below this limit when the modulation units are non-identical (figure 3).
The maximum decrease occurs for the maximum number of non-identical modulation units in the modulation profile [3,30,31]. In the quantum confinement regime, the underlying mechanism for the decrease of thermal conductance is the reduction of the phonon transmission probability due to quantum interference between phonon waves scattered at the width discontinuities of the nWVG. Quantum interference is sensitive to the geometrical arrangement of discontinuities, and thus the values of the thermal conductance of the various configurations of the modulation profile are different. Calculations on full sets of width-mismatch modulation profiles showed that the distribution of the thermal conductance among the different configurations has a peaked structure with a long tail and a well-defined minimum (figure 3). To better understand the origin of this distribution and the role of the aperiodicity, we explored the characteristics of configurations in the different parts of the distribution.
Distribution of thermal conductance and degree of disorder
For a modulation length of N layers, there are 2^N configurations. The number of configurations increases rapidly with increasing N; it is 256, 4096, and 16 384 for N = 8, 12, and 14, respectively. We consider the configurations for N = 8, the smallest yet representative statistical set. The corresponding distribution histogram is shown in figure 4. The inset shows the distribution of the full set of 256 configurations, including the uniform nWVG, for which thermal conductance is maximum.
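These set sizes follow directly from elementary counting, as the snippet below verifies (including the 70-candidate and 3432-candidate constrained sets used later).

```python
from itertools import product
from math import comb

N = 8
full = list(product((0, 1), repeat=N))              # all width-modulation flags
half = [cfg for cfg in full if sum(cfg) == N // 2]  # 50% composition subset
print(len(full), len(half))      # 256 70, as quoted in the text
print(2**12, 2**14)              # 4096 16384, the N = 12 and N = 14 full sets
print(comb(12, 6), comb(14, 7))  # 924 3432, their 50%-composition subsets
```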
There is a large gap between this maximum and the thermal conductance of any of the modulated nWVGs. The main panel zooms in on the modulated nWVGs, without the uniform nWVG. Thermal conductance is plotted relative to that of the p-SL. The thermal conductance values of the modulated nWVGs appear to be distributed into three zones: zone A, the highest-values zone; zone B, the medium-values zone; and zone C, the lowest-values zone. We analysed the modulation profile of the configurations in each zone to determine each zone's identity. Representative configurations are shown in figure 5. Zone A includes configurations with a single constriction and zero QDs. The constriction length of candidates in this zone ranges from 1 to 8 layers. Zone A is a low-modulation-degree zone. Zones B and C include modulated nWVGs with more than one constriction and at least one QD. Zone B includes periodic and aperiodic configurations. Among the periodic ones is the p-SL 01010101, with relative thermal conductance equal to 1. This is the periodic configuration with the highest modulation degree, of four identical QDs (010). Other periodic configurations have profiles modulated by a smaller number of identical QDs. Imposing the 50% composition constraint on the N = 8 set reduces the number of configurations from 256 to 70. The effect of the composition constraint on the thermal conductance distribution is illustrated in figure 5, where the distribution histograms of the thermal conductance among the candidates of the full and the reduced sets are plotted together.
It can be noticed that although the histogram of the reduced set is sparser, the candidates are still well distributed in the three zones A, B, and C. These zones are now more discrete and even more distinguishable. The thermal conductance decreases monotonically with increasing modulation degree from zones A to C. The optimal configuration for the reduced set follows the physics rule. The statistical distribution has the same characteristics under the composition constraint.
The distribution histogram of the thermal conductance for a larger set of 3432 configurations, for N = 14 and 50% composition, is shown in the inset of figure 3. In the case of this larger statistical sample, the distribution is more continuous and shows a single dominant peak instead of sequences of distinct intermediate peaks as in the case of the smaller sample of figure 5. Zone A has one band of configurations with a single constriction of 7 layers. Zones A and B are grouped by a rather smooth envelope into an extended zone with a peak and a long tail. The p-SL lies inside the tail. Analysis of the distribution of candidates shows that thermal conductance decreases gradually with increasing modulation degree. At first, increasing the modulation degree decreases thermal conductance down to the p-SL limit. Thermal conductance decreases below the p-SL value for increasing aperiodicity. Around the peak are distributed configurations with identical and non-identical QDs in their modulation profile, with a medium degree of modulation. The peak belongs to zone B because most configurations have a medium modulation degree. Thermal conductance decreases rapidly with increasing modulation degree after the peak, in zone C. Most configurations in this zone are arrays with a maximum number of non-identical QDs. The optimal configuration with minimum thermal conductance has been identified at the low edge of this zone. Its descriptor is 00110100101110. It is degenerate with its reverse. Its schematic profile is also shown in figure 3. It is composed of four QDs, three of which are non-identical: 010, 0110, and 01110. We found the same optimal configuration when we performed the optimization for a different geometry with a modulation profile symmetric around the nWVG central axis (figure 3). The same optimal configuration was found for GaAs and Si. The optimal configuration is in all cases the one with maximum modulation degree due to the distribution of the thermal conductance among the aperiodic width-modulation configurations: the decrease of thermal conductance is controlled by the modulation degree irrespective of choices of constituent material, width-modulation geometry, and composition constraints.
Underlying physics mechanism
The generic behavior of the distribution of the thermal conductance is interpreted by the underlying physics mechanism: destructive phonon wave interference at the width-modulation discontinuities, which becomes more significant with increasing modulation degree [30]. Analysis of the distribution histograms shows that thermal conductance gradually decreases with an increasing degree of modulation, from the single-constriction value towards the p-SL value, reaching its minimum at the optimal aperiodic array. The gradual decrease corresponds to the gradual perturbation of the step-like transmission coefficient of the uniform nWVG by more extended destructive interference upon increasing modulation degree, as illustrated in figure 6.
The perfect propagation steps of the uniform nWVG are slightly distorted by a spectrum of shallow wave interference patterns in the case of a single-constriction modulation. The distortion deepens and gets more severe with increasing modulation degree upon increasing the number of non-identical modulation units. Zooming in on the first propagation channel shows that even long-wavelength phonons 'see' the modulation, and their propagation is considerably affected. In the case of a single-constriction modulation, resolved propagation peaks appear first, followed by a continuum of propagation states with fluctuating transmission probability. The modification is enhanced in the case of the p-SL, where distinct propagation zones are formed by destructive interference due to periodicity. Propagation zones get gradually narrower as the modulation deviates more from the periodic one. They shrink by extended destructive interference due to the absence of periodicity in the optimal aperiodic configuration. The transmission coefficient for the optimal configuration is smaller than for the single-constriction modulation as well as for the p-SL modulation. Thermal broadening screens effects of interference. This explains the convergence of the three curves of thermal conductance with increasing temperature (figure 3). The effects of thermal broadening on the thermal conductance distribution and the Bayesian optimization efficiency are further discussed in the next subsection.
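The interference picture can be caricatured with a one-dimensional scalar transfer-matrix model in which each opening or constriction acts as a layer whose admittance tracks its width. This is a deliberately simplified stand-in (scalar waves, a single polarization, invented dimensions and wave speed), not the elastic 3D transmission calculation of the paper.

```python
import numpy as np

def transmission_1d(widths, lengths, omega, c=5000.0):
    # Scalar transfer-matrix model: each segment is a 1D acoustic layer
    # whose admittance is taken proportional to its width; c is an
    # assumed elastic wave speed. Captures interference at the width
    # discontinuities only.
    k = omega / c
    y_lead = 1.0                                  # admittance of the wide leads
    m = np.eye(2, dtype=complex)
    for w, L in zip(widths, lengths):
        d, y = k * L, w                           # phase thickness, admittance
        m = m @ np.array([[np.cos(d), 1j * np.sin(d) / y],
                          [1j * y * np.sin(d), np.cos(d)]])
    b, cv = m @ np.array([1.0, y_lead])
    return 4.0 * y_lead**2 / abs(y_lead * b + cv)**2

# Periodic vs aperiodic profiles of openings (1.0) and constrictions (0.1);
# the 20 nm layer length and the width ratio are illustrative numbers.
periodic = [0.1, 1.0] * 4
aperiodic = [0.1, 1.0, 1.0, 0.1, 1.0, 0.1, 1.0, 1.0]
freqs = np.linspace(1e10, 1e12, 200)
T_per = [transmission_1d(periodic, [20e-9] * 8, w) for w in freqs]
T_ap = [transmission_1d(aperiodic, [20e-9] * 8, w) for w in freqs]
```

Sweeping the frequency reproduces the qualitative features described above: stop-band-like dips for the periodic profile and a more irregular, on average lower, transmission for the aperiodic one.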
Thermal broadening and optimization efficiency
Thermal broadening screens quantum confinement effects in transport when temperature increases. The distribution histogram of the thermal conductance broadens with increasing temperature, as illustrated in figure 7, where the distribution histograms for the N = 14 configuration set are plotted for three temperatures: 2 K, 5 K, and 10 K.
Increasing thermal energy broadens and lowers the peak of the distribution. Such broadening is typical for transport dominated by quantum effects, wave interference in the present case. The peaked structure of the distribution and the well-defined minimum threshold remain clear with increasing temperature. The existence of a minimum thermal conductance at all temperatures confirms that the problem is suitable for optimization in the whole temperature range considered here. The efficiency of the optimization is shown in figure 8 for the three temperatures.
In all cases, we started the optimization by randomly selecting a group of initial candidates out of the pool of 3432 candidates. We then performed rounds of optimization with different sets of group candidates until all content of our pool was used. We repeated the procedure for several different choices of a random initial set of candidates. We show representative results for two initial random sets and rounds of groups of 20 candidates. At 2 K, the optimal configuration was identified after calculating the thermal conductance of 3%-8% of the candidates (a minimum of ∼100 candidates). At 5 K, the optimal configuration was identified after calculating the thermal conductance of 7%-30% of the candidates (a minimum of ∼250 candidates). At 10 K, the optimal configuration was identified after calculating the thermal conductance of 15%-20% of the candidates (a minimum of ∼500 candidates). The efficiency of the optimization drops with increasing temperature. This can be understood by noting that thermal broadening reduces the spreading of thermal conductance among candidates, so candidates with similar degrees of modulation are less easily distinguishable. Convergence is thereby delayed. Remarkably, at all temperatures the optimization follows the physics optimization rule from the very early steps, irrespective of the time it takes to converge, as shown in figure 8, where the optimization evolution is detailed for the three temperatures. At all temperatures, the optimization evolves through candidates that are arrays of permutations of the same set of non-identical QDs. The optimal configuration is the optimal permutation of the actual set of non-identical QDs at a given temperature. Notably, the efficiency of the optimization remains high despite the thermal broadening of the distribution. This is attributed to the existence of a well-defined low threshold of the distribution envelope with a steep ending at all temperatures.
Effects of thermal broadening are less pronounced in smaller statistical samples where the distribution is sparser. To make this evident, we show in figure 9 the distribution histograms for the N = 12 set of 924 configurations with 50% composition, for comparison with the corresponding data in figures 7 and 8 for N = 14.
In the case of the smaller set, the distribution histogram shows discrete peaks at all temperatures. The range of values of thermal conductance increases with increasing temperature, but the heights of the peaks do not show the systematic decrease shown in figure 7. At the low threshold of the distribution, the spectrum is characterized by well-resolved peaks at all temperatures. The efficiency of the Bayesian optimization is comparable at the three temperatures, indicating that thermal broadening does not decrease the efficiency of the optimization in this case. This is attributed to the discreteness of the distribution spectrum. It can be concluded that the effect of thermal broadening on the efficiency of Bayesian optimization depends on the discreteness of the distribution spectrum. Thermal broadening of quantum transport effects and discreteness of the thermal conductance statistical distribution act in opposite directions in the efficiency of the optimization.
Concluding remarks
This work addresses open questions on the heat transfer across width-modulated nWVGs, which are currently considered promising metamaterials for efficient micro-TEGs and micro-TECs. It explores the physics mechanisms underlying the optimization of the thermal conductance of width-modulated nWVGs with calculations and ML. Analysis of the distribution of thermal conductance reveals distinct zones of values where configurations have different degrees of disorder in their modulation profile. These zones are ordered in an increasing degree of disorder. Calculations on samples of different compositions and geometries indicate that this ordering is generic. It is shown that the distribution of the thermal conductance among the aperiodic width-modulation configurations is controlled by their modulation degree irrespective of choices of constituent material, width-modulation geometry, and composition constraints. This implies that the width-modulation degree is the dominant mechanism in the optimization process of width-modulated nWVGs and drives the optimization according to the physics rule. The same behavior is expected in other types of nanostructures with geometrical modulation. Different behavior should, though, hold in nanostructures with compositional modulation, as in hetero-SLs, where optimization is for moderate disorder against physics intuition. Analysis like the present one could reveal the dominant mechanism in the thermal conductance distribution and the optimization of hetero-SLs.
The present work also addresses the effect of thermal broadening on the thermal conductance distribution and the optimization efficiency. The efficiency of the optimization is evaluated against thermal broadening in connection with the discreteness of the distribution of the thermal conductance among the aperiodic configurations. Optimization of samples of different sizes performed at different temperatures shows that the effect of thermal broadening on the efficiency of the optimization depends on the discreteness of the distribution of the thermal conductance among the aperiodic configurations. The efficiency decreases with increasing temperature due to the thermal broadening of the distribution. It remains high with increasing temperature when the distribution is characterized by high discreteness.
The outcomes of this study hold for coherent phonon transport. Deviations are to be expected when the assumption of coherency breaks due to phonon scattering at imperfect boundaries, by impurities, and/or phonon-phonon scattering, which get increasingly important as temperature increases [14]. The range of validity of coherent phonon transport in phononic metamaterials is an open question, a matter of ongoing research and scientific debate [17,20]. It is still unclear when experimental evidence should be attributed to coherent and when to incoherent phonon transport [18]. This work contributes to resolving this issue by pointing out characteristic behavior as a signature of coherent phonon transport in geometry-modulated metamaterials. The outcomes are directly relevant to scientific research in the field of quantum technologies, which are of growing importance for our society. It is broadly known that thermal transport effects have a major impact on deteriorating the efficiency of the operation of quantum devices. These devices operate at low temperatures where coherent phonons dominate thermal transport, and our outcomes can be reliably used for designing them, controlling heat transfer, and enhancing their efficiency. A challenging future scope of the research is to extend the formalism to accommodate phonon scattering and study the distribution of the thermal conductance among the aperiodic configurations of width-modulated nWVGs, perform the optimization of width modulation for minimum thermal conduction, and interpret the optimization. This would clarify whether the identified physics optimization rule is also valid for incoherent phonon transport, which dominates in applications at elevated temperatures.
In conclusion, this work provides new physics insight that could help to control heat transmission in nanodevices. It elucidates the role of disorder in decreasing thermal conduction in the quantum confinement regime. It opens design pathways to optimize metamaterials for efficient heat management and thermoelectric energy conversion at the nanoscale.
Figure 2 .
Figure 2. Flowchart of the optimization methodology described in the main text.
Figure 3 .
Figure 3. Thermal conductance relative to the periodic superlattice (SL) versus temperature for single constriction (grey), periodic SLs (green), and optimal aperiodic SLs (orange) for two width-modulation geometries. The inset shows the distribution of thermal conductance among the width-modulation configurations.
Figure 5 .
Figure 5. Thermal conductance distribution histogram for N = 8, for the full set of 256 candidates (hollow columns) and the 70 candidates with 50% composition (orange columns). Representative modulation patterns for zones (A), (B), and (C) are also shown.
Figure 6 .
Figure 6. Energy dependence of the transmission coefficient for single-constriction (grey), periodic (green), and optimal aperiodic (orange) modulations. The main figure zooms in on the first uniform-nWVG subband.
Figure 8 .
Figure 8. The effect of thermal broadening on the performance of the Bayesian optimization for two paths starting with different sets of random candidates (continuous and dashed lines, respectively) and for three temperatures, T = 2 K, 5 K, and 10 K. The sequences of candidates for evolution paths at different temperatures are shown.
Figure 9 .
Figure 9. Performance of the Bayesian optimization for sparse thermal conductance distribution for two paths starting with different sets of random candidates (continuous and dashed lines respectively) and for three temperatures, T = 2 K, 5 K, and 10 K.
Table 1 .
Theoretical and numerical approaches. | 5,726 | 2024-03-08T00:00:00.000 | [
"Physics",
"Computer Science"
] |
Competitive exclusion of Salmonella Enteritidis by Salmonella Gallinarum in poultry.
Salmonella Enteritidis emerged as a major egg-associated pathogen in the late 20th century. Epidemiologic data from England, Wales, and the United States indicate that S. Enteritidis filled the ecologic niche vacated by eradication of S. Gallinarum from poultry, leading to an epidemic increase in human infections. We tested this hypothesis by retrospective analysis of epidemiologic surveys in Germany and demonstrated that the number of human S. Enteritidis cases is inversely related to the prevalence of S. Gallinarum in poultry. Mathematical models combining epidemiology with population biology suggest that S. Gallinarum competitively excluded S. Enteritidis from poultry flocks early in the 20th century.
Outbreaks in Europe and the United States are associated with foods containing undercooked eggs (7)(8)(9)(10). Eggs can become contaminated with S. Enteritidis through cracks in the shell after contact with chicken feces or by transovarian infection (11). Thus, laying hens were the likely source of the S. Enteritidis epidemic in Europe and the Americas.
The inverse relationship between the incidence of S. Gallinarum infection in chickens and egg-associated S. Enteritidis infections in humans prompted the hypothesis that S. Enteritidis filled the ecologic niche vacated by eradication of S. Gallinarum from domestic fowl (12). The hypothesis suggests that the epidemic increase in human S. Enteritidis cases in several geographic areas can be traced to the same origin, accounting for the simultaneous emergence of S. Enteritidis as a major egg-associated pathogen on three continents (5). A connection between the epidemics in Western Europe and the United States was not apparent from analysis of epidemic isolates. Although most human cases from England and Wales result from infection with S. Enteritidis phage type 4 (PT4), most cases in the United States are due to infections with PT8 and PT13a (13,14). The PT4 clone is genetically distinct from PT8 and PT13a, as shown by IS200 profiling, ribotyping, and restriction fragment length polymorphism of genomic DNA fragments separated by pulsed-field gel electrophoresis (15). [Figure. (A) S. Gallinarum infections in chickens in England and Wales (closed squares) (2,27) and the Federal Republic of Germany (open squares) (28). (B) Human cases of S. Enteritidis infections per year reported from England and Wales (closed circles) (3,29) and the Federal Republic of Germany (open circles) (Zentrales Überwachungsprogram Salmonella, ZÜPSALM).] The reasons for the differing clonal isolates in the United States and Western Europe are unknown. S. Enteritidis was likely introduced into poultry flocks from its rodent reservoir (12). The geographic differences in predominant phage types may reflect the fact that at the time of introduction into poultry flocks, different S. Enteritidis strains were endemic in rodent populations in Europe and the United States. Subsequently, S. Enteritidis strains with the highest transmissibility may have become predominant in poultry flocks on each continent. An alternative explanation for the predominance of PT4 in England and Wales is its introduction into poultry breeding lines in the early 1980s (16), which may have accelerated the epidemic spread of PT4 in laying hens and resulted in its dominance in human isolates from England and Wales. However, factors responsible for the beginning of the S. Enteritidis epidemic should be considered separately from those important for its subsequent spread within the poultry industry. These factors were not specific to PT4 but rather allowed different phage types to emerge as egg-associated pathogens on different continents at the same time (5).
One such factor could be the eradication of S. Gallinarum from poultry, which would facilitate circulation of S. Enteritidis strains within this animal reservoir regardless of phage type. Experimental evidence indicates that immunization with one Salmonella serovar can generate cross-immunity against a second serovar if both organisms have the same immunodominant O-antigen on their cell surface (17)(18)(19). The immunodominant epitope of the lipopolysaccharide of S. Gallinarum and S. Enteritidis is the O9-antigen, a tyvelose residue of the O-antigen repeat (20). Immunization of chickens with S. Gallinarum protects against colonization with S. Enteritidis (21,22) but not S. Typhimurium, a serovar expressing a different immunodominant determinant, the O4-antigen (23). Theory indicates that coexistence of S. Gallinarum and S. Enteritidis in an animal population prompts competition as a result of the shared immunodominant O9-antigen, which generates cross-immunity. Mathematical models predict that the most likely outcome of this competition between serovars is that the serovar with the higher transmission success will competitively exclude the other from the host population (24)(25)(26). S. Gallinarum may have generated population-wide immunity (flock immunity) against the O9-antigen at the beginning of the 20th century, thereby excluding S. Enteritidis strains from circulation in poultry flocks (12). This proposal is based on analysis of epidemiologic data from the United States, England, and Wales. To formally test this hypothesis, we analyzed epidemiologic data from Germany to determine whether the numbers of human S. Enteritidis cases are inversely related to those of S. Gallinarum cases reported in poultry. We used mathematical models to determine whether our hypothesis is consistent with theoretical considerations regarding transmissibility and flock immunity.
Inverse Relationship of S. Enteritidis and S. Gallinarum Isolations in Germany
In West Germany, the number of human S. Enteritidis cases was monitored by a national surveillance program (Zentrales Überwachungsprogram Salmonella, ZÜPSALM) from 1973 to 1982 (Figure). In 1975, the number of human infections began to increase, indicating the beginning of the S. Enteritidis epidemic in West Germany. In 1983, the ZÜPSALM program was replaced by a national program for surveillance of foodborne disease outbreaks (Zentrale Erfassung von Ausbrüchen lebensmittelbedingter Infektionen, ZEVALI), implemented by the Department of Public Health (Bundesgesundheitsamt). In the first year of this program, S. Enteritidis was responsible for 62 outbreaks, most of which were traced to raw eggs. By 1988, the number of disease outbreaks caused by S. Enteritidis had increased to 1,365.
In 1967 in England and Wales, poultry, particularly chickens, became the main human food source of S. Enteritidis (3). Before that date, the organism had only sporadically been isolated from poultry (3). A continuous increase in human S. Enteritidis cases was recorded from 1968 until the epidemic peaked in 1994 (12,16). Thus, the human S. Enteritidis epidemic in England and Wales probably began in 1968 after this organism became associated with a human food source, chickens. The rapid increase in the number of human cases from 1982 to 1988 was probably due to the introduction of PT4 into poultry breeding lines in England and Wales (16). Comparison of data from England and Wales (3,29) showed that S. Enteritidis emerged somewhat later in West Germany (Figure).
Eradication of S. Gallinarum was among the factors contributing to the emergence of S. Enteritidis as a foodborne pathogen (12). To determine whether delayed elimination of avian-adapted Salmonella serovars from commercial flocks contributed to the late start of the human epidemic in Germany, we compared the results of surveys performed in poultry flocks in Germany with those from the United Kingdom and the United States. Control programs in the 1930s triggered a steady decline in the incidence of S. Gallinarum in poultry flocks in the United States, England, and Wales (1,2,12). By the early 1970s, only a few cases of S. Gallinarum were reported each year to veterinary investigation centers in England and Wales (27). In Germany, the first national survey, performed by the Department of Public Health (Reichsgesundheitsamt) in 1929, showed that 16.3% of birds were seropositive for S. Gallinarum (30). Blood testing performed 20 years later on 6,313 birds in a province (Südbaden) of West Germany still detected 19.5% reactors (31). This high prevalence of S. Gallinarum in 1949 likely reflects the fact that after World War II available resources were directed toward rebuilding the poultry industry rather than improving disease control. The comparatively slow decline in the prevalence of S. Gallinarum in West Germany is further illustrated by data for cases of disease reported from poultry. The number of S. Gallinarum isolations from chicken carcasses received by veterinary laboratories in West Germany was reported by a surveillance program from 1963 to 1981 (28). During this period, the rate of decrease in the number of S. Gallinarum cases in England and Wales was considerably higher than that reported from West Germany (Figure). In each country, the numbers of S. Gallinarum cases were inversely related to the numbers of human S. Enteritidis cases. These data are consistent with the concept that the relative delay in eradicating S. Gallinarum from poultry may have contributed to the delayed onset of the S. Enteritidis epidemic in West Germany.
Competitive Exclusion of S. Enteritidis by S. Gallinarum
To calculate whether the prevalence of S. Gallinarum in chickens was high enough to generate flock immunity against S. Enteritidis, we analyzed epidemiologic data by mathematical models combining epidemiology with population biology (24)(25)(26). The transmission success of a pathogen is measured by the basic case-reproductive number, R0, which is defined as the average number of secondary cases of infection arising from a primary case in a susceptible host population (32). For direct transmission, the basic case-reproductive number of a pathogen is the product of the duration, D, for which an infected host can transmit the disease before it is either killed or clears the infection; the probability, β, with which the disease is transmitted from an infected animal to a susceptible host; and the density of susceptible hosts, X (24):

R0 = βDX (equation 1)

After a pathogen is introduced into a susceptible host population, the reproductive rate of the infection declines as a consequence of the removal of a fraction, y, of the susceptible population, X, either by disease-induced death or acquisition of immunity. That is, the effective case-reproductive number, R, will be smaller than the basic case-reproductive number, R0.
R = βD(X − Xy) = R0 − R0y (equation 2)
In an endemic state, each primary case of infection produces, on average, one secondary case. Thus, the effective case-reproductive number in a steady endemic-state situation is R = 1. By solving equation 2 for R0, we obtain (33)

R0 = 1/(1 − y) (equation 3)

Since S. Gallinarum was endemic in poultry populations at the beginning of the 20th century, its basic case-reproductive number, R0, can be calculated on the basis of epidemiologic data collected before control measures were implemented, by estimating the fraction, y, of birds removed from the susceptible population.
The first method developed for detecting anti-S. Gallinarum antibodies was a macroscopic tube agglutination test introduced in 1913 (34). In 1931, the tube agglutination test was partially replaced by the simpler whole-blood test for slide agglutination of stained antigen (35). Initial surveys performed from 1914 to 1929 revealed that on average 9.8% to 23.8% of poultry in Europe and the United States were positive by the tube agglutination test (1,30,36). These data do not provide a direct estimate of the number of immune animals, since both serologic tests are relatively insensitive (37). However, the number of susceptible birds can be estimated by comparing results of serologic surveys with data from vaccination experiments. Immunization with S. Gallinarum vaccine strain 9R produces antibody levels high enough to be detected by the whole-blood tube or slide agglutination tests in only a small number of birds (approximately 10%) (20,23). The number of birds protected against challenge with virulent S. Gallinarum after a single oral or subcutaneous vaccination is considerably higher (approximately 60%) (23,38). Because 9.8% to 23.8% of birds tested positive by the tube or slide agglutination tests at the beginning of the 20th century, at least 60% of birds were likely immune to S. Gallinarum. In addition to acquired immunity, deaths, which likely occurred in most chicken flocks since S. Gallinarum reactors were present on most farms at the time, also reduced the density of susceptible hosts. For instance, only 9 of 144 farms surveyed in Hungary in the 1930s had no S. Gallinarum-positive birds (39). The death rates reported from natural outbreaks are 10% to 50%, although higher rates are occasionally reported (40). By the conservative estimate that 90% of birds in a flock will survive an outbreak and approximately 60% of the survivors will have protective immunity, the removed fraction becomes y = 0.1 + 0.9 × 0.6 = 0.64, and the basic case-reproductive number, R0, of S. Gallinarum is estimated by equation 3 to be 1/(1 − 0.64) ≈ 2.8.
S. Enteritidis does not substantially reduce the density of susceptible animals by causing death. Thus, its basic case-reproductive number can be estimated from the number of birds that remained susceptible during the peak of the S. Enteritidis epidemic. Antibody titers in S. Enteritidis-infected flocks are generally too low to be detected by the tube or the slide agglutination tests (37,41), presumably because this serovar commonly colonizes birds without causing disease and consequently without triggering a marked immune response. Live attenuated S. Enteritidis aroA vaccine does not produce antibody titers detectable by the tube or the slide agglutination tests (42), and oral immunization with this vaccine does not protect against organ colonization with wild-type S. Enteritidis (43). Hence, exposure to S. Enteritidis does not elicit protection at the levels found in birds previously exposed to S. Gallinarum. Indeed, in a survey of flocks naturally infected with S. Enteritidis, only one of 114 birds tested strongly positive by the slide agglutination test (37). Experimental evidence indicates that birds exposed to S. Gallinarum have strong cross-immunity against colonization with S. Enteritidis. For instance, immunization of chickens with a single dose of S. Gallinarum vaccine strain 9R causes similar levels of protection against challenge with S. Gallinarum (23,38) and S. Enteritidis (22,44). The high degree of cross-immunity suggests that the antibody titers detected by the tube agglutination test are predictive of protection against lethal S. Gallinarum infection and of immunity to colonization by S. Enteritidis. Applying the criteria used to calculate R0 for S. Gallinarum (10% reactors are indicative of 60% protection) to the S. Enteritidis data (37) suggests that approximately 5% of birds had protective immunity against this pathogen. From these data (y = 0.05), the basic case-reproductive number of S. Enteritidis (R0 = 1/(1 − 0.05) ≈ 1.05) is estimated to be considerably lower than that of S. Gallinarum.
Several factors should be considered in interpreting these data. Our estimate of the R0 value for S. Enteritidis is based on epidemiologic data from the late 1980s. The intensive husbandry of chickens in the latter part of the 20th century has increased the density, X, of susceptible hosts and therefore R0 (equation 1). Furthermore, information on the number of birds in S. Enteritidis-infected flocks with positive reactions in the tube agglutination test is sparse, and data from the peak of the epidemic in 1994 are not available. The prevalence of S. Enteritidis in poultry has been documented by a survey performed in Lower Saxony, Germany, in 1993, a time when flocks were heavily infected. This study showed that 7.6% of 2,112 laying hens were culture positive at slaughter (45). Although this low prevalence is consistent with a low basic case-reproductive number of S. Enteritidis at the peak of the epidemic, these data cannot be used to derive a reliable estimate for the basic case-reproductive number of S. Enteritidis at the beginning of the 20th century. Given these limitations, the available epidemiologic evidence appears to be consistent with our hypothesis. From equation 2 (R = R0 − R0y), we estimate that early in the century the removal of susceptible birds by S. Gallinarum (assuming 100% cross-immunity and y = 0.65) reduced the effective case-reproductive number of S. Enteritidis to below 1, namely R = 1.05 × (1 − 0.65) ≈ 0.37. These estimates support the idea that at the beginning of the 20th century S. Gallinarum reduced the density of susceptible hosts sufficiently to competitively exclude S. Enteritidis from circulation in poultry flocks.
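For readers who wish to reproduce the arithmetic behind these estimates, the following is a minimal sketch in Python, assuming only the removal fractions stated above; the function and variable names are illustrative and not part of the original analysis.

```python
# Sketch of the R0 estimates from equations 1-3. The removal fractions
# (outbreak survival, immune fractions) are taken from the text; helper
# names are illustrative, not from the paper.

def basic_reproductive_number(removed_fraction: float) -> float:
    """R0 = 1 / (1 - y) at the endemic steady state (equation 3)."""
    return 1.0 / (1.0 - removed_fraction)

# S. Gallinarum: ~10% of birds die in an outbreak and ~60% of survivors
# acquire protective immunity, so y = 0.1 + 0.9 * 0.6 = 0.64.
y_gallinarum = 0.1 + 0.9 * 0.6
r0_gallinarum = basic_reproductive_number(y_gallinarum)    # ~2.8

# S. Enteritidis: ~5% of birds in infected flocks show protective immunity.
y_enteritidis = 0.05
r0_enteritidis = basic_reproductive_number(y_enteritidis)  # ~1.05

# Effective reproductive number of S. Enteritidis in a flock where
# S. Gallinarum removed y = 0.65 of susceptibles (equation 2, assuming
# 100% cross-immunity): R = R0 * (1 - y).
r_effective = r0_enteritidis * (1.0 - 0.65)                # ~0.37

print(f"R0 (S. Gallinarum):  {r0_gallinarum:.2f}")
print(f"R0 (S. Enteritidis): {r0_enteritidis:.2f}")
print(f"R  (S. Enteritidis under flock immunity): {r_effective:.2f}")
```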
S. Enteritidis is unlikely to be eliminated from poultry by relying solely on the test-and-slaughter method of disease control because, unlike S. Gallinarum, S. Enteritidis can be reintroduced into flocks from its rodent reservoir. Instead, vaccination would be effective in excluding S. Enteritidis from domestic fowl because it would eliminate one of the risk factors (loss of flock immunity against the O9-antigen) that likely contributed to the emergence of S. Enteritidis as a foodborne pathogen. In fact, much of the decline in human S. Enteritidis cases in England and Wales since 1994 has been attributed to the use of an S. Enteritidis vaccine in poultry (16). However, serologic evidence that S. Gallinarum is more immunogenic than S. Enteritidis suggests that a more effective approach for eliciting protection in chickens would be immunization with a live attenuated S. Gallinarum vaccine. This approach would restore the natural balance (exclusion of S. Enteritidis by a natural competitor) that existed before human intervention strategies were implemented early in the 20th century.
"Environmental Science",
"Biology",
"Medicine"
] |
Refocusing distance of a standard plenoptic camera
Recent developments in computational photography enabled variation of the optical focus of a plenoptic camera after image exposure, also known as refocusing. Existing ray models in the field simplify the camera's complexity for the purpose of image and depth map enhancement, but fail to satisfyingly predict the distance to which a photograph is refocused. By treating a pair of light rays as a system of linear functions, it will be shown in this paper that its solution yields an intersection indicating the distance to a refocused object plane. Experimental work is conducted with different lenses and focus settings while comparing distance estimates with a stack of refocused photographs for which a blur metric has been devised. Quantitative assessments over a 24 m distance range suggest that predictions deviate by less than 0.35 % in comparison to an optical design software. The proposed refocusing estimator assists in predicting object distances, for instance in the prototyping stage of plenoptic cameras, and will be an essential feature in applications demanding high precision in synthetic focus or where depth map recovery is done by analyzing a stack of refocused photographs. © 2016 Optical Society of America

OCIS codes: (080.3620) Lens system design; (110.5200) Photography; (110.3010) Image reconstruction techniques; (110.1758) Computational imaging.

References and links
1. F. E. Ives, "Parallax stereogram and process of making same," US patent 725,567 (1903).
2. G. Lippmann, "Épreuves réversibles donnant la sensation du relief," J. Phys. Théor. Appl. 7, 821–825 (1908).
3. E. H. Adelson and J. Y. Wang, "Single lens stereo with a plenoptic camera," IEEE Transactions on Pattern Analysis and Machine Intelligence 14(2), 99–106 (1992).
4. M. Levoy and P. Hanrahan, "Light field rendering," in Proceedings of ACM SIGGRAPH, 31–42 (1996).
5. A. Isaksen, L. McMillan, and S. J. Gortler, "Dynamically reparameterized light fields," in Proceedings of ACM SIGGRAPH, 297–306 (2000).
6. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, "Light field photography with a hand-held plenoptic camera," in Stanford Tech. Report, 1–11 (CTSR, 2005).
7. R. Ng, "Digital light field photography," (Stanford University, 2006).
8. T. Georgiev, K. C. Zheng, B. Curless, D. Salesin, S. Nayar, and C. Intwala, "Spatio-angular resolution tradeoff in integral photography," in Proceedings of Eurographics Symposium on Rendering (2006).
9. A. Lumsdaine and T. Georgiev, "The focused plenoptic camera," in Proceedings of IEEE International Conference on Computational Photography, 1–8 (2009).
10. C. Perwass and L. Wietzke, "Single-lens 3D camera with extended depth-of-field," in Human Vision and Electronic Imaging XVII, Proc. SPIE 8291, 829108 (2012).
11. C. Hahne, A. Aggoun, S. Haxha, V. Velisavljevic, and J. C. J. Fernández, "Light field geometry of a standard plenoptic camera," Opt. Express 22(22), 26659–26673 (2014).
12. C. Hahne, A. Aggoun, and V. Velisavljevic, "The refocusing distance of a standard plenoptic photograph," in 3DTV-Conference: The True Vision — Capture, Transmission and Display of 3D Video (3DTV-CON) (2015).
13. Radiant ZEMAX LLC, "Optical design program," version 110711 (2011).
14. D. G. Dansereau, "Plenoptic signal processing for robust vision in field robotics," (Univ. of Sydney, 2014).
15. D. G. Dansereau, O. Pizarro, and S. B. Williams, "Decoding, calibration and rectification for lenselet-based plenoptic cameras," in IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 1027–1034 (2013).
16. E. Hecht, Optics, Fourth Edition (Addison Wesley, 2001).
17. E. F. Schubert, Light-Emitting Diodes (Cambridge University, 2006).
18. D. Cho, M. Lee, S. Kim, and Y.-W. Tai, "Modeling the calibration pipeline of the Lytro camera for high quality light-field image reconstruction," in IEEE International Conference on Computer Vision (ICCV), 3280–3287 (2013).
19. C. Hahne, "Matlab implementation of proposed refocusing distance estimator," figshare (2016) [retrieved 6 September 2016], http://dx.doi.org/10.6084/m9.figshare.3383797.
20. B. Caldwell, "Fast wide-range zoom for 35 mm format," Opt. Photon. News 11(7), 49–51 (2000).
21. M. Yanagisawa, "Optical system having a variable out-of-focus state," US patent 4,908,639 (1990).
22. C. Hahne, "Zemax archive file containing plenoptic camera design," figshare (2016) [retrieved 6 September 2016], http://dx.doi.org/10.6084/m9.figshare.3381082.
23. TRIOPTICS, "MTF measurement and further parameters" (2015) [retrieved 3 October 2015], http://www.trioptics.com/knowledge-base/mtf-and-image-quality/.
24. E. Mavridaki and V. Mezaris, "No-reference blur assessment in natural images using fourier transform and spatial pyramids," in IEEE International Conference on Image Processing (ICIP), 566–570 (2014).
25. M. W. Tao, S. Hadap, J. Malik, and R. Ramamoorthi, "Depth from combining defocus and correspondence using light-field cameras," in IEEE International Conference on Computer Vision (ICCV), 673–680 (2014).
26. Z. Xu, J. Ke, and E. Y. Lam, "High-resolution lightfield photography using two masks," Opt. Express 20(10), 10971–10983 (2012).
27. C. Hahne, "Raw image data taken by a standard plenoptic camera," figshare (2016) [retrieved 6 September 2016], http://dx.doi.org/10.6084/m9.figshare.3362152.
Introduction
With a conventional camera, angular information of light rays is lost at the moment of image acquisition, since the irradiance of all rays striking a sensor element is averaged regardless of the rays' incident angle. Light rays originating from an object point that is out of focus will be scattered across many sensor elements. This becomes visible as a blurred region and cannot be satisfyingly resolved afterwards. To overcome this problem, an optical imaging system is required to enable detection of the light rays' direction. Plenoptic cameras achieve this by capturing each spatial point from multiple perspectives.
The first stages in the development of the plenoptic camera can be traced back to the beginning of the previous century [1,2]. At that time, just as today, the goal was to recover image depth by attaching a light-transmitting sampling array, i.e. one made from pinholes or micro lenses, to the imaging device of an otherwise traditional camera [3]. One attempt to adequately describe light rays traveling through these optical hardware components is the 4-Dimensional (4-D) light field notation [4], which gained popularity among image scientists. In principle, a captured 4-D light field is characterized by rays piercing two planes, with respective coordinate spaces (s, t) and (u, v), that are placed behind one another. Provided with the distance between these planes, the four coordinates (u, v, s, t) of a single ray give an indication of its angle and, if combined with other rays in the light field, allow depth information to be inferred. Another fundamental breakthrough in the field was the discovery of a synthetic focus variation after image acquisition [5]. This can be thought of as layering and shifting viewpoint images taken by an array of cameras and merging their pixel intensities. Subsequently, this conceptual idea was transferred to the plenoptic camera [6]. It has been pointed out that the maximal depth resolution is achieved when positioning the Micro Lens Array (MLA) one focal length away from the sensor [7]. More recently, research has investigated different MLA focus settings offering a resolution trade-off between the angular and spatial domains [8] and new related image rendering techniques [9]. To distinguish between camera types, the term Standard Plenoptic Camera (SPC) was coined in [10] to describe a setup where the image sensor is placed at the MLA's focal plane, as presented by [6].
While the SPC has made its way to the consumer photography market, our research group proposed ray models aiming to estimate distances which have been computationally brought to focus [11,12]. These articles laid the groundwork for estimating the refocusing distance by regarding specific light rays as a system of linear functions. The system's solution yields an intersection in object space indicating the distance from which rays have propagated. The experimental results supplied in recent work showed matching estimates for far distances, but incorrect approximations for objects close to the SPC [12]. A benchmark comparison of the previous distance estimator [11] with a real ray simulation software [13] revealed errors of up to 11 %. This was due to an approach that inaccurately located micro image center positions.
It is demonstrated in this study that deviations in refocusing distance predictions remain below 0.35 % for different lens designs and focus settings. The accuracy improvements rely on the assumption that chief rays impinging on Micro Image Centers (MICs) arise from the exit pupil center. The proposed solution offers instant computation and will prove useful in professional photography and motion picture arts, which require precise synthetic focus measures.
This paper is outlined as follows. Section 2 derives an efficient image synthesis to reconstruct photographs with a varying optical focus from an SPC. Based on the developed model, Section 3 aims at representing light rays as functions and shows how the refocusing distance can be located. Following this, Section 4 is concerned with evaluating claims made about the synthetic focusing distance by using real images from our customized SPC and a benchmark assessment with a real ray simulation software [13]. Conclusions are drawn in Section 5, presenting achievements and an outlook for future work.
Standard plenoptic ray model
As a starting point, we deploy the well-known thin lens equation, which can be written as

1/f_s = 1/b_s + 1/a_s, (1)

where f_s denotes the focal length, b_s the image distance and a_s the object distance with respect to a micro lens s. Since micro lenses are placed at a stationary distance f_s in front of the image sensor of an SPC, f_s equals the micro lens image distance (f_s = b_s). Therefore, f_s may be substituted for b_s in Eq. (1), which yields a_s → ∞ after subtracting the term 1/f_s. This means that rays converging at a distance f_s behind the lens have emanated from a point at an infinitely far distance a_s. Rays coming from infinity travel parallel to each other, which is known as the effect of collimation. To support this, it is assumed that image spots focusing at a distance f_s are infinitesimally small. In addition, we regard micro image sampling positions u as discrete points from which light rays are traced back through the lens components. Figure 1 shows collimated light rays entering a micro lens and leaving the main lens elements. At the micro image plane, an MIC operates as a reference point c = (M − 1)/2, where M denotes the one-dimensional (1-D) micro image resolution, which is seen to be consistent. Horizontal micro image samples are then indexed by c + i, where i ∈ [−c, c]. Horizontal micro image positions are given as u_{c+i,j}, where j denotes the 1-D index of the respective micro lens s_j. A plenoptic micro lens illuminates several pixels u_{c+i,j} and requires its lens pitch, denoted as ∆s, to be greater than the pixel pitch ∆u. Each chief ray arriving at any u_{c+i,j} exhibits a specific slope m_{c+i,j}. For example, micro lens chief rays which focus at u_{c−1,j} have a slope m_{−1,j} in common. Hence, all chief rays m_{−1,j} form a collimated light beam in front of the MLA.
In our previous model [11], it is assumed that each MIC lies on the optical axis of its corresponding micro lens. It was mentioned that this hypothesis would only hold if the main lens were at an infinite distance from the MLA [14]. Because of the finite separation distance between the main lens and the MLA, the centers of micro images deviate from their micro lens optical axes. A more realistic attempt to approximate MIC positions is to trace chief rays through the optical centers of micro and main lenses [15]. An extension of this assumption is proposed in Fig. 1(b), where the center of the aperture's exit pupil A' is seen to be the origin of the MIC chief rays. It is of particular importance to detect MICs correctly, since they are taken as reference origins in the image synthesis process. Contrary to previous approaches [11,12], all chief rays impinging on the MIC positions originate from the exit pupil center which, for simplicity, coincides with the main lens optical center in Fig. 2. All chief ray positions that are adjacent to MICs can be ascertained by a corresponding multiple of the pixel pitch ∆u.
Fig. 2. Realistic SPC ray model. The refined model considers more accurate MICs obtained from chief rays crossing the micro lens optical centers and the exit pupil center of the main lens (yellow colored rays). For convenience, the main lens is depicted as a thin lens where aperture pupils and principal planes coincide.
It has been stated in [6] that the irradiance I_{b_U} at a film plane (s, t) of a conventional camera is obtained by

I_{b_U}(s, t) = 1/b_U^2 ∬ L_{b_U}(s, t, U, V) A(U, V) cos^4(θ) dU dV, (2)

where A(·) denotes the aperture, (U, V) the main lens plane coordinate space and b_U the separation between the main lens and the film plane (s, t). The factor 1/b_U^2 is often referred to as the inverse-square law [16]. If θ is the incident ray angle, the roll-off factor cos^4(θ) describes the gradual decline in irradiance from object points at an oblique angle impinging on the film plane, also known as natural vignetting. It is implied that coordinates (s, t) represent the spatial domain in horizontal and vertical dimensions while (U, V) denote the angular light field domain. To simplify Eq. (2), a horizontal cross-section of the light field is regarded hereafter, so that L_{b_U}(s, t, U, V) becomes L_{b_U}(s, U). Thereby, subsequent declarations build on the assumption that camera parameters are equally specified in horizontal and vertical dimensions, allowing propositions to be applied to both dimensions in the same manner. Since the overall measured irradiance I_{b_U} is scalable (e.g. on electronic devices) without affecting the light field information, the inverse-square factor 1/b_U^2 may be omitted at this stage. On the supposition that the main lens aperture is completely open, the aperture term becomes A(·) = 1. To further simplify, cos^4(θ) will be neglected given that pictures do not expose natural vignetting. Provided these assumptions, Eq. (2) can be shortened, yielding

I_{b_U}(s) = ∫ L_{b_U}(s, U) dU. (3)

Suppose that the entire light field L_{b_U}(s, U) is located at plane U in the form of I_U(s, U), since all rays of a potentially captured light field travel through U. From this it follows that

L_{b_U}(s, U) = I_U(s, U), (4)

as it preserves a distinction between spatial and angular information. Figure 3 visualizes the irradiance planes while introducing a new physical sensor plane I_{f_s}(s, u) located one focal length f_s behind I_{b_U}, with u as a horizontal and v as a vertical angular sampling domain in the 2-D case. The former spatial image plane I_{b_U}(s, t) is now replaced by an MLA, enabling light to pass through and strike the new sensor plane I_{f_s}(s, u). When applying the method of similar triangles to Fig. 3, it becomes apparent that I_U(s, U) is directly proportional to I_{f_s}(s, u), which gives

I_{f_s}(s, u) ∝ I_U(s, U), (5)

where ∝ designates equality up to scale. When ignoring the scale factor in Eq. (5), which simply lowers the overall irradiance, I_{f_s}(s, u) and I_U(s, U) become equal. From this it follows that Eq. (3) can be written as

I_{b_U}(s) = ∫ I_{f_s}(s, u) du. (6)

Fig. 3. Irradiance planes. If light rays emanate from an arbitrary point in object space, the measured energy I_U(s, U) at the main lens' aperture is seen to be concentrated on a focused point I_{b_U}(s) at the MLA and distributed over the sensor area I_{f_s}(s, u). Neglecting the presence of light absorptions and reflections, I_{f_s}(s, u) is proportional to I_U(s, U), which may be proven by comparing similar triangles.
Due to the human visual perception, photosensitive sensors limit the irradiance signal spectrum to the visible wavelength range. For this purpose, bandpass filters are placed in the optical path of present-day cameras, which prevents infrared and ultraviolet radiation from being captured. Therefore, Eq. (6) will be rewritten as

E_{b_U}(s) = ∫ E_{f_s}(s, u) du, (7)

in order that photometric illuminances E_{b_U} and E_{f_s} substitute irradiances I_{b_U} and I_{f_s} in accordance with the luminosity function [17]. Besides, it is assumed that E_{f_s}(s, u) is a monochromatic signal represented as a gray-scale image. Recalling the index notations of the derived model, a discrete equivalent of Eq. (7) may be given by

E_{b_U}[s_j] = Σ_{i=−c}^{c} E_{f_s}[u_{c+i}, s_j], (8)

provided that the sample width ∆u is neglected here, as it simply scales the overall illuminance E_{b_U}[s_j] while preserving relative brightness levels. It is further implied that indices in the vertical domain are constant, meaning that only a single horizontal row of sampled s_j and u_{c+i} is regarded in the following. Nonetheless, subsequent formulas can be applied in the vertical direction under the assumption that indices are interchangeable and thus of the same size. Equation (8) serves as a basis for refocusing syntheses in the spatial domain.
Invoking Lambertian reflectance, an object point scatters light uniformly in all directions, meaning that each ray coming from that point carries the same energy. With this, an object placed at plane a = 0 reflects light with a luminous emittance M_a. An example highlighting the rays' path starting from a spatial point s at object plane M_0 is shown in Fig. 4. Closer inspection of Fig. 4 reveals that the luminous emittance M_0 at a discrete point s_0 may be seen as projected onto a micro lens s_0 and scattered across micro image pixels u. In the absence of reflection and absorption at the lens material, a synthesized image E_a[s_j] at the MLA plane (a = 0) is recovered by integrating all illuminance values u_{c+i} for each s_j. Taking E_0[s_0] as an example, this is mathematically given by

E_0[s_0] = Σ_{i=−c}^{c} E_{f_s}[u_{c+i}, s_0]. (9)

Similarly, an adjacent spatial point s_1 in E_0 can be retrieved by

E_0[s_1] = Σ_{i=−c}^{c} E_{f_s}[u_{c+i}, s_1]. (10)

Developing this concept further makes it obvious that

E_0[s_j] = Σ_{i=−c}^{c} E_{f_s}[u_{c+i}, s_j] (11)

reconstructs an image E_0[s_j] as it appeared on the MLA by summing up all pixels within each micro image to form a respective spatial point of that particular plane. As claimed, refocusing allows more than only one focused image plane to be recovered. Figure 5 depicts rays emitted from an object point located closer to the camera device (a = 1). For comprehensibility, light rays have been extended on the image side in Fig. 5, yielding an intersection at a distance where the corresponding image point would have focused without the MLA and image sensor. The presence of both, however, enables the illuminance of an image point to be retrieved as it would have appeared with a conventional sensor at E_1. Further analysis of light rays in Fig. 5 unveils the coordinate pairs (s_j, u_{c+i}) that have to be considered in an integration process synthesizing E_1. Accordingly, the illuminance E_1 at point s_0 can be obtained as follows:

E_1[s_0] = E_{f_s}[u_{c−c}, s_2] + E_{f_s}[u_c, s_1] + E_{f_s}[u_{c+c}, s_0]. (12)

The adjacent image point s_1 is formed by calculating

E_1[s_1] = E_{f_s}[u_{c−c}, s_3] + E_{f_s}[u_c, s_2] + E_{f_s}[u_{c+c}, s_1]. (13)

The observation that the index j has simply been incremented by 1 from Eq. (12) to Eq. (13) allows conclusions to be drawn about the final refocusing synthesis equation, which reads

E_a[s_j] = Σ_{i=−c}^{c} E_{f_s}[u_{c+i}, s_{j+a(c−i)}] (14)

and satisfies any plane a to be recovered. In Eq. (14) it is assumed that synthesized intensities E_a[s_j] ignore clipping, which occurs when quantized values exceed the maximum amplitude of the given bit depth range. Thus, Eq. (14) only applies to underexposed plenoptic camera images on condition that peaks in E_a[s_j] do not surpass the quantization limit. To prevent clipping during the refocusing process, one can simply average intensities E_{f_s} prior to summing them up, as provided by

E_a[s_j] = (1/M) Σ_{i=−c}^{c} E_{f_s}[u_{c+i}, s_{j+a(c−i)}], (15)

which, on the downside, requires an additional computation step to perform the division. Letting a ∈ Q involves an interpolation of micro images, which increases the spatial and angular resolution at the same time. In such a scenario, the denominator of a fractional number a represents the upsampling factor for the number of micro images. Note that our implementation of the image synthesis employs an algorithmic MIC detection with sub-pixel precision, as suggested by Cho et al. [18], and resamples the angular domain u_{c+i,j} accordingly to suppress image artifacts in reconstructed photographs.
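The synthesis of Eq. (15) amounts to a shift-and-sum over micro images. The following is a minimal sketch under the assumption that the raw sensor row has already been rearranged into an array E[j, i] holding pixel i of micro image j; index conventions and names are illustrative, not taken from the paper's Matlab implementation [19].

```python
import numpy as np

# Sketch of the shift-and-sum refocusing synthesis (Eq. 15) for one
# horizontal row. E[j, i] is assumed to hold pixel i of micro image j.

def refocus_row(E: np.ndarray, a: int) -> np.ndarray:
    """Synthesize one row of a photograph refocused to plane a."""
    J, M = E.shape             # J micro lenses, M pixels per micro image
    c = (M - 1) // 2           # micro image center index
    out = np.zeros(J)
    for j in range(J):
        acc, n = 0.0, 0
        for i in range(-c, c + 1):
            jj = j + a * (c - i)      # micro lens paired with sample c+i
            if 0 <= jj < J:           # skip samples outside the sensor
                acc += E[jj, c + i]
                n += 1
        out[j] = acc / max(n, 1)      # averaging avoids clipping (Eq. 15)
    return out

# Usage example with synthetic data: 64 micro lenses, M = 11 samples.
rng = np.random.default_rng(0)
E = rng.random((64, 11))
photo_a0 = refocus_row(E, a=0)   # focus at the MLA plane (Eq. 11)
photo_a1 = refocus_row(E, a=1)   # synthetically refocused plane
```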
Focus range estimation
In geometrical optics, light rays are viewed as straight lines with a certain angle in a given interval. These lines can be represented by linear functions of z possessing a slope m. By regarding the rays' emission as an intersection of ray functions, it may be viable to pinpoint their local origin. This position is seen to indicate the focusing distance of a refocused photograph. In order for it to function, the proposed concept requires the geometry and thus the parameters of the camera system to be known. This section develops a theoretical approach based on the realistic SPC model to estimate the distance and Depth of Field (DoF) that has been computationally brought into focus. A Matlab implementation of the proposed distance estimator can be found online (see Code 1, [19]).
Refocusing distance
In previous studies [11,12], the refocusing distance has been found by geometrically tracing light rays through the lenses and finding their intersection in object space. Alternatively, rays can be seen as intersecting behind the sensor, which is illustrated in Fig. 5. The convergence of a selected image-side ray pair indicates where the respective image point would have focused in the absence of the MLA and sensor. Locating this image point provides a refocused image distance b'_U, which in turn yields the refocused object-side distance a'_U when applying the thin lens equation. It will be demonstrated hereafter that the ray intersection on the image side requires fewer computational steps, as the ascertainment of two object-side ray slopes becomes redundant. For conciseness, we trace rays along the central horizontal axis, although subsequent equations can be equally employed in the vertical domain, which produces the same distance result. First of all, it is necessary to define the optical center of an SPC image by letting the micro lens index be j = o, where

o = (J − 1)/2. (16)

Here, J is the total number of micro lenses in the horizontal direction. Given the micro lens diameter ∆s, the horizontal position of a micro lens' optical center is given by

s_j = (j − o) ∆s, (17)

where j is seen to start counting from 0 at the leftmost micro lens with respect to the main lens optical axis. As rays impinging on MICs are seen to connect an optical center of a micro lens s_j and the exit pupil A', their respective slope m_{c,j} may be given by

m_{c,j} = s_j / d_{A'}, (18)

where d_{A'} denotes the separation between the exit pupil plane and the MLA's front vertex, taking the exit pupil center as the origin with z increasing toward the sensor. Provided the MIC chief ray slope m_{c,j}, an MIC position u_{c,j} is estimated by extending m_{c,j} until it intersects the sensor plane, which is calculated by

u_{c,j} = s_j + m_{c,j} f_s. (19)

Central positions of adjacent pixels u_{c+i,j} are given by the number of pixels i separating u_{c+i,j} from the center u_{c,j}. To calculate u_{c+i,j}, we simply compute

u_{c+i,j} = u_{c,j} + i ∆u, (20)

which requires the pixel width ∆u. The slope m_{c+i,j} of a ray that hits a micro image at position u_{c+i,j} is obtained by

m_{c+i,j} = (s_j − u_{c+i,j}) / f_s. (21)

With this, each ray on the image side can be expressed as a linear function given by

f_{c+i,j}(z) = m_{c+i,j} z + u_{c+i,j}. (22)

At this stage, it may be worth discussing the selection of appropriate rays for the intersection. A set of two chief ray functions meets the requirements to locate an object plane a, because all adjacent ray intersections lie on the same planar surface parallel to the sensor. It is of key importance, however, to select a ray pair that intersects at a desired plane a. In respect of the refocusing synthesis in Eq. (15), a system of linear ray functions is found by letting the index subscript in f_{c+i,j}(z) be A = {c + i, j} = {c − c, e} for the first chief ray, where e is an arbitrary but valid micro lens s_e, and B = {c + i, j} = {c + c, e − a(M − 1)} for the second ray. Given the synthesis example depicted in Fig. 5, parameters would be e = 2, a = 1, M = 3, c = 1, such that the corresponding ray functions are f_{0,2}(z) for E_{f_s}[u_0, s_2] and f_{2,0}(z) for E_{f_s}[u_2, s_0]. Finally, the intersection of the chosen chief ray pair is found by solving

f_A(z) = f_B(z), (23)

which yields the image-side distance, here denoted d'_a, from the MLA to the intersection where the rays would have focused. Note that d'_a is negative if the intersection occurs behind the MLA. Having d'_a, we get the new image distance b'_U of the particular refocused plane by calculating b'_U = b_U − d'_a. Related object distances a'_U are retrieved by deploying the thin lens equation in a way that

a'_U = (1/f_U − 1/b'_U)^{−1}. (24)

With respect to the MLA location, the final refocusing distance d_a can be acquired by summing up all parameters separating the MLA from the principal plane H_{1U}, as demonstrated in

d_a = a'_U + H_{1U}H_{2U} + b_U, (25)

with H_{1U}H_{2U} as the distance which separates the principal planes from each other.
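A compact, self-contained sketch of this chain of equations is given below, using the sign conventions adopted in the reconstruction above; all numeric parameters in the example call are placeholders rather than the camera data of Section 4.

```python
# Sketch of the image-side intersection method (Eqs. 16-25). The sign
# conventions follow the reconstruction above; the values below are
# illustrative placeholders, not the paper's camera specification.

def refocusing_distance(a, e, M, J, ds, du, fs, d_A, f_U, b_U, H1H2):
    """Estimate the object-side refocusing distance d_a from the MLA."""
    c = (M - 1) // 2
    o = (J - 1) / 2.0

    def ray(i, j):
        s_j = (j - o) * ds              # micro lens center (Eq. 17)
        m_mic = s_j / d_A               # MIC chief ray slope (Eq. 18)
        u_mic = s_j + m_mic * fs        # MIC position on sensor (Eq. 19)
        u = u_mic + i * du              # adjacent pixel center (Eq. 20)
        m = (s_j - u) / fs              # image-side ray slope (Eq. 21)
        return m, u                     # ray: f(z) = m * z + u (Eq. 22)

    m_A, u_A = ray(-c, e)                       # ray pair A, B (Sec. 3.1)
    m_B, u_B = ray(+c, e - a * (M - 1))
    z = (u_B - u_A) / (m_A - m_B)               # f_A(z) = f_B(z) (Eq. 23)
    d_img = z - fs                              # intersection rel. to MLA
    b_new = b_U - d_img                         # refocused image distance
    a_new = 1.0 / (1.0 / f_U - 1.0 / b_new)     # thin lens (Eq. 24)
    return a_new + H1H2 + b_U                   # object-side d_a (Eq. 25)

# Example with made-up parameters (units: meters).
print(refocusing_distance(a=1, e=50, M=11, J=101, ds=125e-6, du=9e-6,
                          fs=2.75e-3, d_A=0.1, f_U=0.193, b_U=0.2, H1H2=0.05))
```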
Depth of field
A focused image spot of finite size, by implication, causes the focused depth range in object space to be finite as well. In conventional photography, this range is called Depth of Field (DoF). Optical phenomena such as aberrations or diffraction are known to limit the spatial extent of projected image points. However, most kinds of lens aberrations can be nominally eliminated through optical lens design (e.g. aspherical lenses, glasses of different dispersion). In that case, the circle of least confusion solely depends on diffraction, making such an imaging system diffraction-limited. Thereby, light waves that encounter a pinhole, aperture or slit of a size comparable to the wavelength λ propagate in all directions and interfere at an image plane, inducing wave superposition due to the rays' varying path lengths and corresponding differences in phase. A diffracted image point is made up of a central disc possessing the major energy, surrounded by rings of alternating intensity, which is often referred to as the Airy pattern [16]. According to Hecht [16], the radius r_A of an Airy pattern's central peak disc is approximately given by

r_A ≈ 1.22 λ b_U / A, (26)

with A as the aperture diameter. To assess the optical resolution limit of a lens, it is straightforward and sufficient to refer to the Rayleigh criterion. The Rayleigh criterion states that two image points of equal irradiance in the form of an Airy pattern need to be separated by a minimum distance (∆ℓ)_min = r_A to be visually distinguishable. Let us suppose a non-diffraction-limited camera system in which the pixel pitch ∆u is larger than or equal to (∆ℓ)_min at the smallest aperture diameter A. In this case, the DoF merely depends on the pixel pitch ∆u. To distinguish between different pixel positions, we define three types of rays that are class-divided into:

• central rays at pixel centers u_{c+i,j}
• inner rays at pixel borders u_{(c+i,j)−} towards the MIC
• outer rays at pixel borders u_{(c+i,j)+} closer to the micro image edge

For conciseness, the image-side based intersection method nearby the MLA is applied hereafter. Nonetheless, it is feasible to derive DoF distances from an intersection in object space.
Fig. 6. Refocusing distance estimation where a = 1. Taking the example from Fig. 5, the diagram illustrates parameters that help to find the distance at which refocused photographs exhibit best focus. The proposed model offers two ways to accomplish this by regarding rays as intersecting linear functions in object and image space. DoF border d_{1−} cannot be attained via image-based intersection, as inner rays do not converge on the image side, which is a consequence of a_{U−} < f_U. Distances d_{a±} are negative in case they are located behind the MLA and positive otherwise.
Similar to the acquisition of central ray positions u_{c+i,j} in Section 3.1, pixel border positions u_{(c+i,j)±} may be obtained as follows:

u_{(c+i,j)±} = u_{c,j} + i ∆u ± ∆u/2, (27)

where u_{c,j} is taken from Eq. (19). Given u_{(c+i,j)±} as spatial points at pixel borders, chief ray slopes m_{(c+i,j)±} starting from these respective locations are given by

m_{(c+i,j)±} = (s_j − u_{(c+i,j)±}) / f_s. (28)

Since border points are assumed to be infinitely small and positioned at the distance of one micro lens focal length, light rays ending up at u_{(c+i,j)±} form collimated beams between s and U, propagating with respective slopes m_{(c+i,j)±} in that particular interval. The range that spans from the furthest to the closest intersection of these beams defines the DoF. Closer inspection of Fig. 6 reveals that inner rays intersect at the close DoF boundary and pass through external micro lens edges. Outer rays, however, yield an intersection providing the furthest DoF boundary and cross internal micro lens edges. Therefore, it is of importance to determine the micro lens edges s_{j±}, which is accomplished by

s_{j±} = s_j ± ∆s/2. (29)

Outer and inner rays converging on the image side are seen to disregard the refraction at micro lenses and continue their path with m_{(c+i,j)±} from the micro lens edge, as depicted in Fig. 6. Hence, a linear function representing a light ray at a pixel border is given by

f_{(c+i,j)±}(z) = m_{(c+i,j)±} z + s_{j±}. (30)

Image-side intersections at d'_{a−} for the nearby and d'_{a+} for the far-away DoF borders are found where

f_{A±}(z) = f_{B±}(z), (31)

by recalling that A = {c + i, j} and that A±, B± select a desired DoF ray pair A± = {c − c, e}, B± = {c + c, e − a(M − 1)}, as discussed in Section 3.1. We get the new image distances b'_{U±} of the particular refocused DoF boundaries when calculating b'_{U±} = b_U − d'_{a±}. Related DoF object distances a'_{U±} are retrieved by deploying the thin lens equation such that

a'_{U±} = (1/f_U − 1/b'_{U±})^{−1}. (32)

With respect to the MLA location, the DoF boundary distances d_{a±} can be acquired by summing up all parameters separating the MLA from the principal plane H_{1U}, as demonstrated in

d_{a±} = a'_{U±} + H_{1U}H_{2U} + b_U. (33)

Finally, the difference of the near limit d_{a−} and the far limit d_{a+} yields the DoF_a, which reads

DoF_a = d_{a+} − d_{a−}. (34)

The contrived model implies that the micro image size directly affects the refocusing and DoF performance. A reduction of M, for example via cropping each micro image, causes depth aliasing due to downsampling in the angular domain. This consequently lowers the number of refocused image slices and increases their DoF. Upsampling M, in turn, raises the number of refocused photographs and shrinks the DoF per slice. An evaluation of these statements is carried out in the following section, where results are presented.
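Analogously, the DoF borders can be sketched by intersecting border rays. Note that the pairing of pixel borders with micro lens edges below follows the verbal description above and is an assumption; it is meant as an illustration, not as the paper's exact bookkeeping.

```python
# Sketch of the DoF border estimation (Eqs. 27-34). Inner rays (pixel
# borders toward the MIC) yield the near border d_a-, outer rays the
# far border d_a+. The edge/border pairing is assumed; all values are
# placeholders.

def dof_borders(a, e, M, J, ds, du, fs, d_A, f_U, b_U, H1H2):
    c = (M - 1) // 2
    o = (J - 1) / 2.0

    def border_ray(i, j, inner):
        s_j = (j - o) * ds
        u_mic = s_j + (s_j / d_A) * fs                  # Eq. 19
        side = 1.0 if i >= 0 else -1.0                  # pixel orientation
        sgn = -1.0 if inner else 1.0                    # toward MIC or edge
        u = u_mic + i * du + sgn * side * du / 2        # pixel border (Eq. 27)
        m = (s_j - u) / fs                              # border slope (Eq. 28)
        s_edge = s_j - sgn * side * ds / 2              # micro lens edge (Eq. 29)
        return m, s_edge                                # f(z) = m*z + s_edge (Eq. 30)

    def object_distance(r_A, r_B):
        (m_A, x_A), (m_B, x_B) = r_A, r_B
        z = (x_B - x_A) / (m_A - m_B)                   # f_A±(z) = f_B±(z) (Eq. 31)
        b_new = b_U - z                                 # rays start at the MLA
        a_new = 1.0 / (1.0 / f_U - 1.0 / b_new)         # thin lens (Eq. 32)
        return a_new + H1H2 + b_U                       # d_a± (Eq. 33)

    j_B = e - a * (M - 1)
    d_near = object_distance(border_ray(-c, e, True), border_ray(+c, j_B, True))
    d_far = object_distance(border_ray(-c, e, False), border_ray(+c, j_B, False))
    return d_near, d_far                                # DoF_a = d_far - d_near (Eq. 34)

# Same placeholder parameters as in the previous sketch (units: meters).
print(dof_borders(a=1, e=50, M=11, J=101, ds=125e-6, du=9e-6,
                  fs=2.75e-3, d_A=0.1, f_U=0.193, b_U=0.2, H1H2=0.05))
```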
Validation
For the experimental work, we conceived a customized camera which accommodates a full-frame sensor with 4008 by 2672 active pixels and a pixel pitch of ∆u = 9 µm. A raw photograph used in the experiment can be found in the Appendix. The optical design is presented in what follows. Modern objective lenses are known to change the optical focus by shifting particular lens groups while other elements remain static, which, in turn, alters the cardinal point positions of that lens system. To preserve the previously elaborated principal plane locations, a variation of the image distance b_U is achieved by shifting the MLA compound sensor away from the objective lens while its focus ring remains at infinity. The only limitation is, however, that the space inside our customized camera confines the shift range of the sensor system to an overall focus distance of d_f ≈ 4 m, with d_f as the distance from the MLA's front vertex to the plane the main lens is focused on. Due to this, solely two focus settings (d_f → ∞ and d_f ≈ 4 m) are subject to examination in the following experiment. With respect to the thin lens equation, b_U is obtained via

b_U = (1/f_U − 1/a_U)^{−1},

where a_U = d_f − b_U − H_{1U}H_{2U}.
By substituting for a_U, however, it becomes obvious that b_U is an input and an output variable at the same time, which poses a classical chicken-and-egg problem. To solve this, we initially set the input b_U := f_U, substitute the output b_U for the input variable and iterate this procedure until both values of b_U are identical.
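The fixed-point iteration for b_U can be written in a few lines; the following sketch assumes d_f is measured from the MLA to the focused object plane, and the example parameters are hypothetical.

```python
# Sketch of the fixed-point iteration for the image distance b_U
# described above. Parameter values in the example are made up.

def image_distance(f_U: float, d_f: float, H1H2: float,
                   tol: float = 1e-9, max_iter: int = 100) -> float:
    """Iterate b_U = (1/f_U - 1/a_U)^-1 with a_U = d_f - b_U - H1H2."""
    b_U = f_U                          # initial guess: b_U := f_U
    for _ in range(max_iter):
        a_U = d_f - b_U - H1H2         # object distance from H_1U
        b_next = 1.0 / (1.0 / f_U - 1.0 / a_U)
        if abs(b_next - b_U) < tol:    # stop when input and output agree
            return b_next
        b_U = b_next
    return b_U

# Example: f_U = 193 mm lens focused at d_f = 4 m, H1H2 = 50 mm (made up).
print(image_distance(0.193, 4.0, 0.05))
```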
Lens specification

Objective lenses are denoted as f_193, f_90 and f_197. The specifications for f_193 and f_90 are based on [20,21], whereas f_197 is measured experimentally using the approach in [23]. Calculated image, exit pupil and principal plane distances for the main lenses are provided in Table 2. Note that parameters are given with respect to λ = 550 nm. Focal lengths can be found in the image distance column for infinity focus. A Zemax file of a plenoptic camera with f_193 and MLA (II.) is provided online (see Dataset 1, [22]).
Experiments
On the basis of raw light field photographs, this section aims to evaluate the accuracy of predicted refocusing distances as proposed in Section 3. The challenge here is to verify whether objects placed at a known distance exhibit best focus at the predicted refocusing distance. Hence, the evaluation requires an algorithm to sweep for blurred regions in a stack of photographs with varying focus. One obvious attempt to measure the blur of an image is to analyze it in the frequency domain. Mavridaki and Mezaris [24] follow this principle in a recent study to assess the blur in a single image. To employ their proposition, modifications are necessary, as the distance validation requires the degree of blur to be detected for particular image portions in a stack of photographs with varying focus. Here, the conceptual idea is to select a Region of Interest (RoI) surrounding the same object in each refocused image. Unlike in Section 3, where the vertical index h in t_h is constant for conciseness, refocused images are regarded in vertical and horizontal direction in this section, such that a refocused photograph in 2-D is given as E_a[s_j, t_h]. A RoI is a cropped version of a refocused photograph that can be selected as desired, with image borders spanning from the ξ-th to the Ξ-th pixel in the horizontal and the π-th to the Π-th pixel in the vertical direction. Care has been taken to ensure that a RoI's bounding box precisely crops the object at the same relative position in each image of the focal stack. When Fourier-transforming all RoIs of the focal stack, the key indicator for a blurred RoI is a decrease in its high-frequency power. To implement the proposed concept, we first perform the 2-D Discrete Fourier Transformation and extract the magnitude X[σ_ω, ρ_ψ], where κ = √−1 is the imaginary unit and |·| computes the absolute value, leaving out the phase spectrum. Provided the 2-D magnitude signal, its total energy TE is computed over the first quarter of the unshifted magnitude spectrum, with Ω = (Ξ − ξ)/2 and Ψ = (Π − π)/2 as the cropping borders. In order to identify the energy of high-frequency elements HE, we calculate the power of low frequencies and subtract it from TE, where Q_H and Q_V are limits in the ranges {1, ..., Ω − 1} and {1, ..., Ψ − 1} separating low from high frequencies. Finally, the sharpness S of a refocused RoI is obtained by

S = HE / TE,

serving as the blur metric. Thus, each RoI focal stack produces a set of S values, which is normalized and given as a function of the refocusing variable a. The maximum in S thereby indicates best focus for a selected RoI object at the respective a.
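A minimal sketch of this sharpness measure, assuming a gray-scale RoI and the frequency borders Q = Ω/100 chosen in the experiments below (variable names are illustrative):

```python
import numpy as np

# Sketch of the frequency-domain sharpness measure described above:
# S = (high-frequency energy) / (total energy) of a RoI's DFT magnitude.

def sharpness(roi: np.ndarray) -> float:
    """Return the blur metric S for a gray-scale RoI (2-D array)."""
    mag = np.abs(np.fft.fft2(roi))          # magnitude, phase discarded
    Omega = roi.shape[1] // 2               # first quarter of the
    Psi = roi.shape[0] // 2                 # unshifted spectrum
    quarter = mag[:Psi, :Omega]
    TE = quarter.sum()                      # total energy
    Q_v = max(Psi // 100, 1)                # low/high frequency borders
    Q_h = max(Omega // 100, 1)
    LE = quarter[:Q_v, :Q_h].sum()          # low-frequency energy
    HE = TE - LE                            # high-frequency energy
    return HE / TE

# Usage: evaluate S across a focal stack and pick the sharpest slice.
stack = [np.random.rand(64, 64) for _ in range(5)]   # placeholder RoIs
scores = np.array([sharpness(roi) for roi in stack])
best_slice = int(np.argmax(scores / scores.max()))   # normalized S
```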
To benchmark the proposed refocusing distance estimates, an experiment is conducted similar to that from a previous publication [12]. As opposed to [12], where b_U was taken as the origin of the MIC chief rays, here d_{A'} is given as the origin for rays that lead to MIC positions. Besides this, the frequency borders Q_H = Ω/100 and Q_V = Ψ/100 are relative to the cropped image resolution.
To make statements about the model accuracy, real objects are placed at predicted distances d_a. Recall that d_a is the distance from the MLA to a respective refocused object plane M_a. As the MLA is embedded in the camera body and hence inaccessible, the objective lens' front panel was chosen to be the distance measurement origin for d_a. This causes a displacement of 12.7 cm towards object space (d_a − 12.7 cm), which has been accounted for in the predictions of d_a presented in Tables 3(a) and 3(b). Moreover, Tables 3(a) and 3(b) list predicted DoF borders d_{a±} at different settings of M and b_U while highlighting object planes a. Figures 7 and 8 reveal outcomes of the refocusing distance validation by showing refocused images E_a[s_j, t_h] and RoIs at different slices a, as well as related blur metric results S. The reason why S produces relatively large values around predicted blur metric peaks is that objects may lie within the DoF of adjacent slices a and thus can be in focus among several refocused image slices. Taking slice a = 4/11 from Table 3(b) as an example, it becomes obvious that its object distance d_{4/11} = 186 cm falls inside the DoF range of slice a = 5/11 with d_{5/11+} = 194 cm and d_{5/11−} = 140 cm, because d_{5/11+} > d_{4/11} > d_{5/11−}. Section 3.2 shows that reducing the micro image resolution M yields a wider DoF, which suggests using the largest possible M, as this minimizes the effect of wide DoFs. Experimentations given in Fig. 8 were carried out with the maximum directional resolution M = 11, since M = 13 would involve pixels that start to suffer from vignetting and micro image crosstalk. Although objects are covered by the DoFs of surrounding slices, the presented blur metric still detects the predicted sharpness peaks, as seen in Figs. 7 and 8.
A more insightful overview illustrating the distance estimation performance of the proposed method is given in Figs. 7(r) and 8(r). Therein, each curve peak indicates the least blur for the respective RoI of a certain object. Vertical lines represent the predicted distance d_a where objects were situated. Hence, the best case scenario is attained when a curve peak and its corresponding vertical line coincide. This would signify that predicted and measured best focus for a particular distance are in line with each other. While results in [12] exhibit errors in predicting the distance of nearby objects, refocused distance estimates in Figs. 7(r) and 8(r) match the least-blur peaks S for each object, which corresponds to a 0 % error. It also suggests that the proposed refocusing distance estimator takes alternative lens focus settings (b_U > f_U) into account without causing a deviation, which was not investigated in [12]. The improvement is mainly due to a correct MIC approximation. A more precise error figure could be obtained by increasing the SPC's depth resolution. This inevitably requires upsampling the angular domain, meaning more pixels per micro image. As our camera features an optimized micro image resolution (M = 11), which is further upsampled, the provided outcomes are considered to be our accuracy limit. The following section aims at gaining quantitative results by using an optical design software [13].
Simulation
The validation of distance predictions using an optical design software [13] is achieved by firing central rays from the sensor side into object space. Inner and outer rays, however, start from micro lens edges with a slope calculated from the respective pixel borders. The given pixel and micro lens pitch entail a micro image resolution of M = 13. Due to the paraxial approximation, rays starting from samples at the micro image border cause the largest possible errors. To test the prediction limits, simulation results are based on A = {0, e} and B = {12, e − a(M − 1)} unless specified otherwise. To align rays, e is dimensioned such that A and B are symmetric with an intersection close to the optical axis z_U (e.g. e = 0, 6, 12, ...). DoF rays A± and B± are fired from micro lens edges. Ray slopes build on MIC predictions obtained by Eq. (19). Refocusing distances in simulation are measured by intersections of corresponding rays in object space. Exemplary screenshots are shown in Fig. 9. The observation in Figs. 9(a) to 9(c) is that the DoF shrinks with increasing parameter a, which is reminiscent of the focus behavior in traditional cameras. Ray intersections in Figs. 9(d) to 9(f) contain simulation results with a fixed a but varying M. As anticipated in Section 3, the DoF becomes larger with fewer directional samples u and vice versa.
To benchmark the prediction, relative errors are provided as E_RR. Tables 4 to 6 show that each error of the refocusing distance prediction remains below 0.35 %. This is a significant improvement compared to previous results [11], which exhibited errors of up to 11 %. The main reason for the enhancement is the more accurate MIC prediction. While [11] was based on an ideal SPC ray model where MICs are seen to be at the height of s_j, the refined model takes actual MICs into consideration by connecting chief rays from the exit pupil's center to the micro lens centers.
Refocusing on narrow planes is achieved with a successive increase in a. Thereby, prediction results move further away from the simulation, which is reflected in slightly increasing errors. This may be explained by the fact that short distances d_a and d_{a±} force light ray slopes to become steeper, which counteracts the paraxial approximation in geometrical optics. As a result, aberrations occur that are not taken into account, which, in turn, deteriorates the prediction accuracy. When the objective lens is set to d_f → ∞ (a_U → ∞) and the refocusing value amounts to a = 0, central rays travel in a parallel manner whereas outer rays even diverge and therefore never intersect. In this case, only the distance to the nearby DoF border, also known as the hyperfocal distance, can be obtained from the inner rays. This is given by d_{a−} in the first row of Table 4. The fourth row of the measurement data, where a = 4 and d_f → ∞ for f_193, contains an empty field in the d_{a−} simulation column. This is due to the fact that the corresponding inner rays lead to an intersection inside the objective lens, which turns out to be an invalid refocusing result. Since the new image distance is smaller than the focal length (b_U < f_U), results of this particular setting prove to be impractical as they exceed the natural focusing limit.
Despite promising results, the first set of analyses merely examined the impact of the focus distance d_f (a_U). In order to assess the effect of the MLA focal length parameter f_s, the simulation process has been repeated using MLA (I.), with results provided in Table 5. Comparing the outcomes with Table 4, distances d_{a±} suggest that a reduction in f_s moves refocused object planes further away from the camera when d_f → ∞. Interestingly, the opposite occurs when focusing with d_f = 1.5 m.
According to the data in Tables 4 and 5, we can infer that d_a ≈ d_f if a = 0, which means that synthetically focusing with a = 0 results in a focusing distance as with a conventional camera having a traditional sensor at the position of the MLA. Tracing rays according to our model yields more accurate results in the optical design software [13] than by solving Eq. (23). However, deviations of less than 0.35 % are insignificant. Implementing the model in a high-level programming language (see Code 1) outperforms the real ray simulation in terms of computation time. Using a timer, the image-side based method presented in Section 3 takes about 0.002 to 0.005 seconds to compute d_a and d_{a±} for each a on an Intel Core i7-3770 CPU @ 3.40 GHz system, whereas modeling a plenoptic lens design and measuring distances by hand can take up to a business day.
Conclusion
In summary, it is now possible to state that the distance to which an SPC photograph is refocused can be accurately predicted when deploying the proposed ray model and image synthesis. Flexibility and precision in focus and DoF variation after image capture can be useful in professional photography as well as motion picture arts. If combined with the presented blur metric, the conceived refocusing distance estimator allows an SPC to predict an object's distance. This can be an important feature for robots in space or cars tracking objects in road traffic.
Our model has been experimentally verified using a customized SPC without exhibiting deviations when objects were placed at predicted distances. An extensive benchmark comparison with an optical design software [13] results in quantitative errors of up to 0.35 % over a 24 m range. This indicates a significant accuracy improvement over our previous method. Small tolerances in simulation are due to optical aberrations, which are sufficiently suppressed in present-day objective lenses. Simulation results further support the assumption that DoF ranges shrink when refocusing closer, a focus behavior similar to that of conventional cameras.
It is unknown at this stage to what extent the presented method applies to Fourier Slice Photography [7], depth-from-defocus cues [25], or other types of plenoptic cameras [9,10,26]. Future studies on light ray trajectories with different MLA positions are therefore recommended, as this exceeds the scope of the provided research.
Fig. 1. Lens components. (a) single micro lens s_j with diameter ∆s and its chief ray m_{c+i,j} based on sensor sampling positions c + i, which are separated by ∆u; (b) chief ray trajectories where red colored crossbars signify gaps between MICs and the respective micro lens optical axes. Rays arriving at MICs arise from the center of the exit pupil A″.
The ray intersection yields the image-side distance, denoted d_a, from the MLA to the point where the rays would have focused. Note that d_a is negative if the intersection occurs behind the MLA. Given d_a, the new image distance b_U′ of the particular refocused plane follows from b_U′ = b_U − d_a.
Recall that A = {c + i, j} and that A±, B± select a desired DoF ray pair A± = {c − c, e}, B± = {c + c, e − a(M − 1)}, as discussed in Section 3.1. The new image distances b_U±′ of the particular refocused DoF boundaries follow from b_U±′ = b_U − d_a±. Moreover, Tables 3(a) and 3(b) list predicted DoF borders d_a± at different settings of M and b_U while highlighting object planes a.
Fig. 9. Real ray tracing simulation showing intersecting inner (red), outer (black), and central rays (cyan) with varying a and M. The consistent scale allows the results, taken from the f_90 lens with d_f → ∞ focus and MLA (II.), to be compared. Screenshots (a) to (c) have a constant micro image size (M = 13) and suggest that the DoF shrinks with ascending a. In contrast, the DoF grows for a fixed refocusing plane (a = 4) when reducing the samples in M, as seen in (d) to (f).
Fig. 3 visualizes the irradiance planes and introduces a new physical sensor plane I_{f_s}(s, u) located one focal length f_s behind I_{b_U}, with u as a horizontal and v as a vertical angular sampling domain in the 2-D case. The former spatial image plane I_{b_U}(s, t) is now replaced by an MLA, enabling light to pass through and strike the new sensor plane I_{f_s}(s, u). Applying the method of similar triangles to Fig. 3 makes it apparent that I_U(s, U) is directly proportional to I_{f_s}(s, u).
Table 1. Parameters of the two micro lens specifications, denoted MLA (I.) and (II.), used in the subsequent experiments [13]. In addition to the input variables needed for the proposed refocusing distance estimation, Table 1 contains relevant parameters such as the thickness t_s, refractive index n, radii of curvature R_s1 and R_s2, principal plane distance H_1s H_2s, and the spacing d_s between the MLA back vertex and the sensor plane, which are required for micro lens modeling in an optical design software environment [13].
Table 3. Predicted refocusing distances d_a and d_a±.
Table 5. Refocusing distance comparison for f_193 with MLA (I.) and M = 13.
A third experimental validation was undertaken to investigate the impact of the main lens focal length parameter f_U. As Table 6 shows, using a shorter f_U implies a rapid decline in d_a± with ascending a. From this observation it follows that the depth sampling rate of refocused image slices is much denser for large f_U. It can be concluded that the refocusing distance d_a± drops with decreasing main lens focusing distance d_f, ascending refocusing parameter a, enlarging MLA focal length f_s, and reducing objective lens focal length f_U, and vice versa.
"Physics"
] |
Bottom-up dust nucleation theory in oxygen-rich evolved stars II. Magnesium and calcium aluminate clusters
Spinel (MgAl$_{2}$O$_{4}$) and krotite (CaAl$_{2}$O$_{4}$) are alternative candidates to alumina (Al$_2$O$_3$) as primary dust condensates in the atmospheres of oxygen-rich evolved stars. Moreover, spinel was proposed as a potential carrier of the circumstellar 13 $\mu$m feature. However, the formation of nucleating spinel clusters is challenging; in particular, the inclusion of Mg constitutes a kinetic bottleneck. We aim to understand the initial steps of cosmic dust formation (i.e. nucleation) in oxygen-rich environments using a quantum-chemical bottom-up approach. Starting with an elemental gas-phase composition, we constructed a detailed chemical-kinetic network that describes the formation and destruction of magnesium-, calcium-, and aluminium-bearing molecules as well as the smallest dust-forming (MgAl$_{2}$O$_{4}$)$_1$ and (CaAl$_{2}$O$_{4}$)$_1$ monomer clusters. Different formation scenarios with exothermic pathways were explored, including the alumina (Al$_2$O$_3$) cluster chemistry studied in Paper I of this series. The resulting extensive network was applied to two model stars, a semi-regular variable and a Mira-type star, and to different circumstellar gas trajectories, including a non-pulsating outflow and a pulsating model. We employed global optimisation techniques to find the most favourable (MgAl$_2$O$_4$)$_n$, (CaAl$_2$O$_4$)$_n$, and mixed (Mg$_x$Ca$_{(1-x)}$Al$_2$O$_4$)$_n$ isomers, with $n$=1$-$7 and $x\in[0,1]$, and we used high-level quantum-chemical methods to determine their potential energies. The growth of larger clusters with $n$=2$-$7 is described by the temperature-dependent Gibbs free energies. In the considered stellar outflow models, spinel clusters do not form in significant amounts. However, we find that in the Mira-type non-pulsating model CaAl$_2$O$_3$(OH)$_2$, a hydroxylated form of the calcium aluminate krotite monomer, forms ...
Introduction
The importance of kinetics in cosmic dust formation was recently highlighted by Tielens (2022). In particular, the highly dynamical atmospheres of asymptotic giant branch (AGB) stars, which are affected by convection, pulsational shock waves, and varying light emission, imply timescales too short for chemical equilibrium conditions to prevail (Freytag et al. 2017). Therefore, it is not surprising that sophisticated chemical-kinetic networks are successful in explaining the abundances of many molecules, including relevant dust precursors (Gobrecht et al. 2016). Chemical equilibrium calculations can also reproduce the observed abundances of the predominant molecules CO, H2O, and SiO in AGB atmospheres with C/O < 1, but the predictions for aluminium-, calcium-, and magnesium-bearing oxides, which potentially play a role in dust formation, show discrepancies with observations (Agúndez et al. 2020).
It is well established that silicates represent the principal dust component in oxygen-rich astrophysical environments (see e.g. Henning 2010; Höfner & Olofsson 2018). However, kinetic investigations indicate that the formation of silicates, perhaps instigated by the dimerisation of SiO molecules, is too slow to proceed via free gas-phase molecules (Bromley et al. 2016; Andersson et al. 2023). Moreover, silicate formation routes instigated by MgO polymerisation are unlikely (Koehler et al. 1997; Boulangier et al. 2019). Instead, it is likely that silicates form on the surfaces of different seed particles. Owing to the high sublimation temperatures of their condensates, alumina (Al2O3) and titania (TiO2) are prototypical seed particle candidates for silicate formation (Gobrecht et al. 2016; Sindel et al. 2022). In the diverse gas-phase mixture of O-rich envelopes, it is possible that chemically more complex oxide species play a significant role in circumstellar dust nucleation and seed particle formation.
As such, ternary metal oxides, which contain two different metal cations, show a greater structural diversity and are more complex than binary oxides. Several ternary oxides, including silicates and titanates, are expected to play a significant role in the dust condensation zones of oxygen-rich AGB stars (Goumans & Bromley 2012, 2013; Plane 2013). Plane (2013) predicted that CaTiO3 is a kinetically favourable condensation nucleus. In contrast, silicate and titanate clusters that contain Mg show negligible concentrations. These results are supported by the fact that so far there has been no unambiguous detection of Mg-bearing molecules in oxygen-rich AGB stars. However, circumstellar dust contains a substantial amount of magnesium in the form of Mg-rich silicates (Rietmeijer et al. 1999; Goumans & Bromley 2012) and potentially also in the form of Mg-containing spinel. Therefore, Mg is assumed to be predominantly in atomic (or ionised) form before it condenses, which is in line with the non-detection of Mg oxides and hydroxides (Decin et al. 2018). The situation is similar for CaO and CaOH, for which there has been no detection in circumstellar envelopes.
Recent Atacama Large Millimeter Array (ALMA) observing campaigns carried out by the ATOMIUM 1 collaboration addressed the potential molecular precursors of oxygen-rich circumstellar dust, SiO, AlO, AlOH, TiO, and TiO2 (Gottlieb et al. 2022), and their oxidation agents OH and H2O (Baudry et al. 2023), as well as the specific locations of dust formation in the circumstellar envelopes (Montargès et al. 2023). Ca- and Mg-bearing molecules and clusters were not addressed observationally, for the reasons elaborated upon in the previous paragraph, but their condensates can be identified in observations of broad spectral dust features and in meteoritic stardust analysis. Sloan et al. (2003) investigated the 13 µm spectral feature, which shows an anti-correlation with the silicate features seen at 10 and 18 µm. Furthermore, they found that the stars showing the strongest 13 µm feature are associated with low to moderate mass-loss rates of Ṁ = 10−8 − 1.5 × 10−7 M⊙ yr−1. The authors conclude that crystalline forms of alumina are likely carriers of the 13 µm feature. Alternatively, Posch et al. (1999) suggested spinel as a probable carrier of this dust feature, as it shows an additional emission at 16.8 µm in the laboratory that is also observed in the spectra of some stars. Fabian et al. (2001) confirmed the spinel emission at 16.8 µm and identified a third prominent feature at 32 µm.
In laboratory studies of meteoritic stardust, two spinel grains with sizes of 230 and 590 nm were identified (Mostefaoui & Hoppe 2004). Gyngard et al. (2010) identified 38 spinel grains, the majority of which are likely condensates from red giant and AGB stars. Later, Zega et al. (2014) found 37 individual spinel grains with typical AGB star isotopic anomalies and sizes of 0.8 to 4 µm. Several of these grains show a blocky appearance, suggesting that they are aggregates of smaller grains. These studies show clear evidence for the presence of sub-micron-sized stardust grains with a spinel composition.
Calcium-aluminium-rich inclusions are found in carbonaceous chondrite meteorites and are regarded as the first and oldest solids formed in our Solar System (Connelly et al. 2012). Calcium aluminate was found in the crystalline form of the rare mineral krotite in the NWA 1934 meteorite (Ma et al. 2011).
1 https://fys.kuleuven.be/ster/research-projects/aerosol/atomium
Certain stability limits for different crystalline solids can be predicted from equilibrium vapour pressure measurements. In stellar outflows, a condensation sequence with decreasing temperatures and pressures can be derived (Gail et al. 2013). The following sequence is reported: corundum (Al2O3), gehlenite (Ca2Al2SiO7), spinel (MgAl2O4), forsterite (Mg2SiO4), and enstatite (MgSiO3). We note that this sequence relates to the crystalline bulk material and equilibrium conditions and represents a top-down approach. In contrast, in this study we model the nucleation of dust seed particles at the (sub-)nanoscale following a bottom-up approach, which is not restricted to the bulk stoichiometry, sphericity, bimolecular association reactions, or monomeric growth of the nucleating clusters.
The chemical-kinetic formation routes to alumina dust seed particles were studied in Paper I of this series (Gobrecht et al. 2022). In the present study, we investigate the possibility of spinel (MgAl2O4) and calcium aluminate (CaAl2O4) as seed particles that trigger the onset of dust formation and mass loss in AGB stars. Spinel is a ternary oxide that shows the same atomic components as alumina, with an additional Mg-O building block. For the calcium aluminate (i.e. krotite), there is an extra Ca-O unit with respect to alumina. To our knowledge, a theoretical bottom-up investigation of aluminium-bearing ternary oxides in circumstellar envelopes had never before been carried out. Because of the similarity of spinel and krotite clusters to those of the astrophysically relevant Mg-rich silicate of olivine type, we compare our results to the study of Escatllar et al. (2019).
This paper is organised as follows. In Sect. 2 we describe the methods used to derive the structures and kinetic reaction rate coefficients of the molecular and cluster species included in our study. The results of our investigations, including kinetic modelling, cluster energies and properties, and predictions for larger cluster and dust sizes, are presented in Sect. 3. We discuss these results in light of observations and previous studies in Sect. 4. Finally, Sect. 5 provides a summary of our findings.
Chemical kinetics
In the present study we considered a set of reactions that are combined to form a chemical rate network. The kinetic network entails the complete aluminium-oxygen-hydrogen chemistry from Gobrecht et al. (2022) and reactions R26-R37 as well as R48-R57 reported in Decin et al. (2018) for the magnesium-calcium-oxygen chemistry. In addition, we included reactions between Al-, Mg-, and Ca-bearing molecules (see Appendix A for details), as well as pathways that describe the formation and destruction of spinel and krotite monomers, and derived their rate coefficients via state-of-the-art rate theory. The corresponding reactions and rate coefficients are summarised in Table A.1. To solve the set of stiff differential rate equations, we used the Livermore Solver for Ordinary Differential Equations (LSODE; Hindmarsh 2019).
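To illustrate the numerical set-up, the sketch below integrates a toy two-reaction aluminium network with SciPy's LSODA integrator, which shares the Hindmarsh solver lineage with LSODE. The reactions and all rate parameters are placeholders for illustration, not entries of Table A.1.

import numpy as np
from scipy.integrate import solve_ivp

# Toy network (NOT the network of Table A.1):
#   R1: AlO + H2O -> AlOH + OH
#   R2: AlOH + H  -> AlO  + H2
# Modified Arrhenius form k = A (T/300)^n exp(-Ea/T); values are placeholders.
def arrhenius(A, n, Ea, T):
    return A * (T / 300.0)**n * np.exp(-Ea / T)

def rhs(t, y, T, nH2O, nH):
    nAlO, nAlOH = y
    k1 = arrhenius(2.0e-10, 0.0, 1000.0, T)   # placeholder coefficients
    k2 = arrhenius(1.0e-10, 0.5, 2500.0, T)
    r1 = k1 * nAlO * nH2O
    r2 = k2 * nAlOH * nH
    return [-r1 + r2, r1 - r2]

# LSODA switches automatically to a stiff (BDF-type) method when needed.
sol = solve_ivp(rhs, (0.0, 1.0e6), [1.0e4, 0.0], method="LSODA",
                args=(1500.0, 1.0e6, 1.0e4), rtol=1e-8, atol=1e-20)
print(sol.y[:, -1])   # AlO and AlOH number densities (cm^-3) at the end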
We employed transition state theory (TST) based methods, which require reaction energetics and rovibrational properties of the reagents and transition states, to derive diagrams of potential energy surfaces (PESs). For reactions involving molecular systems containing more than four atoms, rate coefficients were estimated by combining electronic structure calculations with Rice-Ramsperger-Kassel-Marcus (RRKM) statistical rate theory. First, the relative energies of the reactants, products, and intermediate stationary points on each PES were calculated using the benchmark complete basis set (CBS-QB3) level of theory (Montgomery et al. 2000) within the Gaussian 16 suite of programs (Frisch et al. 2016).
Table 1. Coulomb-Buckingham pair potential parameters used in this study.
The PESs for seven of these reactions are illustrated in Figs. 2 and 3 (note that the relative energies include vibrational zero-point energy corrections). The Master Equation Solver for Multi-Energy well Reactions (MESMER) program (Glowacki et al. 2012) was then used to estimate the rate coefficients. The methodology is described briefly here (see Douglas et al. 2022 for more details). The density of states of each stationary point on the PESs was calculated using the vibrational frequencies and rotational constants computed at the B3LYP/6-311+g(2d,p) level (Becke 1993; Frisch et al. 2016); vibrations were treated as harmonic oscillators, and a classical density-of-states treatment was used for the rotational modes.
Microcanonical rate coefficients for the dissociation of intermediate adducts, either forwards to products or backwards to reactants, were determined using inverse Laplace transformation to link them directly to the relevant capture rates, which were calculated using long-range TST (Georgievskii & Klippenstein 2005). The probability of collisional transfer between discretised bins was estimated using the exponential down model (Gilbert & Smith 1990), with the average energy for downward transitions, ⟨∆E⟩_down, set to 200 cm−1 with no temperature dependence for collisions with H2. The Master Equation was then solved for each reaction to yield rate coefficients for recombination and bimolecular reaction at specified pressures and temperatures; it should be noted that at the low densities (n(H2) < 10^14 cm−3) and high temperatures (T > 1000 K) of a stellar outflow, only the bimolecular channels matter. The rate coefficients for the reverse reactions were calculated by detailed balance.
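For reference, detailed balance takes the standard textbook form

k_{\mathrm{rev}}(T) = \frac{k_{\mathrm{fwd}}(T)}{K_{\mathrm{eq}}(T)},
\qquad
K_{\mathrm{eq}}(T) = \frac{\prod_i Q_{i}^{\mathrm{prod}}}{\prod_j Q_{j}^{\mathrm{reac}}}\,
\exp\!\left(-\frac{\Delta E_0}{k_{\mathrm{B}} T}\right),

where the Q are total partition functions per unit volume and ∆E_0 is the zero-point-corrected reaction energy. This is a generic relation, not a MESMER-specific expression.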
Cluster candidates
The PES of a cluster containing monomer building blocks of seven atoms, for example MgAl2O4, is multi-dimensional, intricate, and computationally expensive to obtain. With increasing cluster size, the number of local minima on the PES grows rapidly, and a thorough search with electronic structure methods becomes intractable and prohibitive. Therefore, an extensive Monte Carlo basin hopping (MC-BH) search is performed (Wales & Doye 1998) to find favourable candidate isomers for each size and stoichiometry by using a simplified PES described by the Coulomb-Buckingham pair potential

U(r_ij) = q_i q_j / (4π ε_0 r_ij) + A_ij exp(−r_ij / B_ij) − C_ij / r_ij^6,

where r_ij is the relative distance between two ions, q_i and q_j are the charges of ions i and j, respectively, and A_ij, B_ij, and C_ij are the Buckingham pair parameters listed in Table 1. The chosen Coulomb-Buckingham pair parameters correspond to the values of Woodley (2009), and we used an in-house modified version of GMIN (Bromley & Flikkema 2005). For (CaAl2O4)n and mixed calcium-magnesium aluminates we did not perform separate global optimisation searches.
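A minimal sketch of such an MC-BH search over a Coulomb-Buckingham surface is given below, using SciPy's basinhopping. The charges and pair parameters are illustrative placeholders rather than the Woodley (2009) values, and none of the safeguards of a production search (symmetry handling, pair-specific parameters) are included.

import numpy as np
from scipy.optimize import basinhopping

# Toy MC-BH search for a single MgAl2O4 unit (7 ions).
q = np.array([2.0, 3.0, 3.0, -2.0, -2.0, -2.0, -2.0])  # Mg, 2 Al, 4 O (formal)
A, B, C = 1500.0, 0.30, 30.0   # one placeholder parameter set for all pairs
KE = 14.3996                   # Coulomb constant in eV*Angstrom/e^2

def energy(x):
    r = x.reshape(-1, 3)
    e = 0.0
    for i in range(len(q)):
        for j in range(i + 1, len(q)):
            d = np.linalg.norm(r[i] - r[j])
            d = max(d, 0.8)  # crude hard core: avoids the Buckingham
                             # catastrophe (-C/d^6 diverging as d -> 0)
            e += KE * q[i] * q[j] / d            # Coulomb term
            e += A * np.exp(-d / B) - C / d**6   # Buckingham term
    return e

rng = np.random.default_rng(1)
x0 = rng.uniform(-2.0, 2.0, size=21)   # random start: 7 atoms x 3 coords
res = basinhopping(energy, x0, niter=200, stepsize=0.5,
                   minimizer_kwargs={"method": "L-BFGS-B"})
print(res.fun)   # energy (eV) of the best local minimum found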
We used the geometries of the lowest-energy candidates of the MgAl2O4 clusters to optimise the (CaAl2O4)n, n=1−7, clusters. These two metal aluminate species differ only by their alkaline earth metals, Mg and Ca. Owing to the larger atomic radius of Ca compared to Mg, the calcium aluminate clusters exhibit larger Ca-O bond distances, but the overall cluster geometry is largely preserved.
For each cluster stoichiometry investigated, we optimised the 100-200 lowest-energy isomers at the quantum level of theory. We used the hybrid B3LYP functional along with the cc-pVTZ basis set (Schäfer et al. 1992) as a compromise between computational cost and accuracy. As for the kinetic computations described in Sect. 2.1, the optimisations were performed with the Gaussian 16 program suite (Frisch et al. 2016). For the lowest-energy isomers of each cluster stoichiometry and size found, a vibrational frequency analysis is included. The resulting vibrational modes are required to exclude transition states and higher-order saddle points, to construct partition functions for the thermodynamic potentials, and to predict the emission spectra of the clusters.
Results
In this section we present the rate expressions and nucleation routes derived in this study (Sect. 3.1), the resulting species abundances in the different physico-chemical models (Sect. 3.2), the characteristics of the most favourable clusters (Sect. 3.3), and a comparison to their macroscopic (i.e. crystalline) bulk material (Sect. 3.4). Finally, the vibrational cluster spectra are addressed in Sect. 3.5.
Molecular precursors of aluminate clusters
In addition to the existing rate network for the oxides of aluminium (Gobrecht et al. 2022), as well as of magnesium and calcium (Decin et al. 2018), we included several redox reactions linking aluminium with magnesium and calcium chemistry. The prevalent aluminium-bearing molecules are AlO and AlOH, whereas magnesium and calcium are predominantly in atomic form. Since the formation of the metal dioxides AlO2 and, in particular, MgO2 and CaO2 is hampered by strongly endothermic oxidation reactions, they are not considered in our proposed nucleation schemes. The gas-phase reactions between (hydr)oxides of Al and Mg/Ca included in this study are listed in Table A.1. Owing to the low importance of reactions 2, 3, and 4 for spinel and krotite production, the relevant calculations are described in Appendix A. To model the spinel and calcium aluminate nucleation processes, we considered different scenarios for the formation of the respective monomers, (MgAl2O4)1 and (CaAl2O4)1. The most promising formation scenarios are graphically illustrated in Fig. 1.
As pointed out in Appendix A, the inclusion of Mg and Ca atoms in the first steps of cluster formation is energetically and kinetically inefficient. Therefore, we investigated alternative routes towards the monomers; the corresponding rate can be found in Table A.1. In principle, reaction 5 could also proceed as a radiative association, which was not considered in this study, as it proceeds on slow timescales in AGB atmospheres. For the CaAl2O4 monomer formation, we considered a similar starting point as for the spinel formation (see the top-left panel of Fig. 3),

Ca + Al2O4H2 ⇌ CaAl2O4 + H2,

with ∆rH(0 K) = −94 kJ mol−1, leading to the formation of the CaAl2O4 monomer and molecular hydrogen, H2. Despite the reverse reaction being endothermic, it involves H2, which is very abundant in space and leads to an effective destruction of CaAl2O4. In order to prevent its destruction, the monomer might further react with abundant oxygen-bearing molecules such as H2O, SiO, or AlO by exothermic processes with reaction enthalpies of ∆rH(0 K) = −318 kJ mol−1, −392 kJ mol−1, and −488 kJ mol−1, respectively. We note that the analogous reaction pathway involving Al2O4H2 and Mg is quite endothermic (+74 kJ mol−1). Hence, the spinel monomer cannot be formed in the same manner as CaAl2O4.
Numerical modelling in circumstellar envelopes
In this study, the numerical modelling of the kinetic reaction network is performed under the conditions given by the hydrodynamic models of two model AGB stars, a semi-regular variable (SRV) and a Mira-type variable (MIRA), that were presented in detail in Paper I. We summarise the main physical quantities of these models in Table 2. Furthermore, we differentiated between non-pulsating and pulsating models. The initial elemental abundances are taken from the FRUITY stellar evolution database (Cristallo et al. 2015) and correspond to the m1p5zsuntdu3 model with a C/O ratio of 0.75. Molecular abundances are reported as number fractions of the total gas and are considered significant if they exceed values of 10−9.
Table 2. Physical quantities of the stellar models SRV and MIRA.
The abundance peaks in the MIRA models occur at smaller radial distances than in the SRV model. This is a consequence of the lower gas densities and higher temperatures in the SRV model. CaO exhibits a similar behaviour in both model stars, decreasing from 10−10 at the photosphere to lower values farther out. CaOH is one to two orders of magnitude more abundant in the MIRA model than in the SRV outflow. The most abundant Mg-bearing molecule is MgOH, with abundances below 3×10−10 in the entire computational range. MgO shows negligible abundances below 10−12 in both the SRV and MIRA models.
Significant abundances are reached by Al2O4H2, CaAl2O3, CaAl2O4, and, most prominently, CaAl2O3(OH)2. In the SRV non-pulsating model, spinel formation is therefore ineffective, and significant calcium aluminate formation occurs only at distances larger than 4 R⋆. This indicates that CaAl2O4 is not a primary seed particle.
In the MIRA non-pulsating model, we find a similar abundance peak of the species Al2O3H and Al2O3H2 as in the SRV non-pulsating model, but at closer distances, around 1.4 R⋆ (see Fig. 6). This radial distance marks the onset of the alumina cluster nucleation in the MIRA non-pulsating model (see Fig. 4). This effect was already noted in Paper I of this series on alumina nucleation and is related to the comparatively higher densities and lower temperatures in the MIRA models.
Fig. 6. Fractional abundances of the molecular precursors related to the aluminate formation and nucleation as a function of the radial distance in the non-pulsating MIRA model.
In the pulsating SRV model (see Fig. 7), at early phases (Φ = 0.0−0.2), where the temperatures and the gas densities are high, the chemistry is controlled by dissociation reactions, which is particularly pronounced for radial distances close to the star. We note that at the initial conditions at 1 R⋆ and Φ = 0.0 the gas is purely atomic. At later phases the temperatures drop and the molecules recombine in the wake of the still dense post-shock gas. Overall, the chemistry is dominated by CO and H2O with similar abundances as in the non-pulsating models, in good agreement with observations (Decin et al. 2010). The aluminium content is governed by AlOH and alumina dust, represented by the Al8O12 clusters, whereas the AlO abundance is two to three orders of magnitude lower. Recent observations of the SRV AGB star R Dor and the Mira-type AGB star IK Tau deduce lower AlOH abundances (Decin et al. 2017). This might have several reasons, which were discussed in Paper I. Here, we note that the AlO/AlOH ratio is very sensitive to the AlOH photolysis rate (Mangan et al. 2021). As AGB atmospheres are frequently crossed by pulsational shocks, their radiation field is strongly time dependent, which impacts the AlOH photolysis rate. In addition, we note that the hydroxides CaOH and MgOH as well as the oxides CaO and MgO, with abundances below 10−10, play a minor role in the SRV pulsating model.
The MIRA pulsating model is presented in Fig. 8. Although it shows many similarities with the SRV pulsating model, there are some notable differences. First, the variation in the early post-shock gas is larger for most of the species, which is a consequence of the higher shock strength in the MIRA model as compared with the SRV. Second, the hydroxides CaOH and MgOH are more abundant than in the SRV pulsator. Third, a tiny amount of MgO forms at 1 R⋆ and phase Φ = 0.2.
By inspecting the abundances of the nucleating species, we find that neither CaAl2O4 nor MgAl2O4 forms to any significant extent in the SRV pulsating model (see Fig. 9). The precursors Al2O3H and Al2O3H2 of the MgAl2O4 and CaAl2O4 monomers, respectively, exhibit maximum abundances at 1 R⋆ and Φ = 0.85. However, the subsequent nucleation steps are not efficient, and spinel and calcium aluminate formation does not take place in the pulsating SRV model.
The situation is similar for the MIRA pulsating case, where Al2O3H and Al2O3H2 peak at 1 R⋆ and Φ = 0.75 (see Fig. 10). In contrast to the SRV pulsating case, some concentrations of Al2O3H2 and Al2O4H2 form at 2 R⋆ in the pulsating MIRA model. When the alumina clustering reactions are excluded, MgAl2O4 still barely forms, but the precursors of CaAl2O4 (i.e. Al2O3H and Al2O4H2) form in significant amounts of >10−8 at late phases and 2 R⋆−2.5 R⋆ (see Fig. 11). The presence of these precursors leads to a buildup of the CaAl2O4 monomer and its hydroxylated form, showing fractional abundances of ∼10−9 in the pulsating MIRA model. These amounts are still about two orders of magnitude less than the predictions for alumina clusters, if included.
Fig. 10. Fractional abundances of the considered molecular precursors related to aluminate nucleation as a function of the pulsation phase and the grid of radial distances in the pulsating MIRA model.
In the following, we use the term binding energy, E_b/n, defined as the absolute difference between the total potential energy of the cluster under consideration and the contribution of its atomic components at T = 0 K, normalised to the cluster size, n. This should not be confused with the surface binding energy used in the chemistry of dust grain surfaces and astronomical ices. Furthermore, we compared our results for (MgAl2O4)n with the predictions of Woodley (2009), who used an evolutionary algorithm based on interatomic potentials to derive global minima candidates.
The monomers (n=1)
The spinel monomer GM candidate (1A) exhibits C_s symmetry and is shown in Fig. 12. The Mg-O bond lengths are 1.980 Å, and the Al-O bonds range from 1.703 to 1.811 Å. The CBS-QB3 binding energy of 1A is 2910 kJ mol−1; at the B3LYP/cc-pVTZ level of theory it is lower (2728 kJ mol−1). We note that isomer 1B, a C_2v structure reported by Woodley (2009), has a CBS-QB3 relative energy just 10.8 kJ mol−1 (B3LYP/cc-pVTZ: 11.0 kJ mol−1) above our GM candidate 1A. For temperatures above 900 K, structure 1B becomes more favourable according to its Gibbs free energy of formation. 1A and 1B essentially differ by the distance of the Mg cation to the out-of-plane oxygen anion, which is 2.11 Å in 1A and 3.11 Å in 1B, so these structures can be regarded as conformers. 1B exhibits a very low vibrational frequency of 41 cm−1, which was identified as a hindered rotation.
Structure 1A also corresponds to the most favourable isomer of the CaAl2O4 monomer, with a CBS-QB3 binding energy of 2995 kJ mol−1. Owing to the mentioned problems of the CBS-QB3 method with Ca (see Sect. 3.1), we also provide the B3LYP/cc-pVTZ binding energy of 2921 kJ mol−1. The Ca-O bond lengths of 2.208 Å are larger than the Mg-O bonds in the spinel clusters, as the Ca cation has a larger radius (i.e. an extra shell of electrons). The Al-O bonds in 1A range from 1.677 to 1.797 Å and are slightly shorter than for MgAl2O4. For the CaAl2O4 monomer, isomer 1B exhibits an imaginary frequency of 71i cm−1 and represents a transition state.
The dimers (n=2)
The most favourable spinel dimer structure, 2A, is depicted in Fig. 13 and has C_i point group symmetry. It was previously reported by Woodley (2009). We find a binding energy of E_b/n = 3114 kJ mol−1 at the B3LYP/cc-pVTZ level of theory. The Mg-O bond lengths are 1.980 Å, and the Al-O bonds range from 1.703 Å to 1.811 Å. 2A shows a large aspect ratio, with xyz dimensions of 7.22 Å × 4.68 Å × 1.88 Å. A metastable isomer (2B), with an energy of 14 kJ mol−1 relative to 2A, is found to be the second most favourable spinel dimer structure in our searches. For the entire temperature range considered in this study (T = 0−6000 K), 2B is less favourable than 2A. For (CaAl2O4)2, structure 2B is slightly preferred (by 7 kJ mol−1) over 2A at T = 0 K, but 2A becomes favourable for temperatures above room temperature (i.e. 298 K). Therefore, we consider 2A to be our best GM candidate for both (CaAl2O4)2 and (MgAl2O4)2.
The trimers (n=3)
In Fig. 14 the GM candidate of the spinel trimer, (MgAl2O4)3, 3A, is shown. We find that structure 3A is lower in energy by 34 kJ mol−1 than isomer 3B, which was found by Woodley (2009).
The structures 3A and 3B show an overall similar geometry, with differences in an Al-O and a Mg-O bond that are visible at the top of the structures in Fig. 14. The lowest-energy trimer isomers are not symmetric and belong to the C_1 space group. For (CaAl2O4)3, 3B is 11 kJ mol−1 more favourable than 3A. For temperatures >700 K, 3A becomes the preferred geometry. For this reason we report (CaAl2O4)3 results for both 3A and 3B.
One of the peculiarities of isomer 3B is that it shows a fivefold-coordinated oxygen atom. For the mixed Mg/Ca aluminate trimers Mg2CaAl6O12 and MgCa2Al6O12, 3A corresponds to the favoured isomer. We note that the relative energy between 3A and 3B decreases with increasing Ca content and becomes negative for Ca3Al6O12, the pure Ca trimer.
The larger clusters (n=4−7)
The GM candidates for n=4, 4A and 4B, are shown in Fig. 15. 4A and 4B have very similar structures but show differences in some bonds and coordinations. For (MgAl2O4)4, structure 4A is lower in potential energy by 12 kJ mol−1 than isomer 4B, which corresponds to the GM candidate found by Woodley (2009). At elevated temperatures, 4A remains the most favourable cluster isomer. Both isomers, 4A and 4B, show a structural similarity without symmetry. For (CaAl2O4)4, the optimisation of 4A and 4B leads to a pair of stereoisomers with the same energy. 4A also represents the most favourable geometry for the mixed Ca2Mg2Al8O16 cluster species. For temperatures above 5000 K, 4A and 4B have essentially the same free energies. Generally, we note a trend of an increasing binding energy of about ∼40 kJ mol−1 when substituting one Mg with one Ca cation. The pentamer (n=5) GM candidate 5A is shown in Fig. 16. A different, C_s symmetric structure was predicted as the GM by Woodley (2009) for (MgAl2O4)5. This structure was also found in our MC-BH searches. However, during the optimisation with the B3LYP/cc-pVTZ method, the symmetry of this isomer was broken, which led to a slightly distorted geometry (5B). When imposing symmetry, our optimisations did not converge. The distorted-geometry isomer 5B is 32 kJ mol−1 above 5A. For (CaAl2O4)5, 5A lies 73 kJ mol−1 below 5B and represents the GM candidate. Mixed Mg/Ca aluminates also exhibit 5A as the preferential structure, and their binding energies scale with the Ca/Mg ratio: the larger the Ca content, the higher the binding energy E_b/n (∼20-30 kJ mol−1 per Ca atom).
The lowest-energy spinel hexamer (n=6) cluster, 6A, is shown in Fig. 17. 6A shows a quasi-symmetric mirror plane, where an Mg atom is replaced by an Al atom on the right-hand side of the plane. For the isomer reported by Woodley (2009), we find a large relative energy of 312 kJ mol−1 above 6A. By swapping one Al with one Mg ion in the hexamer structure of Woodley (2009), we find the more favourable structure 6B, which lies 264 kJ mol−1 above 6A. For (CaAl2O4)6, 6A corresponds to the most favourable isomer found in our searches, and it lies 55 kJ mol−1 below 6B. The energy difference between the GM candidates of (MgAl2O4)6 and (CaAl2O4)6 is E_b/n = 143 kJ mol−1. For the mixed Mg/Ca aluminate hexamers, we find a gradual increase in the formation enthalpy with increasing Ca/Mg ratio, consistent with the findings for the other cluster sizes. For example, Mg3Ca3Al12O24, with a Ca:Mg ratio of 1:1, is 68 kJ mol−1 per unit n higher than (CaAl2O4)6, but 75 kJ mol−1 per unit n lower than (MgAl2O4)6.
For the spinel heptamer, we find structure 7A to be the lowest-energy isomer (see Fig. 18). Similar to 6A, 7A also shows a quasi-mirror plane where an Mg ion is substituted by an Al ion. Structure 7B, reported by Woodley (2009), has a potential energy 66 kJ mol−1 above 7A, and we find a small imaginary vibrational mode with a frequency of 11.32i cm−1. The mode could not be attributed to a specific bond stretch or bend and appears as a collective breathing mode. For (CaAl2O4)7, 7A has a binding energy E_b/n that is 149 kJ mol−1 larger than for the (MgAl2O4)7 counterpart. 7B lies 158 kJ mol−1 above 7A. Test calculations of mixed Mg/Ca aluminate clusters confirm the correlation of increased binding energies with increased Ca content found for the smaller clusters. The atomic coordinates of the clusters presented in this study can be found in Table A.2.
Homogeneous nucleation and the bulk limit
In the following two sub-subsections we summarise the geometric and electrostatic properties of the GM cluster candidates as a function of their size, n, and compare the size-dependent trends with the bulk limit that is represented by the crystals spinel and krotite.
Homogeneous nucleation
In Fig. 19, the Gibbs free energies of dissociation of the GM spinel cluster candidates, normalised to the cluster size, n, are shown. We illustrate this normalised cluster energy for three temperatures, T = 0, 1000, and 2000 K. At T = 0 K, the absolute value of the Gibbs free energy of dissociation corresponds to the binding energy, and also to the enthalpy of dissociation. Using the spherical cluster approximation (SCA; see e.g. Johnston (2002)) and excluding the monomer (n=1), we fitted the normalised energies in the form

E(n)/n = a + b n^(−1/3),

where parameter a corresponds to the normalised bulk energy and parameter b is related to the surface tension. For MgAl2O4, the fitted value of a agrees reasonably well with the cohesive energy of 4070 kJ mol−1 for crystalline MgAl2O4 at T = 0 K derived from the JANAF-NIST thermochemical tables (Chase 1998). When including the monomer in the fitting procedure, we find a larger value for a of 4241.8 kJ mol−1. Moreover, the fittings that include the monomer increasingly overpredict the energies for cluster sizes n > 5, which reflects the fact that the SCA is derived in the large-cluster limit. With these fitting relations, the free energies of larger spinel clusters, whose investigation is computationally very demanding, can be predicted. However, we also note that some cluster sizes (e.g. n=3, 4) are more favourable than others (n=5, 7). Hence, it is possible that some of the larger clusters with n > 7 show enhanced stability.
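The fit itself is a small least-squares problem. The sketch below shows the SCA form with placeholder energies standing in for the computed (MgAl2O4)n values.

import numpy as np
from scipy.optimize import curve_fit

# Spherical cluster approximation: E(n)/n = a + b * n**(-1/3).
# The energies below are placeholders, not the paper's computed values;
# the monomer (n=1) is excluded, as in the text.
n = np.array([2, 3, 4, 5, 6, 7], dtype=float)
E_over_n = np.array([3114.0, 3400.0, 3550.0, 3600.0, 3700.0, 3750.0])

def sca(n, a, b):
    return a + b * n**(-1.0 / 3.0)

(a, b), _ = curve_fit(sca, n, E_over_n)
print(a, b)             # a -> normalised bulk energy, b -> surface term
print(sca(10.0, a, b))  # extrapolated E/n for an n = 10 cluster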
For comparison, we also included Mg-rich olivine clusters, (Mg2SiO4)n, that were studied by Escatllar et al. (2019). These silicate clusters can be directly compared to the spinel clusters, as they contain the same number of oxygen anions and metal cations per formula unit. For coherence and consistency, we applied the same functional and basis set (B3LYP/cc-pVTZ) to optimise the silicate clusters. Here, the fitted value of a at T = 0 K is in very good agreement with the value of 3888 kJ mol−1 derived from JANAF-NIST for crystalline magnesium-rich olivine. As for the spinel clusters, the inclusion of the monomer leads to a more negative value of a (−4036.55 kJ mol−1) and to a worse fit for larger cluster sizes.
Fig. 19. Normalised Gibbs free energies of dissociation for the stoichiometric spinel GM clusters, (MgAl2O4)n, as a function of cluster size, n, for different temperatures of T = 0 K (solid line), T = 1000 K (dashed line), and T = 2000 K (dash-dotted line). For comparison, the Mg-rich olivine GM candidate clusters, (Mg2SiO4)n, as found in Escatllar et al. (2019), and their free energies are also included.
In addition to the silicates, the MgAl2O4 cluster energies are compared to those of the Ca aluminates (i.e. CaAl2O4). In Fig. 20, the Gibbs free energies of dissociation of the aluminate clusters with respect to their atomic components are compared for different temperatures as a function of the cluster size, n. The (CaAl2O4)n clusters are more favourable than their corresponding Mg spinels. This can partially be explained by the stronger Ca-O bond (414 kJ mol−1) in comparison with the Mg-O bond (260 kJ mol−1).
The bulk limit
Crystalline spinel shows a T2d symmetry in the cubic crystal system, with unit cell parameters a = b = c = 8.089 Å and α = β = γ = 90° (Finger et al. 1986). The spinel unit cell is illustrated in Fig. 21. In this form, Al atoms are 6-coordinated, O atoms 4-coordinated, and Mg atoms 4-coordinated. The bond distances are Mg-O 1.889 Å and Al-O 2.058 Å.
Generally, the Al-O and Mg-O bond lengths as well as the atomic coordinations increase as a function of cluster size, as can be seen in Table 3. This is not unexpected, as the fraction of 'surface' atoms decreases with cluster size, and therefore the average coordination and the bond lengths increase. However, the increase is not strictly monotonic; for example, d(AlO) for n=4 and d(MgO) for n=2 represent outliers. Also, the average coordinations of the n=7 GM cluster are lower than those for n=6. Moreover, individual bond lengths and atomic coordinations in the clusters can differ from their averages. Electrostatic properties of the spinel clusters and the bulk limit (n = ∞) are given in Table 5. For the mean Al charge, an increasing trend with size n can be seen. The average charges of the Mg and O ions do not, however, follow a clear trend with respect to the cluster size. Generally, all Al and Mg ions are positively charged cations, and the O ions are negatively charged anions. Apart from the symmetric dimer (n=2), the spinel clusters exhibit considerable dipole moments, with a maximum value of 7.32 Debye for n=3. Therefore, we predict that our GM spinel cluster candidates should be detectable by IR spectroscopy, if they are present. The highest occupied molecular orbital-lowest unoccupied molecular orbital (HOMO-LUMO) gap of the spinel GM clusters ranges from 3.88 to 5.11 eV and is at or just below the band gap of the crystalline bulk spinel of ∼5.11 eV (Pilania et al. 2020). The ionisation potentials range between 7.36 and 9.24 eV and generally decrease with cluster size.
In Table 6, the electrostatic properties of the (CaAl2O4)n GM candidate clusters are displayed. The Al cation charges generally increase with cluster size n for both cluster families. For CaAl2O4, the Al charges are slightly larger than for MgAl2O4, except for n=1, 2. The Ca charges are considerably more positive than the Mg charges of the spinel clusters. The oxygen anions show slightly more negative average charges for krotite than for spinel clusters. The larger atomically partitioned charges in the krotite clusters are also reflected in their larger dipole moments. For n ≥ 3, the (CaAl2O4)n clusters exhibit dipole moments with large values of >8 Debye, making them suitable targets for IR observations. The HOMO-LUMO gaps of the two cluster families are comparable, though the range for (CaAl2O4)n is narrower (i.e. 4.19-4.84 eV). The band gap of crystalline calcium aluminate (∼4.54 eV) also lies in this range (Qu et al. 2015). The vertical and adiabatic ionisation energies are slightly lower for (CaAl2O4)n and generally decrease with cluster size, n.
Notes to Table 3: (1) mean interatomic distances in Å; (2) mean coordination numbers; (3) normalised binding energies (in kJ mol−1); (5) derived from the SCA fit (this study).
Table 5. Electrostatic properties of the (MgAl2O4)n GM candidate clusters. Notes: (1) average Mulliken charge (in e); (2) total dipole moment (in Debye); (3) HOMO-LUMO gap (in eV); (4) vertical and adiabatic ionisation energies (in eV); (5) experimental value from Pilania et al. (2020).
Harmonic spectra
Clusters exhibit 3N−6 vibrational modes, where N is the number of constituent atoms. Therefore, the spinel monomer (n=1) shows 15 modes and the heptamer (n=7) 141 modes. We note that for the monomer (n=1) two of the 15 and for the dimer (n=2) 18 of the 36 vibrational modes are IR inactive; all larger considered spinel GM cluster candidates have only IR-active modes. The most intense vibrational emission lines of the spinel family are found in a wavelength range between 10.5 and 11.5 µm (see Fig. 23). With regard to the 13 µm feature, emissions from the monomer (n=1), the dimer (n=2), and the hexamer (n=6) are predicted. However, their relative intensities are rather low.
The harmonic vibrational spectra of the (CaAl2O4)n GM cluster candidates are shown in Fig. 24. The most intense vibrational emissions occur between 10.5 and 12.0 µm. There are spectral features in common with the (MgAl2O4)n clusters (see e.g. n=2), but differences are also apparent (see e.g. n=5). Some peaks of (CaAl2O4)n are slightly shifted towards longer wavelengths. We note that several assumptions and approximations are made for the vibrational IR spectra presented in this study. They include the harmonic approximation, which neglects anharmonic and temperature effects, but also the simplification that only GM candidates of (Mg/CaAl2O4)n, n=1−7, stoichiometry contribute to the IR emission. However, a detailed treatment of the IR spectra in stellar atmospheres with radiative transfer modelling is beyond the scope of this paper.
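The conversion from discrete harmonic modes to such a spectrum is a simple superposition of Lorentzian profiles (FWHM 0.033 µm, as in Figs. 23 and 24). The sketch below uses placeholder mode positions and intensities, not the computed cluster modes.

import numpy as np

def spectrum(wl_grid, mode_wl, mode_int, fwhm=0.033):
    """Broaden discrete modes into a normalised spectrum with Lorentzian
    profiles of the given FWHM (all quantities in micrometres)."""
    g = 0.5 * fwhm   # half width at half maximum
    out = np.zeros_like(wl_grid)
    for w0, I in zip(mode_wl, mode_int):
        out += I * g**2 / ((wl_grid - w0)**2 + g**2)  # Lorentzian peak
    return out / out.max()

wl = np.linspace(8.0, 14.0, 2000)                    # wavelength grid in um
s = spectrum(wl, [10.7, 11.2, 13.0], [1.0, 0.8, 0.1])  # placeholder modes
print(wl[np.argmax(s)])                              # most intense emission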
Discussion
The present paper, along with Paper I, demonstrates that the kinetic formation of alumina clusters and particles is efficient. A kinetic CaAl2O4 synthesis is viable if pulsational shocks are excluded, while MgAl2O4 does not form in O-rich circumstellar envelopes.
The analysis of pre-solar oxygen-rich silicate grains in meteorites revealed that many grains contain a certain fraction of Ca and Al (see e.g. Nittler et al. (2008); Nguyen et al. (2010); Bose et al. (2010)). Moreover, Leitner et al. (2018) found an unusually large AGB stardust grain with an inner core that contains Al and Ca, but no Si. Therefore, alumina and Ca- and Mg-bearing aluminates do indeed appear to serve, at least in some cases, as seed nuclei for larger silicate grains, which is in agreement with our studies.
Chemical equilibrium calculations show a different picture. To showcase these differences, we applied conditions very similar to those used in the present kinetic study: the same initial elemental composition, a pressure corresponding to the photospheric gas density in the MIRA models, and a temperature range between 500 K and 3000 K in the chemical equilibrium code GGchem (Woitke et al. 2018). We find that none of the considered cluster families (i.e. alumina, spinel, mixed Mg/Ca aluminates, and krotite) sustains above a temperature of T = 900 K (see Fig. 25). The top panel includes a gas-phase mixture of 865 species and the thermochemical data of (Al2O3)n, n=1−10, clusters, whereas the second panel additionally includes the (MgAl2O4)n, n=1−7, clusters reported in this study (see Table A.4 for the corresponding fitting coefficients). The largest alumina cluster, (Al2O3)10, predominates only for temperatures below 850 K. The inclusion of different (MgAl2O4)n cluster sizes leads to a situation where the largest cluster, with n=7, essentially replaces (Al2O3)10. This is in clear contrast to the results of our chemical-kinetic study, which predicts alumina clusters as primary seed particles and no spinel formation in any of the considered models. If mixed Mg/Ca aluminate clusters and (CaAl2O4)n are additionally included (see the bottom panel of Fig. 25), we find that the Ca-rich aluminates dominate the Al equilibrium chemistry, which is in agreement with the thermochemical energies derived in this study.
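At its core, such an equilibrium calculation rests on the law of mass action. The following sketch evaluates the equilibrium constant of a single condensation step with a toy linear ∆G(T) fit, merely to illustrate the steep temperature sensitivity; GGchem solves the coupled problem for hundreds of species simultaneously.

import numpy as np

R = 8.314462618e-3   # gas constant in kJ mol^-1 K^-1

def K_eq(dG_kJmol, T):
    """Dimensionless equilibrium constant from the Gibbs free energy of
    reaction at temperature T (law of mass action)."""
    return np.exp(-dG_kJmol / (R * T))

# Toy linear Delta G(T) fit (placeholder coefficients): K drops by orders
# of magnitude between 700 K and 1100 K, mirroring why clusters sustain
# only below ~900 K in the equilibrium calculations.
for T in (700.0, 900.0, 1100.0):
    print(T, K_eq(-150.0 + 0.12 * T, T))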
Generally, chemical equilibrium abundances are valid for conditions that remain constant for an infinite time. However, this is not the case in many highly dynamical astrophysical environments where active dust formation takes place. Therefore, a chemical-kinetic approach accounting for reaction timescales and barriers represents a more correct approach than using chemical equilibrium. In equilibrium, the species concentrations do not change by definition, and their formation routes cannot be traced. More importantly, in an equilibrium approach the role of reaction barriers and unstable intermediates is ignored.
It is also worth noting that non-thermal effects can have a significant impact on the chemistry in circumstellar envelopes. The importance of vibrational (or internal) thermal non-equilibrium for circumstellar dust formation was postulated in past decades (Nuth & Donn 1981; Patzer et al. 1998). As shown in Plane & Robertson (2022), clusters with large dipole moments can efficiently lower their internal temperatures via spontaneous and stimulated photon emission. In turn, the lower internal temperatures lead to significantly reduced dissociation rates and aid the nucleation to proceed at an accelerated pace. Moreover, small temperature differences between different sizes of the same cluster species can affect the corresponding nucleation rates considerably (Kiefer et al. 2023). In this study we do not account for different vibrational and translational temperatures of the species or cluster sizes. This means that in the pulsating models the effect of rapidly changing temperatures is implemented simultaneously for all considered species.
Regarding the comparison with recent observations, we note that the model results for the prevalent molecules CO, H2O, and OH agree well with the observed abundances (Maercker et al. 2016; Baudry et al. 2023). For the Al chemistry, the modelled abundances are in accordance with observations, with the exception of AlOH, for which the model abundances exceed the observed values by one to two orders of magnitude. This fact was previously noted and discussed in detail in Paper I.
So far, no Mg- or Ca-bearing molecules have been detected around oxygen-rich AGB stars. Owing to the relatively large dipole moments of their oxides (MgO and CaO), sulfides (MgS and CaS), and hydroxides (MgOH and CaOH), these molecules would be comparatively easy to detect; their non-detection therefore makes it unlikely that they are very abundant. As has been noted by Agúndez et al. (2020), neutral atoms are likely the main reservoir of magnesium and calcium in AGB atmospheres. We largely agree with this conclusion, but do not exclude the possibility of Ca/Mg being in the form of Ca+ and Mg+ cations or part of nascent dust grains.
The kinetic network presented in this study includes termolecular and bimolecular neutral-neutral reactions. Atomic and molecular ions, however, are not considered. The first ionisation energy of Mg atoms is 7.65 eV, which is similar to those of stoichiometric (MgO)n clusters, ranging from 7.1 to 8.2 eV (Gobrecht et al. 2021). The ionisation potentials of the spinel and krotite clusters presented in this study are in a similar range of ∼7−10 eV (see Tables 5 and 6). Atomic Al has the lowest relevant ionisation potential of 5.99 eV. These energies are fairly large compared to the thermal energies at the photosphere of AGB stars of typically 0.15−0.3 eV. Dredged-up 26Al, with a half-life of ∼717 000 years, might represent a more efficient source of ionisation than temperature and is expected to become particularly important in C-rich AGB stars that have experienced several dredge-up episodes. In addition, some AGB stars show significant UV emission, which is possibly caused by chromospheric activity and could lead to a partial ionisation of circumstellar matter (Montez et al. 2017). Nevertheless, a relatively low ionisation fraction is expected in these environments. However, even a low ionisation degree can impact the chemistry, since ion-molecule rates are typically orders of magnitude faster than those of neutral-neutral reactions.
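The scale of thermal ionisation can be gauged from a simple Boltzmann factor for the lowest relevant ionisation potential; a full treatment would use the Saha equation, so the numbers below only illustrate the order of magnitude.

import numpy as np

kB = 8.617333262e-5   # Boltzmann constant in eV K^-1

# Boltzmann factors exp(-IP/kT) for Al (IP = 5.99 eV) at photospheric
# temperatures corresponding to kT ~ 0.15-0.3 eV: all are tiny, showing
# why purely thermal ionisation is negligible.
for T in (1800.0, 2500.0, 3500.0):
    print(T, np.exp(-5.99 / (kB * T)))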
It is also possible that the spinel and krotite nucleation does not proceed via the monomer, as presumed in this study, but via different stoichiometries or different kick-starter species (i.e. heterogeneously). The inclusion of Mg in clusters represents a major challenge in modelling bottom-up dust nucleation in oxygen-rich environments. This is not only the case for spinel; Mg-rich silicates of olivine and pyroxene stoichiometry, as well as Mg-bearing titanates, are also affected by this problem (Plane 2013). In contrast, Ca can be incorporated more easily in clusters under certain circumstances, as was shown in this study. These circumstances include the absence of pulsational shocks and the exclusion of the competing alumina nucleation.
Comparing our kinetic results to the classical nucleation descriptions used in, for example, Sindel et al. (2022), we find formal 'monomeric' radii of 2.164 Å for Al2O3, 2.505 Å for MgAl2O4, and 2.771 Å for CaAl2O4, respectively. At a temperature of T = 1000 K, surface tensions of 2.027×10−4 J cm−2 for Al2O3 and 1.741×10−4 J cm−2 for MgAl2O4 can be derived using the cluster energies obtained in this study; for CaAl2O4, no value for the surface tension is provided, since thermodynamic information on the crystalline bulk is lacking.
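These monomeric radii follow from the bulk crystal density via (4/3)πr³ρ = M/N_A. The sketch below reproduces the quoted values using handbook densities, which are assumed here rather than taken from the paper.

import numpy as np

NA = 6.02214076e23   # Avogadro's number, mol^-1

def monomer_radius(M_g_per_mol, rho_g_per_cm3):
    """Radius (in Angstrom) of a sphere holding one formula unit at the
    bulk crystal density."""
    vol = M_g_per_mol / (NA * rho_g_per_cm3)   # volume per formula unit, cm^3
    return (3.0 * vol / (4.0 * np.pi))**(1.0 / 3.0) * 1e8

print(monomer_radius(101.96, 3.97))   # Al2O3   -> ~2.16 Angstrom
print(monomer_radius(142.27, 3.58))   # MgAl2O4 -> ~2.51 Angstrom
print(monomer_radius(158.04, 2.95))   # CaAl2O4 -> ~2.77 Angstrom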
From the Gibbs free energy extrapolation of the SCA fit, we find that krotite clusters are more favourable than their Mg-rich counterparts for all sizes, n. We note that there are several caveats in the interpretation of this result. Foremost, this result is a fitted extrapolation for n > 7 and does not rely on actually calculated or measured cluster data. Moreover, this approximation does not account for particularly favourable 'magic' cluster sizes or energetically unfavourable nucleation bottlenecks. Second, as shown for the small cluster sizes n=1−7, it is likely that mixed Ca/Mg aluminate clusters also exist, with energies that lie between those of pure krotite and spinel clusters. Although Ca-rich clusters are favoured for all cluster sizes, Mg is about an order of magnitude more abundant than Ca and could be incorporated in the clusters at some stage. For these reasons, we believe that the larger Ca ions can successively be replaced by the smaller and more abundant Mg ions.
As clusters are typically intermediate in size between gas-phase molecules and solid bulk material, their vibrational spectra show properties that are in between discrete molecular line emissions and broad dust features. This is a consequence of their number of degrees of freedom, which scales with the cluster size (i.e. the number of atoms they contain). As the clusters presented here have several Al−O and Mg/Ca−O bonds of different lengths, their intense stretching modes cover a range of wavelengths, leading to a non-discrete spectrum. Yet the cluster spectra are not as broadened over a large wavelength range as the emissions of sub-micron-sized dust grains. As for the alumina clusters, the most intense vibrational modes are located in the 10.5−11.5 µm wavelength range. These intense modes are attributed to Al−O stretches, whereas the Mg/Ca−O modes are more modest and occur at slightly longer wavelengths (i.e. 10.5−12.0 µm). Therefore, these clusters are unlikely to be carriers of the 13 µm feature, which is expected to arise from larger particles. The emissions of larger cluster particles become largely independent of the interior composition and will gradually evolve towards black-body radiation. Generally, the IR spectrum of circumstellar dust shells is dominated by fully grown grains that could cover the spectral signatures of the smaller nucleating species. In reality, the cluster spectra are influenced by anharmonicities and elevated temperatures, which can lead not only to broadening, but also to shifts in wavelength, the appearance and/or disappearance of certain peaks, and asymmetries in the intensities (Guiu et al. 2021).
Summary
In this study we explored several kinetic pathways for the formation of the spinel (MgAl2O4) and krotite (CaAl2O4) monomers. Under certain conditions, including the absence of pulsational shocks and the exclusion of alumina cluster nucleation, the krotite monomer, CaAl2O4, and its hydroxylated form, CaAl2O3(OH)2, can be produced in significant amounts, up to a fractional abundance of 2×10−8. This corresponds to slightly less than 1% of the global budget of the elements aluminium and calcium, and would result in a krotite dust-to-gas mass ratio of 1.5×10−6, which is two orders of magnitude lower than the alumina dust-to-gas mass ratio of 1.1×10−4. However, the kinetic formation of the MgAl2O4 monomer represents a major challenge under circumstellar conditions. In particular, the inclusion of Mg in spinel and silicates is inefficient, since the reverse reactions proceed at a higher speed. In contrast, the nucleation of alumina is efficient close to the star (1 R⋆−2.5 R⋆) and is only barely affected by the inclusion of the magnesium and calcium aluminate chemistry. Therefore, alumina remains the most likely seed particle candidate according to our physico-chemical models.
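As a consistency check of these numbers, converting the number fraction into a mass ratio with an assumed H2-dominated mean molecular weight of ∼2.3 (an assumption made here, not stated in the text) recovers the quoted value.

# Dust-to-gas mass ratio from a number fraction:
f = 2.0e-8          # number fraction of CaAl2O3(OH)2 relative to the total gas
m_cluster = 176.05  # molar mass of CaAl2O3(OH)2 in g/mol
mu_gas = 2.3        # assumed mean molecular weight of an H2-dominated gas
print(f * m_cluster / mu_gas)   # ~1.5e-6, matching the quoted ratio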
Presuming the existence of monomers, including (MgAl2O4)1, the subsequent cluster growth is energetically favourable for temperatures in the dust condensation zone (i.e. T < 2000 K). Extensive global optimisation searches were performed to derive the energies and structures of the most favourable cluster isomers of (Mg/CaAl2O4)n, n=1−7, including mixed Mg/Ca aluminates. For cluster sizes n=3−7, hitherto unreported GM candidates were revealed. Some of these lowest-energy isomers show large dipole moments and are therefore potentially suitable for future IR observations. From the thermodynamic properties of the (sub-)nanometre-sized clusters presented in this study, we predict a stability sequence in which CaAl2O4 clusters are the most favourable species, followed by mixed Ca/Mg aluminate clusters, MgAl2O4, and olivinic silicates. The harmonic vibrational spectra of the clusters cannot account for the commonly observed 13 µm feature in circumstellar envelopes, which likely arises from grown (sub-)micrometre-sized alumina and aluminate dust grains.
Instead, the most intense vibrational modes are found in a wavelength regime between 10.5 and 11.5 µm for (MgAl2O4)n and between 10.5 and 12 µm for (CaAl2O4)n, n=1−7, clusters.
The reaction of AlOH with atomic Mg is endothermic by 155 kJ mol−1 with respect to the formation of AlOMg+H. The formation of the AlOMgH adduct is exothermic by 133 kJ mol−1 with respect to AlOH+Mg, but involves a tight transition state that lies 123 kJ mol−1 above the reactants. The situation is similar for the AlOCaH system. AlOCaH can form exothermically (−122 kJ mol−1) from AlOH+Ca, but involves a tight transition state 117 kJ mol−1 above the reagents.
Notes to Table A.1: the columns give (1) the reaction number, (2) the reaction, (3) the CBS-QB3 heat of reaction (enthalpy) at T = 0 K, (4) the reaction rate with the pre-exponential rate constant A given as a(-b) = a × 10−b (for bimolecular reactions in units of cm3 s−1, for termolecular reactions in units of cm6 s−1), and (5) the reference or method of calculation. Reactions 1-166 are adopted from Gobrecht et al. (2022).
Fig. 4. Non-pulsating models. Top panel: Gas number densities (in cm⁻³) as a function of the radial distance. Middle panel: Gas temperatures (in K) as a function of the radial distance. Bottom panel: Fractional abundances of the prevalent gas-phase molecules H2O, OH, and CO and the metal oxides and hydroxides MgO, MgOH, CaO, CaOH, AlO, AlOH, and Al8O12 as a function of the radial distance in the non-pulsating models. Solid lines represent the MIRA model and dashed lines the SRV model.
In Fig. 5 the model abundances of the molecular precursors related to the formation of MgAl2O4 and CaAl2O4 are shown as a function of the radial distance for the SRV model. At around 2.2 R⋆ these species show a distinct peak, where Al2O3H and Al2O3H2 reach abundances in the range of 10⁻¹²−10⁻¹¹. This peak is related to the emergence of Al8O12 clusters, as shown in Fig. 4. Whereas the Al2O3H abundance remains approximately constant at larger radial distances and does not lead to MgAl2O4 formation, Al2O3H2 keeps increasing and eventually triggers the formation of
Fig. 5. Fractional abundances of the molecular precursors related to the aluminate formation and nucleation as a function of the radial distance in the non-pulsating SRV model.
Fig. 8. Pulsating MIRA model. Top panel: Gas number densities (in cm⁻³) as a function of the pulsation phase, i.e. time, and the grid of radial distances. Middle panel: Gas temperatures (in K) as a function of the pulsation phase, i.e. time, and the grid of radial distances. Bottom panel: Fractional abundances of the prevalent gas-phase molecules H2O, OH, and CO and metal oxides and hydroxides MgO, MgOH, CaO, CaOH, AlO, AlOH, and Al8O12 as a function of the pulsation phase and the grid of radial distances.
Fig. 9. Fractional abundances of the considered nucleation clusters as a function of the pulsation phase and the grid of radial distances in the pulsating SRV model.
Fig. 11. Fractional abundances of the considered molecular precursors related to aluminate nucleation as a function of the pulsation phase and the grid of radial distances in the pulsating MIRA model, excluding alumina clustering reactions.
Fig. 23. Harmonic vibrational spectra of (MgAl2O4)n clusters as a function of wavelength. The normalised IR intensities are plotted with a Lorentzian profile and a full width at half maximum of 0.033 µm.
Fig. 24. Harmonic vibrational spectra of (CaAl2O4)n clusters as a function of wavelength. The normalised IR intensities are plotted with a Lorentzian profile and a full width at half maximum of 0.033 µm.
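The broadening described in the captions of Figs. 23 and 24 is easy to emulate: each harmonic mode contributes a Lorentzian of 0.033 µm full width at half maximum, and the profiles are summed and normalised. The mode positions and intensities below are made-up placeholders, not values from the paper:

```python
import numpy as np

def lorentzian_spectrum(positions_um, intensities, fwhm_um=0.033,
                        grid=np.linspace(8.0, 14.0, 2000)):
    """Sum of Lorentzian profiles, one per vibrational mode."""
    gamma = fwhm_um / 2.0                      # half width at half maximum
    spec = np.zeros_like(grid)
    for mu, inten in zip(positions_um, intensities):
        spec += inten * gamma**2 / ((grid - mu)**2 + gamma**2)
    return grid, spec / spec.max()             # normalised intensities

# Placeholder mode list (wavelength in micron, relative intensity):
wl, spec = lorentzian_spectrum([10.7, 11.2, 11.9], [1.0, 0.6, 0.3])
```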
Once Al2O3H has formed, it can be oxidised to Al2O4H with ∆rH(0 K) = −201 kJ mol⁻¹. In principle, Ca could react exothermically with Al2O4H, which can be produced by reaction 6. However, Al2O4H2 forms more quickly owing to the large H2O concentrations compared with OH. Therefore, as a dominant route for making calcium aluminates, atomic Ca can react with Al2O4H2:

Ca + Al2O4H2 ⇋ CaAl2O4 + H2   (10)

Fig. 3. Potential energy diagrams for reactions 8, 9, 10, and 11.

Fig. 7. Pulsating SRV model. Top panel: Gas number densities (in cm⁻³) as a function of the pulsation phase, i.e. time, and for the grid of radial distances r = 1 R⋆−3 R⋆. Middle panel: Gas temperatures (in K) as a function of the pulsation phase, i.e. time, and for the grid of radial distances. Bottom panel: Fractional abundances of the prevalent gas-phase molecules H2O, OH, and CO and metal oxides and hydroxides MgO, MgOH, CaO, CaOH, AlO, AlOH, and Al8O12 as a function of the pulsation phase.
model, reaching fractional abundances of 10⁻¹⁰ at Φ = 1.0. These abundances do not survive the passage of the subsequent pulsational shock at 2.5 R⋆ and are about four orders of magnitude lower than the solar abundance of Al and Ca. Moreover, the Ca inclusion to form the CaAl2O4 monomer and its hydroxylated
Table 3. Geometric properties and binding energies of the (MgAl2O4)n GM candidate clusters.
Table 4. Geometric properties and binding energies of the (CaAl2O4)n GM candidate clusters.
Table A.1. Reaction rate network. | 13,298.6 | 2023-10-12T00:00:00.000 | [ "Physics" ] |
Alternatives to staff reduction in the context of labour digitalization
The article discusses alternative forms of employment in the process of staff reduction amid the development of information and communication technologies in the world of work. Using the results of their research, the authors propose a critical evaluation of Russian legislation regarding the regulation of self-employment and consider new forms of employment and remote work legislatively enshrined in the Labour Code of the Russian Federation. One of the problems highlighted in the article relates to the lack of the digital skills needed to use the Internet, which limits the possibilities of the population and employers in using alternative forms of staff reduction and in interacting with employment services. Changes in the structure of the economy and the transition of personnel to remote work are accompanied by a reduction in participation in the trade union movement, as a result of which an employee may be forced to terminate an employment contract without social guarantees and compensation. The authors argue that alternative solutions for regulating the employment of the population should be based on the principles of social partnership, uniting the efforts of all interested parties: the state and society, representatives of employers, and workers themselves.
Introduction
The digitalization of the labour market and the employment sphere changes the behaviour strategies of the employee and the employer in connection with potential or ongoing staff layoffs, which are accompanied by the de-standardization of employment. New forms of employment are emerging all over the world: telecommuting, part-time, hybrid, irregular, temporary, freelance, self-employment, work on digital labour platforms, work using mobile or cloud applications, and others. The scaling up of these forms of employment is associated with the use of information and communication technologies, which allow workers to be always in contact with employers and clients and to present themselves on digital labour platforms.
The authors used the results of their own research obtained within the framework of the following research projects and government assignments: "Digitalization of the labour market and employment in Russia: trends and development mechanisms", financed from the funds of the Federal State Budgetary Educational Institution of Higher Education "Plekhanov Russian University of Economics" (Order No. 867 dated 28/06/2021); "Components, social standards and indicators of the level and quality of life of the population in modern Russia: qualitative identification and quantitative assessment in conditions of socio-economic inequality" (No. 0137-2019-0032), financed from the funds of the Federal Scientific Research Center of the Russian Academy of Sciences; and "Organizational and financial mechanisms to support the employment of the population in 2021-2023, aimed at reducing the unemployment rate", carried out by the Scientific Research Institute for Basic Research under the state order of the Ministry of Finance of Russia (No. 092-00001-21-00).
The empirical methods are based on the analysis of foreign and Russian scientific literature. Field research and a combination of quantitative and qualitative analysis methods, online surveys, and personal and telephone interviews were used to assess the challenges and opportunities of transforming the employment service of the Russian Federation to support laid-off workers and unemployed citizens.
1 Assessment of legislation on self-employment
Russian studies show that, in order to reduce the costs associated with the guarantees to employees provided by the Labour Code of the Russian Federation, employers can use non-standard forms of employment that fall outside the scope of labour law, which allows them not to pay severance pay, compensation for unused vacation, etc. Employers, in violation of labour legislation, massively transfer hired workers to the category of self-employed, concluding civil law contracts with them, or prefer to hire new employees on the terms of similar contracts. Of the 1 million self-employed registered in 2020, 400 thousand are former employees, from whose salary fund the employer paid insurance premiums and withheld personal income tax at 13% of each employee's salary. The self-employed pay 4% or 6% of their income, depending on whether they work for individuals or for legal entities [1].
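A rough numerical illustration of the employer's incentive follows, using the 13% personal income tax and the 4%/6% self-employment tax quoted above; the 30% combined insurance premium rate is our assumption for illustration, since the article does not state it:

```python
# Illustrative comparison of payroll costs: employee vs. self-employed contractor.
# The 30% insurance premium rate is an assumption for illustration only;
# the 13% personal income tax and 4%/6% self-employment tax are from the text.
gross_pay = 100_000  # hypothetical monthly payment, RUB

employee_cost = gross_pay * (1 + 0.30)       # employer also pays insurance premiums
employee_take_home = gross_pay * (1 - 0.13)  # 13% personal income tax withheld

contractor_cost = gross_pay                  # employer pays no premiums
contractor_take_home = gross_pay * (1 - 0.06)  # 6% tax when working for legal entities

print(f"employer saves {employee_cost - contractor_cost:,.0f} RUB per month")
print(f"worker nets {contractor_take_home - employee_take_home:,.0f} RUB more,")
print("but loses pension contributions, paid leave, and severance guarantees")
```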
The transfer of employees to the category of self-employed frees the employer from the obligation to pay insurance premiums for the employee and to guarantee the compensation provided for by the Labour Code of the Russian Federation. At the same time, a worker in the status of self-employed, who is in fact an employee, finds himself in a vulnerable position, deprived of guarantees and social benefits. The signs of labour relations established in Article 15 of the Labour Code of the Russian Federation do not make it possible to unambiguously draw the line between self-employment and labour relations.
The legal status of self-employment is a subject of heated debate [2]. The position of the International Labour Organization, based on the premise that "Countries should strengthen, and sometimes adapt, their social protection systems to ensure that all workers benefit from social protection coverage" [3], seems justified. A critical assessment of the Russian legislation on self-employment, which determines the legal status, rights, and guarantees of this category of workers, is therefore advisable.
2 Alternative forms of employment in case of staff reduction
The development of non-standard forms of employment allows both the employee and the employer to implement alternative scenarios in the case of staff reduction. The employee has more opportunities for part-time jobs, secondary employment, and work via the Internet. The employer, as an alternative to the dismissal of personnel, could use remote, part-time, or temporary forms of employment [4].
The authors compiled a table of new forms of employment that serve as alternatives to staff reduction in Russia (Table 1).

Table 1. New forms of employment [5].
- Interim management: highly qualified specialists are temporarily engaged by the company to carry out a specific project, solve a specific problem, or provide anti-crisis management.
- Job sharing: an employer hires two or more workers to do the work together (with all co-performers sharing one full-time job).
- Casual work: a type of work where employment is neither stable nor continuous. There are two main types: intermittent work, where employers approach workers on a regular or irregular basis to conduct a specific task, often related to an individual project or to seasonally occurring jobs; and on-call work, where an employer does not guarantee regular employment but calls the employee in on demand.
- ICT-based mobile work: employees do their work from any location, at any time, with the support of modern technologies and remote access to the computer systems of the employer.
- Voucher-based work: the employer pays the worker by means of vouchers purchased from an authorized organization (generally a governmental authority), which cover wages and contributions to the social insurance system.
- Portfolio work: the self-employed and freelancers work for many clients, performing small tasks for each; the candidate's portfolio with the results of his work plays an important role. It is used in some areas, for example in the creative field and in the scientific and technical field.
- Platform work: employers and workers, service providers and consumers of services communicate with each other through an online platform, with large tasks usually spread across a virtual cloud of workers.
- Collaborative self-employment: freelancers, the self-employed, and micro-enterprises cooperate to fulfil the minimum size criteria for applicant enterprises or to overcome professional isolation (co-working, cooperatives).
- Outstaffing: removal of employees from the staff of the actual employer and their transfer to the staff of a supplier employer, such as a private employment agency or recruitment agency.
However, the use of such forms of employment is associated with the risks of not being in demand in the labour market and of loss of income for employees, especially among older people. Psychological pressure is growing: many workers experience constant stress and social insecurity as precarious work increases.
3 Remote work
According to Eurostat data, the share of teleworkers increased from 12.4% to 16.1% between 2008 and 2019 alone [6]. In the long term, remote work will become an increasingly common alternative to staff cuts. ILO experts estimate the potential of remote work at 18% of jobs (from 12-13% in low-income countries to 25% in developed countries), which is about six times more than the proportion of those who worked remotely before the pandemic [7].
In Russia, conditions have been created for the development of this form of employment: the Labour Code of the Russian Federation regulates the features of remote work, including a hybrid form that combines work at home and in the office. The transfer of workers to remote work as an alternative to dismissal is suitable only for some companies, since it has a limited character and is more focused on office personnel [8].
With all the advantages of remote work as an alternative to staff reduction, questions remain about the organization of remote work at the company level. Companies and employees need to approach the remote work format responsibly and conduct trainings and courses on managing and organizing work remotely (time management, communication skills, stress management skills when working with large flows of information) in order to increase work efficiency, exercise control, create a comfortable environment for the interaction of colleagues, promote corporate culture, and minimize the level of stress [9].
4 Digital Skills Assessment
Research by foreign authors has shown that employers searching for workers in more concentrated labour markets demand higher cognitive, social, and organizational skills. They write that upskilling is more prevalent among low-skill workers than high-skill ones, and that these workers are also more vulnerable to employment instability and low wage growth [10].
As our research shows, in the context of the digitalization of the economy, those who lack the necessary digital skills prevail among dismissed workers. Mastering digital technologies makes it possible to increase the competitiveness of personnel and maintain social status, but it requires personal motivation as well as access to the digital environment.
According to statistics, in 2019, 77% of the surveyed households had access to the Internet, which is lower than in the member states of the European Union, with the exception of Bulgaria, where this figure is 75%. In 2019, 72.6% of households in the Russian Federation used the Internet on a daily basis. At the same time, statistics indicate that the non-use of the Internet in households (as a percentage of the number of households without access to the Internet) occurs for the following reasons: the lack of need to use the Internet, unwillingness to use it, or lack of interest in it (66.3% in 2015; 70.7% in 2019); insufficient skills to work on the Internet (16.8% in 2015; 32.5% in 2019); and lack of sufficient financial resources to connect to the Internet (18.6% in 2015; 20.8% in 2019). Only 7.3% of respondents indicated a lack of technical capabilities to connect to the Internet in 2019 [11]. Therefore, professional retraining programs are important: they will help maintain the activity of labour resources in a situation of limited supply and in the search for talent [12].
5 Transfer to online services of the employment service
New strategies for employers and employees, and alternatives for dismissed employees, are accompanied by an accelerated transition to the online services of the employment service. For the employer and for the employee, this means a more convenient format of online interaction on the Job in Russia portal, applying for vocational training in the framework of programs to reduce tensions in the labour market, etc.
However, the analysis of the functioning of the Internet portal revealed a number of problems: the results of user satisfaction surveys are not published; the probability of failures in its operation remains; employers see a restriction of their right to choose the form of interaction with the employment service; and the population lacks Internet skills.
6 The role of the social partnership
According to the research results, we revealed a tendency towards a reduction in the role of social partnership in regulating labour and employment issues, which makes it possible for the employer to feel "more comfortable" when releasing personnel and puts the employee in a vulnerable position. Changes in the structure of the economy, the transfer of personnel to "remote employment", and robotization are accompanied by a reduction in the scale of the trade union movement. In such conditions, and in the absence of a trade union, an unscrupulous employer can, when dismissing employees, compel an employee to terminate the employment contract "by agreement of the parties" and thereby avoid paying all required compensation. Digital technologies, while creating new employment opportunities, simultaneously reinforce the challenges of protecting and representing workers' rights in the digital economy. In order to respond to the challenges of digitalization, the Russian trade union movement needs new organizational forms that go beyond specific organizations, as well as the use of online technologies and online collective forms of action for the mass of workers.
Conclusion
In the context of the digitalization of the employment sector, the problem of staff reduction remains inevitable. In this regard, the behaviour strategies of the employee and the employer are changing, since the employer has opportunities for alternatives to staff reduction and for providing workers who have the necessary digital skills with new jobs. At the same time, the worker becomes more vulnerable in social and labour relations, since the de-standardization of employment is often implemented outside the legal framework. Through legal regulation, it is therefore necessary to use the positive possibilities of the digital transformation of the world of work and to ensure the protection of the rights and interests of the employee, as well as to maintain a high level of quality of working life. The use of non-standard forms of employment is associated with overcoming new challenges in finding optimal management solutions and will require new forms of and approaches to balancing a set of preventive employment policy measures against the choice of means of providing maximum support to those who have already lost their jobs, including financial and other incentives to increase their professional and geographic mobility. It is necessary to master the digital skills that will increase the competitiveness of personnel, maintain social status, and provide access to the digital environment, including in interaction with employment services and trade unions. | 3,322.2 | 2021-01-01T00:00:00.000 | [ "Economics", "Business" ] |
Development and Application of Artificial Intelligence in Auxiliary TCM Diagnosis
As an emerging comprehensive discipline, artificial intelligence (AI) has been widely applied in various fields, including traditional Chinese medicine (TCM), a treasure of the Chinese nation. Realizing the organic combination of AI and TCM can promote the inheritance and development of TCM. The paper summarizes the development and application of AI in auxiliary TCM diagnosis, analyzes the bottleneck of artificial intelligence in the field of auxiliary TCM diagnosis at present, and proposes a possible future direction of its development.
Introduction
AI is the main force of the fourth scientific and technological revolution [1], which is dedicated to embodying human intelligence through computational methods. It is widely used in various fields and currently mainly possesses functions such as voice and image recognition, logical reasoning ability, and emotion recognition [1,2]. Traditional Chinese medicine (TCM) is the product of the successful combination of ancient Chinese macroscience and the medical practice of that time [3]. After thousands of years of accumulation and precipitation, a unique diagnosis and treatment system has been gradually formed, and its therapeutic effect has been widely recognized. According to TCM theory, the physiological and pathological changes of the human body's viscera and bowels, yin and yang, and qi and blood can be reflected on the outside, such as the face, tongue, pulse, and voice.
rough "inspection, listening and smelling examination, inquiry, and palpation" to grasp the basic situation of the patient, the information of the four examinations is integrated to achieve the correlation of all four examinations, to realize the purpose of diagnosing diseases and symptoms. As the basis of syndrome differentiation and treatment, the four examinations are easily influenced by objective conditions such as environment and light source, as well as the subjective judgment of doctors, and lack objective and quantitative indicators.
Through the combination of AI and TCM diagnosis, a large amount of TCM diagnostic information can be collected, organized, and analyzed, thus providing the possibility of establishing disease-pattern models and promoting the objective and scientific development of TCM diagnosis.
AI with machine learning and deep learning as the mainstream has ushered in a new upsurge [4]. In the 1970s, AI was applied in the field of TCM diagnosis [5], which provided a rare opportunity for the development of objective and modernized TCM diagnosis, but the problems of logical reasoning and objective quantification were not well solved and its development was slow [6]. In recent years, thanks to the rapid development of microsensors [7], computer image analysis, speech recognition technology [8], and deep learning [9,10], the programmatic innovation of TCM has been accelerated and milestone progress has been made in the standardization and normalization of TCM diagnosis [11]. The purpose of this paper is to review the application and development of AI in assisting TCM diagnosis and to analyze the current development bottlenecks and future development directions of AI-assisted TCM diagnosis.
Application of AI in the Field of Medicine
The advent of computers in the 1940s prompted people to explore the use of computers to replace or extend part of human mental work [4]. The concept of artificial neural networks was first introduced by Donald Hebb in 1949 [12], marking the nascent stage of AI. The term artificial intelligence was first coined by McCarthy in 1955, when AI was defined as the science and engineering of making intelligent machines [13]. Over the years, the term has been overused to include various computerized automated systems, logic programming, probabilistic algorithms, and remotely controlled robots [13]. It is worth mentioning that convolutional neural networks (CNN), an important part of deep learning (DL), have extensive application in various subfields of medical image analysis, including classification, detection, and segmentation. For example, the TCM facial diagnostic detector and tongue manifestation analysis system mentioned below both benefit from the development of machine learning and neural networks, while machine learning-based speech recognition technology plays an important role in the listening examination.
In the 1970s and 1980s, Stanford University's Experimental Computer Research Program in Medicine was established [14], and medical expert systems were broadly used in clinical diagnosis [15], laying the foundation for the development of AI in the medical field. Nowadays, AI is widely used in modern medicine both for the clinical diagnosis of common diseases [16] and for image-assisted diagnosis [17,18]. Hirasawa et al. [19] designed a CNN system capable of automatically identifying gastric cancer from a large number of endoscopic images, which is promising in aiding the diagnosis of gastric cancer. Numerous studies have shown that DL systems have a powerful ability to diagnose diabetic retinopathy [20]. Some achievements have also been made in the application of auxiliary surgery [21], the most representative being the da Vinci surgical robot [22]. Multiple pieces of evidence support that AI and big data classifiers play a vital role in the field of organ transplantation [23]. A group of researchers has proposed a facial analysis framework for genetic syndrome classification, called DeepGestalt, which uses computer vision and DL techniques to quantify the similarity of hundreds of symptoms. The accuracy of the test of identifying correct symptoms on 502 different images was 91%, which was better than that of clinicians. DeepGestalt has potential application value in the phenotypic evaluation of clinical genetics and gene testing [24]. The application of artificial intelligence nursing providers (AICP) in the field of mental health care and treatment is also an embodiment of the development of AI virtual technology [25]. The combination of TCM and AI began in the 1970s, with the birth of the first international expert system of TCM, named "TCM Guan Youbo Hepatitis Diagnostic and Treatment Procedures", as a landmark event.
In the decade since, AI has also made progress in TCM disease diagnosis, assisting in standardizing TCM diagnostic models [26]. Recently, Zhang et al. [27] proposed an AI-based TCM auxiliary diagnosis system, which can diagnose 187 common TCM diseases and related syndromes. The prediction accuracies of the top 1, top 3, and top 5 disease types were 80.5%, 91.6%, and 94.2%, respectively, indicating precise diagnostic accuracy. Besides, AI has a positive impact on the development and quality monitoring of new Chinese medicines, the construction of Chinese medicinal prescription models, and acupuncture point combination [28,29]. Yao et al. [30] proposed an ontology-based artificial intelligence model for medicine side-effect prediction, and these predictions were validated with neural network structures; however, the model is highly dependent on sufficient clinical data, and more in-depth exploration to improve the accuracy of the predictions will be necessary in the future.
Application of AI in Auxiliary TCM Diagnosis
3.1. Inspection. As the first of the four TCM examinations, inspection has the characteristics of intuitiveness and simplicity and plays an important role in the TCM diagnosis.
Through inspection, the physician observes the patient's general or local appearance and morphology, thus achieving the goal of determining the patient's disease state. The contents of TCM inspection are numerous, with particular emphasis on facial and tongue diagnosis in clinical practice.
Facial Diagnosis.
Changes in facial color and luster can reflect the glory and decline of qi and blood of the corresponding viscera, bowels, and meridians. AI plays a key role in the recognition and extraction of facial information, so it has been used to study the differences in and correlations of facial information between different diseases as well as between different patterns of the same disease. Dong [31] applied a TCM face digital detector to collect and analyze the facial color characteristics of patients with coronary heart disease, chronic renal failure, and chronic hepatitis B. The results showed that there were significant differences in the facial color index between the three diseases, indicating that there is a pattern of changes in the facial color and its parameters in different visceral diseases. Liu et al. [32] used a TCM face detection instrument to collect the facial features of 60 patients with chronic nephritis, to study the facial features of patients with damp-heat type chronic nephritis and to analyze the relationship between the facial features of nephritis patients and changes in renal function. It was found that there were differences in facial color parameters between different types of patients and that there was a correlation between facial color parameters and renal function testing indicators. Guo et al. [33] applied the TCM face detection instrument to carry out a study on the correlation between different stages of chronic renal failure and changes in face information and found that the face color index of patients in the decompensated stage of renal function and the uremia stage decreased significantly, which confirmed that there is a certain correlation between the disease stages of chronic renal failure and the face color parameters.
Tongue Diagnosis.
The tongue and the internal viscera and bowels are connected by meridians. The exuberance and debilitation of the healthy qi or pathogenic qi and the changes of qi, blood, fluid, and humor can be observed through the tongue manifestation. Tongue diagnosis mainly includes looking at the tongue body and the tongue fur. The tongue body mainly reflects the patient's exuberance and debilitation of qi and blood and the strength and weakness of the viscera and bowels. The location and nature of the disease can all be reflected by the tongue fur. To integrate the collection, processing, and analysis of tongue manifestation into a single procedure and make TCM tongue diagnosis more intelligent, the development and application of the tongue manifestation analyzer and the "TCM tongue diagnosis automatic identification system" have been set in motion.
As early as the 1990s, the "TCM tongue diagnosis automatic identification system" established by Tsinghua University and Xiyuan Hospital of the China Academy of Traditional Chinese Medicine realized the quantitative analysis of the tongue body and tongue fur. In an experiment using this system to observe the tongue body of patients with blood stasis, the analysis result was 86.34% consistent with visual observation [34]. At the beginning of the 21st century, Jiang et al. [35] designed a computerized TCM tongue diagnosis system to analyze the characteristics of the tongue using fuzzy theory, which can initially read the amount of tongue fur, the bias in its distribution, and its thickness. Cui et al. [36] applied the "TCM tongue diagnosis expert system" to quantitatively study the tongue manifestation of patients with stroke disease, and the results were consistent with the characteristics of the mechanism of disease changes in the acute and recovery phases of stroke. Zhang et al. [37] designed a Bayesian network-based tongue diagnosis auxiliary system, whose accuracy was higher than 75% in identifying the tongue manifestation of healthy individuals and of patients with pulmonary heart disease, appendicitis, gastritis, pancreatitis, and bronchitis. Lo et al. [38] extracted nine tongue manifestation features, such as tongue color and tongue body, using an automatic tongue diagnosis system and further divided the extracted features by region to achieve screening for early breast cancer. Han et al. [39] used a tongue diagnosis information acquisition system to collect tongue manifestations from colorectal cancer patients and healthy individuals and analyzed the thickness of their tongue fur. The study showed that the tongue fur of colorectal cancer patients was significantly thicker than that of healthy people, which provides a basis for tongue diagnosis as an early screening tool for colorectal cancer.
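A minimal sketch of how such an image-based tongue classifier can be assembled with a convolutional network is shown below; the architecture, input size, and the six-class output are illustrative choices of ours, not the design of any of the cited systems:

```python
import torch
import torch.nn as nn

class TongueCNN(nn.Module):
    """Toy CNN for classifying cropped tongue images into diagnostic classes."""
    def __init__(self, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):                 # x: (batch, 3, 64, 64) RGB tongue crops
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = TongueCNN()
logits = model(torch.randn(4, 3, 64, 64))  # four dummy 64x64 images
```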
The tongue manifestation analyzer can be adjusted according to the patient's tongue diagnostic environment and tongue protrusion posture, and it can acquire the tongue image in a multidimensional manner, analyze the tongue manifestation completely, evaluate the tongue accurately, and finally store the data and images [40]. In recent years, many scholars have used tongue analyzers to study the correlation between pathological evidence and tongue manifestation and between tongue manifestation and objective indicators. Xu et al. [41] applied the TP-1 TCM tongue and pulse condition digital analyzer to study the tongue manifestation of patients with deficiency symptoms of chronic renal failure and found that the tongue color index and the moistness and dryness index were of great reference significance for the differentiation, with the tongue color index being especially meaningful. Zhang et al. [42] used a DS01-B tongue information acquisition device to collect tongue color information from 273 nontraumatic femoral head necrosis patients, used frequency analysis and cluster analysis to analyze the distribution characteristics, and concluded that the tongue color indices of different ARCO (Association Research Circulation Osseous) stages have different characteristics. This provides an objective basis for the TCM diagnosis of nontraumatic femoral head necrosis.
Listening and Smelling Examination.
Listening and smelling examination is a diagnostic way for physicians to understand the various abnormal sounds and smells emitted by patients through hearing and smelling [43]. Listening to sounds includes voice, breathing, coughing, yawning, sighing, snoring, sneezing, borborigmus, and splashing sound [44]. By smelling and listening, the doctor can determine the nature and location of the disease and predict the progression and prognosis of the disease [45].
Listening Examination.
In the listening examination, AI is mainly used in physique identification, disease diagnosis, syndrome type research, and the evaluation of clinical efficacy through voice information. Wang [46] collected the voices of normal adults aged 20 to 79 years, used a computer voice analyzer to obtain voice data, and concluded that the voice changes with age and that there is a positive correlation between voice changes and the exuberance and debilitation of qi. This provides an objective basis for TCM to understand the deficiency and excess syndromes of voice diseases at different ages. Qian et al. [47] used the BD-SZ auxiliary diagnostic instrument to identify the five-phase constitution from the audio characteristics of 36 Parkinson's sufferers. The results showed that there were more wood, earth, and water constitutions and fewer metal constitutions, with the wood constitution being the most common, suggesting that Parkinson's disease and the wood constitution may have a certain correlation. Dong et al. [48] used the "Collection System of TCM Auscultation" to capture the speech signals of chronic pharyngitis patients, obtained the energy feature data of different frequency bands by the wavelet packet decomposition method, and found that there were different energy characteristics in several frequency bands between patients with chronic pharyngitis and normal people, as well as among the chronic pharyngitis lung qi deficiency group, the phlegm-heat accumulation group, and the yin deficiency and lung dryness group. Li et al. [49] collected the TCM acoustic parameters of patients with bronchial asthma in remission before and after treatment using an "acoustic information acquisition system" and found that the resonance peak indexes F1 and F2 of patients with asthma in remission were significantly reduced after treatment, which provides an effective basis for the clinical efficacy evaluation of bronchial asthma.
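The band-energy features used in studies like that of Dong et al. can be approximated as follows; the wavelet family, decomposition depth, and sampling rate are our illustrative choices rather than the settings of the cited work:

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_packet_energies(signal, wavelet="db4", level=4):
    """Relative energy of each frequency band after wavelet packet decomposition."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                            mode="symmetric", maxlevel=level)
    bands = wp.get_level(level, order="freq")        # 2**level frequency bands
    energies = np.array([np.sum(np.square(node.data)) for node in bands])
    return energies / energies.sum()                 # normalise to relative energies

# Example: 1 s of a dummy 8 kHz voice recording
x = np.random.randn(8000)
features = wavelet_packet_energies(x)                # 16 relative band energies
```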
AI has also been applied to the study of the phoneme theory of the five viscera in the listening examination. Based on the theory of "five visceral phonemes", Chen et al. [50] used the "TCM acoustic diagnosis acquisition system" to collect sound signals from patients with lung, liver, spleen, kidney, and heart diseases as well as from normal people and analyzed the sound signals using the sample entropy method. The differences in the sample entropy values of the six time-domain frequency bands were statistically significant (P < 0.05), which provides an objective basis for the localization of diseases in the viscera and bowels based on listening examination information in TCM. Zheng [51] used a 25-tone analyzer to detect and analyze the average frequencies of the voices of women with hot and cold constitutions and found that the difference in the average number of times in the pinnacle and angular regions between cold and hot constitutions was statistically significant.
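Sample entropy itself is straightforward to compute. A compact (slightly simplified) estimator for a one-dimensional sound signal, with the conventional choices m = 2 and r = 0.2 times the standard deviation (conventions, not parameters taken from the cited study), might look like this:

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r) of a 1-D signal; r is r_factor * std(x)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def count_matches(mm):
        # Embed the signal into overlapping templates of length mm and count
        # template pairs whose Chebyshev distance is within r (no self-matches).
        templates = np.lib.stride_tricks.sliding_window_view(x, mm)
        count = 0
        for i in range(len(templates) - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)
        return count

    b = count_matches(m)      # matches of length m
    a = count_matches(m + 1)  # matches of length m + 1
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

print(sample_entropy(np.random.randn(1000)))
```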
In addition, AI has also made progress in the recognition of other pathological sounds. Allwood et al. [52] described the progress of AI in signal recognition and processing of hyperactive bowel sounds, and Abeyratne et al. [53] designed an automated algorithm to diagnose pneumonia by extracting parameters from patients' cough and breath sounds.
Smelling Examination.
The application of AI in the smelling examination lies mainly in the recognition and analysis of odor signals. The e-nose, based on advanced array gas sensor technology, can grasp odor information from a holistic perspective [54], which provides good technical support for objective research on the TCM smelling examination. Lin et al. [55] used an e-nose to accurately identify the odor map characteristics of type 2 diabetes patients and healthy individuals. Based on the principle of the e-nose, Liu [56] combined sensor technology, signal processing technology, and pattern recognition technology and optimized an artificial neural network recognition algorithm to construct an oral odor detection system for TCM diagnosis.
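Once the sensor array has produced a response vector, the odor-map recognition step reduces to an ordinary pattern-classification problem. A minimal sketch on entirely synthetic data, with a hypothetical 16-sensor array, is shown below; real e-nose pipelines add baseline correction and feature extraction:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in data: each row = steady-state responses of a 16-sensor
# array, each label = 1 (patient breath sample) or 0 (healthy control).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))
y = rng.integers(0, 2, size=200)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X[:150], y[:150])                       # train on the first 150 samples
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```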
Inquiry.
Inquiry is a diagnostic method by which doctors obtain complete and true information about a patient's condition through purposeful questioning of the patient and his or her family in order to understand the occurrence and development of the disease. The content of TCM inquiry is mainly based on the "Ten Question Song" created by Zhang Jingyue, but nowadays it also incorporates the past history, allergy history, and family history used in modern medical records [57]. In the early stages of certain diseases, the lack of objective signs of abnormality makes it important to obtain information about the patient's condition through inquiry [58].
AI facilitates the development of TCM inquiry systems. He et al. [59] combined computer technology, intelligent information processing technology, and TCM theory in an attempt to develop a computerized TCM inquiry system. The system collects the basic information of users and the inquiry symptom information through the front desk module and the inquiry processing module, stores them in the user symptom inquiry database, and finally makes a preliminary judgment of the inquiry results through the diagnosis module based on the criteria set in the inquiry criteria database. Afterwards, they tested the system on 1767 clinical patients in internal medicine, surgery, gynecology, and pediatrics, compared the test results with the experts' interpretation, and found that the clinical interpretation rate of the system was 90%. Similarly, Zheng et al. [60], based on the TCM splenic inquiry scale and combined with TCM clinical practice, designed a standardized inquiry information collection system for TCM splenic diseases. The system has been tested by experts and clinicians and can collect systematic and standardized clinical information from patients, and its data analysis function is convenient and fast. However, there is a lack of research on and evaluation of the accuracy of its diagnosis. Liu et al. [61] have developed a TCM heart disease inquiry system, which provides a complete and standardized record of inquiry information, but the results of the diagnosis are determined by clinical experts on their own.
Palpation.
The method of palpation is for doctors to understand the health status and diagnose the patient's condition by touching and pressing certain parts of the patient's body [62]. Among its techniques, the wrist pulse-taking method is the most common and important. The movement of qi and blood can affect the changes of the pulse condition, and the pulse condition can be used to understand the location and nature of the disease. The application of AI in pulse diagnosis is mainly reflected in the acquisition and analysis of pulse condition information, which to some extent solves the problem that the results of pulse diagnosis lack objectivity, being influenced by the sensitivity of the doctor's fingertips and by clinical experience.
Among the methods of pulse condition information analysis, time-domain analysis, frequency-domain analysis, time-frequency analysis, and wavelet analysis are mainly used [63]. Yang et al. [64] applied the ZBOX-I pulse digital analyzer to study the parameters of the pulse map of the three parts of the right wrist pulse condition in patients with IgA nephropathy with spleen-kidney qi deficiency pattern by frequency-domain analysis and concluded that the 24 h urine protein quantification was negatively correlated with the pulse map parameters w/t and h3/h1 and positively correlated with h1 and h5, which could provide a relevant basis for the clinical diagnosis and treatment of patients with IgA nephropathy with spleen-kidney qi deficiency pattern. Using a TP-I digital pulse analyzer, Yan et al. [65] compared the distribution of pulse shape and the parameters of pulse maps of the right and left hands of 119 pregnant women with those of normal subjects by time-frequency analysis.
The results showed that in the normal group, normal pulse and slippery pulse were the most common, while in the pregnant group, slippery pulse and string-like pulse were the most common. The PSR 1 and BF of the left and right pulses of the pregnant group were significantly higher than those of the normal group, and the LMS 2 was significantly lower than that of the normal group. Mo and Wang [66] designed a pulse condition detection system based on wavelet analysis to identify the subhealthy population, which has high accuracy in identifying subhealth states.
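To make the analysis methods concrete, the sketch below extracts toy time-domain and frequency-domain features from a single pulse-wave period; the simplified definitions of h1 and w/t and the sampling rate are our assumptions, as the cited analyzers use more involved definitions:

```python
import numpy as np

def pulse_features(pulse, fs=200.0):
    """Toy time- and frequency-domain features of one pulse-wave period.

    h1 is approximated by the main-wave amplitude, and w by the width of the
    main wave at 2/3 of its height; the FFT stands in for the frequency-domain
    analysis mentioned in the text. (The exact definitions of h1, h3, h5, w,
    and t in the cited analyzers are more involved; this is an illustration.)
    """
    pulse = np.asarray(pulse, dtype=float)
    h1 = pulse.max() - pulse.min()                       # main-wave amplitude
    t = len(pulse) / fs                                  # period duration, s
    w = np.sum(pulse > pulse.min() + 2 * h1 / 3) / fs    # width at 2/3 height, s
    spectrum = np.abs(np.fft.rfft(pulse - pulse.mean()))
    freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fs)
    f_peak = freqs[spectrum.argmax()]                    # dominant frequency, Hz
    return {"h1": h1, "w_over_t": w / t, "f_peak": f_peak}

print(pulse_features(np.sin(np.linspace(0, 2 * np.pi, 200))))
```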
In terms of pulse condition information acquisition, Sun [67] designed a pulse condition acquisition system based on a mobile terminal. Through comparison experiments with a standard pulse diagnosis system and validation analysis of the pulse characteristics of arteriosclerosis patients, the results showed that the system could accurately acquire the pulse information of normal humans and arteriosclerosis patients. Zhang [68] designed a wearable pulse condition detection and analysis system. Jin [69] designed a portable three-position pulse condition acquisition system that can meet the needs of home healthcare and experimental research. In recent years, the development of TCM remote pulse diagnosis systems has also begun to emerge. Wang and Bai [70] designed an adjustable closed-loop remote pulse diagnosis system based on virtual reality technology. She [71] designed a TCM remote pulse diagnosis system that initially accomplishes the acquisition and reproduction of the TCM finger technique and the patient's pulse. Wu et al. [72] applied the Junlan pulse diagnosis bracelet to explore the characteristics and regularity of the pulses of healthy female college students during their menstrual cycle and found that the slippery pulse was the most common during the menstrual cycle, and the rough pulse was more common during ovulation.
Discussion
AI has great potential in the development of healthcare and presents an opportunity to modernize the development of TCM diagnostics. Over the past decades, many scientists and medical researchers have contributed to their combination.
The research and development of intelligent TCM diagnostic instruments, the advent of TCM expert diagnostic systems, and the realization of TCM diagnosis on cell phone platforms are all exciting research results that have greatly promoted the development of TCM diagnosis and healthcare. Combining AI with TCM diagnosis cleverly avoids the drawback of uncertainty in doctors' subjective judgment, makes the diagnostic information more reliable, and improves the accuracy of clinical diagnosis. The application of AI in the early screening and diagnosis of certain diseases will facilitate early understanding of the disease and help halt its progression.
Although AI has made some achievements in its application to TCM diagnosis, there is still much room for development. Regarding the diagnostic accuracy of AI in auxiliary TCM diagnosis, there seems to be a lack of relevant reports. For inspection, the application of AI is limited only to facial and tongue diagnosis, and inspection has not been extended to other parts of the human body. Although objective studies of tongue color, tongue body, and tongue fur have been made in tongue diagnosis, there remain certain difficulties in studying the motility of the tongue. The tongue color is easily influenced by food and medication, and it is worthwhile to explore how to intelligently identify whether it is influenced by such factors. Facial diagnosis is mainly limited to the analysis of facial color. Although there are studies on the analysis of facial expressions in some countries, they have not established a connection between facial changes and TCM symptoms. As to facial color analysis, how to intelligently determine the true nature of the disease (for instance, whether the redness of the face is caused by a heat pattern or by a pattern of exuberant yin repelling yang) is also a question that will need to be addressed later. The role of AI in the listening examination is mainly to study the speech sounds of different diseases, but such studies are limited to lung, vocal cord, and throat diseases, while studies of the speech sounds of other clinical diseases and of other pathological sounds, such as vomiting sounds, are rarely reported. The application of AI in the smelling examination is based on the study of gas composition and of oral odor by electronic nose technology, but it has not yet been possible to determine the corresponding TCM syndrome and disease by smelling the odor. In the inquiry, human-computer dialogues can be used to obtain information about diseases, which can avoid the problem of the emotional stress that affects the presentation of medical information by some patients when facing doctors. However, there are still problems: the language of inquiry is not standardized, and the inquiry of complex diseases has not yet been realized. Based on TCM thinking, the development and research of inquiry system programs should also integrate the consultation contents of modern medicine, including past history and family history, to realize the combination of Chinese and Western medicine diagnosis. With the help of AI, experts and scholars need to develop an inquiry system that reflects humanistic care and adopts easy-to-understand language for the elderly and children; this is a direction that experts and scholars need to work on in the future. In recent years, the emergence of remote pulse diagnosis systems, pulse diagnosis bracelets, and wearable pulse diagnosis systems has provided patients with the convenience of remote pulse diagnosis, but more technical support is needed to develop a simulator that can reproduce a completely real pulse condition. Also, there is a saying in TCM theory of "correspondence between nature and human", and the change of the pulse condition is related to the seasons and natural climate; for example, spring is dominated by a string-like pulse and winter by a sunken pulse.
Over the past decades, a large number of TCM expert diagnostic systems have been developed, but due to the complexity of TCM information, the lack of unified diagnostic criteria, and the inflexibility of the systems, there are still some shortcomings in applying them to clinical diagnosis. Therefore, building a complete and true four-diagnosis information database, establishing internationally standardized diagnostic criteria, and using the advantages of deep learning algorithms such as CNN, recurrent neural networks (RNN), and deep neural networks (DNN) in the field of medicine to develop a smarter TCM diagnosis system is a key direction that experts in AI and TCM will need to work on in the future. The intelligence of TCM diagnosis will be the path to the early modernization of TCM. For AI to be universally applied in clinical diagnosis, we not only need to train multidisciplinary crossover talents and complete the development of relevant intelligent devices with certain financial support; the state also needs to improve relevant policies and regulations to protect patients' personal data and disease-related information from being leaked and used for other purposes. In addition, a reasonable cost standard for the use of smart diagnostic devices should be set. With the innovation of AI programming and data, there will be more breakthroughs for AI in the field of TCM diagnosis, and the day of realizing diagnosis at home is just around the corner. With the help of AI, TCM diagnosis will certainly be more accurate and convenient in the future and will play an important role in promoting the development of medical practice worldwide.
Data Availability
This study has no laboratory data, but the review process and reference records have been corrected and placed in the data center of Heilongjiang University of Chinese Medicine, and these data will be kept for 8 years.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of the paper.
Authors' Contributions
Chuwen Feng, Yuming Shao, and Bing Wang contributed equally to this work. Yang Li and Tiansong Yang made critical revisions of the manuscript. All authors read and approved the final manuscript. | 6,524 | 2021-03-06T00:00:00.000 | [ "Computer Science", "Medicine" ] |
Classification of Cyber and Physical Systems of Industry 4.0
An urgent task is to create a classification system for the cyber and physical technological equipment of Industry 4.0 smart factories. Smart factories are a new type of production company which works automatically. To design a smart factory, one needs to choose samples of cyber and physical systems of different purposes and unite them in automatic sections. To solve the task of choosing production machine samples, libraries of technical and tactical characteristics and an ontology description of them need to be created, the basis of which is a system for classifying cyber and physical equipment. A classification scheme of industrial cyber and physical systems for work in a smart factory of item designing is presented. The basis of the classification system is the types of technological operations realized in a company, as well as the methods and technologies that the cyber and physical systems use. The functionality of cyber and physical systems is described for the mechanical treatment and assembly workshops of an item designing company.
Introduction
Providing industry with high-tech cyber and physical technological equipment is a topical direction in the development of the industrial sector of the economy [1,2]. The cyber and physical systems market today offers a wide nomenclature of machines which may solve the tasks of automatic production, each with its own characteristics [3].
Each type of cyber and physical system [4] is capable of performing only certain technological operations using one or several digital technologies [5,6]. Within each type of cyber and physical system there are machines intended only for a particular task, with their own methods [7-9] and technologies. Dividing cyber and physical systems according to the technological operations performed and the methods and technologies used creates a classification of cyber and physical systems of industrial purpose.
This classification system is the basis for the systematization of automatic technological equipment and can be developed in digital [10] production companies, which today are viewed as Industry 4.0 enterprises.
Industry 4.0 creates [11] in machine and item designing special companies that realize item manufacturing without human participation. Such companies with humanless and paperless technologies are called smart factories.
To synthesize [12,13] an Industry 4.0 smart factory, one needs to implement criteria of company functionality and to find the parameters of the production assets. Thus the classification system of cyber and physical equipment is an information resource for creating libraries of the tactical and technical characteristics of automatic machines and an ontology describing the production processes, both placed in the smart factory cloud.
Cyber and physical systems of industrial purpose
Cyber and physical systems are a new type of technological equipment with computing resources to complete production tasks in automatic mode. Each cyber and physical system is intended for a limited number of operations only. Uniting several cyber and physical systems, with kinematic interaction and agreed production data exchange protocols, helps to create new flexible automatic production sections. The Industry 4.0 smart factories are a new type of item designing (machine designing) company with a closed loop of technological operations to produce high-tech items without humans. Self-organizing cyber and physical systems in Industry 4.0 perform technological operations from technological maps and routes with algorithms from the smart factory cloud.
In item designing, the cyber and physical equipment is placed within two production workshops of a smart factory:
- the workshop of mechanical treatment, for production operations of additive technologies to prepare the items (assembly units);
- the assembly workshop, for production operations with Machine-to-Machine and Systems-to-Systems technologies to prepare assembly units (surface mounting of electric and radio components on the printed circuit boards) and to finish the item assembly.
Assembly units are transported between the workshops, and within a workshop between cyber and physical systems, by manipulator robots under a computerized system. Control algorithms and production tasks for the robots and cyber and physical systems are formed in the smart factory cloud using the results of analysis (BigData algorithms) of standard operative plans of the technological equipment.
The classification of cyber and physical systems of industrial purpose is based on the technological operations of the mechanical treatment and assembly workshops of an Industry 4.0 smart factory. The main classes of automatic production equipment in item designing are:
- additive production equipment, to complete the technological operations of 3D-printing and to coat the assembly units with lacquer and electroplated covers;
- surface mounting equipment, with a standard set of technological operations to place electric and radio components on printed boards, to solder components, and so on, with subsequent washing of the mounted assembly units from flux residue, solder, or other contaminations;
- transport production system equipment, to load (unload) the trays of cyber and physical systems, to mark the items being produced, and to store the items, assembly units, and finished products;
- production control equipment, to check the quality of humanlessly produced items with the necessary technological operations.
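One way to turn this classification into the machine-readable equipment library mentioned earlier is a small set of typed records; the enum values mirror the classes listed above, while the sample entry is a hypothetical machine, not one from the paper:

```python
from dataclasses import dataclass, field
from enum import Enum

class EquipmentClass(Enum):
    ADDITIVE_PRODUCTION = "additive production equipment"
    SURFACE_MOUNTING = "surface mounting equipment"
    TRANSPORT_SYSTEM = "transport production system equipment"
    PRODUCTION_CONTROL = "production control equipment"

@dataclass
class CyberPhysicalSystem:
    """Library entry with tactical and technical characteristics of one machine."""
    name: str
    equipment_class: EquipmentClass
    operations: list[str] = field(default_factory=list)    # technological operations
    technologies: list[str] = field(default_factory=list)  # methods/technologies used

# Hypothetical library entry:
printer = CyberPhysicalSystem(
    name="3D-printer (example)",
    equipment_class=EquipmentClass.ADDITIVE_PRODUCTION,
    operations=["3D printing of items"],
    technologies=["extrusion", "selective laser sintering"],
)
```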
Cyber and physical systems of additive production
The main purpose of cyber and physical systems in additive production is to complete technological operations of layer-by-layer adding (growing) of materials on a base plate. In item designing this technological operation is done with three different types of cyber and physical systems (see figure 1):
- 3D-printers, which print out the items automatically;
- lacquering production machines, which lacquer the assembly units (printed boards with mounted electric and radio components);
- electrotyping production machines, which put the necessary electroplated covers on items made of metal to grant them the necessary physical and chemical properties (anti-corrosivity, high electric conductivity, wear resistance of surfaces that must be rugged, surface preparation for soldering, and others).
A 3D-printer is a complicated technological device (machine) which works automatically under numeric control (a computing device controls the completion of the technological operations). The 3D-printer functionality is based on the layer-by-layer creation (printing) of an item from a solid powder. There are several methods of 3D-printing and several additive technologies that can be supported by a 3D-printer. The most popular technologies in practice are the following:
- extrusion, or layer-by-layer creation of a solid object (item) by the melt-down method. The melted material is placed on the base in drops of small size which rapidly cool down and unite with each other; this is how the current (printed) layer is formed;
- direct (selective) sintering, or layer-by-layer creation of a solid object (item) by electron beam melting, heat melting, or laser sintering. The material (metallic powder) is melted with a heating head of the 3D-printer, with an electron beam in vacuum, or with a laser beam.
Step-by-step sintering of the metallic powder builds the item layer by layer. Unlike photopolymerization, where the base material is a liquid polymer (an organic material), sintering uses metallic powder as the base material, while the curing method is the same: exposure to laser emission. However, the laser wavelength range and output power differ considerably between these two additive technologies.
In practice, additive 3D-printing technologies are also used that are based on melting wire material under electron emission (a welding method), on jet printing, and on printing with liquid glue (bonding of a powder material), all performed automatically.
A lacquering machine is a complex technological device that operates automatically under numeric control. It applies a protective coating to the surface of a printed circuit board with installed radio and electronic components. This coating protects the electronic components installed on the board, and the materials the board is made of, against negative environmental effects (very high or very low ambient temperature, very high air humidity, and so on).
There are several methods of applying a lacquer coating to printed circuit board surfaces. The most common in practice are:
- spray application: this additive technology uses compressed air at excess pressure in a sprayer to atomize liquid lacquer onto the printed circuit board surface;
- application by immersion: this additive technology uses special baths of liquid lacquer into which the printed boards are submerged by a robot manipulator; the viscosity of the lacquer leaves a protective coating on the board after immersion. In practice, a high-quality coating requires submerging the boards two or three times (but the applied coating must harden before the next immersion);
- selective (jet) application: this additive technology applies the lacquer coating to limited areas of the printed boards (for example, after component repair in which the original coating was damaged in a small area) or to assembly units of non-standard construction.
The number of lacquer layers and their thickness significantly affect the weight of the assembly unit. In practice, a maximum of three coats is applied; applying more can damage the final coating (air bubbles can form in places, which may lead to equipment failure). After each lacquer layer is applied to the PCB (Printed Circuit Board) surface it must be cured, for which an Industry 4.0 company uses an automatic curing oven.
Electroplating machines are a class of cyber-physical systems supporting additive electrochemical technologies for processing mechanical units. To deposit a high-quality electroplated coating on a metallic surface, the working chamber of the cyber-physical system passes an electric current through the electrolyte in a bath in which the item is placed. The current density and the electrolyte chemistry are chosen (according to Faraday's law) so that the metal ions not merely bombard the item surface but penetrate its upper layer; this is how the deposited coating is anchored to the surface.
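For reference, the Faraday's-law relation invoked above fixes the deposited mass; the following is the textbook form (not a formula given in this paper), where I is the current, t the plating time, M the molar mass, z the ion valence, and F ≈ 96,485 C/mol the Faraday constant:

```latex
% Faraday's first law of electrolysis: mass deposited by charge Q = I t
m \;=\; \frac{Q}{F}\cdot\frac{M}{z} \;=\; \frac{I\,t\,M}{z\,F}
```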
The most common electroplated coatings contain nickel, chrome, gold, silver, and other metals. The electrochemical deposition of different metals is performed with the same cyber-physical equipment; the difference lies in the composition of the electrolyte, its temperature, and the mode in which the electrochemical processes are run. The choice of modes, working electrolyte temperature, and other details of the electroplating operations is made by the controller of the electroplating cyber-physical system under software hosted in the cloud (libraries of algorithms for executing the technological processes). The final stage of electroplating a metallic item is the removal of electrolyte residue from the part surface. This is done in special baths for hot (or cold) washing of parts. The washing baths are themselves automatic cyber-physical equipment in which a program controls the operating modes of the robot manipulators that submerge the parts into the washing liquid.
Electroplating is a hazardous process that is harmful to human health. Organizing an electroplating division with cyber-physical systems operating in automatic mode (without humans) can increase both the quality of the finished product and the safety of operating personnel in a smart factory.
Cyber-physical systems for surface mounting of radio and electronic components
Surface mounting of electric and radio components onto printed circuit boards is performed by cyber-physical equipment belonging to the assembly workshop of the Industry 4.0 smart factory. Surface mounting includes the following technological operations:
- applying solder paste to the printed boards (stencil printing);
- placing radio and electric components on the printed boards (component leads are aligned with the PCB pads onto which solder has already been applied);
- solder reflow, which creates electrical contact between component leads and board pads and mechanically fixes the components (to strengthen the fixation, components may rest on ceramic supports, glue, mastic, and so on, installed or applied beforehand);
- washing the PCB to remove flux residue, solder, and other contaminants formed during the item's manufacturing cycle.
Solder paste is applied by dedicated cyber-physical systems:
- a solder paste dispenser, which forms the dose of solder to be applied to the pads of the printed boards (the dose depends on the linear dimensions of the pads and on the lead pitch of the radio and electric components);
- a stencil printer, which produces metal technological masks (stencils) with holes; the mask is placed over the board when solder is applied, and the mask holes coincide with the PCB pads of the top and bottom layers of the board;
- a solder paste application machine, which precisely aligns the printed board and the mask (stencil) to apply the solder paste doses to the board; a roller presses solder across the stencil surface, and the solder penetrates through the holes onto the board pads.
Electric and radio components are placed on the printed board surface by a dedicated type of cyber-physical equipment, classified by the components it installs:
- placers of through-hole components, i.e., electric and radio parts in DIP (Dual In-line Package) packages;
- placers of planar components, i.e., electric and radio parts in SMD (Surface Mount Device) packages;
- placers of ball-lead components, i.e., electric and radio parts in BGA (Ball Grid Array) packages.
All types of placers operate automatically and support single-sided or double-sided component placement on the printed board. The placer's gripper takes electric and radio components from a belt feeder (a reel whose cells hold pre-arranged components) with a vacuum nozzle and transfers them to the required position on the board. The positioning precision on the board surface is a technical specification of the cyber-physical system and determines the minimum lead pitch of the components the system was designed to handle. The gears of the cyber-physical system are driven by a controller whose memory holds a program with a digital description of the board geometry and the board's overall circuit layout in digital form.
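As a purely hypothetical illustration of the kind of data such a controller program holds (the record fields and names below are invented for this sketch, not taken from any real placer), a placement job might look like:

```python
# Hypothetical sketch of a placement job driven by a digital board description.
from dataclasses import dataclass

@dataclass
class Placement:
    part_id: str       # component reference on the board, e.g. "C12"
    package: str       # "SMD", "DIP" or "BGA"
    x_mm: float        # target position on the board
    y_mm: float
    rotation_deg: float

job = [
    Placement("C12", "SMD", 14.2, 33.0, 90.0),
    Placement("U3", "BGA", 40.5, 21.7, 0.0),
]

for p in job:
    # A real controller would drive the vacuum nozzle through pick, vision
    # alignment and place steps; here we only log the intended action.
    print(f"place {p.part_id} ({p.package}) at ({p.x_mm}, {p.y_mm}) rot {p.rotation_deg}")
```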
In multi-product production, the technical documentation for a smart factory product (item) is stored in the company cloud and retrieved from there automatically. The technical documentation and control program are transferred into the memory of the cyber-physical system according to the requirements and schedule of the company's operational plan for manufacturing the item.
Solder reflow in an item-manufacturing company creates a reliable electrical connection between a component lead and a PCB pad. The reflow operation is performed by the following types of cyber-physical equipment:
- a convection reflow oven, inside which the solder is melted by a heated convection stream with the necessary temperature stability; the oven supports several operating modes;
- a wave soldering machine, in which the bottom of the PCB is passed over a bath of prepared solder. This machine is used for mounting components with DIP bodies (pin leads for through-hole mounting). When the bottom of the PCB passes over the bath (the prepared solder is delivered to the bottom of the board as a wave), solder remains on the component leads and can penetrate the holes between layers. By capillary action the solder reaches the top surface of the board, creating a through connection that contacts the pad of the via and the component lead.
After the technological operations of solder paste application, component placement, and solder reflow, contaminants remain on the PCB surface that lower the quality of the manufactured components. To prevent scrap after these operations, the printed boards are washed.
Printed board washing is performed by cyber-physical systems that differ in how the board interacts with the washing liquid. The following types of automatic technological equipment are used to wash printed boards:
- a jet washing bath, in which the printed board is washed with cleaning liquid under pressure, which removes mechanical contaminants from the board;
- an ultrasonic washing bath, in which the board surface is cleaned by ultrasonic-frequency oscillations of the cleaning liquid; these oscillations produce contractions of the liquid that propagate to the printed board surface;
- a bubbling bath, in which the board surface is cleaned by air bubbles generated in the cleaning liquid by a special device.
Washing baths remove contaminants from the surfaces of printed boards and of the stencils used to apply solder paste. After washing, the item (PCB or stencil) must be dried with hot air in a drying cabinet.
Cyber-physical systems of the smart factory transport infrastructure
Transport of items (assembly units, products) within the automatically operating production division of an item-manufacturing company is performed by robot manipulators belonging to the Industry 4.0 smart factory transport infrastructure, which comprises the following cyber-physical systems:
- loading (unloading) machines for parts (assembly units), which deliver items to the pallets of a production machine for technological operations (for example, component mounting) and remove items from the machine pallet to transport them to the next cyber-physical system;
- product marking machines, which apply special barcodes to items to identify the serial number of the item being manufactured, the number and content of the technological operations performed on it, and so on; barcodes, radio-frequency tags, and other means are used for product marking;
- item-turning machines, in which robot manipulators with linear and rotary motion about an axis enable certain technological operations (for example, sequential mounting of electric and radio components on both sides of a printed board using single-sided mounting equipment);
- dry storage cabinets for components, parts, and finished products, which form part of the in-workshop storage system for the items being manufactured in the company.
Integrating the cyber-physical systems of the smart factory transport infrastructure with the production machines allows item-manufacturing companies to create automatically operating assembly conveyors and collective systems for organizing warehouse storage of finished products.
Cyber-physical systems of production control
Production control monitors the quality of the item-manufacturing company's product. It uses the following types of cyber-physical systems:
- optical (infrared) inspection machines, which evaluate the quality of mounting, how well a given assembly unit meets its requirements, the item dimensions and their conformance to the item's technical documentation, and so on;
- ultrasonic inspection machines, which evaluate the quality of a given item and detect hidden defects that cannot be found visually by optical-range machines (ultrasonic inspection can reveal micro-fissures in metallic items produced by additive manufacturing);
- X-ray inspection machines, which evaluate the quality of a given item and detect hidden defects deep within the thickness of materials and parts (X-ray inspection can find fissures and breaks in printed conductors deep inside the PCB layers);
- solder paste inspection machines, which evaluate the volume of paste applied to the pads of printed boards (the volume is determined by multi-frequency measurement with mathematical processing of the paste parameters: height, area, volume, paste displacement from the pads, and so on);
- in-circuit test machines (flying probes), which evaluate the quality of electrical connections after the electric and radio components have been soldered to the board surface. After verification they can also test the functional parameters of the item in different areas of the board's electrical circuit; probes (detectors) are applied at points defined by the controller program.
Conclusion
The classification of cyber-physical systems of industrial purpose is the basis for systematizing the equipment and technology needed to execute, automatically, the technological processes of manufacturing the components of an item. The production machines shown in Figure 1 are the industrial objects that must be computerized first.
The computerized control system coordinates the interaction of the cyber-physical systems using self-organization algorithms for the production machines. These self-organization algorithms reside in the smart factory cloud and operate on the analysis of data about:
- the results of completing thematic and production orders in the item-manufacturing company;
- the operational health of the cyber-physical technological equipment;
- the availability of the materials and parts needed to complete the technological processes, and so on.
The computerized control system in production has computing resources in the physical world and computing resources located in the virtual (cloud) part of the smart factory. Control of the smart factory's production activity is performed by operators through the interface of the computerized control system, which interacts with the software that automates the execution of production processes.
"Computer Science",
"Engineering"
] |
Novel Polymorphisms and Genetic Characteristics of the Prion Protein Gene (PRNP) in Dogs—A Resistant Animal of Prion Disease
Transmissible spongiform encephalopathies (TSEs) have been reported in a wide range of species. However, TSE infection in natural cases has never been reported in dogs. Previous studies have reported that polymorphisms of the prion protein gene (PRNP) have a direct impact on susceptibility to TSE. However, studies on polymorphisms of the canine PRNP gene are very rare. We examined the genotype, allele, and haplotype frequencies of canine PRNP in 204 dogs using direct sequencing and analyzed linkage disequilibrium (LD) using Haploview version 4.2. In addition, to evaluate the impact of nonsynonymous polymorphisms on the function of prion protein (PrP), we carried out in silico analysis using PolyPhen-2, PROVEAN, and PANTHER. Furthermore, we analyzed the structure of PrP and its hydrogen bonds according to the alleles of nonsynonymous single nucleotide polymorphisms (SNPs) using the Swiss-PdbViewer program. Finally, we predicted the impact of the polymorphisms on the aggregation propensity of dog PrP using AMYCO. We identified a total of eight polymorphisms, including five novel SNPs and one insertion/deletion polymorphism, and found strong LDs and six major haplotypes among the eight polymorphisms. In addition, we identified significantly different distributions of haplotypes among eight dog breeds; however, the set of polymorphisms identified differed among the breeds. We predicted that the p.64_71del HGGGWGQP, Asp182Gly, and Asp182Glu polymorphisms can impact the function and/or structure of dog PrP. Furthermore, the number of hydrogen bonds of dog PrP with the Glu182 and Gly182 alleles was predicted to be smaller than with the Asp182 allele. Finally, Asp163Glu and Asp182Gly showed more aggregation propensity than wild-type dog PrP. These results suggest that the nonsynonymous SNPs Asp182Glu and Asp182Gly can influence the stability of dog PrP and confer the possibility of TSE infection in dogs.
The octapeptide region of dogs is identical to that of goats and sheep. The octapeptide region of cattle is composed of six repeats, containing one additional octapeptide repeat unit, R5 (PHGGGWGQ), compared to that of dog PrP [44]. The human octapeptide repeat R5 (PHGGGWGQ) contains one less glycine than that of dog PrP (PHGGGGWGQ). The octapeptide repeat of water buffalos, showing a unique R1 (SQGGGGWFQ) and R5 (PHGGGWGQ), also contains one less glycine than that of dog PrP (PHGGGGWGQ). The tandem repeat (QPGYPH) of chickens is a hexapeptide repeat, which differs markedly from that of dog PrP; the chicken hexapeptide repeat region also has a different total length than that of dog PrP (chicken: 48 aa; dog: 42 aa).
Comparison of the Distribution of the Haplotypes of PRNP Polymorphisms in Eight Dog Breeds
We compared the distribution of haplotypes of the canine PRNP gene among eight dog breeds (Figure 5). In brief, the distribution of haplotype 1 was not significantly different among the eight dog breeds (Figure 5A). However, the distributions of haplotypes 2, 3, 4, 5, and 6 were significantly different among the eight dog breeds. In detail, the haplotype 2 frequency of Maltese differed significantly from those of Toy Poodle (p < 0.01, Figure 5B) and Yorkshire Terrier (p < 0.01). The haplotype 3 frequency of Maltese differed significantly from those of Toy Poodle (p < 0.01) and Schnauzer (p < 0.01, Figure 5C). The haplotype 4 frequency of Maltese differed significantly from that of Pomeranian (p < 0.05, Figure 5D). The haplotype 5 frequency of Maltese differed significantly from that of mixed dogs (p < 0.01, Figure 5E). The haplotype 6 frequency of Maltese differed significantly from that of Yorkshire Terrier (p < 0.05, Figure 5F).
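The breed comparisons above are pairwise tests on haplotype counts. As a minimal sketch of such a test, with invented counts (the study's per-breed haplotype tables are not reproduced here), a 2×2 chi-squared test in Python could look like:

```python
# Hypothetical 2x2 chi-squared test comparing haplotype-2 carriage between
# two breeds; the counts are illustrative placeholders, not the study's data.
from scipy.stats import chi2_contingency

#                 haplotype 2, other haplotypes
table = [[40, 114],   # Maltese (2 x 77 = 154 chromosomes, hypothetical split)
         [4, 46]]     # Toy Poodle (2 x 25 = 50 chromosomes, hypothetical split)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```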
The Number of Canine PRNP Polymorphisms in Eight Breeds
We investigated the number of polymorphisms found in each dog breed (Table 4). In brief, the eight polymorphisms of the PRNP gene found in 77 Maltese dogs were 64_71delHGGGWGQP, Gly66Gly, Ser101Gly, Ala124Ala, Asp163Glu, Asp182Gly, Asp182Glu, and Pro243Pro. The five polymorphisms found in 29 Shih Tzu dogs were Gly66Gly, Ser101Gly, Ala124Ala, Asp163Glu, and Pro243Pro. The six polymorphisms found in 25 Toy Poodle dogs were 64_71delHGGGWGQP, Gly66Gly, Ser101Gly, Ala124Ala, Asp163Glu, and Pro243Pro. The one polymorphism found in 19 Yorkshire Terrier dogs was Asp163Glu. The four polymorphisms found in 15 Pomeranian dogs were Gly66Gly, Ser101Gly, Asp163Glu, and Pro243Pro. The six polymorphisms found in 11 Chihuahua dogs were Gly66Gly, Ser101Gly, Asp163Glu, Asp182Gly, Asp182Glu, and Pro243Pro. The four polymorphisms found in four Schnauzer dogs were Gly66Gly, Ser101Gly, Asp163Glu, and Pro243Pro. The three polymorphisms found in three Bichon Frise dogs were Ser101Gly, Asp163Glu, and Pro243Pro. The six polymorphisms found in mixed dogs were Gly66Gly, Ser101Gly, Ala124Ala, Asp163Glu, Asp182Glu, and Pro243Pro. Collectively, the number of polymorphisms per breed ranged from 1 to 8. The dog breed with the lowest number of polymorphisms was the Yorkshire Terrier (1); the breed with the highest number was the Maltese (8), followed by the Toy Poodle (6), Chihuahua (6), and mixed dogs (6).
Estimation of the Functional Effect of Genetic Polymorphisms of Dog PrP
We estimated the impact of the nonsynonymous SNPs and the insertion/deletion of the PRNP gene on dog PrP using PolyPhen-2, PROVEAN, and PANTHER. Detailed scores predicted by the three programs are given in Table 5. In brief, the octapeptide deletion (64_71del HGGGWGQP) was estimated to be "deleterious" by PROVEAN. The histidine residue of the octapeptide interacts with copper ions and plays a pivotal role in the function of the octapeptide repeat region. The octapeptide repeat region is related to protection against oxidative stress, N-methyl-D-aspartate receptor activity, glutamate uptake, and copper homeostasis mediated by its metal ion-binding ability [41,45]. Thus, the deletion allele of the insertion/deletion polymorphism can be deleterious to the normal physiological function of dog PrP. The Ser101Gly and Asp163Glu SNPs were predicted to be "benign", "neutral", and "probably benign" by PolyPhen-2, PROVEAN, and PANTHER, respectively. Interestingly, Asp182Gly and Asp182Glu were predicted to be "probably damaging" by PolyPhen-2 and PANTHER. However, Asp182Gly and Asp182Glu were predicted to be "neutral" by PROVEAN (Table 5).
Prediction of the Structural Alteration of Dog PrP Induced by Nonsynonymous SNPs
The 3D structure of dog PrP was visualized according to the alleles of the nonsynonymous SNPs of the canine PRNP gene (Figure 6). We analyzed the hydrogen bonds of dog PrP with the Asp163 and Glu163 alleles. Asp163 was predicted to have one hydrogen bond (2.54 Å) with Met138 (Figure 6A). Glu163 was also predicted to have an identical length of hydrogen bond (2.54 Å) with Met138 (Figure 6B). We identified two nonsynonymous SNPs (Asp182Gly and Asp182Glu) located on helix 2. Asp182 was predicted to have two hydrogen bonds (1.85 and 2.42 Å) with Arg168 and two hydrogen bonds with Ile186 (1.88 Å) and Asn178 (1.87 Å) (Figure 6C). Glu182 was predicted to have one hydrogen bond with Ile186 (1.88 Å) (Figure 6D). Gly182 was also predicted to have one hydrogen bond with Ile186 (1.88 Å) (Figure 6E).
Evaluation of Polymorphisms on the Aggregation Propensity of Dog PrP
To estimate the impact of the polymorphisms of the canine PRNP gene on the aggregation propensity of dog PrP, we utilized the AMYCO program. Dog PrPs with 64_71delHGGGWGQP, Ser101Gly, and Asp182Glu were predicted to score 0. Dog PrP with Glu163 (score 0.23) showed more aggregation propensity than that with Asp163. Dog PrP with Gly182 (score 0.12) showed more aggregation propensity than that with Asp182 (Figure 7).
Discussion
Natural interspecies transmission of TSEs has never been reported in rabbits, horses, pigs, and dogs. However, the number of TSE-resistant species has decreased as prion diseases have been reported in experimental infections. In pigs, experimental transmission of BSE via intracerebral (IC) and intraperitoneal (IP) injection was reported [46][47][48], and pig PrP transgenic mice showed BSE infection via IC injection [49]. In addition, experimental transmission of RML, ME7, 22L, 139A, 79A, 22F, CWD, BSE, SSBP/1, and CH1641 using PMCA was also reported in rabbits. Furthermore, rabbit PrP transgenic mice were infected by BSE, BSE-L, de novo New Zealand White (NZW), ME7, and RML. However, dog PrP showed resistance against several prion agents, including BSE, scrapie, CWD, ME7, RML, 22F, 22L, 87V, 22A, 79A, and 139A, in seeded PMCA [50]. MDCK cells showed resistance to the RML strain, and transgenic mice carrying the dog-specific amino acid substitution N158D also showed resistance to infection with three different mouse prion strains, RML, 301C, and 22L [17,18]. These results indicate that dogs are prion disease-resistant animals. However, these studies did not consider genetic polymorphisms of dog PrP.
Since polymorphisms of the PRNP gene have been associated with susceptibility to prion diseases [5,11,51,52], we amplified the ORF region of the canine PRNP gene to identify its genetic polymorphisms. We identified a total of eight polymorphisms, including two novel nonsynonymous SNPs and one insertion/deletion (Figure 1). We identified strong LDs and six major haplotypes among the eight polymorphisms. The distribution of haplotypes was significantly different among the eight dog breeds. In addition, the number of identified polymorphisms differed between dog breeds (Table 4). Notably, the Yorkshire Terrier showed the lowest number of polymorphisms among the breeds with more than 12 samples, a sample size sufficient to detect SNPs at 1% frequency with 96% probability (Table 4). Since wolf and dog PrPs have the same amino acid sequence, the evolutionary distance of the PRNP gene between dog and wolf can be estimated from the number of polymorphisms. In comparison with the Maltese, Shih Tzu, Toy Poodle, and Pomeranian, which showed a highly polymorphic PRNP gene, the Yorkshire Terrier is presumed to have a close evolutionary distance of the PRNP gene to the wolf.
We also estimated the impact of the polymorphisms on dog PrP using PolyPhen-2, PROVEAN, and PANTHER. All three in silico programs predicted that Asp163Glu is benign. A previous study reported that Asp163Glu did not influence the susceptibility to TSE of transgenic mice expressing the dog-specific amino acids 158Asp and 158Glu [53]; codon 158 in mouse PrP is equivalent to codon 163 in dog PrP. In the present study, we obtained similar results with the in silico programs, according to which Asp163Glu does not impact the structure and/or function of dog PrP. Notably, PROVEAN and PANTHER predicted that the p.64_71del HGGGWGQP, Asp182Gly, and Asp182Glu polymorphisms can impact the function and/or structure of dog PrP (Table 5). These estimations suggest the possibility that p.64_71del HGGGWGQP, Asp182Gly, and Asp182Glu can impact the susceptibility of dogs to TSE (Table 5). However, Asp182Gly and Asp182Glu were predicted to be neutral by PROVEAN. Because PROVEAN makes its estimate by clustering basic local alignment search tool (BLAST) hits and comparing homologs collected from a database, it predicted that Asp182Gly and Asp182Glu do not impact the function of PrP.
Next, we predicted the 3D structure of dog PrP to evaluate the impact of the three nonsynonymous SNPs, Asp163Glu, Asp182Glu, and Asp182Gly. We compared the distribution of hydrogen bonds between the Asp163 and Glu163 alleles of dog PrP and found it to be identical (Figure 6A,B). Dog PrP with Asp182 was predicted to have four hydrogen bonds, whereas dog PrP with Glu182 or Gly182 was predicted to have only one hydrogen bond (Figure 6C,D). The number of hydrogen bonds can affect the stability and structure of proteins [54][55][56]. Because the stability of PrP is related to susceptibility to prion disease, the Asp182Glu and Asp182Gly SNPs of the canine PRNP gene can influence the susceptibility of dogs to TSE. We also estimated the impact of the polymorphisms on the aggregation propensity of dog PrP and found that dog PrP with Glu163 (score 0.23) or Gly182 (score 0.12) had a higher aggregation propensity than wild-type dog PrP (Figure 7). Collectively, Asp182Glu and Asp182Gly are presumed to be deleterious. Based on our analysis, the Shih Tzu, Toy Poodle, and Pomeranian, which do not carry Asp182Glu and Asp182Gly, are presumed to be resistant to prion disease compared to the Maltese and Yorkshire Terrier among breeds with more than 12 samples. This indicates that evolutionary sensitization to prion infection may have occurred in the Maltese and Yorkshire Terrier. To confirm the impact of the Asp182Glu and Asp182Gly SNPs on the susceptibility of dogs to prion disease, future infection experiments with prion agents will be necessary in MDCK cells and in transgenic mice expressing dog PrP with the two amino acid substitutions Asp182Glu and Asp182Gly.
Although most of our analysis focused on nonsynonymous SNPs, there is recent evidence that synonymous SNPs can introduce less commonly used codons, which may alter the speed of translation and ultimately the folding, function, and stability of the mature protein [57,58]. Since prion diseases are induced by misfolded prion protein, synonymous SNPs are an important consideration, and their further study is highly desirable.
Genomic DNA Extraction and Genetic Analysis
Genomic DNA was isolated from whole blood samples of 204 dogs using a Hi Yield Genomic DNA Mini Kit (Real Biotech Corporation, Taipei, Taiwan) and a Bead Genomic DNA Prep Kit (Biofact, Daejeon, Korea) following the manufacturers' instructions. PCR was carried out using gene-specific sense and antisense primers: canine PRNP-F (TGTGCAGATGTTCTCGCTGT) and canine PRNP-R (GAAGCGGGAATGAGACACCA). These primers were designed to amplify the entire ORF region of the canine PRNP gene. We performed PCR using BioFACT™ Taq DNA Polymerase (Biofact, Daejeon, Korea). The 25 µL PCR mixture was composed of 5 µL of 10× DNA polymerase buffer, 2.5 units of Taq DNA polymerase, 1 µL of genomic DNA, 10 pmol of each primer, and 0.5 µL of a 0.2 M dNTP mixture. The PCR conditions were as follows: denaturation at 95 °C for 2 min, followed by 34 cycles of 95 °C for 20 s, 62 °C for 30 s, and 72 °C for 1 min 30 s, and a final extension at 72 °C for 5 min. All PCR products were analyzed by electrophoresis on a 1.0% agarose gel stained with ethidium bromide (EtBr). PCR products were directly sequenced using an ABI 3730 sequencer (ABI, Foster City, CA, USA). Sequencing results were read using Finch TV software (Geospiza Inc., Seattle, WA, USA).
Statistical Analysis
The chi-squared test was used to determine whether the polymorphisms of the canine PRNP gene were in HWE and whether allele frequencies differed among the eight dog breeds. We examined LD and analyzed haplotypes of the polymorphisms of the canine PRNP gene. Lewontin's D′ (|D′|), the coefficient r², and the haplotypes of the polymorphisms of the canine PRNP gene were analyzed using Haploview version 4.2 (Broad Institute, Cambridge, MA, USA).
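Lewontin's |D′| and r² follow directly from allele and haplotype frequencies; Haploview reports these same statistics. A minimal sketch of the underlying formulas (the frequencies in the example are illustrative, not the study's values):

```python
# Minimal sketch of Lewontin's |D'| and r^2 for two biallelic loci, computed
# from the allele frequencies and one haplotype frequency.

def ld_stats(p_a: float, p_b: float, p_ab: float):
    """p_a, p_b: frequencies of allele A (locus 1) and allele B (locus 2);
    p_ab: frequency of the A-B haplotype."""
    d = p_ab - p_a * p_b                      # raw disequilibrium coefficient D
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    d_prime = abs(d) / d_max if d_max > 0 else 0.0
    r2 = d ** 2 / (p_a * (1 - p_a) * p_b * (1 - p_b))
    return d_prime, r2

print(ld_stats(0.30, 0.25, 0.12))  # modest positive LD in this toy example
```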
Prediction of Protein Functional Alterations in Dog PrP
PolyPhen-2 (http://genetics.bwh.harvard.edu/pph2/index.shtml), PROVEAN (http://provean.jcvi.org/seq_submit.php), and PANTHER (http://www.pantherdb.org) were used to predict the impact of variations in the protein sequence of dog PrP on protein function. The PolyPhen-2 prediction is based on a number of features comprising the sequence, structural, and phylogenetic information characterizing the variation; variants are classified as benign, possibly damaging, or probably damaging based on pairs of false positive rate (FPR) thresholds, with scores ranging from 0.0 to 1.0. PROVEAN estimates the impact of protein sequence variations on protein function. PROVEAN scores are computed by clustering BLAST hits among homologs collected from a database (the NCBI nr database); the top 30 clusters of closely related sequences form the supporting sequence set used to make the prediction. PROVEAN scores below −2.5 indicate "deleterious", and scores above −2.5 indicate "neutral". PANTHER is based on evolutionary preservation: homologous proteins are used to reconstruct the likely sequences of ancestral proteins at nodes in a phylogenetic tree, and each amino acid can be tracked back in time from its present state to estimate how long that state has been conserved in its ancestors [38]. PANTHER calculates this preservation time (the PSEP score) to evaluate amino acid substitutions, interpreted as follows: "probably damaging", greater than 450 My; "possibly damaging", between 200 My and 450 My; "probably benign", less than 200 My. AMYCO (http://bioinf.uab.cat/amyco/) evaluates the impact of polymorphisms on the aggregation propensity of proteins; it calculates the impact of amyloidogenic sequences and the contribution of composition to the aggregation of dog PrP using the pWALTZ and PAPA algorithms.
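The published cut-offs above map onto simple decision rules. A sketch implementing exactly those thresholds (the helper names are ours and are not part of the PROVEAN or PANTHER tools):

```python
# Decision rules matching the score thresholds quoted in the text.

def interpret_provean(score: float) -> str:
    # Scores below -2.5 are called "deleterious", otherwise "neutral".
    return "deleterious" if score < -2.5 else "neutral"

def interpret_panther(preservation_time_my: float) -> str:
    # > 450 My: probably damaging; 200-450 My: possibly damaging;
    # < 200 My: probably benign.
    if preservation_time_my > 450:
        return "probably damaging"
    if preservation_time_my >= 200:
        return "possibly damaging"
    return "probably benign"

print(interpret_provean(-3.1), "|", interpret_panther(470))
```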
3D Structure Modeling of Dog PrP
The 3D structure model of dog PrP, determined by nuclear magnetic resonance (NMR) spectroscopy, was obtained from the Protein Data Bank (http://www.rcsb.org/structure/1XYK) (Protein ID: 1XYK). The impact of the nonsynonymous SNPs on dog prion protein was analyzed with the Swiss-PdbViewer program (https://spdbv.vital-it.ch/). Models of the nonsynonymous SNPs of the canine PRNP gene were generated at Asp163Glu, Asp182Gly, and Asp182Glu. Hydrogen bonds are predicted according to the interatomic distance, bond angle, and atom types: a hydrogen bond is assigned if a hydrogen lies in the range from 1.2 to 2.76 Å of a "compatible" donor atom.
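The distance criterion above is easy to state as code. A minimal sketch of the distance check alone (Swiss-PdbViewer additionally checks bond angles and atom types, which we omit here):

```python
# Distance-window check for hydrogen bond assignment (1.2-2.76 Angstrom).
import math

def is_hbond(h_xyz, partner_xyz, d_min=1.2, d_max=2.76) -> bool:
    """True if the hydrogen-partner distance falls in the accepted window."""
    return d_min <= math.dist(h_xyz, partner_xyz) <= d_max

print(is_hbond((0.0, 0.0, 0.0), (0.0, 0.0, 2.54)))  # True, cf. the 2.54 A bond above
```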
Conclusions
In conclusion, we found eight polymorphisms of the canine PRNP gene, including six novel polymorphisms, and identified strong LDs and six major haplotypes among the eight polymorphisms. Additionally, we compared the distribution of the haplotypes of the canine PRNP gene among eight dog breeds; the distribution of haplotypes was significantly different among the breeds, and the number of identified polymorphisms differed in each dog breed. Furthermore, we estimated the biological impact of the nonsynonymous SNPs and the insertion/deletion using four in silico programs. The polymorphisms 64_71delHGGGWGQP, Asp182Glu, and Asp182Gly were predicted to be deleterious, and two polymorphisms, Asp163Glu and Asp182Gly, showed more aggregation propensity than wild-type dog PrP. Finally, we predicted the 3D structure and hydrogen bonds of dog PrP according to the alleles of the nonsynonymous SNPs; the number of hydrogen bonds of dog PrP with the Glu182 and Gly182 alleles was predicted to be three less than that of dog PrP with the Asp182 allele. Based on these results, we suggest that the nonsynonymous SNPs Asp182Glu and Asp182Gly can affect the stability of dog PrP.
"Biology"
] |
All-optical coherent quantum-noise cancellation in cascaded optomechanical systems
Coherent quantum noise cancellation (CQNC) can be used in optomechanical sensors to surpass the standard quantum limit (SQL). In this paper, we investigate an optomechanical force sensor that uses the CQNC strategy by cascading the optomechanical system with an all-optical effective negative mass oscillator. Specifically, we analyze matching conditions, losses and compare the two possible arrangements in which either the optomechanical or the negative mass system couples first to light. While both of these orderings yield a sub-SQL performance, we find that placing the effective negative mass oscillator before the optomechanical sensor will always be advantageous for realistic parameters. The modular design of the cascaded scheme allows for better control of the sub-systems by avoiding undesirable coupling between system components, while maintaining similar performance to the integrated configuration proposed earlier. We conclude our work with a case study of a micro-optomechanical implementation.
I. INTRODUCTION
Achieving force measurements at the quantum limit has been a significant focus for several decades [1,2] and has fuelled the development of optomechanics [3][4][5]. Optomechanical sensors exploit the interaction of a light field with the motion of a mechanical oscillator to measure its displacement with high precision. Force measurements based on these schemes are subject to shot noise and quantum radiation-pressure backaction noise [6,7]. Shot noise is caused by the uncertainty in the number of photons over time and can be decreased relative to the signal by increasing the intensity of the optical field. In contrast, the backaction noise arises from fluctuations in the radiation pressure of the optical field, which increase with its intensity. The trade-off between these competing processes sets a lower bound on the precision of the measurement, called the standard quantum limit (SQL) [7][8][9].
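This trade-off can be made explicit with a generic minimization; the sketch below uses placeholder coefficients a (shot noise) and b (backaction) rather than the specific expressions derived later in this paper:

```latex
% Generic shot-noise/backaction trade-off as a function of optical power P:
S(P) = \frac{a}{P} + b\,P, \qquad
\frac{dS}{dP}\Big|_{P_{\mathrm{opt}}} = 0
\;\Rightarrow\; P_{\mathrm{opt}} = \sqrt{a/b}, \qquad
S_{\mathrm{SQL}} = S(P_{\mathrm{opt}}) = 2\sqrt{a\,b}
```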
The SQL is not a fundamental limit, and many different approaches have been suggested to achieve measurements with sub-SQL accuracy. These approaches include frequency-dependent squeezing [10][11][12], variational measurements [13][14][15], dual mechanical resonators [16][17][18][19] and optical spring effects [20][21][22]. In essence, these ideas go beyond the SQL by measuring a quantum non-demolition (QND) variable of the probe, that is, a variable that commutes with itself at different moments in time. In a QND measurement, the backaction is transmitted to the canonically conjugate observable and thus avoided. A more general approach to QND measurements is gained by introducing another system that acts like a reference frame with an effective negative mass [23]. By measuring with respect to this reference system, a QND measurement is realized. When the reference system is a harmonic oscillator, an effective negative mass amounts to a negative eigenfrequency. This idea was first experimentally utilized in demonstrating Einstein-Podolsky-Rosen (EPR) states of two atomic spin oscillators of positive and negative mass [24]. Based on this, backaction cancellation was demonstrated by Wasilewski et al. [25] in the context of magnetometry. Extending this idea, several proposals have been made in a hybrid setting of a mechanical oscillator and atomic spin ensembles [23,26], and the evasion of backaction noise in these spin ensembles was experimentally verified in [27]. Independently, Tsang and Caves [28,29] developed this idea in a more general context, called quantum-mechanics-free subsystems. In the context of optomechanics, the main idea is to introduce an 'anti-noise' path into the dynamics of the optomechanical sensor by coupling to an ancillary resonator that acts as an effective negative mass. This way, the backaction noise can be cancelled coherently, and sub-SQL force sensing is achieved for all measurement frequencies. Appropriately, this approach is called coherent quantum noise cancellation (CQNC). Details and experimental feasibility of this all-optical effective negative mass oscillator were discussed in more detail by Wimmer et al. [30].
Within the area of CQNC force sensing, many other possible negative mass oscillators and setups have been considered. These include the use of ultra-cold atoms inside a separate cavity [31,32], hybrid optomechanical cavities, i.e., implementing an atomic ensemble inside the optomechanical sensor [33,34], and Bose-Einstein condensates [35]. Even a new all-optical setup was suggested, using two detuned optical modes inside the force sensor [36]. These approaches can be categorized into integrated setups, where the effective negative mass is introduced directly into the optomechanical force sensor, and cascaded setups [37,38], where the effective negative mass oscillator is a separate system. Recently, Zeuthen et al. [39] considered a broad class of effective negative mass oscillators in a cascaded setting and even considered a possible coupling between the positive and negative mass oscillators in a parallel topology.
Inspired by this, we want to discuss a cascaded version of the original all-optical setup [28,30]. Instead of implementing the 'anti-noise' path directly into the optomechanical sensor, an all-optical effective negative mass oscillator is built as a separate system. The backaction is then cancelled by coupling the force sensor to the effective negative mass oscillator via a strong coherent field. This approach gives more freedom in the experimental design and simplifies reaching the challenging conditions for a CQNC experiment. The main challenge before cancelling quantum backaction noise is to measure the backaction noise; due to the modular nature of the cascaded approach, this can be tackled entirely separately from the effective negative mass oscillator. We will see that, with some modifications to the matching conditions, our cascaded setup recovers the ideal CQNC performance described in [30], and that the additional degrees of freedom gained by expanding the dimension of the system lead to novel phenomena for sub-SQL force sensing. These include the recovery of ideal CQNC around an off-resonant frequency and possible CQNC performance in the low- or high-frequency regime even for unmatched CQNC conditions. This paper is organized as follows. In Sec. II, we describe the model of our cascaded CQNC scheme and derive the quantum Langevin equations of motion. In Sec. III, we discuss force sensing in optomechanical sensors and derive the optimal parameters for ideal CQNC. In Sec. IV, we analyze possible deviations from the ideal conditions and their impact on the performance of coherent quantum noise reduction. Then, in Sec. V, a case study is provided. Finally, we summarize our findings in Sec. VI.
II. MODEL
Fig. 1(a) illustrates a possible schematic realization of our setup. We refer to [40] for details on the experimental implementation of all-optical CQNC. An optomechanical sensor (OMS), subject to an external force and radiation-pressure noise, is connected to an effective negative mass oscillator (NMO) by a coherent light field. The force is then measured by detecting the outgoing light after the second system. The order of the sub-systems can be chosen freely, and the two possible arrangements are depicted in Fig. 1(b). In the first case, the light travels through the NMO, followed by the OMS; we refer to this case as NMO → OMS. In the second case the order is reversed, so the light travels through the OMS first; we refer to this case as OMS → NMO. The effects of these different arrangements on the performance of the backaction cancellation are discussed later.
The OMS is modelled by an optical cavity with resonance frequency ω_om, containing a damped mechanical positive-mass oscillator (PMO) with resonance frequency ω_m and linewidth γ_m, coupled to the cavity field via the radiation-pressure interaction and subjected to an external force F. Following the standard treatment of these force sensors [5], we move to a rotating frame with respect to the frequency ω_L of the strong driving laser field and arrive at the linearized Hamiltonian, Eq. (1). Here, Δ_om = ω_om − ω_L is the detuning of the optomechanical cavity from the incoming field, c_om (c_om†) are the annihilation (creation) operators of the optical mode, and x_m = X/x_ZPF, p_m = P x_ZPF/ℏ are the position and momentum operators of the mechanical oscillator, normalized to the zero-point fluctuation x_ZPF = √(ℏ/(m ω_m)), such that [x_m, p_m] = i. The last term in Eq. (1) describes the radiation-pressure interaction of the cavity mode and the mechanical oscillator. Its strength is given by g = √2 ω_c x_ZPF α_c/L, where L is the cavity length and α_c ∝ √P is the field amplitude of the cavity mode, which grows with the square root of the input power P. Introducing dimensionless amplitude and phase quadratures, c_om = (x_om + i p_om)/√2, the Hamiltonian (1) implies the quantum Langevin equations (QLEs), Eq. (2). Here, κ_om is the decay rate of the cavity mode and c_om^in = (x_om^in + i p_om^in)/√2 is its vacuum input noise, which fulfils ⟨c_om^in(t) c_om^in†(τ)⟩ = δ(t − τ). In Eq. (2) we have defined the scaled force operator F̄ = F/√(ℏ m γ_m ω_m) with dimension √Hz. It consists of the to-be-detected force signal F̄_sig acting on the mechanical oscillator and the Brownian thermal noise F̄_th of the oscillator. The scaled thermal noise satisfies ⟨F̄_th(t) F̄_th(τ)⟩ = n_th δ(t − τ), where n_th = k_B T/(ℏ ω_m) is the average phonon number of the mechanical oscillator.
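The displayed form of Eq. (1) is not preserved in this text. For orientation, a plausible reconstruction in one common convention, consistent with the definitions above (the sign and scaling of the coupling term may differ from the source):

```latex
% Plausible reconstruction of the linearized OMS Hamiltonian, Eq. (1):
H_{\mathrm{om}} = \hbar\,\Delta_{\mathrm{om}}\,c_{\mathrm{om}}^{\dagger}c_{\mathrm{om}}
 + \frac{\hbar\,\omega_m}{2}\left(x_m^2 + p_m^2\right)
 - \hbar\,g\left(c_{\mathrm{om}} + c_{\mathrm{om}}^{\dagger}\right)x_m
```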
The NMO consists of two optical modes, c_c and a, with resonance frequencies ω_c and ω_a, coupled by a beam-splitter and a down-conversion process. In analogy to [30,40], we refer to ω_c as the meter cavity and ω_a as the ancilla cavity. The Hamiltonian of this system (see Appendix A for details) is given by Eq. (3), with the detunings Δ_c,a = ω_c,a − ω_L, the beam-splitter coupling strength g_BS, and the coupling strength g_DC of the down-conversion process. As above, we introduce amplitude and phase quadratures associated with the meter and ancilla cavities. The Hamiltonian (3) then implies the QLEs for the NMO, Eq. (4), with the cavity linewidths κ_c and κ_a and the input noise processes a^in and c_c^in. Under the condition g_BS − g_DC = 0, the QLEs (4) generate an interaction similar to that in Eqs. (2). Additionally, driving the meter cavity on resonance, Δ_c = 0, results in Δ_a = ω_a − ω_c. The detuning between the meter and ancilla cavities can thus be used to generate an effective negative-mass oscillator.
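Eq. (3) itself is likewise not preserved here; a hedged reconstruction consistent with the couplings named above (conventions may differ from the source):

```latex
% Hedged reconstruction of the NMO Hamiltonian, Eq. (3): two optical modes
% coupled by beam-splitter (g_BS) and down-conversion (g_DC) interactions.
H_{\mathrm{nmo}} = \hbar\,\Delta_c\,c_c^{\dagger}c_c + \hbar\,\Delta_a\,a^{\dagger}a
 + \hbar\,g_{\mathrm{BS}}\left(a^{\dagger}c_c + a\,c_c^{\dagger}\right)
 + \hbar\,g_{\mathrm{DC}}\left(a^{\dagger}c_c^{\dagger} + a\,c_c\right)
```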
III. FORCE SENSING AND IDEAL CQNC
To solve the dynamics of both systems, we turn to the frequency domain. Introducing Fourier-domain operators, Eqs. (2)–(4) can be solved using the standard input–output formalism [41]. We consider the systems separately, beginning with the OMS. On resonance, Δ_om = 0, the output quadratures, Eq. (7), acquire the phase factor e^{iφ} = (κ_om/2 − iω)/(κ_om/2 + iω), and we have defined the susceptibilities of the optomechanical cavity and of the mechanical oscillator accordingly. The mechanical oscillator is susceptible to the external force F̄, which contains the force signal F̄_sig and the thermal noise F̄_th.
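The susceptibility definitions themselves are not preserved in this text; standard forms consistent with the surrounding definitions (a reconstruction, not verbatim from the source) would be:

```latex
% Standard cavity and mechanical susceptibilities for the OMS:
\chi_{\mathrm{om}}(\omega) = \frac{1}{\kappa_{\mathrm{om}}/2 - i\omega}, \qquad
\chi_m(\omega) = \frac{\omega_m}{\omega_m^2 - \omega^2 - i\,\gamma_m\,\omega}
```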
By measuring with light, the force signal can be estimated via a phase measurement, which also introduces additional noise due to radiation pressure. From the measured phase p_om^out in Eq. (7) we can construct an unbiased estimator F̂ of the force F̄, where the additional force noise is the noise added by the measurement light. To characterize the sensitivity of the force measurement, we use the (power) spectral density of the added noise. Assuming uncorrelated amplitude and phase quadratures, the added-noise spectral density takes the form of Eq. (13), where we have defined the frequency-dependent measurement strength G_om(ω) as in Eq. (14): a Lorentzian with maximum Γ_om = 4g²/κ_om. In this form, the noise spectral density (13) is dimensionless. To arrive at a force-noise spectral density in units of N²/Hz, one rescales it as S_F(ω) = ℏ m γ_m ω_m S̄_F(ω) for a given optomechanical force sensor [30]. The terms in Eq. (13) are thermal noise due to the Brownian motion of the mechanical oscillator (first term), shot noise in the phase quadrature (second term), and backaction noise from the amplitude quadrature (third term). The thermal noise adds a flat background to the force sensitivity, independent of the measurement strength G_om and of frequency. Throughout this paper, we assume that the thermal noise is either dominated by backaction noise [30] or suppressed by cooling of the mechanical mode, and therefore neglect this term.
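The displayed forms of Eq. (13) and of the SQL of Eq. (15) are not preserved in this text; a hedged reconstruction following [30], with n̄_th the thermal occupation, is:

```latex
% Hedged reconstruction of the added-noise structure and the resulting SQL:
\bar S_F(\omega) \simeq
 \underbrace{\bar n_{\mathrm{th}}}_{\text{thermal}}
 + \underbrace{\frac{1}{2\,G_{\mathrm{om}}(\omega)\,|\chi_m(\omega)|^2}}_{\text{shot}}
 + \underbrace{\frac{G_{\mathrm{om}}(\omega)}{2}}_{\text{backaction}},
\qquad
G_{\mathrm{om}}^{\mathrm{opt}} = \frac{1}{|\chi_m(\omega)|}
\;\Rightarrow\;
\bar S_F^{\mathrm{SQL}}(\omega) = \frac{1}{|\chi_m(\omega)|}
```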
The shot-noise term scales inversely with the measurement strength G_om, and thus inversely with the power, since G_om ∝ P. The backaction noise, by contrast, is proportional to G_om. This implies that for each frequency there exists an optimal power that minimizes Eq. (13). Minimizing Eq. (13) with respect to G_om for all frequencies gives the achievable lower bound, Eq. (15), which is the standard quantum limit (SQL); the corresponding optimal measurement rate follows directly. Next, we consider the NMO. On resonance, Δ_c = 0, and for g_BS = g_DC = g_a/2, the output quadratures acquire the phase factor e^{iθ} = (κ_c/2 − iω)/(κ_c/2 + iω), and we define the susceptibilities χ_c of the meter cavity and χ_a of the ancilla cavity accordingly. The aim of our setup is to couple the two systems such that the backaction noise in the force spectrum cancels, allowing sub-SQL performance. In our dual-cavity setup, this is done by cascading the two systems and matching the parameters of the NMO accordingly. As seen in Fig. 1(b), the whole scheme has two possible arrangements. To cascade the systems, we choose x_c^out = x_om^in and p_c^out = p_om^in for the case NMO → OMS, and x_om^out = x_c^in and p_om^out = p_c^in for OMS → NMO. For ideal CQNC, the order does not matter; order-dependent cases are discussed further below. After cascading the two systems, we can again identify the additional force noise as in Eq. (10) and derive the added-noise spectral density, Eq. (20); see Appendix B for details. Analogous to the measurement strength of the OMS in Eq. (14), we have defined the frequency-dependent measurement strength of the NMO, G_a(ω), with Γ_a = 4g_a²/κ_c. The terms in Eq. (20) are shot noise (first term), backaction noise (second term), and shot noise from the ancilla cavity (last term). The backaction noise is then cancelled if conditions (22)–(24) hold for all ω. This means that the ancilla cavity should couple to the light with the same strength as the PMO, but its response to the force signal should be opposite to that of the PMO; hence it behaves as an effective negative mass. Considering the explicit forms of Eqs. (9) and (19), condition (24) entails further restrictions: 1. The detuning of the ancilla cavity from the meter cavity is fixed such that the ancilla cavity is effectively moved into the negative-mass frame.
2. The linewidth of the ancilla cavity should match the damping rate of the mechanical oscillator, κ_a = γ_m, to mimic the oscillatory behaviour of the PMO.
3. The susceptibilities χ_m and χ_a differ by a factor κ_a²/4. To alleviate this, the detuning must satisfy |Δ_a| ≫ κ_a, which together with the foregoing points implies the resolved-sideband limit of the ancilla cavity and a large quality factor of the mechanical oscillator. These conditions are similar to those of the integrated setup [30], but instead of the coupling strengths g_a and g, the measurement strengths G_a and G_om need to match. Assuming conditions (22)–(24) are met, the backaction term in Eq. (20) vanishes, and we arrive at Eq. (26), which contains only the shot-noise contributions of the measured phase quadrature and of the ancilla cavity. The contribution of the OMS is sometimes referred to as the fundamental quantum limit (FQL) [9], the energetic quantum limit [42], or the quantum Cramér–Rao bound [43][44][45]. In the limit of large measurement strength, we arrive at the corresponding lower bound. Combining Eqs. (15) and (26), we find that, for Q_m ≫ 1 and under the additional condition G_om = G_a, the cascaded setup reproduces the findings of [30], yielding an enhancement in performance of up to a factor of 2Q_m off-resonance and SQL performance on resonance.
IV. IMPERFECT CQNC
Conditions (22)–(24) describe the ideal case of perfect backaction-noise cancellation and will not be exactly satisfied in an actual experiment. We therefore discuss possible imperfections and their impact on the performance of our cascaded scheme. These imperfections include mismatches of the parameters in Eqs. (22)–(24) and possible losses. Another degree of freedom of our setup is the order in which the light passes through the sub-systems (i.e., NMO → OMS or OMS → NMO), but this only affects the force-sensing performance for imperfections that directly affect the force signal. Hence, we split our discussion into order-dependent and order-independent categories. Data shown in the figures of this section refer to an OMS with the parameters of Table I.
A. Order-independent imperfections
The parameters discussed in this subsection will impede the cancellation of backaction noise and, as a result, limit the CQNC performance but will not affect the force signal. Hence, the possible CQNC performance in the face of these imperfections will not depend on the system order.
Non-ideal ancilla cavity linewidth κ_a ≠ γ_m
The strictest requirement for an all-optical CQNC setup is to match the ancilla cavity linewidth to the damping rate of the mechanical oscillator. Assume all conditions for ideal CQNC are matched except κ_a ≠ γ_m. Since the measurement strengths G_om = G_a are matched for all frequencies, and we assume no propagation losses between the systems, this effectively reduces to the integrated CQNC setup [30]. The spectral density of added noise (20) in this case becomes Eq. (29). For an optimal G_om, we find the minimal spectral density of the added noise, Eq. (30). This is composed of measurement shot and backaction noise (first term) and noise introduced by the ancilla cavity (second term). The second term dominates the first off-resonance, setting a bound on the achievable performance. The ratio between the spectral density (30) and the SQL is given by Eq. (31), approximately κ_a/(2ω_m), for κ_a < ω_m. For κ_a ≫ ω_m, the effect of CQNC vanishes completely at low frequencies, converging to the SQL, while at high frequencies the added noise is larger than the SQL. This is illustrated in Fig. 2.
Figure 2. Force noise for a mismatch between the ancilla cavity linewidth κ_a and the damping rate γ_m of the mechanical oscillator. For κ_a < ω_m, an improvement of κ_a/2ω_m can be achieved off-resonance (solid green). For κ_a ≫ ω_m, the effect of CQNC is completely cancelled at low frequencies, and the sensitivity is worse than the SQL at high frequencies (red). The shaded areas mark the bounds for sub-SQL sensitivity: from below, the fundamental limit of Eq. (26); from above, the SQL of Eq. (15). Parameters are given in Table I.
Unequal measurement strengths G_om ≠ G_a
Next, we consider a mismatch of the measurement strengths, G_a ≠ G_om, while the other CQNC conditions are matched. This entails unmatched cavity linewidths, κ_c ≠ κ_om, and unmatched couplings, g_a ≠ g. Introducing a parameter ε for the linewidth mismatch, κ_c = ε κ_om, and δ for the coupling mismatch, g_a = √δ g, we find the spectral density of Eq. (32). For suitable couplings g and cavity linewidths κ, we can find a frequency at which the backaction term in Eq. (32) vanishes and ideal CQNC is possible. This is the case when the Lorentzians G_a and G_om intersect at a frequency ω ≠ 0. We find that this frequency, ω* of Eq. (33), is real-valued for suitable parameter combinations. Consequently, a cavity linewidth mismatch can compensate every possible matching condition of the couplings, and ideal CQNC can be achieved at ω*. For non-vanishing backaction, we can again minimize the spectral density (32) with an optimal G_om. Turning to the low-frequency limit (κ_c,om ≫ ω), the measurement strengths become frequency independent, G_om,a → Γ_om,a, and the ratio |χ_c|²/|χ_om|² → 1/ε². The minimal noise spectral density then shows that ideal CQNC can be recovered for ε = δ, which means Γ_om = Γ_a. Hence, as long as the rates at which the backaction information leaks out of the two systems are matched, ideal CQNC is possible.
We also considered combinations of the imperfections discussed in this section. If, for example, G_om ≠ G_a and additionally κ_a ≠ γ_m, the noise spectral density is a combination of Eqs. (29) and (32). In this case, the cancellation of backaction noise remains possible in the cases discussed above, but the ancilla-cavity noise floor is higher because of the linewidth mismatch γ_m ≠ κ_a. Our findings thus remain the same, but the achievable off-resonance performance is bounded by the noise spectral density (31).
B. Order-dependent imperfections
The imperfections discussed in this subsection not only hamper the cancellation of backaction noise but also affect the force signal directly.
Losses
We first consider propagation losses, which occur between the first and the second system. They are modelled by mixing the output signal of the first system with vacuum in a beam-splitter-like interaction, leading to a modified output signal [8] in which x_vac represents the vacuum field and η ∈ [0, 1] is the efficiency of the process. Due to this additional noise, information about the backaction interaction of the first system is lost to the vacuum; hence, perfect cancellation of backaction noise is not possible. As before, we can find an optimal coupling strength that minimizes the additional noise. For the system order NMO → OMS, we achieve the minimal off-resonance spectral density of Eq. (38). In the opposite order, OMS → NMO, part of the force signal is lost to propagation losses in addition to the backaction information, so the added noise increases for this topology; the minimal off-resonance spectral density is given by Eq. (39). It is increased by 1/η compared to the case NMO → OMS and hence directly proportional to the lost force signal. Losses after the second system constitute the detection efficiency and can be modelled similarly; since they do not affect the cancellation of backaction noise, we omit detection losses for now. Apart from propagation losses, we take intracavity losses into account. Introducing a Markovian bath for each cavity, with coupling rates κ_c^bath and κ_om^bath, the intracavity losses can be described in terms of escape efficiencies. Similar to propagation losses, introducing intracavity losses always impedes the cancellation of backaction noise, and the available force-signal information differs with the order of the systems. For the case NMO → OMS, with optimal measurement strength, we find a minimal spectral density that encompasses both propagation-loss cases: for η_om^esc → 1 we retrieve Eq. (38), and for η_c^esc → 1 we retrieve Eq. (39). Thus, for the configuration NMO → OMS, intracavity loss can be handled similarly to propagation loss.
Figure 3. For mismatched coupling strength compensated by a linewidth mismatch, perfect noise cancellation can be recovered at low frequencies (solid green, ε = δ = 0.9). For matched linewidths but mismatched coupling strengths, noise cancellation is limited, but sub-SQL performance is possible (dashed red, δ = 0.9). For matched coupling strengths but mismatched linewidths, there is a frequency, Eq. (33), at which perfect noise cancellation is possible (dash-dotted purple, ε = 0.9). The shaded areas mark the bounds for sub-SQL sensitivity: from below, the fundamental limit of Eq. (26); from above, the SQL of Eq. (15). Parameters are given in Table I.
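The modified-output relation itself is not preserved in this text; the standard beam-splitter loss model described in the prose reads (a reconstruction):

```latex
% Beam-splitter loss model: the signal quadrature is attenuated and mixed
% with vacuum, with efficiency eta in [0, 1].
x^{\mathrm{out}} \;\longrightarrow\; \sqrt{\eta}\;x^{\mathrm{out}}
 + \sqrt{1-\eta}\;x^{\mathrm{vac}}
```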
For the case OMS → NMO, we lose additional force signal due to intrinsic loss in the meter cavity. Moreover, the signal also picks up additional information of the phase quadrature. We arrive at a minimal spectral density in which the term |1 − η_c^esc κ_c χ_c|² describes the meter cavity's phase and noise contribution. Due to its dependence on the meter cavity susceptibility χ_c, this contribution is frequency-dependent and vanishes for frequencies ω > κ_c. For low frequencies, it reaches a maximal value of |1 − 2η_c^esc|², making intracavity losses especially punishing for the configuration OMS → NMO. We see that introducing losses is detrimental to the possible noise reduction. As losses can never be avoided entirely, the system order NMO → OMS should always be preferred, since higher levels of noise reduction are achieved.
Relative mismatch of g_BS and g_DC
In addition to losses, a relative mismatch between the beam-splitter coupling g_BS and the down-conversion coupling g_DC will also affect the noise cancellation, depending on the system order. So far, we assumed g_BS = g_DC = g_a/2 in order to mimic the backaction interaction of the OMS. We will now fix g_BS + g_DC = g and introduce a relative mismatch g_r between the beam-splitter and down-conversion couplings. As shown in Fig. 4, the relative mismatch g_r allows the phase quadratures to couple back into the amplitude quadrature and thus deviate from the backaction interaction of the OMS. This introduces a noise path and will limit the cancellation of backaction noise. It also affects the force noise differently for the different system orders. For the case OMS → NMO, the force signal is imprinted on the output phase quadrature of the OMS, and with the introduced mismatch, it is possible for the signal to couple to the amplitude quadrature. In contrast, for NMO → OMS, the force signal remains fully in the output phase quadrature. Thus, the two system orders yield different spectral noise densities for our phase measurement. For general mismatches, these do not reduce to a simple expression. The resulting spectral densities were calculated numerically and are shown in Fig. 5. The CQNC performance is limited for low frequencies, but sub-SQL levels are still possible. Contrary to losses, the order OMS → NMO seems advantageous for a relative mismatch of the couplings. CQNC vanishes entirely in the high-frequency limit, and no sub-SQL performance is possible there.

[Figure caption: For high frequency, no noise reduction is possible. Traversing through the OMS first seems advantageous for noise cancellation. The shaded areas mark the bounds for sub-SQL sensitivity, from below the fundamental limit given by Eq. (26) and from above the SQL given by Eq. (15). Parameters are given in Table I.]

After discussing ideal CQNC and the most relevant deviations from the ideal parameters, we now turn to a realistic situation one would expect in an actual experiment. For an integrated setup, reasonable parameters have been discussed in [30], which were revised in [40] for a cascaded setup, where two reasonable sets of parameters were given. Starting from there, we found a new set of parameters which achieves broadband noise reduction for frequencies below the mechanical resonance of the oscillator. Losses are of particular interest in our case study, as they influence the noise reduction depending on the system order. Our set of parameters is shown in Table II.
The OMS must be limited by quantum backaction noise in order for the cancellation of backaction noise to be measurable. For this, the quantum backaction noise in Eq. (13) must be much larger than the thermal noise. In the low-frequency limit (κ_om ≫ ω), this condition can be expressed in terms of the quantum cooperativity. Modern silicon-nitride membranes have exceeded quality factors of Q_m = 10⁸ [46]; thus the OMS would be quantum backaction limited at a temperature of T = 4 K, achievable with cryogenics. For higher temperatures, the quality factor must be increased to elevate the backaction effects over the thermal noise floor, and similarly, lower temperatures allow for a lower quality factor. In order to account for this and compare all OMS of frequency ω_m, once they can resolve the quantum backaction, we normalize our force noise by the quality factor Q_m. Matching most parameters, such as the ancilla cavity detuning ∆_a and the cavity linewidths κ_c and κ_om, should not be a problem; we assume them to be closely matched. More delicate to match are the coupling strengths. A down-conversion coupling of g_DC = 2π × 250 kHz and a beam-splitter coupling of g_BS ≥ 2π × 235 kHz were readily achieved [40]; thus we set the optomechanical coupling strength to g = 2π × 500 kHz. Optomechanical coupling strengths of g = 2π × 440 kHz have been reported in micro-mechanical setups [27], and higher couplings in the order of MHz should be possible [47]. Hence, our assumed coupling strength should be reasonable. If these levels cannot be reached for the optomechanical coupling strength, one could still compensate for this mismatch by the cavity linewidths, as described in Eq. (30), and increase the performance at low frequencies.

[Figure caption: Parameters are given in Table II, with temperature T = 4 K. For low frequencies, sub-SQL performance is possible for the integrated setup (solid green) and the case NMO → OMS (dashed blue). No sub-SQL levels are possible for the case OMS → NMO (dash-dotted orange). The shaded area shows levels above the SQL.]
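A small back-of-the-envelope check of the backaction-dominance condition can be sketched as follows; the standard definition of the quantum cooperativity, C_q = 4g²/(κ γ_m n_th), is used here, and the mechanical frequency and cavity linewidth are assumptions of this sketch, since Table II is not reproduced in this excerpt.

```python
import numpy as np

hbar, kB = 1.054571817e-34, 1.380649e-23

# Assumed parameters (the mechanical frequency is not fixed in this excerpt)
omega_m = 2 * np.pi * 1.5e6      # mechanical frequency [rad/s] (assumption)
Q_m     = 1e8                    # quality factor [46]
T       = 4.0                    # temperature [K]
g       = 2 * np.pi * 500e3      # optomechanical coupling [rad/s]
kappa   = 2 * np.pi * 2e6        # cavity linewidth [rad/s] (assumption)

gamma_m = omega_m / Q_m                    # mechanical damping rate
n_th = kB * T / (hbar * omega_m)           # thermal occupation (high-T limit)
C_q = 4 * g**2 / (kappa * gamma_m * n_th)  # standard quantum cooperativity

print(f"n_th = {n_th:.3g}, C_q = {C_q:.3g}  (backaction-dominated if C_q >> 1)")
```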
For a negative mass oscillator, where the two modes are not spatially separated, as depicted in Fig. 1(a), the escape efficiency will also dictate the achievable linewidth of the ancilla cavity. An escape efficiency of 90% should be achievable [48], which, with a meter cavity linewidth of κ c = 2 MHz, makes an ancilla cavity linewidth of κ a = 200 kHz possible. For the OMS, similar escape efficiencies should be achievable. Detection efficiencies over 97% were already realized [49]. Similarly, propagation losses between the systems should not be an issue. We assume 3% losses from both propagation and detection.
The achievable sensitivities for the parameters in Table II are shown in Fig. 6. In the low-frequency regime, the configuration NMO → OMS shows a reduction of 20% below the SQL and almost comparable results to the integrated setup. No sub-SQL sensitivity can be achieved for the other system order OMS → NMO. This is not surprising, as we saw in subsection IV B that this configuration suffers additional penalties from losses. We see that instead of matching the parameters (22)-(24), the limiting factor for noise reduction in a realistic case will be losses. Additionally, as losses will never be entirely avoidable, choosing the right system ordering, NMO → OMS, is of utmost importance.
VI. CONCLUSION
In this work, we discussed a cascaded version of the all-optical coherent quantum noise cancellation setup proposed by Tsang and Caves [28,30]. Instead of introducing the anti-noise path directly into the optomechanical cavity, we considered an all-optical effective negative mass oscillator as a standalone system and removed the backaction noise of the positive mass oscillator by coupling both systems coherently via a strong drive field. Under the conditions (22)-(24), we then recovered the perfect cancellation of backaction noise. Afterwards, we discussed deviations from the ideal conditions, including losses and the influence of the system order. We saw that for mismatched measurement strengths, by choosing the cavity linewidth and coupling strength in a specific way, CQNC can be recovered in the high- or low-frequency regime, or even at a specific frequency ω* = ω_m. For losses and a relative mismatch of the beam-splitter and down-conversion couplings, the system order also affects the noise cancellation performance. Finally, we discussed the performance of our setup for a set of realistic parameters and showed that a quantum noise reduction of 20% below the SQL is possible for the order NMO → OMS in the low-frequency regime.
with S_in the input spectral density matrix. Every sub-system in our setup has four system variables; hence the system matrices M_sys and bath input matrices K_bath are all 4 × 4-dimensional. The in- and output variables are the two quadratures of the laser light, making the input matrices K_in 4 × 2-dimensional.
To model losses, the output quadratures are mixed with vacuum noise via a beam-splitter interaction. The second matrix mixes the cavity output with vacuum, and the first matrix is the partial trace over the lost output port of the beam-splitter. Finally, we need to cascade the two sub-systems. For this we choose x_in,2 = x_out,1, where the subscripts 1 and 2 stand for the first and second system. The total output quadratures then follow by composition. The equations of motion (2) for the optomechanical sensor imply a corresponding set of system matrices, and similarly the equations of motion (4) for the effective negative mass oscillator. From these expressions, we calculate the total transfer matrix in Eq. (B9) and, together with the input spectral density S_in = (1/2) diag(1, 1, 1, 1, 1, 1, 1, 1, 0, 2S_F) for NMO → OMS, we obtain the output spectral density with Eq. (B7). The spectral density of the added noise is then estimated from the phase component S_out^pp by dividing it by the coefficient of S_F.
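A schematic sketch of this cascading procedure is given below; the concrete M_sys, K_in and K_out matrices follow from the equations of motion, which are not reproduced in this excerpt, and the generic input-output form of T(ω) used here is an assumption about conventions.

```python
import numpy as np

def transfer(omega, M_sys, K_in, K_out, D):
    """Frequency-domain transfer matrix of one linear sub-system in the
    schematic input-output form T(w) = K_out (i w I - M_sys)^(-1) K_in + D,
    with D the direct feed-through term (conventions are an assumption)."""
    n = M_sys.shape[0]
    return K_out @ np.linalg.solve(1j * omega * np.eye(n) - M_sys, K_in) + D

def cascade(T1, T2):
    """Feed the output of system 1 into system 2 (x_in,2 = x_out,1)."""
    return T2 @ T1

def output_spectrum(T_total, S_in):
    """S_out(w) = T(w) S_in T(w)^dagger, evaluated frequency by frequency."""
    return T_total @ S_in @ T_total.conj().T
```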
"Physics"
] |
Constraining MOdified Gravity with the S2 Star
We have used publicly available kinematic data for the S2 star to constrain the parameter space of MOdified Gravity. Integrating geodesics and using a Markov Chain Monte Carlo algorithm, we have provided the first constraint, on the scales of the Galactic Centre, on the parameter $\alpha$ of the theory, which represents the fractional increment of the gravitational constant $G$ with respect to its Newtonian value. Namely, $\alpha \lesssim 0.662$ at the 99.7% confidence level (where $\alpha = 0$ reduces the theory to General Relativity).
Introduction
Scalar-Tensor-Vector Gravity (STVG), also referred to in the literature as MOdified Gravity (MOG), is a theory of gravity first proposed in [1] as an alternative to Einstein's theory of General Relativity (GR). It introduces extra fields in the description of the gravitational interaction, allowing for correct predictions on galactic and extragalactic scales [2][3][4][5][6] without resorting to dark matter [7]. The gravitational action in MOG presents additional terms alongside the classical Hilbert-Einstein action, which depends on the metric tensor g_αβ of space-time. More specifically, a massive vector field ϕ_α is introduced and its mass, µ, is treated as a scalar field. Furthermore, Newton's gravitational constant G_N is promoted to a scalar field G.
The motion of test particles in MOG is affected by the presence of the vector field ϕ_α, which acts as a fifth force whose repulsive character counteracts the increased attraction due to the scalar field nature of G. The fractional increment of G with respect to its Newtonian value G_N is given by a new parameter of the theory, α = (G − G_N)/G_N. A distinctive feature of the motion of massive test bodies in MOG is that Keplerian orbits in a central potential are characterized by an increased rate of orbital precession [8,9], given by Eq. (1), where ∆ω_GR is the usual expression for the periastron advance in GR, which depends on the semi-major axis a and the eccentricity e of the orbiting body.
Here, we will summarize the extended work done in [8], where we used publicly available data for the S2 star from [10], along with the measurement of its orbital precession from [11] to constrain the parameter space of MOG.
MOdified Gravity
In MOG, the gravitational action is written as [1]

S = S_HE + S_m + S_V + S_S.

The first term, S_HE, is the classical Hilbert-Einstein action of GR, while S_m is related to the ordinary matter energy-momentum tensor; here g_αβ is the metric tensor of space-time, g its determinant and R the Ricci scalar. The two extra terms, S_V and S_S, on the other hand, are related to the vector and scalar fields, respectively. With ∇_α we indicate the covariant derivative related to the metric tensor g_αβ, and with B_αβ the Faraday tensor associated with the massive vector field ϕ_α: B_αβ = ∇_α ϕ_β − ∇_β ϕ_α. V(ϕ), V(G) and V(µ), on the other hand, represent scalar potentials describing the self-interaction of the vector and scalar fields.
In MOG, particles with mass m move according to a modified version of the geodesic equations, Eq. (7) [12]. The term on its right-hand side represents a fifth force [1,3,12], due to the coupling between massive particles and the vector field ϕ_α. The coupling constant, q, is postulated to be positive (q > 0) so that this force is repulsive [1] and physically stable self-gravitating systems can exist [3]. Additionally, q is taken to be proportional to m, q = κm, with κ a positive proportionality constant [12], ensuring the validity of Einstein's Equivalence Principle. The field equations associated with the MOG action in Eq. (3) can be solved exactly assuming that:

1. the metric tensor is spherically symmetric;

2. the scalar field G can be treated as a constant on the scales of compact objects, ∂_ν G = 0 [13,14], which means that the aforementioned parameter α can be regarded as a positive dimensionless constant whose value depends on the mass of the gravitational source [1];

3. the proportionality constant κ defining the fifth-force charge of massive particles is κ = √(α G_N), so that a source of mass M carries the fifth-force charge Q = √(α G_N) M;

4. the mass of the vector field, µ, can be neglected on the scales of compact objects, as its effects are only evident on kpc scales [3,4,15].

Under these assumptions (and by setting the speed of light in vacuum to c = 1), one obtains [13] the line element in Eq. (10). This Schwarzschild-like metric is the most general spherically symmetric static solution in MOG, and it provides an exact description of the gravitational field around a point-like non-rotating source of mass M (and hence a fifth-force charge Q = √(α G_N) M). It differs from the classical one in GR (to which it reduces when α = 0) by a different definition of the ∆ function. The solid angle element, on the other hand, has the usual expression dΩ² = dθ² + sin²θ dφ².
The vector field ϕ_α associated with the metric tensor in Eq. (10) is given in [16]; it generates a repulsive force directed along the radial direction. As a consequence, the increased value of the gravitational constant G enhances the attraction of gravity on test particles, while the repulsive effect of the vector field counteracts it. As shown in [9], particles around a MOG black hole experience an increased orbital precession, whose first-order expression explicitly depends on the parameter α and is given in Eq. (1).
The orbit of S2 in MOG
Upon integrating numerically the geodesic equations in Eq. (7), we obtain fully relativistic sky-projected orbits for the S2 star in MOG, starting from its osculating Keplerian elements at the initial time. These parameters are the semi-major axis of the orbit a, the eccentricity e, the inclination i, the angle of the line of nodes Ω, the angle from the ascending node to pericentre ω, the orbital period T and the time of the pericentre passage t_p. These uniquely assign the initial conditions of the star at a given time, which we set to be the time of passage at apocentre, given by t_a = t_p − T/2. Along with these parameters, one needs to fix the mass of the gravitational source, M, its distance from Earth, R, and a possible offset and drift (described by five additional parameters x_0, y_0, v_x,0, v_y,0 and v_z,0) of this object in the astrometric reference frame of the observer. From the integrated geodesic, the astrometric positions can be obtained via a geometric projection of the space-time coordinates through the Thiele-Innes elements [19], modulating the observation times for the classical Rømer delay. The kinematic line-of-sight velocity of the star is converted into the spectroscopic observable, i.e., its redshift. In doing so we take into account both the special relativistic longitudinal and transverse Doppler effects and the gravitational time dilation, due to the combination of high velocity and close proximity at pericentre. Other effects, like gravitational lensing or the Shapiro time delay, give negligible contributions [17,18], and we hence do not take them into account. In Figure 1 we report how much the spectroscopic and the two astrometric observables deviate around pericentre from a Newtonian orbit of the S2 star, for different values of the parameter α. As can be seen, measurements performed at and after pericentre of both the astrometric position of the star and its radial velocity carry a signature of the gravitational field produced in MOG.
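The redshift conversion can be sketched to first post-Newtonian order, where the transverse Doppler effect and the gravitational time dilation contribute v²/(2c²) and GM/(rc²) respectively; these are the standard expressions, and the numbers below are only illustrative pericentre values.

```python
import numpy as np

G, c = 6.674e-11, 2.998e8  # SI units

def observed_redshift(v_los, v2, r, M):
    """Redshift of the star including, to first post-Newtonian order,
    the longitudinal Doppler term, the transverse (special-relativistic)
    Doppler term v^2/(2c^2) and the gravitational time dilation GM/(r c^2).
    v_los: line-of-sight velocity [m/s], v2: squared speed [m^2/s^2],
    r: distance from the black hole [m], M: black-hole mass [kg]."""
    return v_los / c + v2 / (2 * c**2) + G * M / (r * c**2)

# Illustrative S2 pericentre numbers: v ~ 7700 km/s, r ~ 120 AU
M_bh = 4.0e6 * 1.989e30
r_peri = 120 * 1.496e11
v = 7.7e6
print(observed_redshift(v_los=v, v2=v**2, r=r_peri, M=M_bh) * c / 1e3, "km/s")
```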
Data and methodology
S2 is a B-type star in the nuclear star cluster of Sgr A*, a compact radio source in the Galactic Centre (GC) of our Galaxy, identified with a supermassive black hole (SMBH) of mass M ∼ 4 × 10⁶ M_⊙. Throughout its 16-year orbit, both special and general relativistic effects have been detected [11,20,21], confirming predictions of GR on one hand and opening a new way to test gravity [8,17,18,22] on the other. We exploit publicly available kinematic data for the S2 star to constrain the 15-dimensional parameter space of our orbital model in MOG, given by (M, R, T, t_p, a, e, i, Ω, ω, x_0, y_0, v_x,0, v_y,0, v_z,0, α). More specifically, we use the astrometric positions and radial velocities of S2 presented in [10] and the measurement of the relativistic orbital precession performed in 2020 by the Gravity Collaboration [11] through precise astrometric observations with the GRAVITY interferometer at the VLT (these observations, however, are not publicly available, and we can only rely on the precession measurement itself). In particular, they measured the parameter f_SP in ∆ω = f_SP ∆ω_GR, where ∆ω_GR is given in Eq. (13), obtaining f_SP = 1.10 ± 0.19, thus favoring GR against Newtonian gravity at more than 5σ. In order to fit our orbital model to these data we employ the Markov Chain Monte Carlo (MCMC) sampler emcee [23], and we evaluate the integrated autocorrelation time of the chains to check the convergence of the algorithm. In particular, we perform two separate analyses:

A: We only use astrometric positions and radial velocities up to mid-2016 from [10]. Our dataset thus contains no information at all about the 2018 pericentre passage. In this case we use the log-likelihood

log L_A = −(1/2) Σ_i (x_i − µ_i)²/σ_i²,

by which we assume that all data points are uncorrelated with each other and normally distributed within their experimental uncertainty, where x_i is the i-th experimental data point, σ_i its uncertainty and µ_i the corresponding prediction of our model.
B: We use the same dataset as in case A, adding as a single measurement the rate of orbital precession obtained in [11]. Since the latter measurement was made using the same astrometric dataset that we use, plus data recorded at pericentre, we multiply all our uncertainties by √2 to avoid double counting data points. This yields the combined log-likelihood for case B.

In both cases we use uniform flat priors for our parameters, centered on their best-fitting values from [10] and with an amplitude given by 10 times their experimental uncertainty, and we heuristically set α ∈ [0, 2] as the uniform interval for the MOG parameter.
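A minimal sketch of how such a sampling run can be set up with emcee is given below; the orbital model `model` and the data arrays are placeholders for the actual 15-parameter model, not part of the original analysis code.

```python
import numpy as np
import emcee

def log_prior(theta, lo, hi):
    """Uniform box prior: best-fit values +/- 10x their uncertainty,
    with alpha restricted to [0, 2] via the corresponding bounds."""
    return 0.0 if np.all((theta >= lo) & (theta <= hi)) else -np.inf

def log_likelihood(theta, data, sigma, model):
    mu = model(theta)                     # placeholder for the orbital model
    return -0.5 * np.sum(((data - mu) / sigma) ** 2)

def log_prob(theta, lo, hi, data, sigma, model):
    lp = log_prior(theta, lo, hi)
    if not np.isfinite(lp):
        return -np.inf
    return lp + log_likelihood(theta, data, sigma, model)

# sampler = emcee.EnsembleSampler(nwalkers=64, ndim=15, log_prob_fn=log_prob,
#                                 args=(lo, hi, data, sigma, model))
# sampler.run_mcmc(p0, 20000, progress=True)
# tau = sampler.get_autocorr_time()       # integrated autocorrelation time
```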
Results
In Figure 2 we report the 1σ confidence intervals for the orbital parameters in our analyses A and B, compared with the corresponding 1σ intervals from [10] (who fitted Keplerian orbits to the data) and [11] (in which a first-order post-Newtonian orbital model is used). The parameters from our analyses are compatible within their errors with the results of the previous studies. Finally, in Figure 3 we report in logarithmic scale the normalized posterior distributions for the parameter α from the two analyses A (in blue) and B (in red), along with their 99.7% confidence level (c.l.) upper limits. Our results provide the first constraint on the MOG theory at the GC, yielding α ≲ 0.662 at the 99.7% c.l. for analysis B (with precession, Eq. (18)). While both analyses are compatible with GR, the additional information carried by the single orbital precession data point at pericentre results in a more peaked distribution for α in case B, whose upper limit decreases by 55.6% with respect to analysis A.
Conclusions
Here, we have summarized our results in [8], providing the first constraint on the extra parameter α of MOG at the GC, obtained by studying the fully relativistic motion of the S2 star around the SMBH Sgr A*. In particular, we have solved numerically the geodesic equations for a test particle around a static BH in MOG, described by the metric element in Eq. (10), particularizing the kinematic properties of the test particle for the orbital parameters of the S2 star [10,11]. Then, we have explored the 15-dimensional parameter space of our model by means of an MCMC algorithm, which allowed us to study the posterior distributions of the parameters upon comparison with publicly available kinematic data for S2 and the measured rate of orbital precession.

Figure 2. The best-fitting values and 1σ confidence intervals for the orbital parameters of the S2 star in our analyses A (blue bars) and B (red bars), compared with the best-fitting values from previous works in [11] (in which a first-order post-Newtonian model is fitted to the data) and [10] (using a Keplerian model to describe the orbit of S2).

Figure 3. The normalized posterior probability distribution of the parameter α in logarithmic scale for the two analyses A (in blue) and B (in red). The 99.7% c.l. upper limit of the parameter is reported as a dashed vertical line in the two cases. Analysis B provides a more peaked distribution for α around 0, with the upper limit going down by 55.6% between the two analyses.
"Physics",
"Geology"
] |
Research on sound quality of roller chain transmission system based on multi-source transfer learning
To establish a sound quality evaluation model of the roller chain transmission system, we collect its running noise under different working conditions. After the noise samples are preprocessed, a group of experienced testers is organized to evaluate them subjectively. The Mel frequency cepstral coefficients (MFCC) of each noise sample are calculated, and the MFCC feature map is used as the objective evaluation. Combining the subjective and objective evaluation results of the roller chain system noise, we obtain the original dataset for its sound quality research. However, the number of high-quality noise samples is relatively small. Based on the sound quality research of various chain transmission systems, a novel method called multi-source transfer learning convolutional neural network (MSTL-CNN) is proposed. By transferring knowledge from multiple source tasks to the target task, the difficulty of small-sample sound quality prediction is overcome. Whereas single-source transfer learning shows large errors on some samples, MSTL-CNN can give full play to the advantages of all transfer learning models. The results also show that the MSTL-CNN proposed in this paper is significantly better than traditional sound quality evaluation methods.
sound quality prediction model is still a difficult problem when the number of samples is insufficient. Moreover, the constant pursuit of prediction accuracy inevitably leads to more and more complex models, so it is also important to achieve lightweight sound quality prediction models.
To establish the sound quality evaluation model of the roller chain transmission system, we first collect the running noise of the roller chain system under different working conditions and preprocess the noise samples. Secondly, we organize a group of testers to evaluate the noise samples subjectively, and construct feature maps as the objective evaluation. To solve the problem of small-sample sound quality evaluation, we exploit a fuzzy phenomenon in the subjective evaluation of sound quality and propose a data enhancement method called fuzzy generation. However, fuzzy generation can lead to a particular kind of data leakage, and we use transfer learning to eliminate this effect. The basic idea of transfer learning is to accelerate the learning process on the target task by using the knowledge learned in one or more source tasks. This is usually done by using a model that has already been trained on one task as a starting point, and then fine-tuning it to fit the new task [13][14][15][16]. In the study of single-source tasks, Jamil, F. et al. proposed an example-based deep transfer learning method for wind turbine gearbox fault detection that prevents negative transfer [17]. Maschler, B. et al. proposed a modular deep learning algorithm for anomaly detection on time series datasets, which realizes deep industrial transfer learning [18]. In the study of multi-source tasks, Rajput, D. S. et al. use multi-source sensing data and a fuzzy convolutional neural network for fault classification and prediction, and the accuracy of their model is significantly superior to other machine learning and deep learning methods [19]. Sun, L. et al. proposed a new parameter transfer method to improve training performance in multi-task reinforcement learning [20]. Based on the sound quality study of three chain transmissions (silent chain, Hy-Vo chain and dual-phase Hy-Vo chain), we propose a new method named multi-source transfer learning convolutional neural network (MSTL-CNN). The results show that the proposed model is superior to traditional sound quality prediction methods.
Data collection and pre-processing
Figure 1 shows the steps of the sound quality evaluation test, including noise acquisition and processing, the subjective evaluation test and the objective evaluation.
Noise test
For the chain transmission system, the roller chain type is 06B, the driving sprocket has 19 teeth, and the driven sprocket has 38 teeth. The noise sensor (miniDSP UMIK-1) is placed at the same height as the center of the driving sprocket, and the noise test is carried out in a closed indoor reverberation field. The measurement distance, i.e., the distance between the noise sensor and the chain system, is 0.5 m and 1 m, respectively. In the noise test, the speed range is 500-4000 rpm, and the three loads are 500 N, 600 N and 750 N. Starting from the lowest speed (500 rpm), noise is collected every 500 rpm, and the collection time is greater than 30 s each time. We record the noise audio using Adobe Audition 2022 software, with the noise sampled at 48,000 Hz. Finally, we obtain 2 × 8 × 3 = 48 noise samples, and randomly intercept a 5 s fragment of each sample for subsequent processing.
Figure 2 shows a comparison between the noise of the roller chain transmission system and that of a silent chain transmission system. The green time-domain waveform on the left is the roller chain system, and the blue one on the right is the silent chain system. Under the same load and measurement distance, the noise of the roller chain system is stronger than that of the silent chain from low speed to high speed. At the low speed of 1000 rpm, the noise energy of the roller chain system is obviously stronger than that of the silent chain. As the speed increases, the noise energy of the roller chain system decreases at 2500 rpm. This shows that the roller chain system works more smoothly at medium speed, although its noise is still stronger than that of the silent chain system. At the high speed of 4000 rpm, the noise energy of both increases significantly, and the noise of the roller chain system is slightly stronger than that of the silent chain system. Therefore, the noise characteristics of the roller chain system differ from those of the silent chain system, and it is necessary to construct a dedicated sound quality evaluation model for the roller chain transmission system.
Subjective evaluation test
We use equal-interval direct one-dimensional evaluation as the subjective evaluation method, and the noise discomfort level describes the sound quality [21]. As shown in Fig. 3, there are five discomfort levels, where 0 means extreme discomfort and 10 means no discomfort. The three middle discomfort levels, with three scores each, represent the strength of the discomfort from smallest to largest. Twelve healthy testers with driving experience take part in the auditory perception test, with a 5:1 ratio of men to women. As shown in Fig. 1d, each tester wears a hi-fi headset for the test, and the same noise audio is played five times by the Groove software. The sound pressure level of the test environment does not exceed 30 dB, and the score is recorded in the table after the tester listens to a noise audio clip.
The subjective evaluation scores at each speed are shown in Fig. 4. Judging by the median line, the score shows an overall downward trend as the speed increases. In the speed range of 500-3000 rpm, the sound quality of the roller chain system decreases with increasing speed. However, at 3500 rpm the score increases, indicating that the roller chain system runs more smoothly at this speed. At the limit speed of 4000 rpm, the score is significantly reduced. It should also be noted that a longer box line means that the sound quality at that speed fluctuates greatly, and there is even an outlier at 2000 rpm. For the same sample, the twelve testers should have relatively consistent impressions, so it is necessary to conduct normality and correlation tests on the subjective evaluation results.
Because the sample size is small, the Shapiro-Wilk test is used to test the normality of the scores of each sample, as shown in Fig. 5. If the p-value is greater than 0.05, the scores are consistent with normality, which is represented by a blue circle; a red circle indicates that the sample does not conform to the normal distribution. We can see that quite a variety of evaluation results do not conform to the normal distribution, so the Spearman correlation test is needed to further analyze the subjective evaluation results. The Spearman correlation coefficient is computed as

r = Σ_i (x_i − x̄)(y_i − ȳ) / sqrt( Σ_i (x_i − x̄)² Σ_i (y_i − ȳ)² ),   (1)

where x_i and y_i are the corresponding elements of the two variables, and x̄ and ȳ their average values. The larger the r value, the stronger the correlation; the test results are shown in Fig. 6, where T_i (i = 1, 2, ..., 12) denotes the twelve testers. The correlation between many testers is less than 0.7, and the weakest correlation, between T9 and T6, is only 0.40. To screen out reasonable subjective evaluation results, we calculate the average correlation coefficient (ACC) of each tester based on Fig. 6, as shown in Table 1.
It can be seen from Table 1 that the ACC of T9 is less than 0.7, so after excluding the results of T9, the evaluation results of the remaining testers are all reasonable. The average evaluation scores of the remaining eleven testers are taken as the final subjective evaluation results, as shown in Table 2.
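A compact sketch of this screening procedure (Shapiro-Wilk per sample, pairwise Spearman correlation, ACC threshold of 0.7) could look as follows; the layout of the score matrix is an assumption.

```python
import numpy as np
from scipy.stats import shapiro, spearmanr

def screen_testers(scores, acc_threshold=0.7):
    """scores: (n_samples, n_testers) matrix of subjective evaluation scores.
    Runs a per-sample normality test, computes the pairwise Spearman
    correlation between testers, excludes testers whose average correlation
    coefficient (ACC) falls below the threshold, and averages the rest."""
    p_values = [shapiro(row).pvalue for row in scores]  # per-sample normality
    rho, _ = spearmanr(scores)                          # (n_testers, n_testers)
    np.fill_diagonal(rho, np.nan)
    acc = np.nanmean(rho, axis=0)                       # ACC of each tester
    keep = acc >= acc_threshold
    final_scores = scores[:, keep].mean(axis=1)
    return p_values, acc, final_scores
```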
Objective evaluation
The Mel frequency cepstral coefficient (MFCC) is based on the characteristics of the human auditory system. The sensitivity of the human ear to different frequencies is non-linear: it is more sensitive to low-frequency sounds and less sensitive to high-frequency sounds. The Mel scale is a frequency measure based on the human ear's perception of pitch, which converts the actual frequency into the Mel frequency. MFCC is a powerful feature representation because it captures the main characteristics of an audio signal in a compact manner and, to some extent, simulates human auditory perception [22][23][24]. The calculation steps of MFCC are as follows: (1) The original signal is pre-emphasized; the high-frequency part is strengthened to compensate for the loss that the high-frequency part of the sound signal may suffer during transmission:
y(t) = x(t) − α x(t − 1),   (2)

where x(t) is the original signal, y(t) is the pre-emphasized signal, and α usually takes 0.95 or 0.97. (2) The signal is divided into frames of N milliseconds, and the data of each frame is windowed. The window function is usually the Hamming window:

w(n) = 0.54 − 0.46 cos(2πn/(N − 1)), 0 ≤ n ≤ N − 1.   (3)

(3) The spectrum of each frame is obtained by the fast Fourier transform. A set of Mel filters (usually a triangular filter bank) is applied to the spectrum to simulate the perceptual properties of the human ear. Each filter H_m(k) is defined as

H_m(k) = 0 for k < f(m−1); (k − f(m−1))/(f(m) − f(m−1)) for f(m−1) ≤ k ≤ f(m); (f(m+1) − k)/(f(m+1) − f(m)) for f(m) ≤ k ≤ f(m+1); 0 for k > f(m+1),   (4)

where f(m) is the central frequency of the m-th filter on the Mel scale. The Mel scale transformation is

M(f) = 2595 log₁₀(1 + f/700),   (5)

where M(f) is the representation of the frequency f on the Mel scale. (4) The output of the filter bank is then taken logarithmically:
S(m) = ln( Σ_k |X(k)|² H_m(k) ),   (6)

where X(k) is the spectrum of the frame.
(5) After taking the logarithm of the filter bank output, the discrete cosine transform is applied to obtain the final MFCC:
C(n) = Σ_{m=1}^{M} S(m) cos( πn(m − 0.5)/M ), n = 1, ..., L,   (7)

where C(n) is the n-th MFCC and L is the order of the MFCC, generally 12-16. In this paper, the MFCC order L is taken as 12, and the frame length N is taken as 20 ms, 25 ms and 30 ms, respectively. The 5 s noise sample is thus divided into F (250, 200 and 167) frames. Based on Eqs. (2)-(7), we can calculate the three MFCC maps for each noise sample. The objective evaluation results generate the input feature space, and the subjective evaluation labels the noise samples. As shown in Fig. 7, the MFCC feature map is constructed as the input of the sound quality evaluation model. To compare with the traditional modeling method of sound quality evaluation, we also select six acoustic parameters as objective evaluation. As illustrated in Fig. 8, the Audio Toolbox in MATLAB is used to calculate these six parameters: A-weighted sound pressure level (A-SPL), loudness, sharpness, roughness, fluctuation and articulation index (AI).
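A minimal sketch of the MFCC feature-map computation using librosa is shown below; note that librosa's pipeline differs from Eqs. (2)-(7) in details (for example, it applies no pre-emphasis by default), so this reproduces the procedure only approximately.

```python
import librosa

def mfcc_feature_map(wav_path, frame_ms=20, n_mfcc=12):
    """Compute a 12-order MFCC feature map of a 5 s noise clip.
    Frame lengths of 20/25/30 ms with no overlap give roughly
    250/200/167 frames, matching the three input sizes used above."""
    y, sr = librosa.load(wav_path, sr=48000, duration=5.0)
    n_fft = int(sr * frame_ms / 1000)            # samples per frame
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                n_fft=n_fft, hop_length=n_fft,
                                window="hamming")
    return mfcc.T                                 # shape ~ (frames, 12)
```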
Methodology
Transfer learning (TL) is especially useful when data is scarce, because it allows models to leverage existing knowledge, reducing the need for large amounts of labeled data. TL can be divided into four categories according to the technique used: instance-based TL, feature-based TL, model-based TL and relation-based TL [25,26]. Instance-based TL directly uses the data instances of the source task to assist the learning of the target task, which usually involves reweighting the data of the source task to better fit the target task. Feature-based TL learns feature representations that can be transferred between source and target tasks.
Model-based TL directly transfers the model parameters of the source task to the target task and adjusts them.
Relation-based TL is suitable for situations where both source and target tasks involve relational data, such as knowledge graphs or social networks. These categories help to understand the wide application of TL and provide guidance for selecting an appropriate transfer learning strategy for a specific problem.
In this paper, we take the sound quality evaluation of the roller chain transmission system as the target task and choose the sound quality study of the silent chain transmission system as a source task. The source task and the target task share the same feature space and data space, but their data distributions differ. We choose the model-based TL approach, in which the model parameters of the source task are used as the initialization of the target task model [27]. The fine-tuning process can be represented as

θ_target = θ_source + ∆θ,   (8)

where θ_target are the model parameters for the target task, θ_source are the model parameters of the source task, and ∆θ is the parameter adjustment on the target task. As shown in Fig. 9, by stacking the prediction results of the three target models, the final sound quality evaluation model is obtained.
As a base model, a convolutional neural network (CNN) is generally used for classification tasks. To model sound quality evaluation, the number of nodes in the output layer is set to 1, and a continuous value is obtained by omitting the nonlinear activation function. The convolutional layers at the front of the CNN can be regarded as a feature extractor, as shown in Fig. 10. In the source model, there are three convolution layers (Conv), one max-pooling layer, one flatten layer, and three fully connected layers (FC). The stride of the max-pooling layer is 2 with zero padding, and the stride of the Conv layers is 1 without zero padding. In the three FC layers, the numbers of nodes are 1024, 128, and 1, respectively. To avoid overfitting, dropout is used in FC1, with a dropout rate of 0.5. The output layer is the last layer, and its output is the evaluation score. Except for the last layer, the activation function of every layer is ReLU. Some studies have shown that the structures of the source model and the target model should be similar [28]. Therefore, in this paper, the structural parameters of the target model are the same as those of the source model. When the 167 × 12 MFCC feature map is used as input, the structural parameters of the source model are shown in Table 3. The transfer learning process from the source model to the target model can be summarized as follows: first, the source model is trained on the source task, and then the feature extractor of the source model is reused in the target model. Based on the samples of the target task, the parameters of the new fully connected layers are trained in the target model.
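The architecture and the freeze-and-fine-tune step can be sketched as follows (Keras); the filter counts, kernel sizes and the exact layer ordering are illustrative assumptions, since Table 3 is not reproduced here.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(167, 12, 1)):
    """Three conv layers (stride 1, no padding), one max-pooling layer
    (stride 2, zero padding), flatten, then FC layers with 1024/128/1
    nodes, dropout 0.5 after FC1 and a linear output for the score."""
    return models.Sequential([
        layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(pool_size=2, strides=2, padding="same"),
        layers.Flatten(),
        layers.Dense(1024, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(128, activation="relu"),
        layers.Dense(1),
    ])

source = build_model()
source.compile(optimizer=tf.keras.optimizers.Adam(5e-3), loss="mse")
# source.fit(X_source, y_source, epochs=200)

# Target model: reuse the frozen feature extractor, retrain new FC layers
target = build_model()
for src_layer, tgt_layer in zip(source.layers[:5], target.layers[:5]):
    tgt_layer.set_weights(src_layer.get_weights())
    tgt_layer.trainable = False
target.compile(optimizer=tf.keras.optimizers.Adam(3e-3), loss="mse")
# target.fit(X_target, y_target, epochs=100)
```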
We choose three sound quality studies of chain transmission systems as the source tasks: silent chain, Hy-Vo chain, and dual-phase Hy-Vo chain. To better train the source models, we need to expand the datasets of these three source tasks. Fuzzy generation is a data enhancement method based on the fuzzy phenomenon in the subjective evaluation of sound quality. After the correlation test, all retained subjective evaluation scores are reasonable, but researchers usually take the mean as the final score. If the average score is taken as the most correct result, the accuracy of the other scores can be defined. Based on the uncertainty of the subjective evaluation results, we introduce fuzzy mathematics to quantify and handle the fuzziness of this problem. Fuzzy mathematics is an effective mathematical tool for dealing with uncertainty and vagueness. By introducing the concepts of fuzzy sets and fuzzy logic, it allows mathematical modeling of the inaccurate or incomplete information that is prevalent in the real world [29][30][31]. Combined with the idea of fuzzy mathematics, fuzzy generation is proposed to expand the datasets. First, we define a fuzzy map on the evaluation score interval,

M : I → [0, 1], s ↦ M(s),   (9)

where I is the value domain [0, 10], M is a fuzzy subset of I, and M(s) is the membership function.
We treat the average score as having a membership of 1, while the minimum score and the maximum score both have a membership of 0. After constructing different membership functions, membership degrees of different sizes are selected to delimit the fuzzy generation interval. Within the fuzzy generation interval, a suitable perturbation method is selected to generate a sufficient number of new samples. In this paper, three membership functions (cusp, ridge and normal) are constructed, given by Eqs. (10)-(12), where c is the core point (the average score), m is the left boundary point (the minimum score), n is the right boundary point (the maximum score), r is a randomly generated point, and M(s_r) is the membership of r. Equation (10) is the cusp membership function, Eq. (11) the ridge membership function, and Eq. (12) the normal membership function. Taking three samples of the roller chain transmission system as an example, the three membership functions are shown in Fig. 11. In this paper, we use random perturbation to triple the size of the original dataset with three membership degrees (0.9, 0.7 and 0.5).
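The idea of fuzzy generation can be sketched with a Gaussian-shaped (normal) membership; the exact parametrisations of Eqs. (10)-(12) are not reproduced in this excerpt, so the width choice below is an illustrative assumption.

```python
import numpy as np

def normal_membership(s, c, m, n):
    """Normal (Gaussian-shaped) membership: 1 at the core point c (the mean
    score) and approximately 0 at the boundary points m, n (min/max score).
    The width choice is an assumption, not the paper's Eq. (12)."""
    sigma = max(c - m, n - c) / 3.0
    return np.exp(-0.5 * ((s - c) / sigma) ** 2)

def fuzzy_generate(c, m, n, membership=0.7, k=3, rng=np.random.default_rng(0)):
    """Draw k perturbed scores whose membership is at least the given level."""
    out = []
    while len(out) < k:
        s = rng.uniform(m, n)
        if normal_membership(s, c, m, n) >= membership:
            out.append(s)
    return out

print(fuzzy_generate(c=7.0, m=5.5, n=8.5, membership=0.7))
```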
Results and analysis
Based on the three source tasks (with 168 samples) and one target task, we build three transfer learning models: Source model 1 (silent chain) → Target model 1, and correspondingly for the other two source tasks (Hy-Vo chain and dual-phase Hy-Vo chain). The three evaluation indicators take their usual forms,

R = Σ_i (x_i − x̄)(y_i − ȳ) / sqrt( Σ_i (x_i − x̄)² Σ_i (y_i − ȳ)² ),   (13)

RMSE = sqrt( (1/n) Σ_i (x_i − y_i)² ),   (14)

MAE = (1/n) Σ_i |x_i − y_i|,   (15)

where n is the number of samples, x_i is the predicted value of a sample, and y_i is its true value. R measures the degree of linear correlation between two variables. Its value lies between −1 and 1, where 1 means a completely positive correlation, −1 a completely negative correlation, and 0 no linear correlation. In the prediction of sound quality, R intuitively shows the linear correspondence between the predicted values of the model and the true values. To measure the final predictive effect of transfer learning, we also propose an evaluation indicator E, Eq. (16), to select the best configuration.
The indicator E represents how effective the transferred knowledge is on the target model. To ensure that the performance difference between the source model and the target model is not too large, the value of E should be as small as possible. Based on the value of E, we can identify the best source model-target model pair. Due to the small sample size of the target task, severe underfitting occurs in a model trained without transfer learning, and those results are not presented because they are meaningless.
The training results of Source model 1 are shown in Table 4, and the training results of Target model 1 are shown in Table 5. The smaller the evaluation indicator E, the greater the contribution of the source model and the smaller the prediction error of the target model. As Table 5 shows, the minimum value of the evaluation indicator E is 1.745. Therefore, when the membership function is ridge, the membership value is 0.7 and the input size is 167 × 12, the transfer learning effect is the best. The training results of Source model 2 and Target model 2 are presented in Tables 6 and 7, respectively.
For Source model 2-Target model 2, as can be seen from the minimum value 1.813 of E in Table 7, transfer learning works best when the membership function is ridge, the membership value is 0.7 and the input size is 200 × 12. For the last transfer learning model (Source model 3-Target model 3), the results are shown in Tables 8 and 9.
As can be seen from Table 9, the minimum value of E is 2.009, indicating that transfer learning works best when the membership function is ridge, the membership value is 0.9 and the input size is 250 × 12. In all three transfer learning models, the error indicators (RMSE and MAE) of the target model are larger than those of the source model, and we find that the transfer learning effect can be poor on specific samples, as shown in Fig. 12. In Fig. 12, the performance of the three transfer learning models differs significantly across samples. To get the most out of each model, the stacking technique shown in Fig. 13 is adopted. First, we train the three transfer learning models in their best configurations; Fig. 14 shows their convergence curves. In all three transfer learning models, the initial RMSE of the target model is significantly smaller than that of the source model. This shows that the iteration of the target model is smoother, unlike the source model, which has an obvious period of rapid convergence. The three target models are taken as base models, and the predictions of the base models are taken as the training data of the meta model. After training the meta model, the predictions of the base models can be effectively integrated. In this paper, the meta model adopts linear regression,

P_final = β₀ + β₁ p₁ + β₂ p₂ + β₃ p₃,   (17)

where P_final is the final prediction of the meta model, p₁ is the prediction of Target model 1, p₂ that of Target model 2, and p₃ that of Target model 3. Finally, the prediction results of the meta model are shown in Table 10.
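The stacking step itself is simple; a sketch with scikit-learn, assuming the three base-model predictions are already available as arrays, could be:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_meta_model(p1, p2, p3, y_true):
    """Stack the three target models: fit Eq. (17),
    P_final = b0 + b1*p1 + b2*p2 + b3*p3,
    on the base-model predictions of the training samples."""
    X = np.column_stack([p1, p2, p3])
    return LinearRegression().fit(X, y_true)

# Final prediction on new samples:
# P_final = meta.predict(np.column_stack([p1_new, p2_new, p3_new]))
```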
To compare with traditional sound quality evaluation methods, we also use lasso regression and support vector regression (SVR) [32,33]. Lasso regression is a linear model that uses an L1 penalty term to control variable selection and complexity. This helps reduce the risk of overfitting and enhances the interpretability of the model for data containing multiple correlated predictors. There are a large number of potential explanatory variables in sound quality prediction, and lasso regression can help identify which features are most important, simplifying the model and improving prediction performance. Unlike lasso regression, SVR is a nonlinear model that deals with linearly inseparable data by using different kernel functions. SVR has two main advantages: it is robust to outliers and works effectively in high-dimensional spaces. For problems such as sound quality that involve complex nonlinear relationships, SVR provides powerful modeling capabilities. Based on the six acoustic parameters as inputs and the subjective evaluation scores as outputs, we train the lasso regression model and the SVR model. For the lasso regression model, five-fold cross validation is used, and the optimal parameter is λ = 68. For the SVR model with a radial basis function kernel, five-fold cross validation is also used to find the optimal parameters over the value range [0.01, 0.1, 1, 10, 100], giving the penalty parameter c = 100 and the kernel parameter g = 0.01. Due to the small size of the original dataset, we choose cross-validation to obtain a robust estimate of model performance; five-fold cross-validation is used as the test method for both the lasso regression model and the SVR model. The prediction results of the lasso regression model and the SVR model are also shown in Table 10. Compared with the three target models, the three indicators of the meta model are significantly better; in particular, the error indicators (RMSE and MAE) are especially small. Compared with the two traditional methods, the meta model also has obvious advantages: the maximum correlation coefficient R is 0.993, the minimum RMSE is 0.238, and the minimum MAE is only 0.181. Therefore, the results show that the multi-source transfer learning convolutional neural network (MSTL-CNN) proposed in this paper achieves the best and most accurate sound quality evaluation.
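The two baselines can be reproduced with a short sketch; scikit-learn's parameter names (alpha for the lasso penalty, C and gamma for the SVR) play the roles of the paper's λ, c and g, and the alpha grid is an assumption.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV, cross_val_predict
from sklearn.svm import SVR

def fit_baselines(X, y):
    """X: (n_samples, 6) acoustic parameters, y: subjective scores.
    Five-fold cross validation selects the lasso penalty and the SVR
    hyper-parameters over the value range used in the paper."""
    lasso = GridSearchCV(Lasso(), {"alpha": np.logspace(-2, 2, 50)}, cv=5)
    svr = GridSearchCV(SVR(kernel="rbf"),
                       {"C": [0.01, 0.1, 1, 10, 100],
                        "gamma": [0.01, 0.1, 1, 10, 100]}, cv=5)
    lasso.fit(X, y)
    svr.fit(X, y)
    # robust performance estimate from cross-validated predictions
    y_hat_svr = cross_val_predict(svr.best_estimator_, X, y, cv=5)
    return lasso, svr, y_hat_svr
```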
Conclusion
In this paper, we carry out a series of noise tests to establish the sound quality evaluation model of the roller chain transmission system. First, 48 noise samples are obtained through the noise acquisition test, and all noise samples are evaluated subjectively and objectively. For the subjective evaluation, results with poor correlation are excluded. For the objective evaluation, we calculate the Mel frequency cepstral coefficients and six acoustic parameters.
To solve the problem of small-sample sound quality evaluation, we propose a multi-source transfer learning convolutional neural network (MSTL-CNN) based on three source tasks. For the transfer learning models, three membership functions (cusp, ridge and normal) and three membership values (0.9, 0.7 and 0.5) are selected for fuzzy generation. By comparing the transfer learning results in different situations, the optimal conditions of each transfer learning model are found. Since the three transfer learning models behave differently on different samples, we stack their predictions into one meta model. The results of the meta model are not only much better than those of each transfer learning model, but also better than those of traditional sound quality methods. In particular, the MSTL-CNN proposed in this paper has the smallest mean absolute error of only 0.181, indicating that the model is the most accurate in the evaluation of sound quality. In future work, removing the stacking step to simplify the model structure remains a difficulty; it is therefore crucial to realize simultaneous training of the three transfer models with timely knowledge sharing for specific samples.
Figure 2. Comparison of time-domain waveforms.
Figure 8. Six acoustic parameters: (a) A-SPL, (b) loudness, (c) sharpness, (d) roughness, (e) fluctuation, (f) AI. (The upper plot of each subgraph shows the parameters at the 0.5 m measuring distance and the lower plot those at the 1 m measuring distance.)
is the loss function. The initial learning rate of the source model is 0.005 and the number of epochs is set to 200. After each source model is trained on its source task, the part comprising the convolution layers, the pooling layer and the flatten layer is regarded as the feature extractor, and the parameters of these layers are frozen. By connecting the feature extractor with a new input layer, new fully connected layers, a new dropout layer and a new output layer, we finally obtain the three target models. Each target model is trained with the dataset of the roller chain system; the initial learning rate is 0.003 and the number of epochs is set to 100. During training, only the parameters of the new fully connected layers are updated, and with the help of the feature extractor, the model converges within fewer epochs. The final results are the mean of five training runs, and since there are 3 kinds of feature maps, 3 membership functions and 3 membership values, each transfer learning model needs to be trained 5 × 3 × 3 × 3 = 135 times in total. Three indicators, the correlation coefficient (R), the root mean squared error (RMSE) and the mean absolute error (MAE), are selected to evaluate the transfer learning models; their formulas are given in Eqs. (13)-(15).
Table 1. ACC of the twelve testers.
Table 2. Final score of each sample.
Table 4. Results of source model 1.
Table 5. Results of target model 1.
Table 6. Results of source model 2.
Table 7. Results of target model 2.
Table 9. Results of target model 3.
"Engineering",
"Computer Science"
] |
Second order cone programming for frictional contact mechanics using interior point algorithm
Introduction
Contact problems with dry Coulomb friction are present in many design and validation processes in mechanical engineering. As soon as several objects are involved, the question of computing contact forces arises. Examples include multibody systems and mechanisms (Brogliato, 1999; Pfeiffer and Glocker, 1996), robotic systems and grasping problems (Bicchi and Kumar, 2000; Buss et al., 1997, 1996; Han et al., 2000), deformable solid mechanics (Johnson, 1987; Popov et al., 2010; Wriggers, 2006), and granular materials (Cambou et al., 2013; Radjai and Dubois, 2011). A common model of one-sided contact considers a set of inequality constraints on the configuration parameters of the mechanical problem (positions, rotations), associated with contact forces which must themselves be non-negative. Coulomb friction, which governs how objects slide relative to each other, imposes the constraint that the contact forces remain within the Lorentz cone. These laws lead to second order cone constraints on the contact forces. By introducing a non-linear change of variables of the contact relative velocities, which defines the so-called modified velocities, the contact laws can be written as a complementarity condition on second order cones between the modified velocities and the contact forces. In solid and structural dynamics, the discrete mechanical model is usually supplemented by equations of motion that relate the velocities of the system to the applied and contact forces. After discretization in space and time, the resulting system can be expressed as a nonlinear second order cone complementarity problem (SOCCP) (Acary et al., 2011; Acary and Brogliato, 2008; Acary and Cadoux, 2013). Section 2 details the problem formulation when the discrete equations of motion are assumed to be linear.
To solve these SOCCP problems numerically, a large number of methods have been used in the computational contact mechanics community. Many of these methods are in fact adaptations of well-known mathematical programming methods for solving variational inequalities and complementarity problems, and some have been jointly developed with optimization specialists. There are two inherent difficulties in frictional contact mechanics problems. The first is related to the non-monotone nature of the complementarity problem: identifying an optimization problem for which the complementarity problem would be the optimality conditions is difficult and leads to non-convex optimization problems. The second arises from the fact that the constraints are not of full rank and, in the case of rigid multi-body systems, may be of very low rank with respect to the number of constraints. In (Acary et al., 2018), a review of the main literature is given and methods are compared with performance profiles. When the second-order constraints are of full rank, second-order methods such as semi-smooth Newton methods, e.g., the Alart-Curnier method (Alart and Curnier, 1991), are generally robust and accurate. If not, as is usual in multi-body systems made of rigid parts (robots, granular materials), first-order methods such as projected successive over-relaxation gradient methods are robust, but slow and of limited accuracy in practice, while second-order techniques fail due to robustness issues. As far as we know, there is no high-accuracy second-order method capable of solving frictional contact problems with redundant constraints.
One objective of this work is to propose a second-order method, based on an interior point method, that is accurate, robust and efficient for problems where the constraints are rank deficient, applied, as a first step, to a convex optimization problem. Some applications of interior point methods to contact problems have already been attempted in the literature (Kleinert et al., 2014; Kučera et al., 2013; Mangoni et al., 2018; Melanz et al., 2017). In the precursor work of (Kučera et al., 2013), the contact problem with Tresca friction (purely quadratic cylindrical constraints) is solved with Mehrotra's algorithm. The work in (Kleinert et al., 2014) is the most advanced for the case of conic constraints. The problem considered there is the relaxed convex problem, as in this work, which is further regularized by adding a constant diagonal term to the Jacobian matrices of the interior point algorithm. To solve large problems, the linear systems are solved by iterative methods. The degradation of the conditioning as the iterations progress leads to the use of preconditioners and makes it costly to obtain highly accurate solutions. In (Melanz et al., 2017), a comparison is made between an interior point method and first-order methods. The interior point method on second-order cones shows a better convergence rate, but without reaching higher accuracies than first-order methods.
In this article, we consider a relaxed problem where the actual local velocity is complementary to the reaction force at each contact point. This yields a reformulation of the original problem as a convex second order cone optimization (SOCO) problem. A dedicated interior point method based on the primal-dual algorithm of Mehrotra with the Nesterov-Todd scaling strategy is tailored to solve this problem. In particular, we show on a large benchmark set of examples that tight accuracy can be achieved at the optimum without regularizing the Jacobian matrices, provided that their conditioning is controlled.
Since our main objective is a robust and efficient computer code capable of solving our problem to high accuracy, we first performed experiments with existing solvers. At first, we tried to solve the convex relaxation problem using differentiable nonlinear optimization as proposed by Benson and Vanderbei (Benson and Vanderbei, 2003). They suggest four ways of reformulating the second-order cone constraint to get around the non-differentiability of the norm at zero. Our experiments were performed by means of the modeling language AMPL (Gay, 2015) and the optimization solver KNITRO (Byrd et al., 2006). Our conclusion is that, except in a few rare cases, none of these reformulations is able to solve the convex model of our problem efficiently and accurately. The second set of experiments was with the interior point solver SDPT3 (Tütüncü et al., 2003). This is an implementation of the Mehrotra predictor-corrector algorithm, which we hoped would be effective and robust in solving our problem. From our point of view, the great advantage of SDPT3 is that it is open source, so we could hope to modify it to solve the initial problem. Unfortunately, the results of the experiments with SDPT3 were not conclusive. In particular, the accuracy we wanted to achieve was not reached and too many failures were observed. But the experiments were instructive, because we identified that the main difficulty in implementing this algorithm lies in solving the linear system at each iteration. This led us to carry out our own implementation of this algorithm, devoting our efforts to the solution of the linear system and to the scaling method. We also experimented with the interior point solver MOSEK (Andersen and Andersen, 2000), whose algorithm is based on a homogeneous self-dual model (Andersen et al., 2003). The numerical results of MOSEK are better than those of SDPT3. However, in order to obtain accurate solutions, the number of iterations can become large, or failures occur as soon as the stopping tolerance is reduced (below 10⁻⁷). Unfortunately, MOSEK is not open source, so it has not been possible to understand the origin of these failures, nor would it be possible to modify the code to solve our initial problem. These first numerical experiments motivated us to produce our own implementation of the interior point algorithm, especially since, in the conclusion of (Tütüncü et al., 2003), the authors state that "For problems with second-order (quadratic) cone constraints, experiments indicate that there is room for improvement in SDPT3".
The outline and the contributions of the article are as follows. In Section 2, we recall the basics of the mechanical models and how the convex SOCO problem is obtained. The corresponding primal-dual pair is formulated in Section 3. General results are given on the non-emptiness and compactness of the solution set under the Slater hypothesis. In Section 4, the properties of the central path are detailed. In particular, under the assumption of strict complementarity, we establish a characterization of the limit point of the central path, the analytic center of the optimal set, which to our knowledge is new in the SOCO context. This property of the algorithm is important from the mechanical point of view, since the selection of the dual variables, which correspond to the reaction forces, is completely controlled by the interior point method. The details of the numerical implementation of the algorithm are given in Section 5. Several alternative equivalent formulations of the linear system to be solved at each iteration are detailed, and the comparison of their conditioning over the iterations is illustrated. Our experiments show that the choice of formulation is fundamental for the robustness of the algorithm. In Section 6, the interior point method is extended to the case of rolling friction, where the cone of constraints is no longer a Lorentz cone and is not self-dual.
Notation
Let x and y be two vectors of R^n. The Euclidean scalar product of x and y is denoted x^T y. The associated l2 norm is ∥x∥ = √(x^T x). The perp notation x ⊥ y means that x^T y = 0. Let E be a subset of R^n. The interior, the relative interior and the boundary of E are respectively denoted by int(E), ri(E) and bd(E). The dual of E is denoted and defined by E* := {y ∈ R^n : y^T x ≥ 0, for all x ∈ E}. The definitions and notations related to the Jordan algebra are described in Appendix A.
Mechanical models
Let us first introduce the original mechanical models that we are interested in. Let d be the dimension of the space of the mechanical system. Two classes of problems are considered: the frictional contact (FC) problems (Acary et al., 2018), for which d = 3, and the problems with rolling friction (RF) at contact (Acary and Bourrier, 2021), for which d = 5. Let n ∈ N be the number of contact points and let m ∈ N be the number of degrees of freedom. The mechanical system is described by means of three vectors: the global velocity v ∈ R^m, the local velocity u ∈ R^{nd} and the reaction r ∈ R^{nd}. After discretizing the dynamical system in time and space, the general model to be considered is a conic complementarity problem of the form

Mv = f + H^T r,  u = Hv + w,  K* ∋ u + Φ(u) ⊥ r ∈ K,  (1)

where the data of the problem are M ∈ R^{m×m}, a symmetric and positive-definite matrix, f ∈ R^m, H ∈ R^{nd×m} and w ∈ R^{nd}; K is a cone and K* := {u ∈ R^{nd} : u^T r ≥ 0, for all r ∈ K} is its dual cone. The cone is of the form K = ∏_{i=1}^n K_i, each K_i being a cone whose definition depends on the model. For an FC problem, we have

K_i = {r_i = (r_{i,N}, r_{i,T}) ∈ R × R^2 : ∥r_{i,T}∥ ≤ µ_i r_{i,N}}.

This is the second order cone of Coulomb friction at the ith contact point. The scalar r_{i,N} (resp. u_{i,N}) and the vector r_{i,T} (resp. u_{i,T}) are the normal and tangential components of the reaction (resp. velocity) vector at the ith contact point, and µ ∈ R^n is the vector of friction coefficients. The dual cone of K_i is given by

K_i* = {u_i = (u_{i,N}, u_{i,T}) ∈ R × R^2 : µ_i ∥u_{i,T}∥ ≤ u_{i,N}}.

For an RF problem, the cones are of the form

K_i = {r_i = (r_{i,N}, r_{i,T}, r_{i,R}) ∈ R × R^2 × R^2 : ∥r_{i,T}∥ ≤ µ_i r_{i,N} and ∥r_{i,R}∥ ≤ µ_{R,i} r_{i,N}}.

The vector r_{i,R} is the rolling friction reaction moment at contact and µ_R ∈ R^n is the vector of rolling friction coefficients. The dual cone of K_i is defined analogously, and the vector u_{i,R} is the relative angular velocity at contact. For this model, the components of the function Φ are defined by Φ_i(u) = (µ_i ∥u_{i,T}∥ + µ_{R,i} ∥u_{i,R}∥, 0^T, 0^T)^T. By observing that Φ(u) = Φ(u + Φ(u)) and by making the change of variable u ← u + Φ(u), the problem (1) is reformulated as problem (2). In (Acary et al., 2011), for the case of an FC problem, an iterative solution of (2) is proposed as follows. The second equation is rewritten as u = Hv + w + ϕ(s), where ϕ : R^n → R^{nd} is defined by n components of the form (s_i, 0^T)^T ∈ R^d. The idea is to solve the nonlinear system as a parametric system of the form of (2) with s held fixed in ϕ(s), where s is periodically updated according to s_i ← µ_i ∥u_{i,T}∥. There is no guarantee of global convergence of this procedure, but the advantage of this formulation is that, for a fixed s, the parametric system corresponds to the first order optimality conditions of the following convex optimization problem:

min_v ½ v^T M v − f^T v  subject to  Hv + w + ϕ(s) ∈ K*.

This problem belongs to the class of second order cone optimization (SOCO) problems. Our study focuses on the numerical solution of this problem. Obviously, the same parametric strategy can be applied to the solution of an RF problem.
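To make the cone constraints concrete, the following small C sketch tests membership of a reaction in the Coulomb friction cone and of a velocity in its dual cone, for the FC case (d = 3). It is an illustration only, not part of the solver described in this paper; the function names and the hard-coded data are ours.

```c
#include <math.h>
#include <stdio.h>

/* Coulomb friction cone K_i = { r : ||r_T|| <= mu * r_N } with
 * r = (r_N, r_T1, r_T2), and its dual K_i* = { u : mu * ||u_T|| <= u_N }. */
int in_friction_cone(const double r[3], double mu) {
    return sqrt(r[1]*r[1] + r[2]*r[2]) <= mu * r[0];
}

int in_dual_cone(const double u[3], double mu) {
    return mu * sqrt(u[1]*u[1] + u[2]*u[2]) <= u[0];
}

int main(void) {
    double r[3] = {1.0, 0.3, 0.4};   /* ||r_T|| = 0.5 */
    double mu = 0.6;
    printf("r in K: %d,  r in K*: %d\n",
           in_friction_cone(r, mu), in_dual_cone(r, mu));
    return 0;
}
```

After the rescaling of Section "Basic properties", both tests reduce to the membership test for the standard Lorentz cone (mu = 1).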
Basic properties
To simplify the models, let us remove the friction coefficients by means of a simple change of variables. Let us define the diagonal matrix P_µ whose ith diagonal block is diag(1, µ_i, µ_i) for the FC model and diag(1, µ_i, µ_i, µ_{R,i}, µ_{R,i}) for the RF model. The parameter s is assumed to be fixed. Let us make the change of variables: u ← P_µ u, H ← P_µ H, w ← P_µ(w + ϕ(s)) and r ← P_µ^{-1} r.
We keep the same names for the variables, but change the notation of the components. For the RF model we define u_i = (u_{i,0}, ū_i, ũ_i) and r_i = (r_{i,0}, r̄_i, r̃_i). The same notation is used for the FC model, except that the components ũ_i and r̃_i do not appear. The system (3) becomes

Mv = f + H^T r,  u = Hv + w,  F* ∋ u ⊥ r ∈ F,  (4)

where F is the product of cones; the definition of F depends on the model. For the FC model (d = 3) we have F = ∏_{i=1}^n L_i, where L_i = {x_i ∈ R^3 : x_{i,0} ≥ ∥x̄_i∥} is the Lorentz cone. This cone is self-dual (i.e., L_i* = L_i), therefore F* = F. For the RF model (d = 5), the cone of constraints is the product of the cones {r_i ∈ R^5 : r_{i,0} ≥ max(∥r̄_i∥, ∥r̃_i∥)}. This cone is not self-dual; its dual cone is the product of the cones {u_i ∈ R^5 : u_{i,0} ≥ ∥ū_i∥ + ∥ũ_i∥}. The system (4) can be viewed as the first order optimality conditions of the following second order cone optimization problem:

min_{v,u} ½ v^T M v − f^T v  subject to  u = Hv + w,  u ∈ F*.  (9)

Let r ∈ R^{nd} be the dual variable associated to the linear constraint of (9). The Lagrangian function associated to problem (9) is defined by L(v, u, r) = ½ v^T M v − f^T v + r^T(u − Hv − w). Since L is separable in v and u, convex in v and linear in u, and since F** = F, the dual function is readily obtained, and the dual problem is then

min_r ½ r^T W r + q^T r  subject to  r ∈ F,  (11)

where, as in (10), W = HM^{-1}H^T and q = HM^{-1}f + w. Let us denote by R̄ the optimal set of this problem. It is a convex set, characterized by the first order optimality conditions (12). Note that these conditions are equivalent to (4), thanks to the definitions (10) and the equalities v = M^{-1}(f + H^T r) and u = W r + q. By weak duality, the duality gap (i.e., the difference between the value of the primal and the dual function) is nonnegative. Indeed, for a primal-dual feasible solution (v, u, r), the gap reduces to u^T r ≥ 0, the inequality arising from the fact that r ∈ F and u ∈ F*. For strong duality and the existence of an optimal solution for (11), we need a constraint qualification assumption, known as the Slater hypothesis.
With respect to the reduced problem (11), the Slater hypothesis can be equivalently formulated as follows: there exists v ∈ R^m such that Hv + w ∈ int(F*) (Assumption 3.1). The following result summarizes the connection between the solutions of (4) and (11). These facts are well known and derive from basic properties of convex optimization. We provide a proof in Appendix B for completeness. Proposition 3.2. Consider the primal-dual pair of SOCO problems (9)-(11), where M ∈ R^{m×m} is symmetric and positive-definite, f ∈ R^m, H ∈ R^{nd×m}, w ∈ R^{nd} and F is the product of second order cones of the form (5) or (7).
(ii) If the problem (9) is feasible, then it has a unique optimal solution (v̂, û). (iii) Assumption 3.1 is satisfied if and only if the optimal set of (11) is non-empty and compact.
Central path related to a frictional contact problem
In this section, we are interested in FC problems. We recall some known results about the convergence of the central path in SOCO and put them in our context. For the sake of completeness, we also provide the proofs.
For FC problems, the friction contact cone is equal to the product of n Lorentz cones, i.e., F = L^n, where L is the Lorentz cone of dimension d. We recall that this cone is self-dual, i.e., F* = F. There is an algebra, the so-called Euclidean Jordan algebra, associated to symmetric cones, which allows an almost direct extension of interior-point algorithms for linear optimization to the case of SOCO (Alizadeh and Goldfarb, 2003). A summary of the Euclidean Jordan algebra is given in Appendix A. Thus, the orthogonality condition in (12) becomes u • r = 0, see (Alizadeh and Goldfarb, 2003, Lemma 15). This leads to a square system of equations with the additional conical constraints: under Assumption 3.1, (r, u) is a primal-dual optimal solution of the problem (11) if and only if

W r + q = u,  u • r = 0,  (u, r) ∈ L^{2n}.  (14)

It is important to keep in mind that the matrix W is not an arbitrary positive semidefinite matrix, but has the structure given by (10). This structure will be useful for solving the linear system at each iteration. In interior-point methods, a perturbation is introduced in the complementarity equation, so that the system to be solved becomes

W r + q = u,  u • r = 2µe,  (u, r) ∈ int(L^{2n}),  (15)

where e is the unit vector associated with the Jordan product and µ > 0 is a parameter that is driven progressively to zero along the iterations, so that in the end a solution of (14) is found. The parameter µ is called the barrier parameter, because (15) can be interpreted as the optimality condition of the barrier problem

min_{r ∈ int(F)} ½ r^T W r + q^T r − µ Σ_{i=1}^n log det(r_i).  (16)

The first order optimality condition of (16) is W r + q − 2µ r^{-1} = 0. By introducing the variable u = 2µ r^{-1}, we get (15). The system (15) defines a curve, called the central path. The following result states that, under the Slater hypothesis, the central path is well defined and remains bounded for bounded values of µ. The proof is given in Appendix C. Proposition 4.1. Under Assumption 3.1, for all µ > 0, the perturbed KKT system (15) has a unique solution (r(µ), u(µ)) ∈ int(L^{2n}). For all µ̄ > 0, the set {(r(µ), u(µ)) : 0 ≤ µ ≤ µ̄} is bounded.
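As an illustration of the barrier interpretation (16), the following C sketch evaluates the barrier function for a product of n three-dimensional Lorentz cones, with W stored as a dense array. This is our own toy code, not the paper's implementation; in particular, a real solver would exploit the structure (10) of W rather than a dense product.

```c
#include <math.h>

/* Evaluate phi_mu(r) = 0.5 r'Wr + q'r - mu * sum_i log det(r_i) for n cones
 * of dimension 3, with det(r_i) = r_{i0}^2 - ||bar r_i||^2.  W is dense,
 * row-major, of size (3n)x(3n).  Returns INFINITY outside int(F). */
double barrier_value(int n, const double *W, const double *q,
                     const double *r, double mu) {
    int N = 3 * n;
    double val = 0.0;
    for (int i = 0; i < N; ++i) {
        double Wr_i = 0.0;
        for (int j = 0; j < N; ++j) Wr_i += W[i*N + j] * r[j];
        val += 0.5 * r[i] * Wr_i + q[i] * r[i];
    }
    for (int i = 0; i < n; ++i) {
        const double *ri = r + 3*i;
        double det = ri[0]*ri[0] - ri[1]*ri[1] - ri[2]*ri[2];
        if (ri[0] <= 0.0 || det <= 0.0) return INFINITY;
        val -= mu * log(det);
    }
    return val;
}
```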
From Proposition 3.2-(ii), the optimal solution of (9) is unique, therefore lim_{µ→0} u(µ) = û. For the curve r(·), it can be shown that lim_{µ→0} r(µ) = r̄, where r̄ ∈ ri(R̄). Such a solution is called a maximally complementary optimal solution of (11). It can be characterized as follows. Regarding the problem (11), the index set {1, . . ., n} of the Lorentz cones is partitioned into six sets (Mohammad-Nezhad and Terlaky, 2021), denoted B, N, R, T1, T2 and T3. An optimal solution r̄ ∈ R̄ is maximally complementary if and only if r̄_i ∈ int(L) for all i ∈ B, and r̄_i ∈ bd(L) \ {0} for all i ∈ R. The next result shows the convergence of the central path to a maximally complementary solution. A proof is given in Appendix C for self-completeness.
In linear optimization, it is known that the central path converges to the analytic center of the optimal set (Monteiro and Zou, 1998). In semidefinite optimization, it is also known that this result holds under strict complementarity (De Klerk et al., 1998; Halická et al., 2005). But in SOCO, as far as we know, no characterization of the analytic center is available in the literature, even under the strict complementarity assumption. In order to complete the theory, we provide such a characterization.
This assumption is equivalent to T_i = ∅, for i = 1, 2, 3. In that case, (B, N, R) is a partition of {1, . . ., n}. Under this assumption, we can define the analytic center of the optimal set R̄. If R̄ is a singleton, then the analytic center of R̄ is this point. Otherwise, it is the unique optimal solution of the problem

min_{r ∈ ri(R̄)} ψ(r) := − Σ_{i∈B} log det(r_i) − Σ_{i∈R} log r_{i,0}.  (17)

The next proposition shows that the analytic center is well defined.
Proposition 4.4. Under Assumptions 3.1 and 4.3, if R̄ is not reduced to a singleton, then the problem (17) has a unique optimal solution r̄ ∈ ri(R̄), characterized by the following property: for all r ∈ ri(R̄),

Σ_{i∈B} 2 r̄_i^{-T} r_i + Σ_{i∈R} r_{i,0}/r̄_{i,0} ≤ 2|B| + |R|.  (18)

Proof. Let r ∈ ri(R̄). For all i ∈ B, det r_i > 0, and for all i ∈ R, det r_i = 0 with r_{i,0} > 0. To show that problem (17) has at least one solution, it suffices to show that the function ψ + δ_{ri(R̄)} is coercive. This last property is a direct consequence of the compactness of the set R̄.
The uniqueness of the minimum comes from the strict convexity of ψ on ri(R̄). Indeed, for a conic component i ∈ B, we have ∇²_r(−log det r)|_{r=r_i} = 2Q_{r_i^{-1}}, which is positive definite for all r_i ∈ int(L), see Appendix A. For all conic components i ∈ R, there exists h_i ∈ R^{d−1}, with ∥h_i∥ = 1, such that for all r ∈ ri(R̄), we have r_i = r_{i,0}(1, h_i^T)^T. Indeed, suppose that for i ∈ R, there exist r and r′ in ri(R̄) such that r_i and r′_i are not collinear. By the triangle inequality, it is easy to see that ½(r_i + r′_i) ∈ int(L), and thus i ∈ B, which would contradict i ∈ R. Finally, for all r, r′ ∈ ri(R̄), if r ≠ r′ then there exists i ∈ B such that r_i ≠ r′_i or there exists i ∈ R such that r_{i,0} ≠ r′_{i,0}. In both cases, we have ψ(r) ≠ ψ(r′), which implies that ψ is strictly convex on ri(R̄). Since ψ is convex on ri(R̄), r̄ is optimal if and only if −∇ψ(r̄) is in the normal cone to R̄ at r̄, i.e., for all r ∈ ri(R̄), ∇ψ(r̄)^T(r − r̄) ≥ 0. Writing this inequality for r ∈ ri(R̄), we deduce that (18) is satisfied.
Let us state and prove the main result of this section.
Theorem 4.5. Under Assumptions 3.1 and 4.3, the central path r(·) converges to the analytic center of R̄.
Proof. Suppose that R̄ is not reduced to a singleton, otherwise the result is a direct consequence of Proposition 4.2. Let r ∈ ri(R̄). As in the proof of Proposition 4.2, for all µ > 0, (44) is satisfied. By using r(µ) • u(µ) = 2µe and the definition of the partition (B, N, R), the duality gap splits into three sums over B, N and R, bounded as in (19), (20) and (21): for any index i, by using the Cauchy-Schwarz inequality and the fact that r_{i,0} ≥ ∥r̄_i∥, we obtain (20), and in the same manner, for all i ∈ {1, . . ., n}, we obtain (21). From (19), (20) and (21), we deduce an upper bound whose first sum tends to Σ_{i∈B} r_i^T r̄_i^{-1} as µ tends to zero, while the second one tends to |N|. By using the fact that r̄_{i,0} = ∥r̄_i∥ and û_{i,0} = ∥ū_i∥ for i ∈ R, each term of the third sum tends to r_{i,0}/(2r̄_{i,0}) + ½. By taking the limit as µ → 0 and using n = |B| + |N| + |R|, we recover (18), which shows by Proposition 4.4 that r̄ is the optimal solution of (17), the analytic center of R̄.
We would also like to emphasise an important point concerning the characterization of the analytic center from the point of view of computational mechanics. If the matrix H is not of full row rank, the reaction forces r are not unique. The interior point algorithm then allows the optimal reaction forces to be uniquely defined. In addition, the analytic center has a mechanical interest and interpretation: by choosing a solution that is furthest from the boundaries of the solution set, and therefore from the boundaries of the cones, the method maximizes the sum of the normal stresses r_0 and the distance to the edges of the cones. For simple cases, this provides a better distribution of reactions among the active constraints, rather than particular solutions where the reaction forces are concentrated on a few constraints.
Numerical solution of the frictional contact problems
We have implemented the primal-dual interior point algorithm developed by Tütüncü, Toh and Todd (Tütüncü et al., 2003) and adapted it to our context. The algorithm is an extension of the predictor-corrector algorithm of Mehrotra (Mehrotra, 1992) to the solution of a SOCO problem. The differences with the algorithm implemented in SDPT3 are as follows: • The convex quadratic objective function is taken into account directly, without reformulating the problem with a linear objective function and a new conic constraint. One consequence is that the primal and dual steps must be equal, which is not necessarily the case with a linear objective function. This is explained below.
• The linear system to solve at each iteration, resulting from the linearization of the perturbed KKT system (15), is transformed via the Nesterov-Todd scaling strategy as in SDPT3, but in a different manner.
The scaling matrix (i.e., the quadratic representation of the vector p defined by formula (25) below), which introduces bad conditioning when the iterates are close to an optimal solution, is never explicitly computed and stored in memory. Only matrix-vector products are performed. This method allows us to achieve a high degree of accuracy in the computation of the directions, and thus in the solution of the quadratic convex problem. We examine in detail equivalent formulations of the linear system to justify our choice and detail all the computations performed.
• We also exploit the data structure of the problems, by using the matrices H and M instead of the reduced matrix W (see formula (10)), in order to preserve the sparsity of the linear system. This transforms the initial 2 × 2 block linear system into a 3 × 3 block system. We also show, by numerical experiments, that it is better to reduce this system back to a 2 × 2 block system, while preserving the sparse structure but avoiding some of the calculations with the scaling matrix.
• Finally, the experiments show that using the C data type long double for all the calculations related to the Nesterov-Todd scaling scheme improves the robustness of the implementation, while maintaining reasonable computation times.
The main part of the algorithm is the solution of two linear systems that result from the linearization of the equation (15) at the current iterate (u, r) ∈ int(L^{2n}). They differ only in the right-hand side and are of the form (22), where U = Arw(u) and R = Arw(r) are the arrow-shaped matrices associated to u and r. See Appendix A for all definitions related to the Euclidean Jordan algebra that are used in this paper. The first direction, denoted (d_a^u, d_a^r) and called the affine scaling direction, is the solution of (22) without the square-bracketed term in the right-hand side. It then satisfies the linear equation (23); the affine scaling direction is a Newton step on the original optimality system (14). The barrier parameter is set to µ = u^T r / n. The centralization parameter σ ∈ (0, 1] is fixed by comparing the current value of µ with its expected reduction along the affine step. The second direction is a linear combination of the affine scaling direction and of a corrector step, to keep the iterates near the central path. The new iterate is obtained by moving along this direction with a steplength α, the largest value in (0, 1] such that the updated pair (u, r) stays in the cone, damped by some value τ ∈ (0, 1] (typically τ = 0.995). Contrary to common practice, different primal and dual steplengths are not taken, because of the first equation in (22), which is linear and involves both primal and dual variables. Indeed, suppose that the current iterate is such that W r + q − u = 0 and that different steplengths α ≠ α′ are taken with d^u ≠ 0. The residual of this linear equation at the next iterate is then proportional to (α′ − α)d^u, so it is no longer zero.
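The steplength computation reduces, cone by cone, to finding the first boundary crossing of a quadratic in α, since det(x + αd) is quadratic in α. A possible C sketch for one three-dimensional cone is given below; the function name and the sentinel handling are ours, and a production code would guard the degenerate cases more carefully.

```c
#include <math.h>

/* Largest alpha in (0, 1] such that x + alpha*d stays in the Lorentz cone
 * L = { x in R^3 : x_0 >= ||bar x|| }, damped by tau, assuming x in int(L).
 * The boundary is reached when det(x + alpha*d) = 0, a quadratic
 * a*alpha^2 + b*alpha + c with c = det(x) > 0. */
static double det3(const double x[3]) {
    return x[0]*x[0] - x[1]*x[1] - x[2]*x[2];
}

double max_step(const double x[3], const double d[3], double tau) {
    double a = det3(d);
    double b = 2.0 * (x[0]*d[0] - x[1]*d[1] - x[2]*d[2]);
    double c = det3(x);                   /* > 0 in the interior */
    double alpha = 1.0 / tau;             /* sentinel: no blocking */
    double disc = b*b - 4.0*a*c;
    if (fabs(a) < 1e-300) {               /* (almost) linear case */
        if (b < 0.0) alpha = -c / b;
    } else if (disc >= 0.0) {             /* smallest positive root */
        double sq = sqrt(disc);
        double r1 = (-b - sq) / (2.0*a), r2 = (-b + sq) / (2.0*a);
        if (r1 > r2) { double t = r1; r1 = r2; r2 = t; }
        if (r1 > 0.0) alpha = r1;
        else if (r2 > 0.0) alpha = r2;
    }
    alpha *= tau;
    return alpha < 1.0 ? alpha : 1.0;
}
```

The overall steplength is the minimum of this quantity over all cones and over the primal and dual directions.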
The algorithm will be well defined if the matrix in (22) is nonsingular at each iteration. Its determinant is equal to det(W + R^{-1}U) det R. Although the vectors r and u are kept inside the interior of the second order cones, and thus R and U are positive definite, the matrix W + R^{-1}U can be singular. This is because the matrix R^{-1}U is not necessarily symmetric, since in general r and u do not commute. An example of such a singular matrix is given in (Peng et al., 2002, p. 143). In addition to this singularity problem, there is the symmetry problem. In the context of interior-point algorithms for linear or nonlinear optimization, where the cone of constraints is the nonnegative orthant, the matrices U and R are diagonal and so the matrix of (22) can be symmetrized, for example by left-multiplying the second row by −U^{-1}. See, e.g., (Ghannad et al., 2022) for several symmetrization techniques in interior point methods. The main advantages of a symmetric system are lower factorization costs and an effective control of the inertia of the factorized matrix. In addition, very efficient codes such as MA57 (Duff, 2004) or MUMPS (Amestoy et al., 2001) can be used for this task.
To overcome these problems of singularity and symmetry, a change of variables, called a scaling scheme, is used to obtain a symmetric nonsingular system. The idea is to make a change of variables that leaves the Lorentz cone invariant and such that, in the new space, the vectors u and r commute. However, this change of variables depends on the current iterate and must be done at each iteration. Let p ∈ int K and let Q_p be the associated quadratic representation (see Appendix A). Since p is in the interior of the cone K, the matrix Q_p is positive definite. From (Alizadeh and Goldfarb, 2003, Theorem 9), Q_p leaves the cone invariant. Setting û := Q_p u and ř := Q_p^{-1} r, the corresponding perturbed KKT system is W r + q = u and û • ř = 2µe, with (û, ř) ∈ int L^{2n}. The linearization of these equations leads to the linear system (24), where Û = Arw(û) and Ř = Arw(ř). The choice of the vector p ∈ int K is made so that û and ř commute, which implies that the matrix of the linear system (24) is nonsingular. Indeed, this matrix is nonsingular if and only if det(W + (Ř Q_p)^{-1} Û Q_p^{-1}) ≠ 0. Since Û and Ř are positive definite and û and ř commute, the matrix (Ř Q_p)^{-1} Û Q_p^{-1} is symmetric and positive definite.
Several choices for the vector p are possible, see (Alizadeh and Goldfarb, 2003). As mentioned in (Tütüncü et al., 2003), the most efficient scaling technique for the solution of a SOCO problem is the one using the Nesterov-Todd (NT) direction (Nesterov and Todd, 1997):

p = [Q_{u^{1/2}}(Q_{u^{1/2}} r)^{-1/2}]^{-1/2}.  (25)

The main property of the NT direction is that û = ř, which implies that Û = Ř. The symmetrization of the system (24) is done by left-multiplying the last row by −Q_p Û^{-1}, leading to a symmetric matrix. To take advantage of the sparsity of the matrices M and H, the system (22) is considered in the equivalent augmented form (26), which can be interpreted as the linearization of the perturbed KKT conditions of the problem (9). Applying the scaling scheme, the system becomes (27). A reduction of (27) can be done by eliminating the variable d_u, while keeping the sparse structure. This leads to the reduced symmetric system (28). The big flaw of the scaling strategy is the ill-conditioning of the matrix Q_p^2 when the pair (u, r) approaches an optimal solution. Indeed, suppose that (u*, r*) is a primal-dual optimal solution of (11) which satisfies the strict complementarity condition. Let (u, r) be an interior point iterate near the optimal solution and let p be defined by (25). For i ∈ {1, . . ., n}, three situations can occur (Cai and Toh, 2006): • u*_i ∈ int(L) and r*_i = 0: all the eigenvalues of Q_{p_i}^2 are of order µ := u^T r; • u*_i = 0 and r*_i ∈ int(L): all the eigenvalues of Q_{p_i}^2 are of order 1/µ; • u*_i ∈ bd(L), r*_i ∈ bd(L) and (u*_i, r*_i) ≠ (0, 0): the largest eigenvalue of Q_{p_i}^2 is of order 1/µ and the smallest is of order µ.
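For a single three-dimensional cone, the NT point can also be computed by a closed form that is common in SOCP implementations: normalize u and r to determinant one, combine them through the reflexion matrix, and take a Jordan square root. The following C sketch is our reading of that construction, not code extracted from the implementation described in this paper; the small main() checks numerically that Q_p(Q_p u) = r.

```c
#include <math.h>
#include <stdio.h>

static double det3(const double x[3]) { return x[0]*x[0] - x[1]*x[1] - x[2]*x[2]; }

static void quad3(const double x[3], const double y[3], double z[3]) {
    /* Q_x y = 2 (x'y) x - det(x) R y, with R = diag(1, -1, -1) */
    double xy = x[0]*y[0] + x[1]*y[1] + x[2]*y[2], dx = det3(x);
    z[0] = 2.0*xy*x[0] - dx*y[0];
    z[1] = 2.0*xy*x[1] + dx*y[1];
    z[2] = 2.0*xy*x[2] + dx*y[2];
}

static void jsqrt3(const double x[3], double s[3]) { /* Jordan sqrt, x in int(L) */
    double nb = sqrt(x[1]*x[1] + x[2]*x[2]);
    double s1 = sqrt(x[0] - nb), s2 = sqrt(x[0] + nb);
    double c = (nb > 0.0) ? (s2 - s1) / (2.0*nb) : 0.0;
    s[0] = (s1 + s2) / 2.0;  s[1] = c*x[1];  s[2] = c*x[2];
}

/* NT point p with Q_p u = Q_{p^-1} r, i.e. Q_{p^2} u = r: build w such
 * that Q_w u = r, then take p = w^{1/2}. */
void nt_point(const double u[3], const double r[3], double p[3]) {
    double du = sqrt(det3(u)), dr = sqrt(det3(r));
    double ut[3] = {u[0]/du, u[1]/du, u[2]/du};   /* det = 1 */
    double rt[3] = {r[0]/dr, r[1]/dr, r[2]/dr};
    double g = sqrt((1.0 + ut[0]*rt[0] + ut[1]*rt[1] + ut[2]*rt[2]) / 2.0);
    double eta = sqrt(dr/du);                     /* (det r / det u)^{1/4} */
    double w[3] = { eta*(rt[0] + ut[0])/(2.0*g),  /* R flips the bar part  */
                    eta*(rt[1] - ut[1])/(2.0*g),
                    eta*(rt[2] - ut[2])/(2.0*g) };
    jsqrt3(w, p);
}

int main(void) {
    double u[3] = {1.0, 0.2, 0.1}, r[3] = {2.0, -0.5, 0.3}, p[3], t1[3], t2[3];
    nt_point(u, r, p);
    quad3(p, u, t1); quad3(p, t1, t2);            /* Q_p^2 u should equal r */
    printf("Q_p^2 u = (%g, %g, %g) vs r = (%g, %g, %g)\n",
           t2[0], t2[1], t2[2], r[0], r[1], r[2]);
    return 0;
}
```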
To overcome the difficulties due to ill-conditioning, we propose to solve the linear systems (27) and (28) under the equivalent forms (29) and (30), in which the scaling matrix no longer appears explicitly in the system matrix. In our numerical experiments, we also consider the reduced system (31), for which the matrix is positive definite.
Figure 1 shows the behavior of the condition number of the matrices of the six linear systems (26)-(31) along the iterations of the interior-point algorithm for two examples. The first example (left figure) has a single contact point. Since H is of full rank, the primal-dual solution is unique, and the optimal reaction and relative velocity vectors are non-zero and on the boundary of the Lorentz cone.
The second example is a "Box Stacks" problem from FCLIB, a collection of discrete three-dimensional frictional contact problems (Acary et al., 2014). There are n = 69 contact points and m = 450 degrees of freedom, H ∈ R^{207×450} and rank(H) = 157. The optimal solution satisfies the strict complementarity condition and (|B|, |N|, |R|) = (18, 5, 46). The matrix of (26) at the endpoint of the minimization procedure is nearly rank deficient: there are 15 singular values less than √ε, where ε is the machine epsilon. These two examples show that the matrices in (29) and (30) remain the least ill-conditioned.
An advantage of the systems (29) and (30) is that the quadratic representation matrices are never explicitly built in memory during the entire computation. Only products of these matrices with vectors need to be performed. Indeed, for a pair of vectors (x, y) ∈ L^2, the product of a vector y by the quadratic representation of x can be done via the formula Q_x y = 2(x^T y)x − (det x)R_d y. Therefore, the product of a vector by a matrix Q_p, where p is the NT scaling vector defined by (25), can be done by performing only three products of a quadratic representation matrix by a vector. Similarly, the computation of the inverse or the square root of a vector in the Jordan algebra is done by using the spectral decomposition of that vector. Moreover, even if the number of cones can be large, the computational cost of a spectral decomposition per cone is very small, since the dimension of a Lorentz cone is only three. (Algorithm 1 stops as soon as the stopping criterion (32) is satisfied, returning (v, u, r) as a primal-dual solution of (9).)
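The matrix-free product quoted above translates directly into a few lines of C; the sketch below (our own, with hypothetical names) applies Q_x to a vector for one cone of dimension d without ever forming the d × d matrix.

```c
#include <stddef.h>

/* Matrix-free product y -> Q_x y for one cone of dimension d, using the
 * formula Q_x y = 2 (x'y) x - det(x) R_d y, where R_d = diag(1, -1, ..., -1).
 * The matrix Q_x is never formed. */
void quad_rep_mv(size_t d, const double *x, const double *y, double *out) {
    double xy = 0.0, detx = x[0]*x[0];
    for (size_t k = 0; k < d; ++k) xy += x[k]*y[k];
    for (size_t k = 1; k < d; ++k) detx -= x[k]*x[k];
    out[0] = 2.0*xy*x[0] - detx*y[0];
    for (size_t k = 1; k < d; ++k)
        out[k] = 2.0*xy*x[k] + detx*y[k];   /* R_d flips the sign of y_k */
}
```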
The linear system (26) is solved by means of an LU factorization, the symmetric ones with an LDL^T factorization using MA57 (Duff, 2004). Even for the solution of the positive definite system (31), MA57 is used. Two types of failures can be returned during a run: • Failure 1: the stopping criterion (32) is not satisfied after a maximum of 100 iterations.
• Failure 2: A NaN (Not a Number) is detected during the computation of the new iterate.
Table 2 indicates the number of successes and failures when solving the 1091 problems of the FCLIB collection, with a tolerance fixed to tol = 10^-10. Each row corresponds to a run of Algorithm 1 with the numerical solution of the indicated linear system. Figure 2 shows the corresponding performance profiles (Dolan and Moré, 2002). With the system (26), the failures are due to a nearly singular system. In these cases, either the algorithm stalls at a spurious solution (13 out of 22 cases) or the convergence becomes very slow (9 out of 22 cases). It should be noted, however, that the "no-scaling" strategy yields an optimal solution for almost 98% of the problems. The systems (27) and (28) return a large number of failures of type 2. This is mainly due to the ill-conditioning of the matrix Q_p^2 when approaching an optimal solution. The reduction of the system worsens the results. Surprisingly, the worst results are obtained with the system (29). A deterioration of the residual of the second linear equation in (29) over the iterations is observed when the matrix Q_p becomes increasingly ill-conditioned. This leads to a loss of primal feasibility of the iterates, and is mainly due to the scaling of the linear equation −Hd_v + d_u = u − Hv − w. To overcome this problem, the refinement procedure described in the MA57 documentation can be used to solve (29). We performed a run with a refinement tolerance fixed to the tolerance tol and a maximum of 10 refinement iterations. This results in only six type 2 failures and no more failures of type 1, but it takes longer to run than with (30), as shown in Figure 2. The best performance in terms of robustness is obtained with the system (30): no failures were detected. The average number of iterations to solve one of the 1091 problems is equal to 18, while the minimum and maximum numbers are equal to 8 and 34. The positive definite system (31) gives a good performance in terms of efficiency, but its robustness is not sufficient, even when refinement is applied. Figure 2: Performance profiles for seven different linear system choices. For τ ≥ 0, ρ_s(τ) is the fraction of problems for which the performance of a given version of the algorithm is within a factor 2^τ of the best one.
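For reference, the performance profile ρ_s(τ) used in Figure 2 can be computed with a few lines of code. The following C sketch (ours, with a dense cost table and failures encoded as INFINITY) follows the Dolan-Moré definition with the base-2 ratio used here.

```c
#include <math.h>

/* t[s*np + p] is the cost of solver s on problem p (INFINITY on failure);
 * rho(s, tau) is the fraction of problems solved by solver s within a
 * factor 2^tau of the best solver on that problem. */
double rho(int s, double tau, int ns, int np, const double *t) {
    int count = 0;
    for (int p = 0; p < np; ++p) {
        double best = INFINITY;
        for (int k = 0; k < ns; ++k)
            if (t[k*np + p] < best) best = t[k*np + p];
        if (t[s*np + p] <= pow(2.0, tau) * best) ++count;
    }
    return (double)count / np;
}
```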
Finally, it should be mentioned that all the computations related to scaling are done in C using the floating point datatype long double. This data type is also used in the computation of the steplengths. This always results in a better accuracy, although the computing time is slightly longer. Table 3 shows the results of solving the FCLIB problems with Algorithm 1 and the linear system under the form (30), with a stopping tolerance tol = 10^-11. The comparison is between the use of the datatype long double and the datatype double. We can see that the number of failures of type 2 is more than doubled with the type double. For these runs, even in case of a failure, the residual (left term in (32)) is small, which means that an optimal solution has been found. The column max res indicates the maximum value of the residual norm (32) over all 1091 problems and shows that the type long double allows a better precision to be obtained. Even with the data type double, all problems are successfully solved for tol = 10^-10. The last column of this table shows the total computational time needed to solve all the FCLIB problems with this tolerance. It can be seen that the increase in computational time is less than 10% with long double. Table 3: Performance comparison of double versus long double when solving the FCLIB problems with system (30) and tol = 10^-11.
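The benefit of long double can be seen on the small eigenvalue λ′ = x_0 − ∥x̄∥, which suffers from cancellation near the boundary of the cone. The toy C program below (ours, not the paper's code) makes the effect visible; the exact figures depend on the platform, since the width of long double is implementation-defined.

```c
#include <math.h>
#include <stdio.h>

/* Small eigenvalue of x = (1, 0.6, 0.8 - 1e-13), a point almost on bd(L):
 * the subtraction x0 - ||bar x|| cancels almost all significant digits
 * in double, which is what motivates the use of long double for the
 * scaling computations. */
int main(void) {
    double x1 = 0.6, x2 = 0.8 - 1e-13;
    double lp = 1.0 - sqrt(x1*x1 + x2*x2);
    long double lx1 = 0.6L, lx2 = 0.8L - 1e-13L;
    long double llp = 1.0L - sqrtl(lx1*lx1 + lx2*lx2);
    printf("double     : %.17e\n", lp);
    printf("long double: %.17Le\n", llp);
    return 0;
}
```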
Rolling friction contact problem
We now consider the solution of (4) in the framework of the RF model (d = 5) defined by the rolling friction cones (7) and (8). The main difficulty is that an elementary cone R_i is no longer self-dual and therefore not symmetric: there is no Jordan product such that R_i is a cone of squares with respect to this product. A potential approach is to transform the cone of constraints related to problem (9) into a product of Lorentz cones by means of the usual trick of introducing artificial variables. For real numbers, a ≥ b + c if and only if there exist two real numbers t ≥ b and t′ ≥ c such that a = t + t′. By setting u_{i,0} = t_i + t′_i for all i ∈ {1, . . ., n}, the primal-dual pair of problems (9)-(11) can be rewritten as (33) and (34), where J ∈ R^{nd×n(d+1)} is a block diagonal matrix with n blocks of the form J_i ∈ R^{d×(d+1)}.
For all i ∈ {1, . . ., n}, we have (Jz)_i = (t_i + t′_i, ū_i^T, ũ_i^T)^T and (J^T r)_i = (r_{i,0}, r̄_i^T, r_{i,0}, r̃_i^T)^T. The perturbed KKT system associated with the problem (34) is then (35). This optimality system is associated with the barrier problem

min_r φ_µ(r) := ½ r^T W r + q^T r − µ Σ_{i=1}^n log(det(r_{i,0}, r̄_i) det(r_{i,0}, r̃_i)).  (36)

As for Proposition 4.1, and by coercivity of the barrier function, under Assumption 3.1, for all µ > 0 the system (35) has a unique solution such that (z(µ), J^T r(µ)) ∈ int(L^{4n}). Although the optimal solution of (33) is not unique, because J is non-injective, it can be shown, as for Proposition 4.2, that the central path converges to a relative interior point of the primal-dual optimal set. Under the hypothesis of strict complementarity, it can also be shown that the central path r(·) converges to the analytic center of the dual optimal set. For the sake of completeness, we state the result, but without proof, in order to lighten the paper.
As in Section 4, under the strict complementarity assumption, the index set {1, . . ., n} can be partitioned into two partitions (B̄, N̄, R̄) and (B̃, Ñ, R̃), whose definitions are direct extensions of the previous one to the current framework, one for each family of cones (r_{i,0}, r̄_i) and (r_{i,0}, r̃_i). Let Z̄ and R̄ be the primal and dual optimal sets of problems (33) and (34). We can then define the analytic center of the optimal set R̄ as follows. If R̄ is reduced to a singleton, then it is this point; otherwise it is the minimum of the problem (37), posed over r ∈ ri(R̄). The analytic center can be characterized as in Proposition 4.4, by which it can be shown that Theorem 4.5 still holds in the rolling friction framework.
Algorithm 1 is modified in order to solve an RF problem. This is described by Algorithm 2.
The linear system solved at each iteration is obtained by linearizing the system (35). It is reformulated under the form of the augmented system (38), where Z = Arw(z), R = Arw(J^T r) and µ = r^T Jz / n. As in Algorithm 1, the affine scaling direction (d_a^v, d_a^r, d_a^z) is the solution of (38) without the square-bracketed term in the right-hand side, while the full step (d^v, d^r, d^z) is the solution of the complete system.
The scaling strategy is similar to that described in Section 5. The NT direction p is defined by the formula (25), where u and r are respectively replaced by z and J^T r. The change of variables is done by setting ẑ := Q_p z and y := Q_p^{-1} J^T r.
Recall that ẑ = y, which allows the linear system to be symmetrized under the form (39). Because of the ill-conditioning of the matrix Q_p^2, an equivalent form (40) of (39) has been considered, written in terms of d̂_z = Q_p d_z. Figure 3 shows the condition number of the three matrices (38)-(40) along the iterations of the numerical resolution of an RF problem. It can be seen that the system (40) is better conditioned than (39). We also tried several reductions to a 2 × 2 form as in (28) or (30), leading to matrices involving Ĥ = P^{-1}H, where P P^T = J Q_p^{-2} J^T. We also tried several ways to compute the matrix P, by performing a Cholesky factorization or by directly exploiting the structure of a quadratic representation matrix. Despite such a reduction, and in contrast to the results obtained for the FC problems, the numerical performance was not improved. Moreover, the computation of the matrix P increases the overall computational cost without any real improvement. We observe the same for a 1 × 1 system like (31). The numerical tests were carried out on 526 RF problems of the FCLIB family (Acary et al., 2014), whose characteristics are given in Table 4. The numerical results and performances of Algorithm 2 with the three linear systems described previously are reported in Table 5 and Figure 4. These results show that, with a tolerance tol = 10^-10, the choice of system (40) gives the best performance, but the performance gap between the systems with the NT scaling is smaller than that observed for the FC problems. It can also be observed that, as for the FC problems, without scaling the failures are of type 1, while with NT scaling the failures only occur when a NaN is detected. Moreover, in the latter case, the maximum value of the residual (32) (right column of Table 5) shows that the stopping point of the algorithm is nearly optimal.
Figure 4: Performance profiles for the three linear systems for the RF model.
In our first experiments with the primal-dual algorithm, we were not satisfied with its efficiency and robustness. The Nesterov-Todd scaling strategy is a wonderful theoretical tool, but numerically very painful. As the iterates approach the boundary of the second order cones, the conditioning of the linear system explodes, the iterations get stuck on the boundary and divisions by zero occur, producing NaNs and thus an emergency stop. We have therefore examined a large number of equivalent formulations of the linear system and found that the one which gives the best results, or shall we say the least bad, is the one in which the quadratic representation matrix which performs the scaling is not a direct component of the matrix of the linear system. In addition, special attention must be paid to the way in which the matrix-vector products are performed in order to construct the system to be factorized. Unfortunately, we have not been able to find a formulation for rolling friction problems that is as efficient and robust as for friction cones: the accuracy we have achieved is slightly lower. Nevertheless, the accuracies and computation times we have achieved for both models seem to us to be quite adequate and can be used for real applications. The next step in this research work is to extend the primal-dual algorithm to solve the original model (1). A natural idea is to solve a sequence of systems parametrized by µ > 0. The main issues are that such a system is not the optimality system of an optimization problem and that it contains a non-smooth equation.
One theoretical question remains from this study. It concerns the characterisation of the limit point of the central path, like the formula (17) which defines the analytic center, but without the assumption of strict complementarity.
A Euclidean Jordan algebra
Let us consider the set K = ∏_{i=1}^n K_i, where K_i is an n_i-dimensional Lorentz cone defined by K_i = {x_i = (x_{i0}, x̄_i) ∈ R × R^{n_i−1} : x_{i0} ≥ ∥x̄_i∥}. Let N = Σ_{i=1}^n n_i. For x ∈ R^N, we denote x = (x_1, . . ., x_n), where x_i = (x_{i0}, x̄_i). For two vectors x and y in R^N, the Jordan product is defined componentwise by (x • y)_i = (x_i^T y_i, x_{i0} ȳ_i + y_{i0} x̄_i), for i = 1, . . ., n.
Let x ∈ R^N and x^2 = x • x. A fundamental property of the Jordan algebra for interior-point algorithms is that the Lorentz cone is the cone of squares, that is, K = {x^2 : x ∈ R^N}, see (Alizadeh and Goldfarb, 2003, pp. 18-19).
For matrices A and B, we define A ⊕ B = diag(A, B), the block diagonal matrix with blocks A and B. For i ∈ {1, . . ., n}, let det(x_i) = x_{i0}^2 − ∥x̄_i∥^2 be the determinant of x_i ∈ R^{n_i}. If x_i is nonsingular, i.e., det(x_i) ≠ 0, the inverse of x_i is the unique vector of R^{n_i} such that x_i • x_i^{-1} = e_i, and is given by x_i^{-1} = det(x_i)^{-1} R_{n_i} x_i, where R_{n_i} is the reflexion matrix defined by R_{n_i} = diag(1, −I_{n_i−1}). If x_i is nonsingular for all i ∈ {1, . . ., n}, we have x^{-1} = (x_1^{-1}, . . ., x_n^{-1}). For i ∈ {1, . . ., n}, the spectral decomposition of a vector x_i ∈ R^{n_i} is defined by x_i = λ_i c_i + λ′_i c′_i, where λ_i = x_{i0} + ∥x̄_i∥, λ′_i = x_{i0} − ∥x̄_i∥, c_i = ½(1, x̄_i^T/∥x̄_i∥)^T and c′_i = ½(1, −x̄_i^T/∥x̄_i∥)^T. The scalars λ_i and λ′_i are eigenvalues of Arw(x_i), with the corresponding eigenvectors c_i and c′_i. This pair of eigenvectors is called the Jordan frame of x_i. It is said that two vectors commute if they share the same Jordan frame; in that case the corresponding arrow matrices commute. From these definitions, it follows that x_i ∈ K_i (resp. x_i ∈ int(K_i)) if and only if λ′_i ≥ 0 (resp. λ′_i > 0). If x_i is nonsingular, then x_i^{-1} = λ_i^{-1} c_i + λ′_i^{-1} c′_i. More generally, for a continuous function f we can define f(x_i) = f(λ_i) c_i + f(λ′_i) c′_i. At last, the quadratic representation of a vector x_i ∈ K_i is defined as Q_{x_i} = 2 Arw(x_i)^2 − Arw(x_i^2). The scalars λ_i^2 and λ′_i^2 are eigenvalues of Q_{x_i} with the eigenvectors c_i and c′_i. In particular, two vectors commute if and only if their quadratic representation matrices commute. For x ∈ K, we define Q_x = Q_{x_1} ⊕ . . . ⊕ Q_{x_n}. For i ∈ {1, . . ., n} and x ∈ int(K_i), we have ∇_x(log det(x_i)) = 2x_i^{-1} and ∇²_x(log det(x_i)) = −2Q_{x_i^{-1}}. These operators Arw and Q• are useful for the design of interior algorithms. See (Alizadeh and Goldfarb, 2003, §4) for a review of their interesting properties.
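These formulas are cheap to implement for a cone of dimension three. The following C sketch (our own illustration, with hypothetical names) computes the spectral decomposition and the inverse of a vector and checks the identity x • x^{-1} = e.

```c
#include <math.h>
#include <stdio.h>

/* Spectral decomposition x = l1*c1 + l2*c2 with l = x0 -/+ ||bar x||,
 * and inverse x^{-1} = det(x)^{-1} R x with R = diag(1, -1, -1). */
typedef struct { double l1, l2, c1[3], c2[3]; } spec3;

void spectral3(const double x[3], spec3 *s) {
    double nb = sqrt(x[1]*x[1] + x[2]*x[2]);
    double h1 = (nb > 0.0) ? x[1]/nb : 1.0;   /* arbitrary unit direction */
    double h2 = (nb > 0.0) ? x[2]/nb : 0.0;   /* when bar x = 0           */
    s->l1 = x[0] - nb;  s->l2 = x[0] + nb;
    s->c1[0] = 0.5; s->c1[1] = -0.5*h1; s->c1[2] = -0.5*h2;
    s->c2[0] = 0.5; s->c2[1] =  0.5*h1; s->c2[2] =  0.5*h2;
}

void jinv3(const double x[3], double out[3]) {    /* assumes det(x) != 0 */
    double det = x[0]*x[0] - x[1]*x[1] - x[2]*x[2];
    out[0] = x[0]/det;  out[1] = -x[1]/det;  out[2] = -x[2]/det;
}

int main(void) {
    double x[3] = {2.0, 0.6, 0.8}, xi[3]; spec3 s;
    spectral3(x, &s); jinv3(x, xi);
    /* x o x^{-1} = (x' x^{-1}, x0*bar(xi) + xi0*bar(x)) should be (1,0,0) */
    printf("lambda = %g, %g;  x o x^-1 = (%g, %g, %g)\n", s.l1, s.l2,
           x[0]*xi[0] + x[1]*xi[1] + x[2]*xi[2],
           x[0]*xi[1] + xi[0]*x[1], x[0]*xi[2] + xi[0]*x[2]);
    return 0;
}
```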
B Proof of Proposition 3.2
Before giving a proof of Proposition 3.2, we propose some equivalent formulations of Assumption 3.1. They are proved thanks to the following well-known lemma (Skarpness and Sposito, 1982, Lemma 2), called Tucker's theorem of the alternative in the case where K is the nonnegative orthant, see (Mangasarian, 1969, p. 29). The proof can be made by applying a separation theorem for convex sets, see, e.g., (Rockafellar, 1970, Theorem 11.3).
Lemma B.1. Let K ⊂ R^n be a closed pointed convex cone, A ∈ R^{m×n} and B ∈ R^{p×n}. One and only one of the following statements is true.
(i) There exists a non-zero x ∈ K such that Ax = 0 and Bx ≤ 0.
The following lemma provides equivalent formulations of Assumption 3.1.
Lemma B.2. Let F be the product of second order cones of the form (5) or (7). Let M ∈ R^{m×m} be symmetric and positive-definite, f ∈ R^m, H ∈ R^{nd×m}, w ∈ R^{nd} and W = HM^{-1}H^T. The following four assertions are equivalent.
(i) There exists v ∈ R^m such that Hv + w ∈ int(F*).
(ii) There exists (v, t) ∈ R^m × R_+ such that Hv + tw ∈ int(F*).
(iii) There does not exist a nonzero d ∈ F, such that H ⊤ d = 0 and w ⊤ d ≤ 0.
(iv) There does not exist a nonzero d ∈ F, such that W d = 0 and q ⊤ d ≤ 0.
Proof. The implication (i) ⇒ (ii) is obvious. The equivalence (ii) ⇔ (iii) is a direct consequence of the fact that F* is a closed pointed convex cone and of Lemma B.1. The equivalence (iii) ⇔ (iv) follows from the definitions of W and q in (10) and from the positive definiteness of M. It remains to prove that (ii) ⇒ (i). Let (v, t) ∈ R^m × R_+ be such that Hv + tw ∈ int(F*). If t > 0, set v′ = (1/t)v. Since Hv + tw = t(Hv′ + w) and int(F*) is a cone, Hv′ + w ∈ int(F*). Suppose now that t = 0. There exists ε > 0 such that Hv + B_ε ⊂ F*, where B_ε is the open ball centered at 0 with radius ε. Since F* is a cone, for all t > 0, tHv + B_{tε} ⊂ F*. Let us choose t > ∥w∥/ε. We then have tHv + w ∈ int(F*), which proves (i). We can now prove that the Slater assumption is equivalent to the non-emptiness and compactness of the optimal set of the reduced problem (11).
Proof of Proposition 3.2. The assertions (i) and (ii) are direct consequences of the weak duality and of the strong convexity of the objective function of (9). The outcome (iii) can be proved by means of some useful tools from asymptotic analysis in convex optimization (Auslender and Teboulle, 2003). Let us define the function
Figure 1: Condition number (κ) of the matrix of the linear systems along the iterations when solving two different problems. The left figure is related to a problem with only one contact point, (nd, m) = (3, 3), while the right figure is with sixty-nine contact points, (nd, m) = (207, 450).
Figure 3: Condition number (κ) of the matrix of the linear systems along the iterations of Algorithm 2 when solving the problem PrimitiveSoup-ndof-6000-nc-37-0 with n = 37 contact points.
Table 1: Sizes of problems of the FCLIB collection.
Algorithm 1: One iteration of the Mehrotra primal-dual algorithm for solving an FC problem.
Table 2: Number of successes and failures when solving the FCLIB problems with tol = 10^-10.
Table 4: Sizes of rolling friction problems of the FCLIB collection.
Table 5: Number of successes and failures when solving the RF problems with tol = 10^-10 | 12,572 | 2024-01-16T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
DIFFERENT VULNERABILITIES AND CHALLENGES OF QUANTUM KEY DISTRIBUTION PROTOCOL: A REVIEW
Nowadays information has become a valuable asset, and private information must be protected from being compromised. Today we use sophisticated, robust encryption algorithms that are nonetheless vulnerable to classical computational attacks as well as to powerful parallel quantum computers. In this paper, we examine the limitations, vulnerabilities and attacks to which Quantum Key Distribution can be exposed.
INTRODUCTION
A cryptosystem [3] encrypts data at the Sender's end (commonly referred to as Alice) and transmits it, through a secure channel, to the Receiver's end (commonly referred to as Bob). The Sender and the Receiver are assumed to have pre-assigned encryption and decryption keys. In classical systems [27], symmetric and (more widely used) asymmetric algorithms are used to generate random keys. Quantum Cryptography [4][13], while retaining most aspects of Classical Cryptography, uses Quantum Key Distribution [6][9] to generate and transmit the key. The fundamental aspects of Quantum Mechanics, namely the Uncertainty Principle [1], Entanglement [14][13] and Measurement Theory [16], provide a unique set of constraints on the communication channel [22]. The key can be distributed using protocols [15] such as BB84 [32], E91 [12], the Decoy State protocol and the COW protocol, among others. The most famous [10], for historical reasons, are the BB84 and E91 protocols. A short study of these two methods and an overview of the laws of Quantum Mechanics are presented below.
A. The Uncertainty Principle
Observables are associated with Hermitian operators [1]. Given one such operator A, we can use it to measure the properties of any physical system in state Ψ. If the state Ψ is an eigenstate of the operator A, we have no uncertainty in the value of the observable, which corresponds to the eigenvalue of the operator A. Uncertainty in the value of an observable A exists if the state Ψ is not an eigenstate of A, but rather a superposition of various eigenstates with different eigenvalues. The uncertainty principle is an inequality satisfied by the product of the uncertainties of two Hermitian operators A and B that fail to commute. Since the uncertainty of an operator on any state is greater than or equal to zero, the product of uncertainties in the two observables is, as a result, greater than or equal to zero [2]. The uncertainty inequality gives us a lower bound for the product of uncertainties. If two operators commute, the uncertainty inequality gives no information: it only states that the product of uncertainties must be greater than or equal to zero.
The Uncertainty Inequality [1]: ΔA · ΔB ≥ ½ |⟨[A, B]⟩|, where [A, B] = AB − BA is the commutator of A and B.
Entanglement is a basis-independent property [17].
When the state can be factorized into v ⊗ w for some basis choice in V and W, it can be factorized for any other basis choice, by simply rewriting v and w in the new basis. If the state cannot be factorized into v ⊗ w for some basis choice in V and W, it cannot be factorized for any other basis choice, because factorization with another basis choice would then imply factorization in the original basis choice [2]. An entangled state is defined as a state that cannot be separated into a product of its parts. A separable state is described as a probabilistic distribution over uncorrelated product states, ρ = Σ_i p_i ρ_i^V ⊗ ρ_i^W. For pure states, the above definition is described in the following way. Consider two quantum systems V and W, with respective Hilbert spaces H_V and H_W. The Hilbert space of the entire system is the tensor product H_V ⊗ H_W. If the state |Ψ⟩ of the composite system can be represented in the form [14] |Ψ⟩ = |ψ⟩_V ⊗ |ϕ⟩_W, where |ψ⟩_V ∈ H_V and |ϕ⟩_W ∈ H_W are the states of the systems V and W respectively, then this state is called a separable state. Otherwise, it is known as an entangled state [14].
B. Measurement Theory
The entire theory of Quantum Mechanics (operators, transformations and observables) is simply the mathematical labeling of measurement results. The measurement of some observable of a quantum system, for instance energy or spin, is assumed to be completely accurate [31]. The state of a system before measurement is presumed to be any possible combination of eigenstates. The act of measurement forces the state to "collapse" into an eigenstate of the operator corresponding to the measurement. Redoing the identical measurement, without letting the quantum system evolve, will, theoretically, give the same result [1][13].
However, when the preparation of the quantum system is repeated, subsequent measurements will most likely yield completely different results. The expected values of the observables follow a probability distribution based on the state of the system at the time the measurement is done. This probability distribution can be either continuous (as for momentum) or discrete (as for angular momentum), depending on the quantity being measured. The measurement process is thus considered to be random [1].
C. BB84 protocol
This protocol, named after its inventors (Bennett and Brassard) and the year of invention, was initially defined using photon polarization states as a means to send information [9]. Nevertheless, any two pairs of conjugate states are sufficient for the execution of the BB84 [32][33] protocol, and many optical-fiber-based cryptographic systems implement BB84 using phase-encoded states. The sender's and the receiver's apparatuses are linked by a quantum communication channel through which quantum states are transmitted. If photons are used as information carriers, then either an optical fiber or simply free space can serve as the communication channel. The sender and the receiver also communicate via a public classical channel, for instance through broadcast radio or the Web. Neither of these channels needs to be secure; the protocol is designed with the supposition that an eavesdropper can interfere in any way with both [5][25].
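The sifting step of BB84 is easy to simulate classically. The toy C program below (our own illustration; all names and the fixed seed are ours) generates random bits and bases for the Sender, random measurement bases for the Receiver, and keeps only the positions where the bases match, which is about half of them on average.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const int N = 20;
    int sift = 0;
    srand(42);
    printf("pos bit baseA baseB kept\n");
    for (int i = 0; i < N; ++i) {
        int bit   = rand() % 2;       /* Sender's raw key bit           */
        int baseA = rand() % 2;       /* 0: rectilinear, 1: diagonal    */
        int baseB = rand() % 2;       /* Receiver's measurement basis   */
        int keep  = (baseA == baseB); /* announced on the classical channel */
        /* With matching bases the Receiver reads the bit exactly;      */
        /* otherwise the outcome is random and the round is discarded.  */
        if (keep) ++sift;
        printf("%3d  %d    %d     %d    %s\n",
               i, bit, baseA, baseB, keep ? "yes" : "no");
    }
    printf("sifted key length: %d of %d\n", sift, N);
    return 0;
}
```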
D. E91
The Ekert protocol uses entangled pairs of photons, which may be created by the Sender, by the Receiver, or by some source separate from both. The photons are distributed in such a way that the Sender and the Receiver each get one photon of each entangled pair [10].
The Ekert protocol [12] uses two fundamental properties of quantum entanglement. First, the entangled states are correlated, so that if the Sender and the Receiver measure the polarity, horizontal or vertical, of their particles, they get the same answer with certainty. The same holds for any other pair of complementary (orthogonal) polarizations [25]; however, the two parties must have exact directional synchronization. The particular results are random, so the Sender and the Receiver cannot predict the polarization of their photons. Second, any attempt at eavesdropping by an intruder destroys the entanglement correlations, thereby revealing the presence of the Attacker [26].
II. LITERATURE SURVEY
Quantum cryptographic systems must necessarily work on protocols that describe Quantum Key Distribution [28]. These protocols are reviewed in summary by Heitjema [15][18], while the classic texts of Bennett and Brassard [7] detail the inner workings and the evolution of such systems. Current advancements in the field include laying networks of systems running on Quantum Key Distribution and relaying information through multiple channels [19]. However, attacks have been successfully carried out on quantum systems; the latest one uses eavesdropping on a 290 m communication channel [6]. On the other hand, simple standard BB84 systems have been installed in metropolitan areas for testing. With the use of a high key generation rate, most of the usual forms of attack have been thwarted [21][24][29]. The growth of quantum cryptography is intricately linked with the evolution of attacks on it, as detailed in [13][14][25].
III. DIFFERENT ATTACKS
Though Quantum Key Distribution protocols appear to be more powerful than conventional protocols, they also suffer from different types of attack. Here we provide an overview of the different types of attacks on Quantum Key Distribution protocols.
A. Intercept and Resend
Intercept and Resend is perhaps the simplest form of attack [6]. The Attacker measures the quantum states (photons) sent by the Sender and then sends replacement states to the Receiver, prepared in the state measured by the Attacker. If this method is used against the BB84 protocol, errors creep into the key which the Sender and the Receiver share. As the Attacker has no knowledge of the basis of the states sent by the Sender, he can only guess which basis to measure in. If the Attacker chooses correctly, he measures the correct photon polarization state as sent by the Sender and resends the correct encoded state to the Receiver. But if the guess is incorrect, the state he measures is completely arbitrary, and therefore the state sent to the Receiver can never be identical to the state transmitted by the Sender. If the Receiver then measures this state in the same basis the Sender used, he gets a random result, with a 50% error chance (instead of the correct result he would get without the presence of the Attacker), since the Attacker has sent him a state in the conjugate basis.
This attack strategy is frequently tested in ideal settings. In a less developed form of the Intercept/Resend (I/R) attack, the Intruder intercepts photons from the Sender and measures them in his own predefined basis [13]. In the ideal environment, the detectors are extremely efficient, which lets the Intruder capture each incoming photon. In the naive intercept-resend attack, it is assumed that the Attacker is not watching the public channel, i.e., the sifting phase of the BB84 protocol. The information gain in this method is approximately 0.2 bits out of every bit transmitted by the Sender.
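The error rate induced by the naive intercept-resend attack can be checked with a small Monte Carlo simulation. In the C sketch below (our own illustration), the Attacker guesses the basis correctly half of the time, and a wrong guess randomizes the Receiver's sifted bit, so the estimated error rate converges to 0.5 × 0.5 = 0.25.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const long N = 1000000;
    long errors = 0;
    srand(7);
    for (long i = 0; i < N; ++i) {
        int bit   = rand() % 2;
        int baseA = rand() % 2;                 /* Sender's basis           */
        int baseE = rand() % 2;                 /* Attacker's guessed basis */
        int bitE  = (baseE == baseA) ? bit : rand() % 2;
        /* Sifted rounds only: the Receiver measures in the Sender's basis. */
        /* If the Attacker used the wrong basis, the resent photon gives    */
        /* the Receiver a random result even in the correct basis.          */
        int bitB  = (baseE == baseA) ? bitE : rand() % 2;
        if (bitB != bit) ++errors;
    }
    printf("estimated QBER: %.4f (expected 0.25)\n", (double)errors / N);
    return 0;
}
```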
B. Man-in-the-middle attack
Quantum key distribution is exposed to man-in-the-middle attacks when used without establishing proof of identity, to the same extent as any classical protocol [30]. If the Sender and the Receiver share an initial secret, then they can use an unconditionally secure authentication scheme along with quantum key distribution to exponentially expand this key, using a new key to authenticate the next session.
Man-in-the-middle (MITM) attacks can be performed in a couple of ways. Traditional MITM attacks do not work on QC systems because the laws of quantum mechanics step in: with a traditional MITM attack, the Interceptor would intercept the transmitted messages and send a copy in their place, which is impossible due to the physics of QC systems. Non-traditional MITM attacks are possible, however. In the first, the Attacker pretends to be the Sender to the Receiver and the Receiver to the Sender. He then communicates with both the Sender and the Receiver simultaneously, thereby obtaining two keys, one shared with the Sender and one shared with the Receiver. The Sender's key would be used to decrypt a message from the Sender, which would then be re-encrypted with the Receiver's key. This type of attack is possible, but may be prevented if identities can be authenticated.
The other kind of MITM attack targets the method through which photons are transmitted [17]. A true single-photon source is difficult to realize in the real world, so most cryptographic systems use small bursts of coherent pulses for transmission. Theoretically, the Attacker may split a single photon from the burst without detection. He could then store the stolen photons until the basis used to create them is announced. EPR pairs can be used for a possibly secure three-stage protocol that can avoid man-in-the-middle attacks, but distributing the EPR pairs might corrupt them in transit.
C. Faked States Attack
This is a special form of the Intercept/Resend attack which focuses on collecting information by exploiting imperfections in the Receiver's system [6]. In this method, the Attacker sends a self-derived signal to control the entire communication. A "full detector efficiency mismatch" is fundamental to this type of attack. The signal which the Receiver gets from the Attacker, after the Attacker has intercepted the Sender's signal, carries a time shift such that if the Receiver chooses a basis other than that of the Attacker when reading the signal, he will not detect it. In generic terms, his detector will be blinded. And throughout this process, the Attacker remains undetected.
In the BB84 protocol, the various steps in this attack are: • The Attacker performs a simple Intercept/Resend attack over the transmitted signals and measures in his own basis.
• The Attacker then sends a pulse to the Receiver containing the opposite bit value in the opposite state. This sets the time shift of the signal so that the Receiver can only measure the signal if the same basis as that of the Attacker is used. Measurement in other bases will yield nothing.
• Now the Attacker measures the signal using the Sender's basis. The Receiver will get results identical to those of the Attacker.
• The Attacker therefore has complete control over the Receiver's scheme. This attack strategy depends on the synchronization and efficiency of the detectors [27].
D. Denial of service
Current implementations of QKD require a dedicated fiber optic line, or a line of sight in free space, between the Sender and the Receiver [3]. A denial of service attack can be carried out by cutting the cable or obstructing the line of sight. This is one of the motivations for the development of quantum key distribution networks [20], which would route communication via alternate links in case of disruption.
A DoS attack on QKD can be done in two ways: one compromises the quantum cryptographic hardware, and the other introduces extra noise into the QKD system. QKD systems which use fiber optic channels can be put out of service by simply cutting or blocking the optical cable; the fiber-optic channel can also be readily made unusable by simply tapping into the line. The QKD equipment itself could be compromised to generate insecure "random" photons by means of a random number generator attack [24]. A DoS attack against a QC system may also be mounted if it is possible to insert noise into the communication system. This noise would be indistinguishable from eavesdropping, so that the Sender and the Receiver would be induced to discard a number of photons. If the additional noise can be sustained in the communication channel, then the Sender and the Receiver may increase their error threshold to compensate for the noise, which would make eavesdropping easier [16].
E. Trojan Horse Attacks
Quantum physics cannot protect the Sender's and the Receiver's apparatuses. Indeed, as soon as the information is encoded in a classical physical system, it is vulnerable to security flaws and hacks [11]. The Sender and the Receiver have to protect their instruments through the usual defenses. Practical implementations of abstract QKD use present technology (and are bound by economic constraints). This results in a deviation from the ideal scheme.
In Trojan horse attacks, the Intruder focuses on the cryptographic devices used by the Sender and the Receiver, unlike the previously described attacks, which try to extract information from the photons being transmitted in the channel. This strategy is implemented by sending light pulses towards the Sender's or Receiver's setup; the reflected pulses return and enter a detection scheme which is also in the Attacker's possession. The Attacker can use the information in the reflected signal to infer the basis used by the Sender for the preparation of the photon. If the Attacker obtains this information before the photon reaches the Receiver's side, then he can perform a simple Intercept/Resend attack and measure the photon to get the exact secret string of qubits. Hence, the Attacker can obtain a sufficient amount of information without being detected. It is thus of vital importance for QKD to analyze properly the consequences of these compromises. Indeed, some compromises might render the entire system totally insecure. Trojan horse attacks work by targeting vulnerabilities in the operation of a QKD protocol or deficiencies in the components of the physical devices used in the construction of the QKD system. If the equipment used in quantum key distribution can be tampered with, it could be made to generate keys that are not secure, using a random number generator attack. The Trojan horse attack is also known as a light injection attack [20][23].
F. Photon number splitting attack
Outside experimental settings, a true single-photon source is hard to build, so the Sender uses weak laser pulse (WLP) generators instead. The coherent light pulses so emitted follow a Poisson distribution: the probability that a pulse contains n photons is P(n) = µ^n e^(−µ) / n!, where µ is the mean photon number, chosen to be less than 1 to suppress pulses containing more than one photon. Multi-photon pulses will nonetheless occur with some probability, which opens up the possibility of a photon-number-splitting (PNS) attack [14], [26].
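As a quick illustration of these statistics, the sketch below (ours; the value µ = 0.1 is a typical illustrative choice, not a number taken from the text) evaluates the Poisson probabilities for empty, single-photon, and multi-photon pulses.

```python
from math import exp, factorial

def p_n(n, mu):
    """Poisson probability that a weak coherent pulse carries n photons."""
    return mu**n * exp(-mu) / factorial(n)

mu = 0.1  # illustrative mean photon number
print(f"P(0)    = {p_n(0, mu):.4f}")                    # empty pulse
print(f"P(1)    = {p_n(1, mu):.4f}")                    # ideal single photon
print(f"P(n>=2) = {1 - p_n(0, mu) - p_n(1, mu):.5f}")   # exploitable by PNS
```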
In this attack, the Attacker replaces the lossy channel used by the Sender and the Receiver with a lossless one. The Attacker then performs a quantum non-demolition (QND) measurement on each pulse, obtaining the photon number without disturbing the encoded bases. When a pulse contains a single photon, the Attacker simulates the loss of the original channel by blocking a fraction of these pulses. When a pulse contains multiple photons, the Attacker splits off one photon and stores it in a quantum memory, transmitting the remainder of the pulse to the Receiver. Once the Sender and the Receiver announce the bases used for each pulse, the Attacker measures the stored photons in the correct bases and thereby obtains a significant fraction of the key without detection. Cryptographic devices generally rely on weak coherent pulses (WCPs), i.e., photon pulses with a low mean photon number; the PNS attack exploits this limitation by targeting the multi-photon pulses, which makes it a potent attack [8]. The PNS attack is, however, quite complex to mount. The probability that a pulse contains multiple photons is only around 5%, and the fraction of empty pulses is quite high, so the Attacker has to monitor continuously for multi-photon pulses, which requires complex hardware and algorithms.
Figure 4. As a function of the mean photon number µ and the transmission efficiency η, the shaded area marks where the original PNS attack yields fewer single-photon signals than the corresponding lossy channel.
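The feasibility region sketched in Figure 4 can be approximated with a simple back-of-the-envelope criterion (our simplification, not the paper's exact yield calculation): the attack goes unnoticed when the multi-photon pulses alone can reproduce the click rate the Receiver would expect from the lossy channel, so that the Attacker may block every single-photon pulse.

```python
from math import exp

def multi_photon_prob(mu):
    """P(n >= 2) for a Poissonian weak coherent pulse."""
    return 1.0 - exp(-mu) * (1.0 + mu)

def expected_click_rate(mu, eta):
    """Probability that at least one photon survives a channel of transmission eta."""
    return 1.0 - exp(-eta * mu)

def pns_viable(mu, eta):
    """True if multi-photon pulses alone match the lossy channel's click rate,
    letting the Attacker block every single-photon pulse undetected."""
    return multi_photon_prob(mu) >= expected_click_rate(mu, eta)

for eta in (0.01, 0.05, 0.1, 0.5):
    mus = [m / 100 for m in range(1, 200)]
    viable = [m for m in mus if pns_viable(m, eta)]
    # smallest mean photon number (on a coarse grid) at which PNS becomes viable
    print(f"eta = {eta}: mu >= {viable[0]}" if viable else f"eta = {eta}: not on grid")
```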
G. Spectral Attacks
Quantum key distribution has the property of detecting the presence of any third party trying to gain knowledge of the key. This results from a fundamental aspect of quantum mechanics: measuring a quantum system in general disturbs it. Therefore, if the Intruder tries to eavesdrop on the key, he must measure it in some way, thereby disturbing the system and leaving traces. By using quantum superposition or quantum entanglement and transmitting information in quantum states, a communication system can be implemented that detects eavesdropping: if the level of eavesdropping is below a certain threshold, a secure key can be generated; otherwise no secure key is possible and the communication is aborted. A possible way around this, however, is to measure the spectral characteristics [34] of the photons involved instead of their polarization. If the Attacker measures the color of each photon, the polarization states are not perturbed and no traces are left.
IV. CONCLUSION
Quantum Key Distribution is not entirely failsafe. Loopholes and errors in its implementation can be used to attack and manipulate the encrypted channels. Each of these attacks works by targeting some vulnerable feature of the system in question, so their scope is limited by the security surrounding the system. Quantum cryptographic systems are available for closed-circuit, short-range commercial deployments, where an Intruder has limited resources to tap into the system without detection. The most powerful of these attacks are the Trojan Horse attacks and the photon number splitting strategy, while Man-in-the-Middle, Intercept and Resend, and faked-states attacks work against almost all cryptographic systems. Nevertheless, Quantum Key Distribution provides a powerful way to communicate securely and, as such, is something to look forward to in the future. Every secure system attracts attacks designed to breach its defenses, and even Quantum Key Distribution needs a robust wall of defense.
| 4,719.4 | 2017-08-30T00:00:00.000 | ["Computer Science", "Physics"] |
Observational evidence for new instabilities in the midlatitude E and F region
Radar observations of the E- and F-region ionosphere from the Arecibo Observatory made during moderately disturbed conditions are presented. The observations indicate the presence of patchy sporadic E (Es) layers, medium-scale traveling ionospheric disturbances (MSTIDs), and depletion plumes associated with spread F conditions. New analysis techniques are applied to the dataset to infer the vector plasma drifts in the F region as well as vector neutral wind and temperature profiles in the E region. Instability mechanisms in both regions are evaluated. The mesosphere–lower-thermosphere (MLT) region is found to meet the conditions for neutral dynamic instability in the vicinity of the patchy Es layers even though the wind shear was relatively modest. An inversion in the MLT temperature profile contributed significantly to instability in the vicinity of one patchy layer. Of particular interest is the evidence for the conditions required for neutral convective instability in the lower-thermosphere region (which is usually associated with highly stable conditions) due to the rapid increase in temperature with altitude. A localized F-region plasma density enhancement associated with a sudden ascent up the magnetic field is shown to create the conditions necessary for convective plasma instability leading to the depletion plume and spread F. The growth time for the instability is short compared to the one described by Perkins (1973). This instability does not offer a simple analytic solution but is clearly present in numerical simulations. The instability mode has not been described previously but appears to be more viable than the various mechanisms that have been suggested previously as an explanation for the occurrence of midlatitude spread F.
The relative importance of plasma and neutral drivers in the midlatitude ionosphere is being debated in the contexts of sporadic E (Es) layers, medium-scale traveling ionospheric disturbances (MSTIDs), and plasma density irregularities associated with midlatitude spread F. This paper informs the debates with observations from the Arecibo Radio Observatory. Patchy Es layers, MSTIDs, and midlatitude spread F occurred over Arecibo during the night of 11 July through the morning of 12 July 2015, under moderately disturbed, low solar flux conditions. The midnight collapse occurred as well. The event complements one recently reported by Hysell et al. (2014b), which also exhibited these phenomena, albeit with important differences. These observations represent the most complete set of ground-based plasma and neutral state parameters available and are crucial for ranking the different mechanisms that could be responsible for the ionospheric irregularities, particularly at night.
Midlatitude sporadic E (Es) ionization layers have been affecting communications since the earliest days of radio (see reviews by Layzer, 1962; Whitehead, 1972; Whitehead, 1989; Mathews, 1998). The dense metallic layers reflect, refract, diffract, and scatter radio signals, facilitating some radio applications and inhibiting others. While the layers can occur on continental spatial scales, the irregularities within them that cause coherent radar scatter extend down to meter scales or less. Deployments of coherent scatter radars in the modern era have also highlighted important intermediate-scale structuring in the layers (Riggin et al., 1986; Yamamoto et al., 1991, 1992). The coherent echoes have been termed "quasiperiodic" or QP because of periodicities in their range-time-intensity representations.
Rocket experiments have shown that QP echoes come from patchy sporadic E layers accompanied by strong polarization electric fields and strong neutral wind shear (e.g., Fukao et al., 1998; Larsen et al., 1998; Yamamoto et al., 2005; Bernhardt et al., 2005, and references therein). Studies using radar interferometry and imaging have shown that the echoes and underlying patchiness tend to be organized along fronts (e.g., Chu and Wang, 1997; Hysell et al., 2002; Saito et al., 2006). The fronts propagate with periods of 5-10 min, wavelengths of a few tens of kilometers, and directions preferentially toward the southwest in the Northern Hemisphere, although directions can vary significantly within and between events. The polarization electric fields spanning the fronts are often large enough to excite Farley-Buneman instability (Haldoupis and Schlegel, 1994), but field-aligned irregularities (FAIs) exist throughout the patchy layers even when the condition for Farley-Buneman instability is not met. Irregularities often come in bursts lasting about 1 h.
Sporadic E layer structuring leading to QP echoes is sometimes attributed to gravity waves (Woodman et al., 1991; Didebulidze and Lomidze, 2010; Chu et al., 2011) or to an Es-layer plasma instability forced by neutral wind shear (Cosgrove and Tsunoda, 2002, 2004). The billowy appearance of the layers in incoherent scatter radar observations like those presented by Miller and Smith (1978) and Smith and Miller (1980) points to neutral shear (dynamical) instability as the cause (Larsen, 2000; Bernhardt, 2002; Hysell et al., 2004; Bernhardt et al., 2006; Larsen et al., 2007; Hysell et al., 2009). This premise is consistent with results from Larsen (2002), Hecht et al. (2004), and others, which suggest that the mesosphere–lower-thermosphere region is frequently dynamically if not convectively unstable. Hysell et al. (2012) combined coherent and incoherent scatter radar measurements in a common volume to test the premise. They solved a boundary value problem (the Miles and Howard problem: Miles, 1961; Howard, 1961) for neutral dynamic instability, incorporating neutral wind profiles measured at Arecibo. The fastest-growing eigenmodes were found to have wavelengths, frequencies, and propagation directions comparable to those of the fronts in radar imagery from a coherent scatter radar on St. Croix. The e-folding time for the most unstable mode was only about 1 min, and the instability was seen to be robust. That neutral rolls could be responsible for intermediate-scale Es-layer structuring is furthermore consistent with green-line optical imagery from Arecibo, which sometimes shows waves at least superficially similar to those seen in the coherent scatter radar imagery (see, e.g., Larsen et al., 2007).
Subsequently, Hysell et al. (2013) simulated plasma instability in patchy Es layers produced by the aforementioned mechanism. The 3-D simulation code was unique in that it did not assume equipotential magnetic field lines, which is essential for studying ionospheric drift waves, for example.
A fast-growing class of collisional drift waves was found to emerge from polarized, patchy layers. These waves, which had kilometer scales, were similar to waves found in Es layers using a high-resolution observing mode at Arecibo (Hysell et al., 2013). These transitional-scale waves were interpreted as being the primary plasma waves necessary to drive the meter-scale FAIs detected by the coherent scatter radar. Field-aligned currents played an essential role in the growth of the waves.
Arecibo data show that patchy Es layers sometimes occur in conjunction with medium-scale traveling ionospheric disturbances (MSTIDs) and, more rarely, with midlatitude spread F (see Behnke, 1979, for early Arecibo MSTID observations). MSTIDs are wavelike variations in the F-region ionization that exhibit periods of the order of 1 h, wavelengths of hundreds of kilometers, and propagation speeds of ∼100 m s−1. They propagate mainly southwestward in the Northern Hemisphere. They are electrically polarized (Shiokawa et al., 2003) and represent the predominant ionospheric irregularities at middle latitudes. Miller et al. (2009) have furthermore shown evidence that traveling ionospheric disturbances (TIDs) propagating to low latitudes can instigate equatorial spread F, something Krall et al. (2011) have been able to reproduce in simulations. Suzuki et al. (2009) showed that small-scale FAIs are also embedded in the phases of MSTIDs, as if driven by the associated polarization electric fields under gradient drift instability (see also Otsuka et al., 2009; Ogawa et al., 2009). Small-scale field-aligned irregularities have indeed been detected at the phase nodes of MSTIDs observed in GPS-TEC (total electron content) maps in one event and in the troughs of MSTIDs observed in all-sky imagery in another. The irregularities give rise to spreading in ionogram traces and consequently to midlatitude spread F by definition. However, the term "midlatitude spread F" usually refers to the results of plasma convective instability in the midlatitude F region, which is morphologically similar to equatorial spread F (ESF).
The plasma convective instability responsible for equatorial spread F grows from perturbations to an equilibrium force balance between gravity and the J × B force due to Pedersen currents at low latitudes. The perturbation can be thought of as vertical displacements of slabs of plasma in planes parallel to the magnetic meridian plane. Since quasineutrality guarantees that the zonal Pedersen current is continuous in this case, perturbations in the weights of the slabs cannot be balanced, and so denser slabs descend while less dense slabs ascend. Where the stratification is unstable, descending slabs get heavier and ascending ones lighter, and instability results. The instability is similar to the Rayleigh-Taylor instability in neutral fluids, except that inertia need not play an important role in the ionospheric case. Krall et al. (2010) found that depleted plasma slabs or "bubbles" ascend under the aforementioned conditions until the magnetic flux-tube-integrated ion mass density matches that of the background. In their simulations, bubbles stopped rising at magnetic apex altitudes between 1200 and 1600 km, where the stratification became stable. While this figure depends on background conditions in nature, bubbles would have to rise considerably higher to impinge on the ionosphere above Arecibo, and higher than they have been observed rising over Jicamarca even during strongly disturbed periods. Perkins (1973) considered a generalization of the plasma convective instability responsible for ESF that permits something like it to operate at middle latitudes. The generalization is to consider field-aligned slabs of plasma that are tilted, i.e., away from the vertical at the dip equator. In this case, quasineutrality guarantees that the current density is continuous in the direction normal to the slabs, but since this is no longer the zonal direction, the zonal current density giving rise to the component of J × B that balances gravity can vary between slabs. The tendency is for plasma on lower (higher) flux tubes with greater (lesser) flux-tube-integrated Pedersen conductivities to support smaller (larger) zonal Pedersen currents and to descend (ascend), leading to instability. The treatment was later generalized by Hamza (1999) to include the effects of background density gradients and neutral winds.
Perkins' instability is often cited as playing a role both in MSTIDs and in midlatitude spread F. Numerical simulations have been able to reproduce realistic MSTIDs from random noise that conformed to the prescriptions of Perkins' linear theory until saturating (Yokoyama et al., 2008; Duly et al., 2014). In the latter case, the simulations were performed on complete flux tubes, and saturation occurred after about 30 min.
The long e-folding time of the Perkins instability, which is typically of the order of 1 h according to linear theory, together with its tendency to saturate at small amplitudes, suggests that other factors might be involved in MSTIDs and spread F. Kelley and Miller (1997) and Kelley (2011) argued that MSTIDs are induced merely by gravity waves rather than by Perkins instability and that they propagate in the direction for which dissipation is weakest (see also Ogawa et al., 2009). This could be called the "Perkins stability" effect, whereby the midlatitude ionosphere exerts a damping force unless Perkins' criteria are met, in which case it is neutral. A long time history of airglow observations from Indonesia presented by Fukushima et al. (2012) supports the idea that MSTIDs are the result of gravity waves generated primarily by deep tropospheric convection rather than by plasma instability. Wind patterns in the troposphere were argued to be sufficient to account for the propagation directions observed. Some lines of inquiry hold that E- and F-region phenomena need to be considered together. Cosgrove and Tsunoda (2004) argued that the growth rate of the Perkins instability and Es-layer instabilities acting together is larger than that of either one acting alone (see also Tsunoda, 2006; Cosgrove, 2007). The hypothesis has been tested in a number of numerical investigations (e.g., Yokoyama et al., 2009). Otsuka et al. (2008) and Otsuka et al. (2007) found a correlation between MSTIDs and intense Es layers and Es-layer irregularities, respectively. Helmboldt et al. (2012) recently found evidence that sporadic E layer irregularities and MSTIDs have related occurrence phenomenology in TEC data derived from the Very Large Array (VLA). Saito et al. (2007) also correlated MSTID and QP echo behavior over the MU radar in Japan but could not ascertain the causal relationship, i.e., which is the cause and which is the response.
The mechanism behind midlatitude spread F (in the irregularity sense) and its relationship to MSTIDs and Es layers are not well established. Irregularities accompany MSTIDs during the summer during periods of low solar activity (Fukao et al., 1991) but also emerge at other times when conditions are geomagnetically disturbed (Swartz et al., 2000). Midlatitude spread F has also been associated with the steep bottom-side gradients that can form around the midnight collapse, the rapid descent of the F peak driven by meridional tidal winds and wind shears in the thermosphere (Crary and Forbes, 1986).
This paper informs the aforementioned debates with new, comprehensive observations from the Arecibo Radio Observatory. New analytic tools permit neutral diagnostics and stability analysis in the neutral mesosphere–lower-thermosphere (MLT) region. A new simulation code elucidates a heretofore neglected instability mechanism in the midlatitude F region. Below, the data, tools, and simulation are presented and evaluated in a common context.
Observations
The period from 10 to 12 July 2015 was moderately disturbed, with the Kp index exceeding 4 for three contiguous 3 h periods at the start of 11 July. The F10.7 solar flux index was approximately 120. Ionospheric irregularities were observed in turn over Arecibo on the evening of 11 July through the morning of 12 July.
The digisonde in San Juan, Puerto Rico, was detecting a sporadic E layer with a blanketing frequency of 6.5 MHz and a top frequency of 7.25 MHz by 16:15 LT on 11 July 2015. The gap between the two frequencies increased steadily thereafter. By 19:45 LT, the top frequency of what had become a patchy layer was approximately 9 MHz. The sporadic E layer traces would remain strong and variable until about 03:00 LT the following morning. In addition, strong ionosonde spread F emerged at about 23:30 LT. The spread F was strongest through 01:30 LT the following morning and persisted until sunrise.
Arecibo observations for 11/12 July 2015 are shown in Fig. 1. For these experiments, data were collected using both the line feed and Gregorian feed systems. The former was pointed at zenith, and the latter was pointed at a 15° zenith angle and scanned in azimuth between west and north. Dual-beam data such as these permit inferences about vector plasma and neutral drifts in three spatial dimensions (see below). Data were acquired using the coded long-pulse (CLP) mode developed by Sulzer (1986b). This represents a departure from synoptic experiments run in the recent past at Arecibo, which usually divided time between the CLP mode and a multi-frequency mode optimized for F-region plasma drift estimation (Sulzer, 1986a). The change was made to facilitate the acquisition of more plasma-line data. F-region plasma drifts can still be estimated from CLP data, albeit at a lower cadence. The cadence for the processed data shown here is once per minute.
Incoherent scatter (ion-line) power profiles acquired with the line feed system are shown in Fig. 1. The backscatter power has been corrected for range and serves as a proxy for electron density. The upper panel spans the E and F regions, whereas the lower panel presents an exploded view of the E region.
Between 19:00 and 21:30 LT, the F-region densities at all altitudes underwent quasiperiodic modulation with a dominant period of a little less than 1 h. The phenomenon suggests the passage of medium-scale traveling ionospheric disturbances (MSTIDs).
Beginning about 30 min after midnight, the F region fell rapidly. This is suggestive of the midnight collapse, a recurring dynamical feature in Arecibo datasets (e.g., Gong et al., 2012, and references therein). We will not focus attention on the midnight collapse but refer to it to establish context for some other dynamical events.
Starting 1 h before midnight, the height of the bottom side fell and then rose even more sharply. The sharp rise is also a recurring feature over Arecibo that can be regarded as preconditioning for eventual collapse. Just before midnight on this occasion, a narrow, depleted channel extending from the bottom side to the topside passed over the observatory. For a period of about 1 h after the passage of the depleted channel, the bottom-side F region exhibited considerable structuring at scale sizes down to the finest that could be resolved by this observing mode. The depleted channel and the fine structure that followed were morphologically similar to what is observed at the magnetic equator during periods of equatorial spread F. Later in the paper, we will explicitly link the depleted channel to the large-density perturbation that preceded it.
Irregular layers in the E region that appear concurrently are highlighted in the lower panel of Fig. 1. A sporadic E layer was present through sunset on 11 July. After sunset, as very often occurs in the summer months, the sporadic E layer persisted but became irregular. Between 20:00 and 21:30 LT, the layer became patchy and vertically developed. Its billowy appearance suggests a connection with neutral dynamic instability, as demonstrated by Bernhardt (2002). As mentioned above, coherent scatter radar observations in a common volume have shown that ionized patches like these are usually organized along elongated wavefronts with wavelengths of a few tens of kilometers (e.g., Hysell et al., 2012). In this case, the patchy layer was observed at about the same time as the MSTIDs, which preceded it by about 1 h. Note that previous investigations at Arecibo have not revealed a universal relationship between patchy sporadic E layers and MSTIDs. In the event documented by Hysell et al. (2014b), for example, MSTIDs and patchy Es layers occurred on the same night but were mutually exclusive in time. In the present case, the MSTIDs and Es layers exhibit dissimilar periodicities.
Whereas the aforementioned feature was a typical example of sporadic E layer morphology, the feature in evidence between 00:00 and 02:00 LT is unusual. The feature was more than 10 km thick at times and morphologically suggestive of a single roll. The feature was moreover precisely coincident with the elevation of the F layer and the most intense period of midlatitude spread F.
Density profiles from the Gregorian feed (not shown) are substantially similar to Fig. 1 except for modulation in time due to azimuth swinging. The modulation is barely detectable in the F region before 23:30 LT but severe thereafter. This indicates that horizontal fine structuring on the scale of the beam separation was largely absent before the emergence of spread F and widespread afterward. The second Es-layer patch was also free of modulation, implying that the sporadic E layer blob was relatively unstructured on these scales.
More comprehensive measurements and derived parameters are shown in Fig. 2. Plasma temperature and composition are estimated from the autocorrelation functions using the procedure described by Hysell et al. (2014b). A simple composition model that allows for O+, NO+, and O2+ ions and uses temperature-dependent rate coefficients is incorporated in the parameter fitting process iteratively. The resulting temperature estimates are shown in panel e. These estimates are expected to be valid in the sunlit E region but not in the sporadic E layers, which are reanalyzed differently below. Between 16:00 and 18:00 LT, there is the suggestion of significant dynamics and mixing in the E layer, i.e., the isotherms are not vertically stratified. Figure 2f presents F-region plasma drift estimates derived from the line feed and Gregorian line-of-sight drift measurements. Here, an inverse method incorporating a forward model of the (anisotropic) plasma mobility along with regularization is used to estimate drifts in the parallel-to-B, perpendicular-east, and perpendicular-north directions that are consistent with the available line-of-sight drifts while having minimal curvature. The method is described in detail in Hysell et al. (2014b) and is similar to that introduced by Sulzer et al. (2005). Two main features are present in the drifts. The first feature is a sharp ascent anti-parallel to B peaking at 23:30 LT, followed by a rapid descent along B peaking broadly at 01:00 LT. The ascent followed a brief plunge in the F-layer height. The descent is consistent with the apparent midnight collapse of the F-layer height.
The second feature is rapid westward plasma advection between 23:30 and 01:30 LT. Notably, this is nearly coincident with the appearance of the sporadic E layer blob.
Lastly, Fig. 2g-i show estimates of the zonal, meridional, and vertical winds in the sunlit E region from 16:00 to 18:30 LT. The winds are also estimated using an inverse method that incorporates a forward model of the plasma mobility together with regularization and curvature minimization, as described by Hysell et al. (2014b); a generic sketch of this kind of inversion is given below. The electric fields deduced from the F-region analysis are assumed to be representative of the E region as well and are incorporated into the wind-estimation problem. Strong zonal and meridional shear is indicated, with the shear node being closely collocated with the thin sporadic E layer. Atypically large vertical winds with a periodicity of about 1 h are also indicated at the highest altitudes at which the wind analysis could be carried out.
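The following is a minimal sketch of the regularized inversion idea (a generic Tikhonov scheme, not the authors' actual code; the geometry matrix, regularization weight, and all names are assumptions of ours). The measurements are line-of-sight projections of the state vector, and a second-difference penalty enforces the "minimal curvature" property described above.

```python
import numpy as np

def regularized_wind_inversion(A, d, lam=1.0):
    """Tikhonov-regularized least squares: recover a wind/drift state vector x
    from line-of-sight measurements d = A @ x + noise, penalizing curvature.

    A   : (m, n) geometry matrix of beam direction cosines (assumed known)
    d   : (m,) measured line-of-sight velocities
    lam : regularization weight (trade-off between data fit and smoothness)
    """
    m, n = A.shape
    # Second-difference operator penalizing curvature of the profile
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]
    # Stack the data equations and smoothness constraints, solve in one shot
    A_aug = np.vstack([A, lam * L])
    d_aug = np.concatenate([d, np.zeros(n - 2)])
    x, *_ = np.linalg.lstsq(A_aug, d_aug, rcond=None)
    return x

# Tiny demo with a synthetic geometry and toy numbers
rng = np.random.default_rng(0)
A = rng.normal(size=(12, 6))
x_true = np.linspace(0.0, 50.0, 6)
d = A @ x_true + rng.normal(scale=1.0, size=12)
print(regularized_wind_inversion(A, d, lam=0.5))
```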
Figure 3 presents data in a manner similar to Fig. 2 but focuses on the first patchy sporadic E layer. The incoherent scatter data were processed differently in two respects. First, no composition modeling was performed. Instead, the composition was assumed to be a combination of iron and magnesium ions, and temperature and composition were estimated through ordinary nonlinear least-squares fitting of the autocorrelation functions. Second, to accommodate the sparseness of the sporadic E echoes in range, a global fitting strategy was used, similar to that described by Cabrit and Kofman (1996) (see also Hysell et al., 2009). In this strategy, state parameters for all altitudes are fit simultaneously, and an additional penalty for the curvature of the parameters with altitude is imposed. This produces parameter profiles that are consistent with all available data while varying gradually with altitude. Continuity in time is not enforced in the global fitting.
Figure 4e shows temperature profiles within the patchy sporadic E layers, where the aforementioned methodology is expected to be applicable. (Ion temperatures are what have been measured, but we equate these with neutral temperatures; the ion cooling time due to nonresonant collisions with neutrals is much shorter than the timescale for modifying the ion temperature dynamically.) Temperatures within the first patch are almost uniform or slightly increasing with altitude. Within the second patch, however, the temperatures appear to decrease with altitude much of the time.
Panels g-i show wind estimates within the first patchy sporadic E layer between 20:12 and 21:15 LT. The methodology used here was the same as described above and in Hysell et al. (2014b), except that conductivities for metallic ions, based on collision frequency formulas found in Schunk and Nagy (2004), were utilized in the inversion. The panels reflect planar shear flow, with winds reversing from southwestward at low altitudes to northeastward at high altitudes. Vertical winds are very small, as is usually the case. The flows are consistent over time. Little turning shear is present.
Lastly, Fig. 4 focuses on the second patchy sporadic E layer. The new information here is in panels g-i, which show the wind estimates between 23:55 and 01:15 LT. The winds are rapid and toward the southeast for the most part. Shear is evident but is not particularly strong except toward the end of the event. Vertical winds are again small. The zonal and meridional winds are similar in shape in the first patch, meaning that the shear, which is significant, is planar as opposed to turning shear. The temperatures increase throughout the layer at a rate of 2-3 K km−1. The layer is convectively stable but possibly dynamically unstable. The case of the second layer is more unusual. The zonal and meridional wind profiles have rather different shapes. The shear is relatively modest except at first, when U varied strongly with altitude. Most remarkably, the temperature profile is inverted. Around 100 km altitude, the lapse rate is even comparable to the adiabatic lapse rate of ∼9.5 K km−1. This suggests the possibility even of convective instability and all but guarantees dynamic instability.
Analysis
We focus our analysis on two aspects of irregularities in the Arecibo dataset: the patchy sporadic E layers and the spread F plume. For the first, we analyze the possible role of neutral dynamic instability. For the second, we consider the possibility of plasma convective instability. These are the avenues of investigation with the most support in the available data.
Es layers
We solve the Miles-Howard problem (Miles, 1961; Howard, 1961) to assess the dynamical stability of the MLT region in the vicinity of the two Es-layer patches observed over Arecibo. This is a boundary value problem that derives from the linearized equations of mass and momentum conservation in a vertically stratified, non-Boussinesq fluid. The inputs to the problem are horizontal wind profiles spanning the strata of interest and the local Brunt-Väisälä frequency. Dirichlet boundary conditions are imposed at the upper and lower altitude limits. The eigenvalue sought is the complex phase speed: real solutions imply propagating waves, and solutions with an imaginary part imply growing mode shapes. The problem is solved numerically using a relaxation method (Press et al., 1988). Details regarding the problem and its solution are given by Hysell et al. (2012).
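A quick necessary-condition check that precedes the full eigenvalue problem is the gradient Richardson number. The sketch below is ours, with an illustrative hyperbolic-tangent shear layer rather than the measured profiles; it uses the 5 min Brunt-Väisälä period adopted in the analysis that follows.

```python
import numpy as np

def richardson_number(z, u, v, bv_period=300.0):
    """Gradient Richardson number Ri = N^2 / |dU/dz|^2 for wind profiles.

    z         : altitudes [m]
    u, v      : zonal and meridional winds [m/s]
    bv_period : Brunt-Vaisala period [s] (300 s = 5 min, as in the text)
    Ri < 1/4 is a necessary condition for dynamic (shear) instability.
    """
    N = 2.0 * np.pi / bv_period
    shear2 = np.gradient(u, z) ** 2 + np.gradient(v, z) ** 2
    return N**2 / shear2

# Illustrative shear layer: 120 m/s total wind reversal across ~1.2 km
z = np.linspace(95e3, 115e3, 201)
u = 60.0 * np.tanh((z - 105e3) / 1.2e3)
v = np.zeros_like(z)
ri = richardson_number(z, u, v)
print(f"minimum Ri = {ri.min():.2f}")   # ~0.18 < 1/4 near the shear node
```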
Figure 7 shows results of the analysis for the first of the two Es-layer patches. For this analysis, the Brunt-Väisälä period is taken to be 5 min. The model wind profiles used to represent the conditions in the patch are plotted in the lower-left panel. The upper-left panel shows the minimum Richardson number (Ri) for those profiles as well as the propagation angle (in the horizontal plane, measured in degrees east of north) for which Ri is a minimum. The curve suggests that a necessary condition for dynamic instability, viz., Ri < 1/4, is satisfied, if just barely, in a narrow stratum.
The upper-right panel of the figure shows the growth rate for dynamical instability for different wave vectors. Here, x and y denote the eastward and northward directions, respectively. Unstable solutions exist for northeast-southwest propagating modes with wavelengths of about 20 km. The e-folding time for the fastest-growing modes is of the order of 15 min. Marginal instability, limited by the stabilizing effect of buoyancy, is therefore indicated.
The three sets of curves superimposed in the upper-right panel are mode shapes calculated at the three points indicated. The mode shapes are concentrated close to the strata of maximum shear, implying only shallow mixing. The phase velocities for the unstable modes are small, as indicated by the lower-right panel in the figure. Rolls caused by the instability would be expected to drift with the winds at the shear node, which were small in this case.
Figure 8 shows the results of a similar analysis applied to the second Es-layer patch. In view of the lapse rate within this patch, which approaches the adiabatic rate over some spans of altitude, the Brunt-Väisälä period is taken to be very long for this analysis. The removal of the buoyancy stabilization gives rise to a number of families of fast-growing solutions that would not otherwise be present. The upper-right panel again shows the growth rates for the unstable modes. The fastest-growing waves have e-folding times of about 2 min and wavelengths of 20-30 km. This is considerably shorter than predictions for Es-layer instability or its variants under the same forcing conditions (e.g., Cosgrove and Tsunoda, 2004; Tsunoda, 2006). The mode shapes for the most unstable waves span large vertical distances, implying the possibility of deep overturning, particularly in the case of waves propagating in the meridional directions. The predicted phase speeds for the fastest-growing modes are again small. Irregularities are expected to drift with the background winds, which are mainly eastward in this case.
Spread F plume
The introductory discussion about midlatitude F-region plasma instability focused on mechanisms involving motions of planar slabs of ionization. Such mechanisms are easy to visualize, amenable to linear, local analysis, and were prime candidates for explaining midlatitude spread F when it was first being observed at Arecibo. However, contemporary computational tools permit the exploration of a wider range of candidate mechanisms that can tap the free energy in the unstably stratified nighttime ionosphere.
Aside from the depletion plume observed at 23:30 LT, the most prominent feature in the F region in Fig. 1 is the zone of enhanced ionization in the bottom side at about 300 km altitude that immediately preceded the plume. According to the F-region drift estimates in Fig. 2, this feature was accompanied by rapid, widespread motion of ionization up and down magnetic field lines. The displacement of ionization along the magnetic field lines at middle latitudes signifies a drastic redistribution of conductivity. The enhancement in Fig. 1 represents a bulge in field-line-integrated Pedersen conductivity with significant dynamical consequences. It could well have been the seed for instability and midlatitude spread F onset.
We have used a 3-D numerical simulation to examine the effects of a conductivity bulge in the midlatitude ionosphere. The simulation code is a modified version of the one described by Hysell et al. (2014a). It evolves the number density of four ion species (O+, O2+, NO+, and H+) in time, incorporating the effects of background electric fields, winds, pressure gradients, gravity, and recombination chemistry. Initial conditions for the plasma number density and composition are derived from a combination of empirical models. The code solves for the electrostatic potential fully in three dimensions by enforcing quasi-neutrality. The code is cast in tilted magnetic dipole coordinates (see Swisdak, 2006, for discussion). The simulation space encompasses the E and F regions in the Northern Hemisphere in the American sector and is bounded by p ∈ [1.03, 1.08], q ∈ [0, 0.24], and φ ∈ [−2.0, 2.0], where q ≡ cos θ/r², p ≡ r/sin² θ, θ is magnetic co-latitude, and φ is degrees longitude. The simulation space therefore spans a zonal distance of just over 400 km at the magnetic equator and L shells from 1.03 to 1.08.
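For readers unfamiliar with the (p, q) dipole coordinates quoted above, the following short sketch (ours; the sample altitude and L shell are merely illustrative) evaluates them for a representative point and confirms that it lies inside the stated domain.

```python
import numpy as np

def dipole_coords(r, theta):
    """Magnetic dipole coordinates as defined in the text.

    r     : geocentric distance in Earth radii
    theta : magnetic co-latitude [rad]
    Returns (p, q) with p = r / sin(theta)**2 and q = cos(theta) / r**2.
    """
    p = r / np.sin(theta) ** 2
    q = np.cos(theta) / r**2
    return p, q

# A field line on L-shell p = 1.05 pierces ~300 km altitude (r ~ 1.047 R_E);
# invert p = r / sin^2(theta) to find the co-latitude at that point.
r = 1.047
theta = np.arcsin(np.sqrt(r / 1.05))
p, q = dipole_coords(r, theta)
print(f"p = {p:.3f}, q = {q:.4f}")   # q falls within the stated [0, 0.24] range
```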
For this simulation, a zonal wind with a hyperbolic-tangent profile shape drives eastward plasma drifts approaching 100 m s−1 at the top of the simulation volume. Note that the direction of the winds is immaterial; the phenomena described below occur given either eastward or westward winds and background plasma drifts. A constant background zonal electric field drives plasma ascent at 30 m s−1. Ascent in the plane perpendicular to B is required for instability in the scenario depicted. The drifts in the v⊥n direction were positive most of the time in the hours before the appearance of the spread F plume and so are consistent with this assumption.
The left column of Fig. 9 depicts the initial conditions. The upper panel shows the plasma number density in the plane perpendicular to B at the meridional midpoint of the simulation. Red, green, and blue tones represent molecular ions, atomic oxygen ions, and hydrogen ions, respectively. The panel reflects the addition of an ellipsoidal blob of enhanced density in the bottom side. The plasma density is doubled where the blob is densest.
The middle panel shows the current density in the same perpendicular-to-B plane. Superimposed on the current density are equipotential contours in units of kV. These are approximate streamlines of the transverse-to-B flow. The streamlines indicate the well-known behavior of plasma dynamics in the vicinity of a Pedersen conductivity enhancement or blob. The effects are twofold. First, the blob drifts with the zonal wind faster than the background plasma. Second, the blob descends in the background plasma frame of reference. Since the background plasma is ascending, the net effect is that the blob nearly maintains its altitude.
Since the transverse-to-B flow is nearly incompressible, the plasma flow surrounding the blob is deflected around it. The resulting vertical motion perturbs the background plasma density gradient, producing depletions to the east and west of the blob. These depletions are subsequently prone to normal E × B instability. After 37.5 min of simulation time, the depletion on the leading edge of the enhancement exhibits vertical drifts in excess of 200 m s−1 in this simulation. The ascent rate moreover increases rapidly toward the end of the simulation as the depletion channel becomes narrower and more structured. Our simulation was terminated as the depletion propagated outside of the simulation space. In nature, the depletion would be expected to propagate well into the topside F region.
We can estimate a growth rate for the instability by dividing the ascent speed of the depletion by the vertical density gradient scale length, L ∼ 20 km. The corresponding e-folding time of 100 s is much shorter than can be expected from the Perkins instability or its variants.
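Written out, the arithmetic behind that estimate (our worked version of the numbers quoted above) is:

```latex
\gamma \;\approx\; \frac{v_z}{L}
       \;=\; \frac{200~\mathrm{m\,s^{-1}}}{20~\mathrm{km}}
       \;=\; 10^{-2}~\mathrm{s^{-1}},
\qquad
\tau \;=\; \gamma^{-1} \;=\; 100~\mathrm{s}.
```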
The bottom panel in the right column of Fig. 9 shows the current density in the plane of the magnetic meridian. The particular plane shown here is coincident with the westward wall of the depletion. Very strong downward field-aligned currents are seen to flow on the meridional edges of the depleted region. The currents close in the E region. We can speculate that the currents would provide the free energy for secondary instabilities in sporadic E layers sharing magnetic field lines with similar depletions in nature. Details regarding the secondary instabilities in question are provided by Hysell et al. (2013).
The spread F event described by Hysell et al. (2014b) was similar to the one described here in several respects. It occurred between sunset and midnight and was preceded by a sharp, downward displacement of plasma along B and an attendant density enhancement. The major difference was that the earlier event produced an ascending plume of enhanced rather than depleted plasma, something that is never observed at equatorial latitudes. We argue that the mechanism proposed here could account for that event if (1) the initial conductivity enhancement occurred in the topside and (2) the background plasma was descending rather than ascending. (These conditions were met in the earlier event.) In that case, the deflection of plasma around the blob would tend to pull dense ionization from the F peak upward and push more rarefied plasma from the topside downward. The dense plasma would then continue to ascend in the topside and become unstable under the action of the westward background electric field. The phenomenology is almost symmetric with that observed in the present event, albeit more slowly growing. Enhanced-density plumes are inherently less unstable than depleted ones, and the corresponding mechanism is not as robust. Both mechanisms appear to be viable, however.
Summary and conclusions
We have presented observations from the Arecibo Observatory of the E- and F-region ionosphere during moderately disturbed conditions. The observations exhibit the most common manifestations of space weather at middle latitudes: patchy Es layers, MSTIDs, and spread F depletions. Although the phenomena occurred at about the same time, clear connections between them are not obvious. Indeed, a conclusion of the paper is that there appear to be viable instability mechanisms in the midlatitude E and F regions that are not directly coupled. Detailed analysis of the Arecibo data involving statistical inverse methods allows the inference of neutral dynamics in the MLT and the thermosphere. Neutral dynamics appears to play a key role in ionospheric instability at middle latitudes. Significant shear flow appears to be nearly ubiquitous in the midlatitude E region. In the patchy Es layers examined here, the neutral flow was marginally or robustly unstable in the Richardson number sense. In the latter case, instability benefited from regions of less stable lapse rates, which negated the stabilizing effect of buoyancy. The predicted e-folding time for the stratum in question was only about 2 min, much shorter than what is anticipated for plasma instability acting alone. Once patchy Es layers have been induced, fast-growing secondary plasma instabilities should be able to function and produce irregularities on intermediate and small scales.
An interesting feature of the observations presented here is the occurrence of temperature profiles with lapse rates close to the adiabatic value. Adiabatic or near-adiabatic lapse rates are common in the mesosphere, where the ambient lapse rate is generally negative. Relatively small perturbations associated with passing wave motions or various types of heating or cooling can easily change the lapse rate, making it more negative and at times getting close to or exceeding the adiabatic value. At altitudes above 100 km, where the background lapse rate is generally isothermal or the temperature increases with height, perturbing the temperature profile sufficiently to reach the adiabatic value is difficult, however. With increasing height in the thermosphere, diffusive effects and radiational cooling effects become more important, but in the lower thermosphere such effects are still relatively small. This suggests that the flow is, at least to a fairly good approximation, nearly adiabatic in that part of the atmosphere. One way to produce an adiabatic layer is therefore to have strong upwelling in the layer, due to mechanical forcing, for example. The motion has to be nearly or entirely vertical, since slantwise convection will not lead to an adiabatic lapse rate. The general assumption is often made that the vertical winds in the thermosphere are small because of the highly stable stratification, i.e., the positive or isothermal lapse rates in that part of the atmosphere, but a number of studies have shown that not to be the case. Larsen and Meriwether (2012) summarized a large number of ground-based and in situ vertical wind measurements and found that vertical winds exceeding 10 m s−1 are common and often extend over large altitude ranges and over periods of 1 h or more. With respect to observations specifically from Arecibo, Larsen et al. (2004) presented observations of a number of overturning events in the lower thermosphere, including some from Arecibo. The more recent Hysell et al. (2014a) vertical neutral wind measurements obtained with the Arecibo incoherent scatter radar showed large vertical motions throughout the multi-hour observing period, consistent with the conclusions of Larsen and Meriwether (2012). The forcing responsible for the observed thermospheric vertical winds is not clear, but the occurrence of layers with near-adiabatic lapse rates is to be expected, given that the large vertical winds are there.
In the F region, large-scale plasma irregularities in the bottom side appear to be induced by thermospheric motion with a component parallel to the geomagnetic field in the manner suggested by Crary and Forbes (1986). Once present, the irregularities seed additional irregularities and provide the conditions necessary for plasma convective instability leading to spread F. The e-folding time for this process appears to be much shorter than that predicted for the Perkins instability.
An analytic formulation of the equations for an instability is always attractive, in part because it makes the instability mechanism easier to understand and in part because the stability boundaries can often be identified more easily. The various midlatitude spread F instability theories that have been proposed have either relied entirely on analytic theories or used the analytic theories as a starting point for further numerical calculations. The results presented here are not represented in a simple analytic form but nonetheless show the existence of a robust and fast-growing instability mechanism that appears to account for the observed characteristics of the midlatitude F-region plumes that were observed.
Data availability
The data used for this study are available from the NSF Madrigal Database (http://madrigal.haystack.mit.edu/madrigal/).
Figure 2. Detailed examination of the Arecibo dataset from 11/12 July 2015. The panels are described in the text.
Figure 3. Detailed examination of the Arecibo dataset from 11/12 July 2015. The focus here is on the first patchy sporadic E layer. The panels are described in the text. Note that panels (a) through (d) in Fig. 2 apply to the entire experiment and are not reproduced in Figs. 3 and 4.
Figure 4. Detailed examination of the Arecibo dataset from 11/12 July 2015. The focus here is on the second patchy sporadic E layer. Panels (e)-(i) are described in the text.
Figures 5 and 6 present measurements within the first and second Es layers (respectively) in still greater detail. The figures show the progression over time of the zonal (U) and meridional (V) winds. Different colors, progressing from black to red to green to blue, represent different times in the given intervals. The violet curves are models selected to represent typical profiles in the analyses carried out in the next section of the paper. Also shown in Figs. 5 and 6 are temperatures averaged over the intervals in question. Lines through the plotted points are sample variances. In the case of the first patch, there is little natural variability, and the sample variances reflect mainly statistical uncertainty in the measurement. Variability is greater in the second patch, and the sample variances are consequently larger.
Figure 5. State-parameter estimates within the sporadic E layer patch seen between 20:12 and 21:15 LT. Left: zonal winds. Middle: meridional winds. Right: temperatures. The change in color, from black to red to green to blue, indicates the passage of time. The violet curves are models reflecting conditions at 20:24 LT used in subsequent analysis.
Figure 7. Eigenfunction analysis for wind profiles in the first patchy Es layer. (a) Minimum Richardson number parameter and the propagation angle at which the minimum occurs. (b) Linear growth rate of the fastest-growing eigenmode with some representative mode shapes. (c) Zonal (solid) and meridional (dashed) wind profiles. (d) Phase speed of the fastest-growing mode.
Figure 8. Same as Fig. 7 except for the second Es-layer patch.
Figure 9. Numerical simulations of midlatitude spread F. The left, center, and right columns depict simulation times of 0, 20, and 37.5 min, respectively. The top panels of each column show plasma number density in the plane perpendicular to B in the meridional center of the simulation. The colors indicate the abundances of molecular ions (blue), atomic oxygen ions (green), and protons (red). The middle panels show vector current density in the same plane according to the indicated color legend (full scale: 20 µA m−2). For clarity, contributions from diamagnetic currents are not shown. Equipotential curves in kilovolts are superimposed. The bottom panels show current density in a magnetic meridional plane at a 50 km zonal ground distance according to the indicated color legend (full scale: 200 µA m−2).
Figure 1. Incoherent scatter observed by the line feed system at the Arecibo Radio Observatory on 11/12 July 2015. Grayscales represent range-corrected power, which serves as a proxy for electron number density. The top panel shows E- and F-region data, whereas the bottom panel shows E-region data in more detail.
| 9,716 | 2016-11-03T00:00:00.000 | ["Physics", "Environmental Science"] |
Modulation of Skin Inflammatory Responses by Aluminum Adjuvant
Aluminum salt (AS), one of the most commonly used vaccine adjuvants, has immuno-modulatory activity, but how the administration of AS alone may impact the activation of the skin immune system under inflammatory conditions has not been investigated. Here, we studied the therapeutic effect of AS injection in two distinct skin inflammatory mouse models: an imiquimod (IMQ)-induced psoriasis-like model and an MC903 (calcipotriol)-induced atopic dermatitis-like model. We found that injection of a high dose of AS not only suppressed the IMQ-mediated development of T-helper 1 (Th1) and T-helper 17 (Th17) immune responses but also inhibited the IMQ-mediated recruitment and/or activation of neutrophils and macrophages. In contrast, AS injection enhanced the MC903-mediated development of the T-helper 2 (Th2) immune response and neutrophil recruitment. Using an in vitro approach, we found that AS treatment inhibited Th1 but promoted Th2 polarization of primary lymphocytes, and inhibited the activation of peritoneal macrophages but not of bone marrow-derived neutrophils. Together, our results suggest that the injection of a high dose of AS may inhibit Th1 and Th17 immune response-driven skin inflammation but promote type 2 immune response-driven skin inflammation. These results may provide a better understanding of how vaccination with an aluminum adjuvant alters the skin immune response to external insults.
Introduction
The skin, the primary interface with the environment, functions as an important barrier, protecting the body against pathogens, chemicals, allergens, and mechanical insults. Prolonged exposure of epidermal keratinocytes to environmental insults may lead to activation of the immune system, and ultimately the development of inflammatory skin disorders such as psoriasis and atopic dermatitis (AD), the most common dermatologic conditions [1,2].
T cells, the central component of adaptive immunity, play a critical role in host defense against pathogens. During infections, the cytokine milieu modulates the differentiation and polarization of CD4+ T cells into distinct effector phenotypes, including interferon γ (IFNγ)-producing T-helper 1 (Th1) cells that mediate the clearance of infected cells, interleukin 4 (IL4)- and IL13-producing Th2 cells that play a role in parasite expulsion and in driving the allergic response, and IL17-producing Th17 cells that mediate the anti-fungal response and promote autoimmunity [3,4]. Psoriasis and atopic dermatitis are considered to be T cell-mediated chronic relapsing inflammatory skin diseases, mediated by different effector mechanisms. Psoriatic inflammation is primarily driven by the Th17 immune response, although Th1 cells are also present [5]. In contrast, the overactivation of Th2 cells plays a dominant role in driving acute skin inflammation in atopic dermatitis [1,5-8].
In the early 1900s, investigators found that injecting aluminum-precipitated antigen induced a stronger antibody and protective immune response compared to the response generated by free antigen injection [9,10]. Aluminum salts have now become one of the most commonly used adjuvants in human vaccines. Despite their longstanding use, the mechanisms by which aluminum adjuvants enhance immune responses are still not fully understood. Studies have shown that aluminum adjuvants promote the differentiation of CD4 T cells into Th2 effector cells but do not support Th1 cell differentiation in vivo or in vitro [11-14].
Dysregulation of T cell differentiation is the central disease mechanism for both psoriasis and atopic dermatitis, and aluminum salts exert their potential immuno-modulatory effect through effector T cell differentiation/activation. Therefore, we aimed to determine whether the administration of aluminum salt influences the activation of the immune system in mouse models of psoriasis or atopic dermatitis. We also tested the in vitro effect of aluminum salt on effector T cell differentiation and on the activation of key myeloid cells (neutrophils and macrophages). The results of our study provide insights into how vaccination may lead to the development of skin side effects by altering the skin's immune response to external insults.
Animal Care and Animal Models
C57BL/6 mice used in this study were purchased from GemPharmatech (Nanjing, China), then bred and maintained in the specific-pathogen-free (SPF) environment of the Laboratory Animal Center at Xiamen University. All animal experiments were approved by the Institutional Animal Care and Use Committee of Xiamen University. For the IMQ-induced psoriasis-like model, 7~8-week-old female C57BL/6 mice were anesthetized, shaved, and depilated. The backs of the mice were then treated daily with a topical dose of 50 mg IMQ cream for 7 days. A single dose of aluminum salt (500 µL volume containing 840 µg Al3+) was injected intraperitoneally at day 2.5 post IMQ application, and daily i.p. injection of methotrexate (MTX), a commonly used systemic anti-inflammatory drug for treating psoriasis, served as a positive control [18]. For the MC903-induced dermatitis-like model, the backs of the mice were shaved, depilated, and treated daily with a topical dose of 45 µL MC903 (100 µM, dissolved in ethanol) for 10 days. A single dose of aluminum salt was injected intraperitoneally at day 4 post MC903 application. Daily administration of methotrexate (MTX) at 1 mg/kg was used as a positive anti-psoriatic agent [18]. The appearance of the lesions was recorded using a digital camera.
Histology and Immunohistochemistry (IHC)
The biopsies of lesional skin were embedded in OCT (#4583, SAKURA, Torrance, CA, USA) followed by frozen sectioning. Frozen mouse skin sections were subjected to hematoxylin and eosin staining according to the manufacturer's protocol. For IHC staining, OCT-embedded sections were permeabilized with 0.1% saponin (#47036, Sigma, Tokyo, Japan) for 10 min, then blocked in 5% BSA (#4240GR100, Biofroxx, Einhausen, Germany) solution for 1 h. Blocked sections were incubated with the indicated primary antibodies at 4 °C overnight, followed by the appropriate fluorophore-coupled secondary antibodies in the dark for 4-6 h at 4 °C. Finally, the sections were mounted and observed using a Zeiss LSM 880 laser confocal microscope (Zeiss, Jena, Germany).
Flow Cytometry and Analysis (FACS)
FACS analysis was performed to analyze immune cell populations of innate and adaptive immunity in inflammatory skin. Briefly, skin tissues were digested with collagenase D (V900893, Sigma) and DNase 1 (D8071, Solarbio, Beijing, China) to prepare a single cell suspension. The cell suspension was then stained with zombie violet viability dye, and non-specific Fc-mediated interactions were blocked with CD16/CD32 monoclonal antibody. The cells were next incubated with the indicated antibody cocktail mix (Panel A or B, listed in Table 1) for 1 h at 4 °C, with gentle shaking every 10 min. Finally, the cells were resuspended in stabilizing fixative buffer (#338036, BD Biosciences, San Jose, CA, USA). FACS analysis was performed using the Thermo Attune NxT machine (Waltham, MA, USA) and further analyzed using FlowJo V10 software.
Reverse Transcription-Quantitative PCR (qRT-PCR) Analyses
Total cellular RNA was extracted from skin tissues or cultured cells using Trizol (#T9424, Sigma), chloroform, and the RNAExpress Total RNA Kit (#M050, NCM Biotech, Newport, UK), and purified RNA was reverse transcribed to cDNA using the HiScript II Q RT SuperMix kit. Quantitative PCR was performed with SYBR Green qPCR Master Mix (#B21202, Bimake) on a qTower real-time PCR machine (Analytik Jena, Swavesey, Cambridge, UK). All primers used in our study were designed to span exon-exon junctions and are listed in Table 2. Tbp (TATA-box binding protein) was used as the housekeeping gene to normalize target gene expression. The ratio of target mRNA to Tbp was calculated with the 2^-ΔCT method (ΔCT = CT of target gene - CT of Tbp), based on the published relative quantification method [21].
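For readers who want to reproduce the normalization step, the following minimal Python sketch (not the authors' script) illustrates the 2^-ΔCT calculation described above; the CT values in the example are hypothetical.

```python
# Minimal sketch of the 2^-ΔCT relative quantification described above (assumed, not the
# authors' code); expression of a target gene is normalized to the housekeeping gene Tbp.
def relative_expression(ct_target: float, ct_tbp: float) -> float:
    """Return target expression relative to Tbp using the 2^-ΔCT method."""
    delta_ct = ct_target - ct_tbp      # ΔCT = CT(target) - CT(Tbp)
    return 2.0 ** (-delta_ct)

# Hypothetical example: CT(target) = 26.4, CT(Tbp) = 23.1 -> ΔCT = 3.3, ratio ≈ 0.10
print(relative_expression(26.4, 23.1))
```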
Neutrophil and Macrophage Cultures
To isolate neutrophils, bone marrow cells were first flushed from mouse femurs and tibias using RPMI-1640 medium containing 10% FBS. Bone marrow cells were treated with red blood cell lysis buffer and washed once with PBS. Cells were then overlaid on top of a Histopaque gradient (1077/1119) and centrifuged for 30 min at 872× g at room temperature with the brake off. Neutrophils were collected at the interface of the Histopaque 1119 and Histopaque 1077 layers; isolated neutrophils were cultured in RPMI-1640 medium containing 10% FBS and 1% penicillin/streptomycin and then treated as indicated. The purity of neutrophils was >90% as determined by flow cytometry. For treatment, cells were pretreated with AS (1:250 dilution of the 1680 µg/mL stock = 6.74 µg/mL Al3+) for 2 h and then stimulated with FSL (50 ng/mL) for 6 h.
Peritoneal macrophages were isolated by injecting mice with 5 mL of PBS containing 1% FBS, gently massaging the abdomen, and then aspirating the fluid; this process was repeated three times in total. The cells were treated with red blood cell lysis buffer and resuspended in DMEM containing 10% FBS and 1% penicillin/streptomycin. Isolated macrophages were seeded at a density of 5 × 10⁵ cells per well of a 24-well plate. Twenty-four hours after seeding, nonadherent cells were removed, and the adherent macrophages were maintained in DMEM containing 10% FBS and 1% penicillin/streptomycin for all experiments. For treatment, cells were pretreated with AS (1:250 or 1:1000 dilution of the 1680 µg Al3+/mL stock) for 2 h and then stimulated with LPS (0.5 µg/mL) for 12 h.
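As a quick sanity check on the working concentrations quoted above, the following back-of-envelope Python sketch (ours, not part of the original Methods) derives the Al3+ concentrations implied by the stated 1680 µg/mL stock and the 1:250 and 1:1000 dilutions.

```python
# Back-of-envelope check of the Al3+ working concentrations implied by the stated stock
# and dilution factors; values and rounding are ours, not taken from the paper.
STOCK_AL_UG_PER_ML = 1680.0

for dilution in (250, 1000):
    working = STOCK_AL_UG_PER_ML / dilution
    print(f"1:{dilution} dilution -> {working:.2f} µg Al3+/mL")
# 1:250  -> 6.72 µg/mL (the Methods quote ~6.74 µg/mL for neutrophil treatments)
# 1:1000 -> 1.68 µg/mL (consistent with the ~1.7 µg/mL cited in the Discussion)
```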
Statistics
Experiments were repeated at least three times independently. All statistical analyses were performed using GraphPad Prism version 9 software. For comparisons between more than two groups, statistical analysis was performed by one-way analysis of variance (ANOVA) followed by a Dunnett test [22], or by two-way ANOVA followed by a Bonferroni test [23], to correct for multiple comparisons as listed in the legends. For Figure 1b, a one-way ANOVA was performed, followed by the two-stage linear step-up procedure of Benjamini, Krieger, and Yekutieli to correct for multiple comparisons by controlling the false discovery rate [24]. Quantitative results are presented as mean ± standard error of the mean (SEM). A p value less than 0.05 was considered statistically significant and is indicated with asterisks: * p < 0.05, ** p < 0.01, *** p < 0.001, **** p < 0.0001.
[Figure 1 legend (fragment): (l,m) qRT-PCR analysis of the mRNA expression levels of Defb4 and Defb3 (ratios to the housekeeping gene Tbp are shown, n = 6~7/group). All error bars indicate mean ± SEM; statistical analysis was performed by one-way ANOVA followed by a Dunnett test (d,e,f,g,i,k,l,m) or the two-stage linear step-up procedure of Benjamini, Krieger, and Yekutieli (c) for multiple comparisons. * p < 0.05, ** p < 0.01, *** p < 0.001, **** p < 0.0001; ns, non-significant.]
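A minimal sketch of the statistical workflow described above is given below, with hypothetical group values; it uses SciPy's Dunnett test (available from SciPy 1.11) and the statsmodels implementation of the two-stage Benjamini-Krieger-Yekutieli FDR procedure rather than GraphPad Prism, so it illustrates the analysis logic, not the authors' exact pipeline.

```python
# Illustrative re-implementation of the comparisons described above (hypothetical data).
import numpy as np
from scipy.stats import f_oneway, dunnett          # dunnett requires SciPy >= 1.11
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
control = rng.normal(1.0, 0.2, 6)                   # e.g. untreated mice
imq     = rng.normal(2.0, 0.3, 6)                   # hypothetical treatment groups
imq_as  = rng.normal(1.4, 0.3, 6)
imq_mtx = rng.normal(1.3, 0.3, 6)

# One-way ANOVA across all groups
F, p_anova = f_oneway(control, imq, imq_as, imq_mtx)

# Dunnett post hoc test: each treatment compared against the control group
post_hoc = dunnett(imq, imq_as, imq_mtx, control=control)

# Two-stage Benjamini-Krieger-Yekutieli step-up FDR correction (as used for Figure 1b);
# applied here to the same p-values purely for illustration.
reject, p_adj, _, _ = multipletests(post_hoc.pvalue, alpha=0.05, method="fdr_tsbky")
print(p_anova, post_hoc.pvalue, p_adj, reject)
```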
Administration of Aluminum Salt Alleviates the Development of Psoriasis-like Skin Inflammation in Mice
To determine the effect of an aluminum adjuvant (see characterization results in Figure S1a-d) on the development of skin inflammation, we first employed an imiquimod (IMQ)-induced psoriasis-like skin inflammation mouse model (Figure 1a), in which both Th1 and Th17 cell-mediated adaptive immunity drive epidermal hyperplasia and scaling [25]. A dose of 840 µg aluminum/mouse was used in our animal study; this dose is within the range recommended for vaccines in clinical use [26]. It is also common practice when determining vaccine immunogenicity in mouse in vivo potency assays, in which a full human dose or even double the human dose is injected intraperitoneally (i.p.) into each mouse [27,28]. Injection of this high dose of AS had a minimal systemic cytotoxic effect in mice (Figure S1e,f). We found that a single i.p. injection of aluminum salt (AS) during the application of IMQ was effective in alleviating the development of psoriasis phenotypes characterized by erythema, thickness, and scales (Figure 1b). However, it was less effective than daily i.p. injection of methotrexate (MTX) (Figure 1b), a commonly used systemic anti-inflammatory drug for psoriasis. In addition, AS injections blocked the development of systemic inflammatory responses, as shown by the reduced enlargement of the lymph nodes and spleen (Figure 1d,e). AS was more effective than MTX in reducing lymph node enlargement (Figure 1d).
Histological analysis of the skin sections revealed that epidermal thickness and dermal cell infiltration in IMQ-treated skin were significantly lower in AS- and MTX-treated mice (Figure 1f-i). Immunostaining for Ki67, a proliferation-associated nuclear antigen, showed that IMQ treatment increased the number of Ki67+ basal and/or supra-basal keratinocytes, and this increase in epidermal cell proliferation was partially inhibited by either AS or MTX injections (Figure 1j,k). Antimicrobial peptides, including defensins (DEFBs), are strongly induced in activated keratinocytes in psoriasis, and defensins participate in cutaneous inflammation by promoting keratinocyte migration, proliferation, and the production of inflammatory cytokines [29]. Analysis of the expression of key defensins, including Defb3, Defb4, and Defb14, showed that AS or MTX partially inhibited the IMQ-mediated induction of defensin genes (Figures 1l,m and S1g). Interestingly, we found that the expression of type 1 collagen (Col1a1) was significantly reduced in IMQ-treated skin, and this effect was reversed by AS or MTX injections (Figure S1h), suggesting that IMQ-triggered dysregulation of dermal homeostasis can be restored by AS. Together, these results demonstrate that the administration of aluminum salt alleviates the development of psoriasis-like skin inflammation in mice.
Administration of Aluminum Salt Inhibited the Development of Th1 and Th17 Immune Responses in the IMQ-Induced Psoriasis Model
Next, we aimed to investigate the effect of AS on lymphocyte activation in the IMQ-induced psoriasis model. In mouse skin, γδ T cells are the major IL17-producing cells after infection, wounding, or IMQ application. On the other hand, αβ T cells, either CD4+ or CD8+, are capable of producing large amounts of the Th1 cytokine IFNγ or Th2 cytokines (IL4 and IL13) under different inflammatory conditions [8,30]. FACS analysis of skin CD4 T cells revealed that IMQ treatment inhibited Th2 but promoted Th1 polarization of CD4+ T cells in the skin and increased the percentage of IL17A-producing γδ T cells (Figure 2a-c). These effects were largely reversed by either AS or MTX injections (Figure 2a-c). In line with these FACS results, qRT-PCR analysis showed that the IMQ-dependent induction of Il17a and Il17f was also significantly inhibited by either AS or MTX injections (Figure 2d,e). However, the IMQ-mediated induction of Il23 and Il22 did not appear to be influenced by AS or MTX injections (Figure S2a,b).
[Figure legend (fragment): showing the percentage of IL4/IL13+, IFNγ+, or IL17A+ cells in CD45+CD4+ T cells (b) or CD45+ γδ T cells (c) in the skin (n = 3/group); (d,e) qRT-PCR analysis of the mRNA expression levels of Defb4 and Defb3 (ratios to the housekeeping gene Tbp, n = 6~7/group). All error bars indicate mean ± SEM; two-way ANOVA with Bonferroni-corrected multiple comparisons (b,c) or one-way ANOVA with Dunnett-corrected multiple comparisons (d,e). * p < 0.05, ** p < 0.01, *** p < 0.001, **** p < 0.0001; ns, non-significant.]
Administration of Aluminum Salt Reduced the Recruitment and Activation of Myeloid Cells
Abnormal dermal infiltration and activation of myeloid cells, including neutrophils and macrophages, is one of the histologic hallmarks of psoriasis [31]. Flow cytometry (FACS) analysis showed that AS or MTX injections reduced the percentage of infiltrated CD11B+Ly6G+ neutrophils in IMQ-treated skin samples (Figures S3a and 3a,b). In line with this FACS result, qRT-PCR analysis showed that the expression levels of neutrophil marker genes, including Ly6g and S100A8, were also reduced in skin samples from AS- or MTX-treated mice (Figure 3c,d). FACS analysis of CD11B+F4/80+ macrophages revealed that, in IMQ-treated skin, macrophages expressed Ly6C (Figures S3a and 3a,e), a phenotypic marker of pro-inflammatory macrophages [32], indicating that macrophages shift from a resting to an inflammatory state during the development of psoriasis. Furthermore, we found that the IMQ-mediated increase in Ly6C hi macrophages was inhibited by either AS or MTX injection (Figure 3a,e). Additionally, the IMQ-mediated induction of Il1b, Cxcl1, and Saa3, key genes related to myeloid cell activation and/or chemotaxis [33,34], was also significantly inhibited by AS or MTX injections (Figures 3a,e and S3b). Together, these results show that the systemic administration of aluminum salt potently suppresses the activation of myeloid cells, including neutrophils and macrophages, in the IMQ-induced psoriasis model.
[Figure 3 legend (fragment): (ratios to the housekeeping gene Tbp are shown, n = 6~7/group). All error bars indicate mean ± SEM; statistical analysis was performed by one-way ANOVA with Dunnett-corrected multiple comparisons. *** p < 0.001, **** p < 0.0001.]
Administration of Aluminum Salt Promoted the Development of MC903-Induced Atopic Dermatitis-like Skin Inflammation
To determine whether aluminum salt promotes the development of the Th2 immune response in a mouse model of dermatitis, we adopted the MC903-induced dermatitis model, one of the best-characterized murine models of atopic dermatitis, in which T cells are preferentially polarized toward the Th2 phenotype [35]. Daily topical application of MC903 to the back skin led to reddening and scaling of the skin, which could be partially inhibited by systemic administration of MTX; in contrast, MC903-induced redness and scaling were markedly increased by the injection of aluminum salt (Figure 4a-d). Histological analysis of skin sections revealed that epidermal thickness and dermal cell infiltration in MC903-treated skin were significantly higher in AS-injected mice (Figure 4e-g). In addition, qRT-PCR analysis showed that AS injection enhanced the MC903-mediated induction of genes associated with epidermal keratinocyte activation (Defb4 and Defb3) (Figure 4h,i). Similar to the IMQ-induced psoriasis model, MC903 application also led to suppression of Col1a1 expression in the skin, but AS injection failed to restore Col1a1 expression in MC903-treated skin (Figure S4a).
Figure 4. Atopic dermatitis-like skin inflammation was triggered by daily topical application of MC903 to mouse dorsal skin for 9 days, with or without AS injection between days 3~4 or daily MTX injection from days 3~4, as indicated; (b) representative skin images for each group at day 9; (c) bar graphs showing quantified redness scores for each group (n = 5 mice/group); (d) bar graphs showing quantified scaling scores for each group; (e) HE staining of skin sections from each group (n = 5 mice/group); (f) bar graphs showing quantified epidermal thickness for each group (n = 15~20 fields/group); (g) bar graphs showing quantified dermal cell density for each group (n = 8 fields/group); (h,i) qRT-PCR analysis of the mRNA expression levels of Defb4 and Defb3 (ratios to the housekeeping gene Tbp are shown, n = 6~8/group). All error bars indicate mean ± SEM, and statistical analysis was performed by one-way ANOVA with Dunnett-corrected multiple comparisons. * p < 0.05, ** p < 0.01, *** p < 0.001, **** p < 0.0001; ns, non-significant.
Administration of Aluminum Salt Promoted the Development of Type 2 Immune Response in the MC903-Induced Dermatitis Model
FACS analysis of skin cells revealed that AS injection significantly enhanced the expression of Th2 cytokines (IL4 and IL13) from CD45+SSClo T cells in MC903-treated skin (Figure 5a,b). In addition, qRT-PCR analysis showed that the MC903-mediated induction of Il4 was further enhanced by AS injection but inhibited by MTX injection (Figure 5c). These results show that the administration of aluminum salt promoted the development of type 2 inflammation in the MC903-induced dermatitis mouse model.
[Figure 5 legend (fragment): ...Figure S5A, n = 3/group). (e) qRT-PCR analysis of the mRNA expression level of Ly6g (n = 6~8/group). All error bars indicate mean ± SEM, and statistical analysis was performed by two-way ANOVA with Bonferroni-corrected multiple comparisons (b,d) or one-way ANOVA with Dunnett-corrected multiple comparisons (c,e). * p < 0.05, *** p < 0.001, **** p < 0.0001; ns, non-significant.]
Administration of Aluminum Salt Altered Myeloid Cell Activation in the MC903-Induced Dermatitis Model
FACS analysis of myeloid cells showed that, in contrast to daily IMQ application, daily application of MC903 led to only a small increase in CD11B + Ly6G + neutrophils and no increase in CD11B + F4/80 + Ly6C + pro-inflammatory macrophages (Figures S5a and 5d). AS injection increased MC903-mediated infiltration of neutrophils, but not the inflammatory macrophages ( Figures S5a and 5d). In line with the FACS results, qRT-PCR analysis showed that AS injection increased the MC903-mediated induction of Ly6g (neutrophil marker gene) (Figure 5e). These results showed that the administration of aluminum salt promoted the infiltration of neutrophils, but not inflammatory macrophages, into MC903-treated skin.
The In Vitro Effect of Aluminum Salt in Modulating T Cell Differentiation and Myeloid Cell Activation
We showed that injection of AS promoted the development of type 2 skin inflammation, and differentially altered myeloid cell activation in two distinct dermatitis models in vivo. Next, we aimed to determine whether AS directly alters the activation of lymphocytes and myeloid cells in vitro.
Aluminum Salt Inhibited Th1 but Promoted Th2 Polarization during In Vitro T Cell Differentiation
First, to determine whether aluminum salt can directly alter Th1 or Th2 polarization in differentiating lymphocytes, we subjected lymph node-derived primary lymphocytes to in vitro differentiation assays in the presence of anti-CD3/CD28 antibodies under Th1, Th2, or Th17 polarizing conditions. We found that the addition of AS significantly inhibited IFNγ production by CD4+ T cells under Th1 polarizing conditions, but promoted IL4 and IL13 production by CD4+ T cells under Th2 polarizing conditions (Figures 6a,b and S6a,b). In addition, we found that under Th2 polarizing conditions, AS treatment promoted a Th17-to-Th2 shift in γδ T cells, the major IL17-producing cell type in the skin (Figure S6c,d). However, under Th17 skewing conditions, in which γδ T cells were robustly shifted into IL17A-producing cells, AS treatment only mildly reduced the expression of IL17A in γδ T cells, without altering the expression of IFNγ or IL4/IL13 (Figure S6e,f).
Aluminum Salt Inhibited Macrophage but Not Neutrophil Activation In Vitro
Next, primary neutrophils derived from bone marrow were stimulated with FSL, and primary peritoneal macrophages were stimulated with LPS, with or without AS (Figure 6c,d). We found that while AS had no effect on neutrophil activation (Figure 6c), it significantly suppressed macrophage activation even at a low concentration (1:1000 dilution of the original stock), as shown by qRT-PCR analysis of Il1b, Cxcl1, and Nos2 (Figure 6d). IL1β and NOS2 are well-established markers of inflammatory M1 macrophages [36], indicating that AS may directly inhibit macrophage polarization toward the pro-inflammatory state.
[Figure 6 legend (fragment): ...and cells were subjected to qRT-PCR analysis of Il1b, Cxcl1, or Nos2 mRNA expression as indicated (n = 3/group). All error bars indicate mean ± SEM, and statistical analysis was performed by two-way ANOVA with Bonferroni-corrected multiple comparisons. ** p < 0.01, *** p < 0.001, **** p < 0.0001; ns, non-significant.]
Discussion
Injection of vaccines, which are composed of immunogens, preservatives, adjuvants, and by-products, can elicit adverse skin reactions, such as localized or generalized eczema vaccinatum, in susceptible individuals [37,38]. Conflicting results have been reported regarding the relationship between vaccination and the development of atopic diseases [39-41], but none of these studies investigated the specific immunomodulatory effects of the individual components of the vaccines.
Aluminum salts are widely used as adjuvants in preventive vaccines, enhancing their immunogenicity and effectiveness by stimulating a type 2 immune response [11,12]. Theoretically, AS could increase the risk of type 2 lymphocyte-mediated allergic and hyper-responsive diseases, such as atopic dermatitis. On the other hand, AS application may have a therapeutic effect against psoriasis, which is dominated by Th1 and Th17 cells, which are antagonistic to Th2 cells. To test this theory, in the present study, we investigated the immunomodulatory effect of aluminum salt in two distinct murine models of inflammatory skin diseases. We found that injection of aluminum salt in the IMQ-induced psoriasis mouse model promoted T cell polarization from the Th1/Th17 to the Th2 phenotype, suppressed neutrophil recruitment and macrophage activation, and thereby suppressed the development of psoriasis-like skin inflammation. In contrast, injection of aluminum salt promoted the development of the Th2 immune response and the clinical phenotype of dermatitis in the MC903-induced atopic dermatitis-like mouse model.
Our results indicate that AS application may inhibit the Th1/Th17-mediated activation of autoimmunity but enhance the Th2-mediated activation of the allergic immune response in the skin. In line with our results, it has been reported that subcutaneous allergen-specific immunotherapy with aluminum adjuvants is associated with a lower risk of autoimmune diseases, including psoriasis [42]. In contrast, several reports have shown that, although rarely, patients can develop delayed hypersensitivity or allergic cutaneous reactions to vaccines containing aluminum salts [43-47].
Using in vitro primary cultures, we showed that the addition of aluminum salt promoted Th2 differentiation and inhibited Th1 differentiation of naïve lymphocytes. To our knowledge, this is the first study investigating the direct effect of aluminum salt on naïve T cell differentiation/polarization. Furthermore, we found that the addition of aluminum salt inhibited macrophage polarization toward the pro-inflammatory state but had no direct effect on neutrophil activation. The AS-mediated differential effects on neutrophil recruitment in the IMQ- or MC903-induced dermatitis models were therefore likely mediated indirectly by AS-dependent changes in T cell effector immune responses.
It has been shown that aluminum hydroxide has a high adsorption capacity for endotoxins such as LPS (283 µg/mg of Al), whereas endotoxins are electrostatically repelled by aluminum phosphate [48]. As a result, the adsorption capacities of phosphate-treated aluminum hydroxide and aluminum phosphate are only 23 and 3 µg endotoxin/mg of Al, respectively [48]. Here we report that the AS solution (a mixture of aluminum hydroxide and aluminum phosphate) can inhibit the inflammatory activity of LPS (0.5 µg/mL) even at a low concentration (~1.7 µg Al3+/mL), which in theory should adsorb only 5~29 ng/mL of LPS. Therefore, AS-mediated adsorption/neutralization of LPS may contribute to the inhibitory effect of AS against LPS, but it is unlikely to be the major mechanism of this inhibition. Future studies are needed to determine the mechanism underlying the anti-inflammatory effect of AS against the LPS-mediated inflammatory response in macrophages.
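To make the order-of-magnitude argument explicit, the sketch below (our own rough estimate, using only the per-mg-Al capacities quoted above from ref. [48] and the ~1.7 µg Al3+/mL working concentration) shows that the theoretical LPS adsorption falls in the few-to-tens of ng/mL range, far below the 0.5 µg/mL LPS used to stimulate macrophages.

```python
# Rough, assumption-laden estimate of how much LPS the aluminum in the diluted AS could
# adsorb, using the adsorption capacities quoted above (µg endotoxin per mg Al).
AL_CONC_UG_PER_ML = 1.7                     # ~1.7 µg Al3+/mL working concentration
CAPACITIES_UG_PER_MG = {
    "aluminum phosphate": 3.0,
    "phosphate-treated aluminum hydroxide": 23.0,
}

for name, capacity in CAPACITIES_UG_PER_MG.items():
    al_mg_per_ml = AL_CONC_UG_PER_ML / 1000.0                # µg -> mg of Al per mL
    adsorbed_ng_per_ml = al_mg_per_ml * capacity * 1000.0    # µg LPS/mL -> ng LPS/mL
    print(f"{name}: ~{adsorbed_ng_per_ml:.0f} ng LPS/mL")
# ~5 and ~39 ng/mL, i.e. the same order as the 5~29 ng/mL quoted in the text and
# roughly 100-fold less than the 500 ng/mL LPS used for stimulation.
```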
A limitation of our study is that we administered aluminum salt as a single intraperitoneal injection instead of the multiple intramuscular injections performed in routine clinical vaccination. In addition, we investigated the immunomodulatory effect of only one high dose of AS in the mouse dermatitis models, although this dose was chosen based on our and others' previous studies of the mouse potency assay for human vaccines with aluminum adjuvants [15-17,27,28]. Our study offers a clue to the immunomodulatory role of aluminum, not an effective therapy for human disease. Future studies are needed to determine the optimal injection route and dose for observing the immunomodulatory function of AS in animal dermatitis models.
Together, our results provide new insight into the mechanisms underlying the immunomodulatory effect of aluminum salt on immune cell activation. Systemic injection of a high dose of aluminum adjuvant may inhibit or promote skin inflammation, depending on the involvement of specific effector T cells and/or myeloid cells in disease pathogenesis.
Supplementary Materials: The following supporting information can be downloaded at: https://www. mdpi.com/article/10.3390/pharmaceutics15020576/s1, Figure S1: Characterization of the aluminum salt solution, and investigation of the therapeutic effect of AS on the development of psoriasis-like skin inflammation in mice (Supplemental for main Figure 1 | 7,263.2 | 2023-02-01T00:00:00.000 | [
"Biology",
"Medicine"
] |
Research on Crystal Structure and Fungicidal Activity of the Amide Derivatives Based on the Natural Products Sinapic Acid and Mycophenolic Acid
Structural optimization based on natural products is an important and effective way to discover new green pesticides. Here, two series of amide derivatives based on sinapic acid and mycophenolic acid were designed in combination with the fungicidal natural product piperlongumine and synthesized by converting the carboxylic acid into the acyl chloride and then reacting it with the corresponding aromatic amines, respectively. The resulting structures were successively characterized by 1H NMR, 13C NMR, and HRMS. The crystal structures of molecules I-4 and II-5 were analyzed for structure validation. The in vitro inhibitory activity indicated that most of the target products exhibited fungicidal activity equivalent to or even better than fluopyram against Physalospora piricola. The in vivo fungicidal activity demonstrated that the compounds I-5 and II-4 displayed almost the same preventative activity as carbendazim and fluopyram at 200 μg·mL−1. The TEM observation revealed that the fungicidal activity of the target molecules against Physalospora piricola may be due to their influence on the mitochondria in the cell structure. These results will provide valuable theoretical guidance for developing new green fungicides.
Introduction
Agrochemicals are important production materials for agricultural production, and their development plays an important role in ensuring national food security, agricultural product quality, ecological environment safety, and public health [1]. However, the continuous application of traditional chemical fungicides has produced many negative effects, including pollution of the ecological environment, pathogen resistance, and toxicity to beneficial insects and microorganisms [2,3]. Therefore, the development of efficient, safe, low-residue, and environmentally friendly green fungicides has become an inevitable trend in pesticide innovation [4,5]. The discovery of lead compounds and the exploration of their mechanism of action are key to the development and innovation of fungicides. Structural optimization based on natural products has become an effective way to develop new green fungicides, which has important guiding significance for practicing new development concepts and promoting the green development of agrochemicals [6-11]. For example, coumoxystrobin was successfully developed based on the natural products strobilurin and coumarin (Figure 1) [12]. Moreover, natural product strobilurin A-derived methoxyacrylate fungicides have occupied the top position in fungicide market sales [6].
Mycophenolic acid (MPA) was discovered by Gosio in 1893 in a strain of Penicillium fungus and was found to possess broad biological activity, including antifungal, antiviral, anticancer, and antipsoriatic properties [13,14]. Sinapic acid is widely distributed in plants and belongs to the hydroxycinnamic acids. Furthermore, the commercial fungicides dimethomorph, pyrimorph, and flumorph were successively developed based on the natural product cinnamic acid (Figure 2) [15-18]. In this project, combined with the structure of the fungicidal natural product piperlongumine [19,20], two series of amide compounds based on the natural products sinapic acid and mycophenolic acid were designed and synthesized, and their fungicidal activity against common agricultural pathogens was evaluated (Figure 3).
Materials and Equipment.
The materials and reagents used in the organic synthesis reactions were of analytical grade and purchased from Energy Chemical and Bide Pharmatech Ltd. Melting points were measured on an X-5 binocular microscope (Yuhua Co., Ltd., China). 1H NMR and 13C NMR spectra were recorded on an AVANCE NEO 500 MHz spectrometer (Bruker, Germany). HRMS was recorded on a Xevo G2-XS QTof spectrometer (Waters, USA). X-ray crystal structures were determined on a D8 Venture diffractometer (Bruker, Germany). The purification of target compounds was performed by column chromatography on silica gel (200-300 mesh).
X-Ray Crystal Structure Determination.
The crystals of the target compounds I-4 and II-5 were cultivated from a mixed solvent of methanol, ethyl acetate, and n-hexane, respectively. All measurements were made on a Bruker D8 Venture diffractometer with Mo-Kα radiation (λ = 0.71073 Å). The crystal data of compound I-4 were collected at 298 K; the colorless crystal is of the monoclinic system, space group C2/c, with a = 28.208(3)
Fungicidal Activity Measurement.
With fluopyram and carbendazim as positive controls, the mycelial growth inhibition method was used to determine the in vitro inhibitory activities of the target compounds against common agricultural pathogens according to previously reported procedures [21,22], and each treatment was repeated at least three times. The tested pathogens included Rhizoctonia solani (RS), Gibberella zeae (GZ), Botrytis cinerea (BC), Physalospora piricola (PP), Cercospora circumscissa Sacc. (CS), Colletotrichum capsici (CC), Alternaria kikuchiana Tanaka (AK), and Alternaria sp. (AS). The in vivo fungicidal activity of compounds I-5 and II-4 against Physalospora piricola was assessed on apples according to literature methods [23]. The target molecule (5.0 mg) was dissolved in dimethyl sulfoxide (30 μL) and diluted with 0.1% Tween-80 aqueous solution to provide the test stock solution (200 μg·mL−1), which was sprayed in the same volume onto healthy apples. Subsequently, a fungal cake of Physalospora piricola with a diameter of 7 mm was inoculated. After cultivation at 25 °C for 5 days, the average lesion area was measured to calculate the preventative activity. Each in vivo fungicidal activity screening was carried out with at least five repeats.
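The final stock volume is not stated explicitly, but it follows from the mass and concentration given above; the short sketch below (an inference, not a quoted protocol detail) makes the arithmetic explicit.

```python
# Inferred final volume of the 200 µg/mL test stock prepared from 5.0 mg of compound
# dissolved in 30 µL DMSO and diluted with 0.1% Tween-80 aqueous solution.
mass_ug = 5.0 * 1000.0                      # 5.0 mg -> 5000 µg
target_conc_ug_per_ml = 200.0
final_volume_ml = mass_ug / target_conc_ug_per_ml
print(final_volume_ml)                      # 25.0 mL total (including the 30 µL of DMSO)
```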
Transmission Electron Microscope (TEM) Investigation.
Physalospora piricola hyphae were obtained by incubation in PDB medium at 25 °C for 72 h and centrifugation at 7000 rpm for 3 min, then resuspended in PDB medium and treated with compounds I-5 and II-4 (200 μg·mL−1) for 24 h, respectively. Subsequently, the treated hyphae were collected by centrifugation and fixed with 2.5% glutaraldehyde. The ultrastructure of the hyphae treated with compounds I-5 and II-4 (200 μg·mL−1) was observed by Shiyanjia Lab on a TEM according to standard procedures.
Organic Synthesis.
Herein, the important intermediates and target molecules I-1–I-5 and II-1–II-5 were prepared according to reported procedures (Scheme 1). In the preparation of target compounds I-1–I-5, the phenolic hydroxyl group in sinapic acid was first reacted with acetic anhydride to produce (E)-3-(4-acetoxy-3,5-dimethoxyphenyl)acrylic acid [24], which was further reacted with thionyl chloride under catalysis by DMF (3 drops) to provide (E)-3-(4-acetoxy-3,5-dimethoxyphenyl)acrylic chloride. Finally, the target compounds I-1–I-5 were synthesized by reacting the acyl chloride 2 with the corresponding aromatic amines, respectively. In addition, condensing reagents such as EDCI-HOBt, HATU-DIEA, or TBTU-DIEA were also tested for the condensation of the carboxylic acid 1 with the substituted aromatic amines; however, the yields of the products were low. The target compounds II-1–II-5 were obtained following the same steps described above. Subsequently, the obtained structures were identified and characterized by 1H NMR, 13C NMR, and HRMS.
Crystal Structure Analysis.
The crystal structure analysis is beneficial in investigating the physical and chemical properties of the molecules. In this study, several crystal structure characteristics were also illustrated through the crystal structures and packing of molecules I-4 and II-5 (Figure 4, CCDC numbers 2095769 and 2095768). The selected bond lengths and angles are presented in Table 1, and the selected dihedral angles are shown in Table 2. From the crystal packing, π-π interactions occurred between the benzene rings of adjacent molecules, which strengthen the integration of the crystal molecules (Figures 4(c) and 4(d)).
Fungicidal Inhibitory Activity.
The in vitro inhibitory activities of the target compounds against the common agricultural pathogens were investigated, and the results are shown in Table 3. From the data, most of the target compounds exhibited weak-to-moderate fungicidal activity against Gibberella zeae, Rhizoctonia solani, Botrytis cinerea, Cercospora circumscissa Sacc., Alternaria kikuchiana Tanaka, Colletotrichum capsici, and Alternaria sp. However, all compounds showed moderate-to-good fungicidal activity against Physalospora piricola, even better than fluopyram. For example, compounds I-1, I-4, and I-5 exhibited higher inhibitory activity than fluopyram, with inhibitory rates of 76.2%, 73.3%, and 73.5%, respectively. It can be concluded that compounds I-1–I-5 and II-1–II-5 displayed high selectivity in their fungicidal activity against Physalospora piricola. In terms of the relationship between the structures and the initial inhibitory activity, the structural modifications had different effects on the inhibitory activities of the target compounds against the different pathogens. For example, compared with the electron-withdrawing trifluoromethyl and chlorine groups, the introduction of the electron-donating methyl, methoxy, or isopropoxy group at the benzene ring was beneficial for improving the fungicidal activity of I-4 and I-5 against Physalospora piricola. For instance, the inhibition rates of compounds I-4 and I-5 were 73.3% and 73.5%, respectively, which were apparently higher than those of compounds I-2 and I-3. However, this structural modification had no significant effect on the inhibitory activity of compounds II-2–II-5 against Physalospora piricola. To further investigate the fungicidal activity of the target compounds against Physalospora piricola, the EC50 values were measured, and the results are presented in Table 4. It was found that most of the target compounds exhibited fungicidal activity equivalent to or even better than fluopyram.
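The exact inhibition-rate formula used in refs. [21,22] is not reproduced in the text; the sketch below shows a commonly used form of the mycelial growth inhibition calculation based on colony diameters and should be read as an illustrative assumption rather than the authors' definition.

```python
# Commonly used mycelial growth inhibition calculation (assumed form, not quoted from the paper).
def inhibition_rate(control_diam_mm: float, treated_diam_mm: float, plug_diam_mm: float = 7.0) -> float:
    """Percent inhibition from mean colony diameters, correcting for the inoculum plug size."""
    return (control_diam_mm - treated_diam_mm) / (control_diam_mm - plug_diam_mm) * 100.0

# Hypothetical example: control colony 60 mm, treated colony 21 mm, 7 mm plug -> ~73.6%,
# comparable to the ~73% rates reported for I-4 and I-5 against Physalospora piricola.
print(inhibition_rate(60.0, 21.0))
```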
TEM Observation.
To further explore the effects of molecules I-5 and II-4 on the hyphae, the ultrastructure of Physalospora piricola hyphae treated with distilled water or with compounds I-5 and II-4 (200 μg·mL−1) was observed by TEM, and the results are illustrated in Figure 6. From the data, the cell wall and plasma membrane of the cell structures in the control and tested groups were normal, and mitochondria could be clearly observed in the control group. However, the mitochondria in the cells treated with I-5 and II-4 were blurred or had even disappeared. Based on this, it could be speculated that the fungicidal activity of the target compounds against Physalospora piricola may be due to the influence of I-1–I-5 and II-1–II-5 on the mitochondria in the cell structure.
Conclusion
In summary, two series of sinapic acid-derived and mycophenolic acid-derived amide derivatives were designed and synthesized. The obtained structures were characterized by 1H NMR, 13C NMR, HRMS, and X-ray crystal diffraction. The in vitro and in vivo fungicidal activity screening indicated that, compared with the other tested pathogens, most of the target compounds exhibited excellent fungicidal activity against Physalospora piricola, of which compounds I-5 and II-4 displayed almost the same preventative activity as carbendazim and fluopyram at 200 μg·mL−1. The TEM observation further revealed that the fungicidal activity of the target compounds against Physalospora piricola may be due to their influence on the mitochondria in the cell structure.
Data Availability
The data used to support the findings of this study are included within the article and the supplementary information file(s). Crystallographic data for the structures reported in this manuscript have been deposited with the Cambridge Crystallographic Data Centre under the CCDC numbers 2095769 (compound I-4) and 2095768 (compound II-5). Copies of these data can be obtained free of charge from http://www.ccdc.cam.ac.uk/data_request/cif.
Conflicts of Interest
There are no conflicts of interest to declare.
Supplementary Materials
The supporting information contains the X-ray crystal data of the compounds I-4 and II-5, and 1H NMR, 13C NMR, and | 2,483.6 | 2021-12-21T00:00:00.000 | [
"Chemistry",
"Environmental Science"
] |
Sub-Chronic Effects of Slight PAH- and PCB-Contaminated Mesocosms in Paracentrotus lividus Lmk: A Multi-Endpoint Approach and De Novo Transcriptomic
Sediment pollution is a major issue in coastal areas, potentially endangering human health and marine environments. We investigated the short-term sublethal effects of sediments contaminated with polycyclic aromatic hydrocarbons (PAHs) and polychlorinated biphenyls (PCBs) on the sea urchin Paracentrotus lividus for two months. Spiking occurred at concentrations below the threshold limit values permitted by law (TLVPAHs = 900 µg/L, TLVPCBs = 8 µg/L, Legislative Italian Decree 173/2016). A multi-endpoint approach was adopted, considering both adults (mortality, bioaccumulation and gonadal index) and embryos (embryotoxicity, genotoxicity and de novo transcriptome assembly). The slight concentrations of PAHs and PCBs added to the mesocosms were observed to readily compartmentalize in adults, falling below the detection limits just one week after their addition. Reconstructed sediment and seawater, used as negative controls, did not affect sea urchins. PAH- and PCB-spiked mesocosms were observed to impair P. lividus at various endpoints, including bioaccumulation and embryo development (mainly PAHs) and genotoxicity (PAHs and PCBs). In particular, genotoxicity tests revealed that PAHs and PCBs affected the development of P. lividus embryos deriving from exposed adults. Negative effects were also detected by generating a de novo transcriptome assembly and its annotation, as well as by real-time qPCR performed to identify genes differentially expressed in adults exposed to the two contaminants. The effects on sea urchins (both adults and embryos) at background concentrations of PAHs and PCBs below the TLVs suggest a need for further investigations of the impact of slight concentrations of such contaminants on marine biota.
Introduction
Sediments are composed of soluble, insoluble (rock and soil particles) and biogenic matter, which can be naturally transported from land to the ocean by coastal erosion and windblown dust [1]. Sediment represents an essential and dynamic part of marine environments and may accumulate organic and/or inorganic compounds deriving from natural and anthropogenic sources, such as industrial, commercial, agricultural and urban activities [2,3]. Contamination associated with (re-)suspended sediment is a concern for human health, mainly due to its tendency to accumulate in bottom-feeding organisms and biomagnify through marine food webs [4,5]. Governments worldwide are promoting sediment assessment, restoration and valorization as a key compartment of water bodies [6] (i.e., the European Union via the Water Framework Directive (2000/60/EC) (WFD) and the Marine Strategy Framework Directive (2008/56/EC)). Natural (i.e., tides, bioturbation) and artificial (i.e., dredging) perturbative events can remobilize sediment and release the associated contaminants, including polycyclic aromatic hydrocarbons (PAHs) and polychlorinated biphenyls (PCBs), into the water column, causing short- and long-term effects on marine organisms [7]. PAHs constitute a large group of widespread organic compounds of high environmental concern, occurring mainly in relation to human activities, such as combustion by-products (i.e., atmospheric deposition) [8] or oil spillage (approximately 15% w/w of PAHs); nevertheless, the effects of the crude oil water-soluble fraction (i.e., as a mixture) are still largely unexplored [9,10].
It has been estimated that direct discharges of PAHs into marine environments can range from <1 µg/L to over 625 µg/L, with concentrations in industrial effluents of up to 4.4 mg/L and up to 170,000 ng/g in sediment (dry weight) [11]. PCBs are a group of anthropogenic compounds classified as persistent organic pollutants (POPs) by the Stockholm Convention (2001). Less than 1% of the PCBs released into the environment volatilize from soil/sediment to the atmosphere, while most of them accumulate in the water column and in aquatic organisms, reaching up to 4601 ng/g dry weight in sediment [12].
Besides pollution hotspots (i.e., industrial and commercial sites), marine sediments are generally only slightly contaminated by PAHs and PCBs, with concentrations below nationally and internationally set threshold limit values (TLVs) (TLVPAHs = 900 µg/L, TLVPCBs = 8 µg/L; Legislative Italian Decree 173/2016), but our knowledge about the potential side effects of this background contamination on human health and the environment is still insufficient.
This research investigated the potential negative effects of sediment slightly contaminated with PAHs (to simulate post-combustion products) and PCBs on the sea urchin Paracentrotus lividus Lamarck. Ad hoc experimental mesocosms [13] were set up to expose adult sea urchins to a reconstructed marine habitat (i.e., sediment and water) purposely spiked with PAHs and PCBs [14,15]. We tested two hypotheses proposing that the effects of slightly polluted sediment could result in the following: (i) morphological changes in the development of sea urchin embryos deriving from adults exposed to these contaminants and (ii) variation in the expression levels of genes involved in stress response, skeletogenesis, detoxification and development/differentiation. Specifically, sea urchin endpoints included adult mortality; gonadal index; sensitivity of embryos (up to the pluteus stage) generated from the exposed organisms; contaminant accumulation in adult thecae and spines, gonads and intestine; genotoxicity; and de novo transcriptome assembly.
Sediment Grain Size and Water Features and Spiking Levels
The sediment showed a typical sandy profile. The sandy fraction represented 99.9%, with coarse sand (0.5 mm-1 mm, representing 41.1%) as the dominant component (Supplementary Figure S1). The fine and medium sands (from 0.25 mm to 0.5 mm) were 2.1% and 15.7%, respectively, whereas the mud fraction represented a small percentage (about 0.1%). During the two months of exposure to PAH- and PCB-contaminated sediments, the physical and chemical parameters of the seawater in the mesocosms were almost constant. PAHs and PCBs detected in all sediment and water samples from all mesocosms were below the relative detection limit values (see Supplementary Tables S1-S8 for more details) at all investigation times (t0, t1 and tf).
Effects of Contaminated Sediment on Adult Growth, Gonadal Index and Sea Urchin Development
None of the conditions imposed in the negative controls (W and W + SED) negatively affected sea urchins, suggesting that all the subsequently observed effects could be attributed to the treatments. After the exposure period (two months), a mortality rate of 1% was detected in all experimental conditions (W, W + SED, W + SED + PAHs and W + SED + PCBs), revealing good health conditions of the sea urchins after two months of exposure (Supplementary Figure S2). After two months of exposure, no significant differences in growth rates were found between adults exposed to PAHs and PCBs and organisms collected in the field at the beginning of the experiment (p > 0.05); the same held for GI values (p > 0.05; Supplementary Table S9).
After gamete collections, three important endpoints of sea urchin embryonic development were detected: (i) fertilization success; (ii) first mitotic division; and (iii) the pluteus stage, occurring at 48 hpf. Exposure to both contaminants, PAHs and PCBs, did not show significant effects on the percentages of fertilization success and first mitotic cleavage with respect to the controls (in tanks with seawater (W) and in tanks with seawater + sediment (W + SED) without contaminants; p > 0.05). Observation of the embryos at the pluteus stage revealed that PAH and PCB treatments induced malformations, mainly affecting arms, spicules, apices and the entire body shape as compared to control embryos ( Figure 1).
Figure 1.
Examples of malformations observed in (B-E) P. lividus plutei deriving from adults exposed to PAHs and PCBs and in (F,G) embryos still at the gastrula stage deriving from adults exposed to PCBs, in comparison with (A) control embryos deriving from adults reared in a tank with sediment without contaminants. (B) Poorly formed apex; (C) crossed at the apex with a wider aperture of the arms; (D) degraded arms; (E) delayed and abnormal body; (F,G) malformed gastrulae.
In particular, at the pluteus stage, an increase in malformed embryos was observed in larvae deriving from sea urchins exposed to contaminated sediments with respect to the controls, represented by water and water + sediment without contaminants ( Figure 2).
Figure 2.
Percentage of normal plutei and malformed embryos at the pluteus and gastrula stages deriving from adult sea urchins exposed to sediment contaminated with PAHs (water + sediment + PAHs) and PCBs (water + sediment + PCBs) and reared in control conditions (water and water + sediment). Data are reported as mean ± standard deviation; one-way ANOVA followed by the Holm-Sidak test (** p < 0.01, *** p < 0.001).
PAHs induced an increased percentage of malformed embryos (about 42%) with respect to control water + sediment (about 10%, p < 0.001). The exposure to PCBs generated approximately 27% (p < 0.001) of malformed plutei and developmental delays, with some embryos still at the gastrula stage (about 24%; p < 0.001), which were also malformed.
These results, valid for the doses applied to P. lividus, demonstrated that PAHs are more harmful than PCBs, a conclusion also supported by the chemical analyses of contaminant bioaccumulation.
After two months of exposure, the bioaccumulation of PAHs and PCBs was also assessed in three sea urchin tissues: thecae (including spines), gonads and guts. Chemical results showed that (i) 12.4 µg/kg of PAHs (including acenaphthylene, acenaphthene, fluorene, anthracene, phenanthrene, 9-methylanthracene and benzo[a]anthracene) were accumulated in the theca, including the spines (Table 1); (ii) 16.3 µg/kg of total PAHs (including acenaphthylene, acenaphthene, fluorene, anthracene, phenanthrene, fluoranthene, pyrene and benzo[a]anthracene) were accumulated in the gonads; and (iii) no PAH accumulation was found in the guts. The target body compartments in sea urchins were the body wall and the spines when individuals were exposed to contaminated water, and the guts when they were exposed to contaminated foods [16]. However, accumulation in these marine organisms was more efficient when exposure occurred via water than via food. No detectable PCB bioaccumulation was observed in the analyzed tissues (Supplementary Table S10). PCB bioaccumulation data on marine organisms are scarce, impeding an effective assessment of their toxicity. Zeng et al. [17,18] studied the uptake patterns of PCB congeners in the sea urchin Lytechinus pictus. More than 66 days are necessary for some congeners to attain steady-state concentrations in the L. pictus gonad, whereas 28-42 days are required [19] in marine organisms such as bivalves, polychaetes and amphipods. Evidence of toxicity, with changes in total or gonad weight, was only detected at 647 mg/g. Studies on fish indicated that embryos and developing larvae were negatively affected by PCBs at 0.12 mg/g-12 mg/g [20,21]. Monosson et al. [20] observed that the PCB effects were due to the congener 3,3',4,4'-tetrachlorobiphenyl, which has a greater toxicity than the congener mixture. A previous study [16] exposed adult P. lividus to 14C-labelled PCB#153 via seawater and food, observing that the bioaccumulation efficiency was similar in the body wall, spines, gut and gonads.
Transcriptomic Assembly and Differentially Expressed Genes in Plutei from Adults Exposed to PAHs and PCBs (RNA-seq)
Another interesting result was the large-scale genomic information herein reported, which greatly improved the few molecular tools available for the sea urchin P. lividus, despite its importance as a marine model organism. For this reason, the de novo transcriptome obtained in this work represents a promising tool to identify new P. lividus genes, which can be considered general biomarkers placed in motion from the sea urchin to deal with environmental pollution.
All the results obtained by RNA sequencing are summarized below. The BLASTx top-hit species distribution of matches of all transcriptome contigs with known sequences (Supplementary Figure S3) indicated that the majority of P. lividus contigs (reads) showed the highest similarity with Strongylocentrotus purpuratus (BLAST hits = 1000). The other most represented species included Apostichopus japonicus (BLAST hits 50) and Acanthaster planci (BLAST hits 45). All alignments were carried out setting the E-value threshold at ≤1 × 10−5.
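As an illustration of the E-value filtering step described above, the following pandas sketch (not the authors' pipeline; the file name is hypothetical) keeps alignments with E-value ≤ 1 × 10−5 from standard BLASTx tabular output (-outfmt 6) and retains the best hit per assembled transcript.

```python
import pandas as pd

# Hypothetical BLASTx tabular output (-outfmt 6, default 12 columns).
cols = ["qseqid", "sseqid", "pident", "length", "mismatch", "gapopen",
        "qstart", "qend", "sstart", "send", "evalue", "bitscore"]
hits = pd.read_csv("blastx_hits.tsv", sep="\t", names=cols)

hits = hits[hits["evalue"] <= 1e-5]                        # apply the E-value threshold
top_hits = (hits.sort_values("bitscore", ascending=False)  # best-scoring hit per contig
                .drop_duplicates("qseqid"))
print(top_hits[["qseqid", "sseqid", "evalue"]].head())
```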
To perform the de novo RNA-seq assembly, Trinity was used [22]. The Trinity assembly statistics were as follows: total "Trinity genes": 216,864; total "Trinity transcripts": 611,356; percent GC: 38.26%. We then performed a differential expression analysis in Trinity, selecting the DESeq2 R package [23], and obtained the genes differentially expressed among the several conditions (fewer than 3000 genes and 8000 isoforms). For the differentially expressed isoforms, we performed a BLASTx alignment against the non-redundant database at NCBI, using OmicsBox (version 1.2.4) [24]. Differentially expressed genes were identified between three conditions: embryos at the pluteus stage spawned by adults exposed for two months to sediment contaminated with (i) PAHs or (ii) PCBs, compared with (iii) those reared in tanks with sediment without contaminants as the control, with three biological replicates for each treatment.
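The following pandas sketch (ours, with a hypothetical file name) illustrates the FDR and fold-change filtering described above on a DESeq2-style results table; the reported "FC ≤ 1.5" cut-off for down-regulated genes is interpreted here as a fold change of at most 1/1.5, which is an assumption on our part.

```python
import numpy as np
import pandas as pd

# Hypothetical DESeq2 results table with 'log2FoldChange' and 'padj' (FDR) columns.
res = pd.read_csv("deseq2_treated1_vs_control.csv", index_col=0)

fc_cut = np.log2(1.5)                         # fold-change threshold of 1.5 on a log2 scale
sig = res[res["padj"] <= 0.05]                # FDR <= 0.05

up   = sig[sig["log2FoldChange"] >=  fc_cut]  # up-regulated: FC >= 1.5
down = sig[sig["log2FoldChange"] <= -fc_cut]  # down-regulated: FC <= 1/1.5 (our reading)
print(len(sig), len(up), len(down))
```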
The score plot showed that the replicates for the controls were very similar, with a clear separation from the treated samples, suggesting a greater number of down-and up-regulated genes in the treated samples compared to that of the controls (Supplementary Figure S4).
As reported in Supplementary Table S11, (i) 1898 genes were differentially expressed (DE) with a false discovery rate (FDR) of ≤0.05, of which 993 were up-regulated (FC ≥ 1.5) and 965 down-regulated (FC ≤ 1.5), in plutei deriving from adult P. lividus exposed to sediment contaminated with PAHs (indicated as Treated_1); (ii) 2396 genes were DE with an FDR of ≤0.05, of which 1079 were up-regulated (FC ≥ 1.5) and 1317 down-regulated (FC ≤ 1.5), in plutei deriving from adult P. lividus exposed to sediment contaminated with PCBs (indicated as Treated_2) compared to the control; (iii) 1356 genes were DE with an FDR of ≤0.05, of which 755 were up-regulated (FC ≥ 1.5) and 601 down-regulated (FC ≤ 1.5), considering Treated_1 compared to Treated_2. After the annotation, (i) 488 genes were found up-regulated (FC range between 1.6 and 99) and 271 genes down-regulated (FC range between 1.7 and 95) for Treated_1 vs. Control; of these, some genes showed very high fold-change values, such as the four up-regulated genes RNA-directed DNA polymerase from mobile element jockey-like, calmodulin-like protein 4, arylsulfatase A, and fibropellin-1-like isoform X6, and the three down-regulated genes beta-1,3-galactosyltransferase 1-like, isocitrate dehydrogenase (NADP) cytoplasmic isoform X2, and fibrillin-1-like; (ii) 311 genes were found up-regulated (FC range between 1.7 and 95) and 420 genes down-regulated (FC range between 1.6 and 95) for Treated_2 vs. Control; of these, some genes showed very high fold-change values, such as the three up-regulated genes dnaJ homolog subfamily B member 13, rho guanine nucleotide exchange factor 39, and ATP-dependent RNA helicase DHX8-like, and the two down-regulated genes betaine-aldehyde dehydrogenase and serine/threonine-protein kinase TNNI3K; (iii) 177 genes were found up-regulated (FC range between 1.9 and 90) and 239 genes down-regulated (FC range between 1.5 and 98) for Treated_1 vs. Treated_2; of these, some genes showed very high fold-change values, such as the three up-regulated genes actin-related protein 2/3 complex subunit 3, acyl-CoA dehydrogenase, and arylsulfatase I, and the two down-regulated genes cyclin-dependent kinase 11B and glucose-6-phosphate 1-dehydrogenase isoform X1.
This large-scale genomic information represents a significant finding, being the first molecular attempt to define PAH and PCB effects on the sea urchin P. lividus by molecular approaches. PAHs and PCBs targeted different genes but also had several common targets, as shown in the Venn diagrams of up-regulated and down-regulated genes comparing the groups "Treated_1 (plutei deriving from adults exposed for two months to sediment contaminated with PAHs) versus Control (plutei from adult P. lividus reared for two months in tanks with sediment without contaminants)", "Treated_2 (plutei deriving from adults exposed for two months to sediment contaminated with PCBs) versus Control" and "Treated_1 versus Treated_2" (Figure 3 and Supplementary Tables S12 and S13). PAHs (Treated_1) and PCBs (Treated_2) induced an increase in the expression of 335 (48.5%) and 122 (17.7%) genes, respectively, compared to the Control; they also induced the down-regulation of 114 (18.5%) and 178 (28.9%) genes, respectively. The two contaminants had several common targets (see also Supplementary Tables S12 and S13 for the names of the common genes): (i) for up-regulated genes, 74 common genes (10.7%) comparing the groups "Treated_1 versus Control" and "Treated_2 versus Control"; 18 common genes (2.6%) comparing "Treated_1 versus Control", "Treated_2 versus Control" and "Treated_1 versus Treated_2"; 4 common genes (0.6%) comparing "Treated_1 versus Control" and "Treated_1 versus Treated_2"; and 62 common genes (9.0%) comparing "Treated_2 versus Control" and "Treated_1 versus Treated_2"; (ii) for down-regulated genes, 104 common genes (16.9%) comparing "Treated_1 versus Control" and "Treated_2 versus Control"; 12 common genes (2.0%) comparing "Treated_1 versus Control", "Treated_2 versus Control" and "Treated_1 versus Treated_2"; 4 common genes (0.7%) comparing "Treated_1 versus Control" and "Treated_1 versus Treated_2"; and 52 common genes (8.5%) comparing "Treated_2 versus Control" and "Treated_1 versus Treated_2".
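The overlap counts and percentages above are simple set operations on the up- and down-regulated gene lists (Tables S12 and S13); the sketch below reproduces the logic with small placeholder gene sets rather than the real lists.

```python
# Set arithmetic behind the Venn-diagram counts; gene names below are placeholders only.
t1_vs_ctrl = {"nodal", "nectin", "gene_a", "gene_b"}    # DE in Treated_1 vs. Control
t2_vs_ctrl = {"nodal", "frizzled", "gene_c"}            # DE in Treated_2 vs. Control
t1_vs_t2   = {"nodal", "gene_d"}                        # DE in Treated_1 vs. Treated_2

shared_12  = t1_vs_ctrl & t2_vs_ctrl                    # common to both treatments vs. control
shared_all = shared_12 & t1_vs_t2                       # common to all three comparisons
universe   = t1_vs_ctrl | t2_vs_ctrl | t1_vs_t2

print(len(shared_12), len(shared_all),
      round(100 * len(shared_12) / len(universe), 1))   # counts and % of the union
```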
Transcriptomic results indicate that PAHs and PCBs affected genes differently, mainly increasing their expression, supporting the differences observed at the morphological level. In fact, the highest percentage of malformed plutei, caused by exposure to PAHs, can be linked to the up-regulation of the majority of the studied genes. An example is represented by the nodal and nectin genes (data also confirmed by real-time qPCR experiments, see below). The nectin gene is involved in cellular adhesion [25,26], whereas the nodal gene controls left-right asymmetry in sea urchins, regulating the expression level of the BMP2 gene [27][28][29][30]. The exposure to PCBs caused not only the up-regulation of the nodal gene, but also the up-regulation of the frizzled gene and the down-regulation of the PLC gene. The function of the frizzled gene is similar to that of the nodal gene: binding to Wnt6, this receptor is responsible for endoderm specification [31,32]. The PLC gene, instead, is involved in egg activation in the events immediately following fertilization and during embryo development in sea urchins [33][34][35]. Its down-regulation can be one of the causes of the delay effect shown after PCB treatment. The transcriptome is generally dynamic and is a good indicator of the cell's state; in this case, the ease of genome-wide profiling made transcriptome analysis an integral part of understanding the biological processes affected by PAHs and PCBs. To identify the pathways in which the genes affected by these two contaminants were involved, a Gene Ontology (GO) term enrichment analysis was performed using the DE genes (Figure 4). Seventy-seven GO terms were enriched, including 20 in "Biological Process", 23 in "Molecular Function" and 24 in "Cellular Component" (p < 0.05). Over-represented GO categories included the oxidation-reduction process, regulation of transcription, DNA integration, cytoskeleton organization, nucleic acid binding, metal ion binding, DNA binding, zinc ion binding and DNA-binding transcription factor activity. Moreover, these genes are integral components of the membrane and were mainly localized in the cytoplasm, nucleus, extracellular region and microtubule.
Effects of PAHs and PCBs on Gene Expression by Real-Time qPCR
The expression levels of 62 genes [36][37][38][39][40] involved in different physiological processes were followed by real-time qPCR (reported in Supplementary Figure S5). These genes were previously selected in [36,38,39,41], and their expression levels were studied in response to natural toxins produced by marine diatoms. We proposed these genes as possible biomarkers to detect the consequences of the exposure of marine invertebrates to different environmental pollutants [38]. In particular, these genes were defined as part of the defensome, which is set in motion by the sea urchin to protect itself from environmental toxicants [42].
At the pluteus stage at 48 hpf ( Figure 5, for the numerical values see also Supplementary Table S14), PAHs and PCBs had several common targets.
Stress Genes
Eighteen genes were analyzed, and all were targeted by PAHs and PCBs with the exception of GRHPR, hsp60, hsp70, NF-kB and p38MAPK. Both contaminants, PAHs and PCBs, increased the expression levels of five genes (CASP8, cytb, MTase, PARP1 and SDH) and decreased that of p53. Moreover, treatment with PAHs also down-regulated ARF1 and caspase 3/7 and up-regulated ERCC3, whereas the exposure to PCBs up-regulated GS, HIF1A, hsp56 and 14-3-3 ε.
Skeletogenic Genes
Among the eight genes analyzed, only three genes (BMP5-7, C-jun and p16) were not targeted by the two contaminants. Both PAHs and PCBs increased the expression levels of Nec, p19, SM30 and SM50. Furthermore, PAHs decreased the expression level of the uni gene. Variations in the expression of these genes directly affect the formation of the skeleton of sea urchin embryos. These data were supported by the genes identified in the transcriptomic analysis that belong to biological processes such as cytoskeleton organization and its structural constituent and the microtubule-based process (see GO terms in Figure 5).
Genes Involved in Detoxification
All eight genes analyzed were targeted by the contaminants, with the only exception being the CAT gene. MDR1, MT, MT4, MT6, MT7 and MT8 represented common targets for PAHs and PCBs, which increased their expression levels. In addition, the MT5 gene was up-regulated only by PAHs. Overall, PAHs targeted 36 genes and PCBs 40 genes, 31 of which were common molecular targets. Genes involved in the detoxification process were also detected in the GO term analysis (see Figure 5).
PAHs and PCBs mainly up-regulated the targeted genes (as in the case of transcriptomic results; Supplementary Figure S5 and Supplementary Table S14) involved in skeletogenesis, developmental/differentiation and detoxification processes, supporting the morphological findings, which revealed that the majority of embryonic malformations affected the skeleton and the developmental plan (Supplementary Figure S6).
To the best of our knowledge, no studies to date have investigated the effects of PAHs and PCBs on the sea urchin P. lividus by molecular approaches, with the only exception being Ruocco et al. [44], where the effects of highly contaminated sediments from the site of national interest Bagnoli-Coroglio (Tyrrhenian Sea, western Mediterranean) were detected. Suzuki et al. [45] reported on the effects of benz[a]anthracene and 4-OHBaA on plutei of the sea urchin H. pulcherrimus, showing that the expression of mRNAs (spicule matrix protein and transcription factors) was more strongly inhibited in the 4-OHBaA-treated embryos. These results were very similar to those found in our experiments, because P. lividus embryos after PAH treatment showed spicule malformation, and the expression levels of the SM30 and SM50 genes were also affected.
These molecular results, completed and deepened by the de novo transcriptome, well supported our morphological findings, revealing that the majority of the genes affected by both PAHs and PCBs were involved in skeleton formation and in the developmental plan and differentiation of the sea urchin, consistent with the observed malformations of the embryos, as reported in the GO term analysis (see Figure 4). The up-regulation of these genes, identified by real-time qPCR experiments as well as in the de novo transcriptome, led to the morphological effects detected in embryos deriving from adults exposed to these two contaminants.
Each of the 12 testing mesocosms, located at the Stazione Zoologica Anton Dohrn, was characterized by an independent and closed seawater recirculation system (Supplementary Figure S7).
A pump (Micra 400 L/h, SICCE, Italy) promoted seawater circulation from the filtration compartment (containing porous ceramic filters, synthetic sponge and Perlon wool) to the other compartment containing the sediment. Each tank (50 × 36 × 48 cm) was filled with 55 L of natural seawater pre-filtered through a 200-micron mesh sock filter, collected from the Gulf of Naples and treated with zeolite and activated carbon for one week to remove most pollutants prior to chemical analyses (see below; Supplementary Tables S15 and S16). The seawater volume was kept constant during the experiment; it was checked daily and, if necessary, topped up with distilled water. The artificial sediment (10 L) present in each mesocosm was produced by mixing 36.5% quartz sand (0-3 mm, G. Build, s.r.l.), 62.5% coarse sand (grain size 0.4-0.8 mm, Arena Silex, Manufacturas Gre, S.A.) and 1% calcium carbonate [46]. Before spiking and prior to adding sea urchins, mesocosms were aged for one week. Spiking occurred by adding contaminants directly to the mesocosm water column (i.e., simulating a discharge event). One week after spiking, organisms were added to the mesocosms: seven females and three males for each tank. To evaluate the compartmentalization of PAHs and PCBs in sediment and seawater, their concentrations were evaluated: (i) before the addition of contaminants (t0), (ii) after the addition of contaminants (t1) and (iii) at the end (tf) of the experiment. PAH spiking (acenaphthene (ACE), acenaphthylene (ACY), anthracene (ANT), benzo(a)anthracene (BaA), chrysene (CHR), fluoranthene (FLT), fluorene (FLR), phenanthrene (PHE), pyrene (PYR)) required the addition (in each 55 L mesocosm) of 330 µL of a solution prepared by weighing 30 mg of each PAH in 10 mL of acetone/n-hexane (1:1 v/v; the nominal concentration of the water stock solution is 35.3 g/L). PCB spiking required the addition (in each mesocosm) of 82 µL of a certified standard solution of 100 µg PCB/mL (Ultra Scientific, Italy). We did not carry out control experiments with the PAH diluents (acetone/n-hexane) due to the insignificant volume added (as compared to the mesocosm's volume) and their high volatility.
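As a back-of-the-envelope check of the nominal spiking levels implied by the stated masses and volumes (illustrative only; the actual exposure concentrations are those measured by the chemical analyses described below):

```python
# Nominal PAH spike: 9 PAHs, 30 mg each, dissolved in 10 mL of solvent,
# of which 330 µL was added to each 55 L mesocosm (values from the text).
n_pahs = 9
stock_mg_per_ml = 30.0 / 10.0                          # 3 mg/mL per PAH
added_mg_per_pah = stock_mg_per_ml * (330.0 / 1000.0)  # ~0.99 mg per PAH
ug_per_l_each = added_mg_per_pah * 1000.0 / 55.0       # ~18 µg/L per PAH
print(f"~{ug_per_l_each:.0f} µg/L per PAH; ~{n_pahs * ug_per_l_each:.0f} µg/L total PAHs")

# Nominal PCB spike: 82 µL of a 100 µg/mL certified mix into 55 L.
pcb_ug_total = 100.0 * (82.0 / 1000.0)                 # 8.2 µg in total
print(f"~{pcb_ug_total / 55.0:.2f} µg/L total PCBs")
```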
Grain Size of Sediment
After one week of aging, 50 mL samples were collected from each tank and treated with 10% H2O2 and distilled water (2:8) for 48 h at room temperature in order to remove salts and organic matter. After drying (24 h at 105 °C), sediment fractions were mechanically separated with multiple vibrating sieves (Ro-Tap Particle Separator, Giuliani, HAVER & BOECKER, Oelde, Germany) with a 63 µm mesh to distinguish between sandy and silt-clay fractions [47,48]. Each fraction was weighed separately. Grain size data were analyzed with GradiStat software (version 8.0, [49]) and expressed as a percentage of the total dry weight.
For PAH and PCB analyses, seawater samples were extracted by solid-phase extraction (SPE): 1.0 L of water was filtered and preconcentrated on a C18 disk (ENVI-18 DSK SPE Disk, diam. 47 mm). The analytes were eluted with a 1:1 solution of dichloromethane and n-hexane. The determination of PAHs and PCBs in the sediment was performed by considering 5 g of dry sediment extracted with acetone/n-hexane 1:1 v/v (10 mL), using an ultrasonic disruptor (Branson, US). The extract was concentrated to 1 mL under nitrogen flow (Multivap, LabTech, Italy). A total of 10 µL of a 1 mg/L solution of internal standard (a mixture of deuterated PAHs) was added to the extract and injected into a gas chromatograph-mass spectrometer (GC-MS) (MS-TQ8030, Shimadzu, Japan). The limits of detection (LOD) and quantification (LOQ) were calculated; the average values for the seawater samples were 0.02 µg/L and 0.05 µg/L for PCBs and 0.004 µg/L and 0.01 µg/L for PAHs, respectively. For sediment samples, LOD and LOQ values were 0.03 µg/kg and 0.1 µg/kg for PCBs and 0.16 µg/kg and 0.1 µg/kg for PAHs, respectively. Data quality was ensured by certified reference materials (ERM-CA100 (European Commission) for PAHs and QC1033 (Supelco) for PCBs). The recovery percentages were 70%-110% for PAHs and 65%-120% for PCBs [50,51].
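The text does not state how the LOD and LOQ values were derived; one common convention, assumed here purely for illustration (not necessarily the authors' method), estimates them from the calibration curve's residual standard deviation σ and slope S:

```python
def lod_loq(sigma: float, slope: float) -> tuple[float, float]:
    """Common ICH-style estimates (an assumption, not the authors' stated
    method): LOD = 3.3*sigma/S and LOQ = 10*sigma/S, with sigma the residual
    standard deviation of the calibration curve and S its slope."""
    return 3.3 * sigma / slope, 10.0 * sigma / slope

# Purely illustrative calibration statistics, not the authors' data.
lod, loq = lod_loq(sigma=0.006, slope=1.0)
print(f"LOD ~ {lod:.3f} µg/L, LOQ ~ {loq:.2f} µg/L")
```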
For the determination of PAHs and PCBs in sea urchin tissues (thecae, spines, gonads and guts), approximately 3 g of tissue were homogenized and placed in an automatic extractor, under reflux, at 80 °C for 2 h with a 2 M KOH solution in methanol. After extracting three times with 20 mL of cyclohexane, the extract was purified on sodium sulphate, dried in a rotary evaporator and recovered with 1 mL of a hexane/acetone mixture (1:1 v/v). The extract was analyzed by GC-MS. The limits of quantification (LOQ) were 0.4 µg/kg w.w. and 2 µg/kg w.w. for PAHs and PCBs, respectively. The average recoveries of PAHs and PCBs were >70% [52].
Sea Urchin Collection and Exposure, Gamete Collection for Morphological and Molecular Analysis by Real-Time qPCR
Methods for sea urchin collection (according to Italian laws: DPR 1639/68, 09/19/1980, confirmed on 01/10/2000) and the conditions of their exposure in the mesocosms are reported in Ruocco et al. [44]. Animals were fed Ulva rigida according to Ruocco et al. [53].
After two months of exposure, sea urchins were collected and their gametes were obtained. Fertilization, embryonic growth until the pluteus stage (48 hpf) and morphological observations were carried out according to Romano et al. [37]. In particular, the percentage of embryos still at the gastrula stage, as well as normal and malformed plutei, were determined 48 h post-fertilization by counting at least 200 embryos for each sample under light microscopy (Zeiss Axiovert 135TV). Pictures were taken using a Zeiss Axiocam connected to the microscope.
Gonadal indices (GIs) were initially evaluated on gonads from five adult sea urchins (t0) (representing the starting point), and evaluations were repeated on five specimens after two months of exposure to PAHs and PCBs as well as in control sediments (W + SED). Animals were weighed, sacrificed and dissected; gonads were weighed to determine the GI (where GI = gonadal wet weight (g)/sea urchin wet weight (g) × 100, according to [54]).
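The GI computation is a simple ratio; for clarity:

```python
def gonadal_index(gonad_wet_g: float, urchin_wet_g: float) -> float:
    """GI = gonadal wet weight (g) / sea urchin wet weight (g) x 100 [54]."""
    return gonad_wet_g / urchin_wet_g * 100.0

print(gonadal_index(2.4, 48.0))  # illustrative weights -> 5.0
```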
Collection of embryos at the pluteus stage (about 5000 sea urchin plutei) for real-time qPCR was performed according to Ruocco et al. [44]. After total RNA extraction was performed using the Aurum™ Total RNA Mini kit (BioRad), its amount and integrity were estimated according to Ruocco et al. [55]. The real-time qPCR experimental protocols reported in Ruocco et al. [56] and Ruocco et al. [57] were followed (Supplementary Figure S5 reports all the analyzed genes). In particular, about 1 µg of RNA was used for cDNA synthesis with the iScript™ cDNA Synthesis kit (Bio-Rad, Milan, Italy), following the manufacturer's instructions. The expression of each gene was analyzed and normalized against the housekeeping genes ubiquitin and 18S rRNA, using REST software (Relative Expression Software Tool, Weihenstephan, Germany) based on the Pfaffl method. Relative expression ratios greater than ±1.5 were considered significant. Each real-time qPCR plate was repeated at least twice.
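REST computes relative expression with the Pfaffl method; a minimal sketch of that calculation follows (the efficiencies and Ct values below are illustrative, not measured values):

```python
def pfaffl_ratio(e_target, ct_target_control, ct_target_treated,
                 e_ref, ct_ref_control, ct_ref_treated):
    """Relative expression ratio (Pfaffl method): target gene fold change
    normalized against a reference (housekeeping) gene."""
    target = e_target ** (ct_target_control - ct_target_treated)
    reference = e_ref ** (ct_ref_control - ct_ref_treated)
    return target / reference

# Illustrative Ct values; efficiencies of 2.0 assume perfect doubling per cycle.
ratio = pfaffl_ratio(2.0, 24.8, 23.2, 2.0, 18.1, 18.0)
print(f"relative expression ratio: {ratio:.2f}")  # ratios beyond ±1.5 counted as significant here
```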
De novo Transcriptome Assembly and Data Analysis
The sequencing was carried out by Genomix4Life S.r.l. (Baronissi, Salerno, Italy) using Illumina TruSeq mRNA stranded 2 × 150 chemistry on a NextSeq500. De novo transcriptome assembly and annotation of nine samples (samples 1-3: triplicates for the control condition, i.e., embryos at the pluteus stage deriving from adult sea urchins reared in the mesocosm in tanks with seawater plus sediment (W + SED) without contaminants, indicated as "Control"; samples 4-6: triplicates of embryos at the pluteus stage deriving from adult sea urchins exposed to PAHs, indicated as "Treated_1"; and samples 7-9: triplicates of embryos at the pluteus stage deriving from adult sea urchins exposed to PCBs, indicated as "Treated_2") were carried out to discover differentially expressed genes between the two treatments and to perform functional analysis (Supplementary Table S17).
RNA sequencing was performed in paired-end mode. Fastq files underwent quality control using the FastQC tool [1]. The tool Trinity (Trinity Release v2.10.0; [22]) was used to perform the transcriptome assembly. Expression analysis was performed with RSEM (version 1.1.21) using default parameters, and expression values were converted to FPKM (fragments per kilobase of exon per million fragments mapped; Roberts et al. 2011). DESeq2 [23] was used for normalization, and the differentially expressed genes across all samples were considered. OmicsBox (version 1.2.4) uses the Basic Local Alignment Search Tool (BLAST) to find sequences similar to the query set in FASTA format. The Gene Ontology (GO) terms were assigned based on annotation with an E-value of 1.0 × 10⁻⁵. The full dataset of raw data is deposited in the Sequence Read Archive (SRA database; available at https://www.ncbi.nlm.nih.gov/sra; accession number: SUB6701449; accessed on 15 February 2021).
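For reference, the FPKM conversion mentioned above reduces to a simple formula; a sketch with illustrative values:

```python
def fpkm(fragments: int, transcript_len_bp: int, total_mapped: int) -> float:
    """Fragments per kilobase of transcript per million mapped fragments."""
    return fragments * 1e9 / (transcript_len_bp * total_mapped)

# Illustrative: 500 fragments on a 2 kb transcript in a 20 M fragment library.
print(fpkm(500, 2000, 20_000_000))  # -> 12.5
```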
Statistical Analyses
Morphological data were reported as means ± standard deviations (SD). These data were checked with the Shapiro-Wilk normality test and the F-test. The statistical significance between groups was assessed by one-way ANOVA followed by the Holm-Sidak test for multiple comparisons (GraphPad Prism version 8 for Windows, GraphPad Software, La Jolla, California, USA, www.graphpad.com, accessed on 15 February 2021), indicating ** p < 0.01, *** p < 0.001. Statistical differences in GI values between t0 and after two months were evaluated by the Mann-Whitney U test (GraphPad Prism version 8, as above). p values > 0.05 were considered not significant.
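The same sequence of tests can be reproduced with SciPy; a sketch with placeholder data (the values are illustrative, not the study's measurements, and the Holm-Sidak correction of the post hoc pairwise comparisons is omitted):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder data: % malformed plutei per replicate (illustrative values only).
control = rng.normal(10, 2, 5)
treated_1 = rng.normal(45, 5, 5)  # PAH-derived embryos
treated_2 = rng.normal(30, 4, 5)  # PCB-derived embryos

# Normality per group (Shapiro-Wilk), then one-way ANOVA across groups.
for name, group in [("control", control), ("PAH", treated_1), ("PCB", treated_2)]:
    print(name, "Shapiro-Wilk p =", round(stats.shapiro(group).pvalue, 3))
print("one-way ANOVA p =", stats.f_oneway(control, treated_1, treated_2).pvalue)

# Non-parametric comparison of GI values, t0 vs. two months (Mann-Whitney U).
gi_t0 = rng.normal(5.0, 0.6, 5)
gi_2m = rng.normal(4.2, 0.5, 5)
print("Mann-Whitney U p =", stats.mannwhitneyu(gi_t0, gi_2m).pvalue)
```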
Conclusions
We investigated for the first time the subchronic effects on P. lividus of slight PAH and PCB contamination in mesocosms (sediment and water), considering a multi-endpoint approach. Generally, attention is focused on sediment hot spots (i.e., highly spiked sediment from industrial and commercial areas) with long-term historical pollution (i.e., black samples). We decided to refocus on the so-called "blank" samples with very low concentrations of PAHs and PCBs, below national and international threshold limit values (TLVPAHs = 900 µg/L, TLVPCBs = 8 µg/L, Legislative Italian Decree 173/2016). Our reconstructed spiked mesocosms always presented PAH and PCB levels below the respective limits of detection (LODs) for both sediment and water samples, meaning that the contaminants compartmentalized quickly (in less than one week) between sediment, water, biota, air and mesocosm surfaces. Nevertheless, significant biological effects were detected, ranging from bioaccumulation and embryotoxicity (PAHs) to the up- and down-regulation of genes (PAHs and PCBs). Variation of gene expression is directly translated, at the morphological level, into the malformations observed in the embryos, leading to the identification of genes responsible for those defects. Moreover, de novo assembly is a necessary step to assess differential gene expression and also provides an important resource for researchers working with this sea urchin species. The transcriptional changes detected in this study are correlative, and future functional studies will need to clearly establish whether these genes can be considered universal biomarkers involved in the response to contaminants in the marine environment.
Finally, the results evidenced that the combination of morphological and molecular approaches can efficiently support a deeper understanding of how marine species can react to the widespread background sediment contamination levels.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/ijms22136674/s1, Table S1: Chemical analyses of total PAHs and total PCBs in sediment, Table S2: Chemical analyses of total PAHs and total PCBs in seawater, Table S3: Chemical analyses of total PAHs, total PCBs and Zn in sediment, Table S4: Chemical analyses of total PAHs, total PCBs and Zn in seawater, Table S5: Chemical analyses of total PAHs, total PCBs and Zn in sediment, Table S6: Chemical analyses of total PAHs, total PCBs and Zn in seawater, Table S7: Chemical analyses of total PAHs, total PCBs and Zn in sediment collected at tf, Table S8: Chemical analyses of total PAHs, total PCBs and Zn in seawater collected at time zero (t1), Table S9: Adult sizes and gonadal index of adults of sea urchin P. lividus, Table S10: Quantity (µg/kg) of PCBs detected in thecae (including spines), gonads and gut from adult sea urchin P. lividus, Table S11: Number of differentially expressed (DE) genes and isoforms, Table S12: Common up-regulated genes in the Venn diagrams, Table S13: Common down-regulated genes in the Venn diagrams, Table S14: Data of expression levels in embryos at the pluteus stage, Table S15: Chemical analyses of heavy metals in sea water collected at time zero, Table S16: Chemical analyses of ammonia, nitrates, nitrites and phosphates on sea water, Table S17: Sample names, conditions and paired reads for each sample, Table S18: The number of reads obtained for the samples, Figure S1: Data of the sediment grain size analyzed with GradiStat software, Figure S2: Mortality index, Figure S3: BLASTx top-hit species distribution, Figure S4: Principal component analysis (PCA), Figure S5: Summary of the 62 genes analyzed by real-time qPCR, Figure S6: Schematic overview of P. lividus genes affected by artificial contaminated sediment with PAHs and PCBs under analysis, Figure S7: Schematic representation (frontal view) of 1 of the 12 experimental tanks of the mesocosms, Figure S8: Physical parameters (dissolved oxygen, pH, redox, temperature and salinity) of sea water, Figure S9: Physical parameters (dissolved oxygen, pH, redox, temperature and salinity) of sea water, Figure S10: Chemical parameters (ammonia, nitrates, nitrites and phosphates) of sea water, Figure S11: Chemical parameters (ammonia, nitrates, nitrites and phosphates) of sea water.
"Environmental Science",
"Biology"
] |
MyDiabetes—The Gamified Application for Diabetes Self-Management and Care
: Gamified applications are regarded as useful for patients in facilitating daily self-care management and the personalization of health monitoring. This paper reports the development of a gamified application by considering a design that had previously been investigated and reported. Numerous game elements were installed in the application, which covered several tasks aimed at managing diabetes mellitus. The development process utilized the Rapid Application Development (RAD) methodology in terms of system requirements, user design, construction, and cutover; this paper reports the construction and cutover processes. The developed application was tested through system testing and usability testing. The usability testing adopted the Software Usability Scale (SUS) to assess the usability of the application. Twenty participants were involved in the testing. The results showed that the gamified application is easy and practical to use for an individual with or without diabetes. All the provided functions worked as designed and planned, and the participants accepted their usability. Overall, this study offers a promising result that could lead to real-life implementation.
Introduction
Patients with long-term health conditions must adapt to new routines and lifestyles, mainly involving their daily activities and dietary intake. Recent technology has revolutionized such activities by creating incredible tools and resources, and putting useful information at our fingertips. Despite the growing prevalence of smartphones, health-focused digital learning using the gamification approach has only been sparsely implemented in daily life. This omission may hinder an individual's efforts to self-care and self-manage, particularly for those living with health conditions like diabetes mellitus.
Diabetes mellitus (DM) is one of the most common non-communicable diseases worldwide. It has become an important item on the agenda of healthcare providers. In recent years, researchers have focused on developing intervention tools that can foster self-care management. Unmanageable levels of blood glucose lead to health complications and a decline in quality of life.
DM is an endocrine disease characterized by a person's blood glucose level [1,2]. A significant increase above the normal level will lead to a diagnosis of type 1 or type 2 diabetes mellitus. Healthcare services should promote awareness of a healthy lifestyle and disseminate diabetic literacy and knowledge widely to prevent the condition from increasing. In Malaysia, DM has been increasing annually; a particular issue is the rise in Type 2 DM among Malaysian adults over 30 years old [2,3]. This increment is due to poor self-care management, minimal awareness of the disease, a lack of medication adherence, and dietary issues [2]. Individuals with DM require careful monitoring and care from the patients themselves, the primary care clinic, and the hospital. Currently, health applications allow a patient to log each healthcare task related to their condition. The application summarizes these logs into a simple visualization chart/graph. The chart provides the resultant trends, with any unusual trends alerting the patient, carers, and doctors if the system is connected to them. Moreover, the self-management of diabetes is enhanced through educational games, apps monitoring, and in-game motivational feedback [9,10]. Included in the self-management functions of most mobile apps on the market, as reviewed by Priesterroth et al. [10], are the functions of a diary for insulin doses, and logs of food intake, activity, weight, and blood pressure. The reviewed articles researched these functions utilizing fun learning games and gamification over the years. However, those articles only focus on a single diabetes issue, for example, problems of blood glucose [8,9], medication intake [7], and diet management [5,14].
Research by Klaassen et al. [9] and Lewis et al. [15] uses the PERGAMON framework, which utilizes wearable sensors, games, and gamification for a diabetes self-management application. The framework is designed as a gamification platform. It has many functions, such as a diary, mini-games, user profiles, and personalization. It uses a virtual coach, goals, and tasks as the main game elements, apart from points, badges, and levels. The game mainly involves empowering patients with the knowledge of controlling blood sugar levels through their dietary intake and carbohydrate counting tasks. Patients can also log any activities related to diabetes through the application, for example, exercise activities and daily water intake. Furthermore, the application allows the patients to create and customize their profiles. Nevertheless, these applications require external devices to be paired with the application.
In terms of the application of gamification for diabetes, Klaassen et al. [9] discussed the difficulty and complexity of diabetes games that are suitable for self-learning. They must be designed to be straightforward and simple. Moreover, the design of the games and applications must be aligned with the targeted users. Vassilakis et al. [16] further emphasized that in diabetes self-management, learning through digital applications or games is an alternative method used by healthcare personnel to support and motivate a patient to enhance their level of learning. In this view, giving feedback and guidance are essential, and ease of use should be considered in the design of applications. Other than using a gamified platform application, the literature reports that applications that are solely games have been utilized as tools to educate diabetes patients (such as [5,8,16]). Even though such applications have a specifically targeted diabetes solution, they are designed and built with multiple activities that facilitate the acquisition of the knowledge and skills related to diabetes. However, various aspects of diabetes knowledge and the scope of the related skills have yet to be discovered, and further research should be expanded to include diabetes-specific features. Vassilakis et al. [16] suggested that more games should be designed to educate patients in various aspects of diabetes, including the promotion of a healthy lifestyle, the need to avoid smoking, and encouragement to take up physical activities. Moreover, as reviewed by Priesterroth et al. [10], the gamified applications have frequently been implemented to promote self-management. However, gamification features are not systematically implemented in such applications.
Meanwhile, Brzan et al. [12], in their review of 65 mobile apps for diabetes self-management, argued that mobile app development should employ user-centered design, which specifically includes the individual patient's requirements in the application. The review also showed that only a small amount of content related to learning and self-management was embedded in mobile applications. The self-management indications used in [12] include monitoring blood glucose, nutrition intake, physical exercises, and body weight. However, the functionality should not be limited to those functions only. Other self-management features suggested in [12], such as reminders, notes, tracking functions, and personal messaging, should be designed and applied in the mobile application. One particular issue in mobile app implementation that should be considered is that the diabetes self-management features in mobile apps are limited [11,12]. Huang et al. [11] note that many diabetes mobile apps lacked medication management features and placed less emphasis on basic reminder features. Thus, designing the self-management components is crucial and needs to carefully consider the requirements of diabetes care and of the patients. Therefore, even though the available intervention tools utilizing gamification elements have been researched and invented, they represent different targets in improving a person's health condition. As self-management and care are the keys to health improvement, features related to these aspects must be designed accordingly. Thorough approaches to the design and implementation of gamification tailored to diabetes self-management requirements are needed to produce a more practical intervention tool.
Materials and Methods
This section focuses on the methodology and materials used in building the gamified application. A stepwise explanation of the processes involved in each phase is presented. Likewise, the materials employed are also described; those involved in this study are (1) the tools utilized in the development phase and (2) the gamified application itself. Moreover, this section outlines the gamification features and the implementation of the gamified application features.
RAD Methodology
The development of integrated playful games for diabetes mellitus follows the Rapid Application Development (RAD) methodology. RAD consists of 4 phases: requirement planning, user design, construction, and cutover. RAD is a method that focuses on system development through a prototype and reusable codes. Using the prototype, developers can obtain rapid responses from users during the development cycles. Meanwhile, the reusable codes from available open-source repositories enable the development period to be shortened. Moreover, this method accelerates the development period without impairing the quality of the application. Thus, RAD was chosen on the basis that developers and users work synchronously to create a functional product that follows the user's requirements. The RAD phases are presented in Figure 1.
Based on the RAD methodology in Figure 1, each phase is described as follows: (1) Requirement Planning phase: The requirements of a system are gathered and analyzed by the developer, after which a response from the users is received to confirm the requirement specifications. In this study, research previously conducted by the authors gathered the users' requirements through a user experience approach. The authors implemented mock-up applications and storyboards to present the design ideas. This process involved a group of users from whom the researchers obtained their requirements through a focus group discussion. The results from the discussion were translated into a low-fidelity prototype, for which confirmation was sought from the users. (2) User Design phase: The confirmed requirements from the previous phase are processed further by the designers or developers. Generally, this phase involves the iterative process of a detailed system design. Following the detailed design, a prototype is developed, tested, and refined based on the users' quick responses. In the context of this study, the authors created a high-fidelity prototype and obtained responses from the users to gain final confirmation of the design from the previous phase. Then, the approved design was transformed into a complete system in the construction phase. (3) Construction phase: The high-fidelity prototype from the previous phase is improved into a fully developed system. In this phase, the essential aspects (the functions, interfaces, and databases) are integrated and completed. In the context of this study, the authors developed and tested the gamified application. The developed application was created with full system functionality, interfaces, and the integration of complete databases. Details of the game development approach are presented in Section 3.3, and the testing method is outlined in Section 3.5. (4) Cutover phase: In this phase, the functions, interfaces, and databases are confirmed as a final system.
In the context of this study, the authors conducted system testing to certify that the system worked as designed and required by the users.
Following the RAD methodology, phases 1 and 2 were successfully conducted and published, and this paper reports the work related to phases 3 and 4. The system's construction is presented in Section 3.3, and the system testing in the cutover phase is presented in Section 3.5.
The Development Tool
The diabetes gamified application in this study was developed using the PHP framework. The developers used PHP to code the application, CSS for the interfaces, JSON and JavaScript for the gamification elements, and phpMyAdmin for the database. Hostinger.my was used as the server and hosting platform.
The Gamification Design and Development Approach
In developing the mini games, the game development approach by Hendrick [17] was followed, a process that includes prototype, pre-production, production, beta, and live stages. The prototype involves a process of translating the concepts into low-fidelity and high-fidelity designs. Pre-production involves the documentation of the game design. Production is the game development process, whereby the game assets, design, and code are constructed into a fully functional game. In beta, the game is tested to obtain feedback from the users. Once tested, the game is ready to go live. In this study, the process of prototyping was conducted in the user design phase (stage 2) of the RAD methodology. The mini games planned for this study lie in the game production, beta, and live processes, which were conducted in the construction phase (stage 3) of the RAD methodology. In the selection of the game elements and mechanics of the diabetes gamified application in this paper, two considerations were made.
First, the design is based on the self-management elements in the gamification for chronic illness framework in [13] and the application of fun elements in motivating a person to sustain their engagement with a health-based gamified application [4,10,13]. With this in mind, the implemented game elements were a logbook (record-based), data visualization (graphs), and alerts. A logbook is any recorded data that relates to given features, such as data concerning medication, appointments, or tasks. For each type of data, the rate of completion is visualized in the form of percentages, using a circle graph on the user's dashboard. Alert messages pop up to remind the user when any of the data targets is reached or if the given due date has passed. Meanwhile, the selected fun elements are missions, the progression bar, the avatar, and badges, as well as the challenges in the mini games. The element of missions in the gamified application allows users to set targets to improve their health condition. The achievement of the missions is visualized through the progression bar. Users will be intrigued to see their progress over time. When any mission is achieved through completing several tasks, a badge is awarded. This badge shows the specific achievement of the users, after which a different user status (novice, intermediate, advanced) is displayed on the user's account. This situation is anticipated to influence the users' engagement with and behavior toward better health self-management. The simulation model is illustrated in Figure 2; the simulation uses a machinations diagram. As shown in the simulation, the sources are the data logs from the users, in which the data are pooled according to their purpose and indicated by the progress mode. In the simulation, ten data pools were set to be achieved. Once completed, the data are pushed automatically (*) to another pool to indicate the data have been visualized on the application. A sketch of this pool-and-push logic is given below.
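As an illustration of the flow in Figure 2, a minimal sketch of the assumed pool-and-push logic follows (the target of ten entries comes from the description above; the pool names and everything else are hypothetical):

```python
# Assumed pool-and-push logic behind Figure 2: log entries accumulate in a
# per-purpose pool; when a pool reaches its target (ten entries), it "fires"
# and its contents move on to the visualization side of the dashboard.
POOL_TARGET = 10

pools = {"medication": [], "appointments": [], "tasks": []}
visualized = {name: [] for name in pools}

def log_entry(purpose: str, entry: str) -> None:
    pools[purpose].append(entry)
    if len(pools[purpose]) >= POOL_TARGET:
        # automatic push (*): the completed pool is flushed to visualization
        visualized[purpose].extend(pools[purpose])
        pools[purpose].clear()

for i in range(12):
    log_entry("medication", f"dose_{i}")
print(len(pools["medication"]), "pending;", len(visualized["medication"]), "visualized")
# -> 2 pending; 10 visualized
```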
Second, the gamified application should follow a particular design pattern. In the literature, there are few available guidelines or frameworks that researchers or developers can use to assess the creation of the gamification design. For example, there is the Mechanic, Dynamics and Aesthetics (MDA) framework [18], Octalysis gamification framework [19], game design guideline [20], and the software engineering of gamification [21]. Each of the frameworks provides a different focus of how a researcher should develop a certain gamified application. However, each of them has a set of rules for good practices in gamification development and implementation. Among them, this study follows the game design guideline by Gallego-Durán [20].
The guideline was chosen because it helps the researcher assess the gamified application design by analyzing the strengths and weaknesses of the application according to the given characteristics. The guideline has ten characteristics of game-design-based gamification. The rubric for each characteristic is rated 0 (low), 1 (medium), or 2 (high). With that in mind, the gamified application yields the following scores: (2) Open decision space. Users are in total control of the action taken in the gamified application (continuous space).
(1) Challenge. The mini games in the gamified application are composed of a series of levels with increased difficulty to challenge the user.
(2) Learning by trial and error. The mini games are instilled with features that enable users to keep trying to gain knowledge related to their condition by playing the games. The users can play and complete the games regardless of the points and lives lost in the game.
(2) Progress assessment. The gamified application assesses the user's self-management activities' progress through graph visualization on the user's dashboard. Users who are progressing well and having good achievement will receive a badge.
(1) Feedback. Users receive feedback from the gamified application in the form of messages and reminders for incomplete tasks. (1) Randomness. Some of the features are predictable. However, there are surprise movements in one of the mini games, in which enemies come out and users need to avoid them.
(1) Discovery. A completed mission will unlock a new badge, new avatar selection, and new mission (health task) to accomplish.
(1) Emotional entailment. The mini games have a simple story and related character to target user emotion in learning about their condition.
(1) Playfulness enabled. Playing with the mini games may invoke playfulness with limited room for playing outside the rules set in the system.
(2) Automation. Even though users need to feed their data manually into the application, the progression, mission, badge, and achievement are automated.
In total, the gamified application scores 14 points. The score shows that the gamification design can plausibly be considered acceptable, as each of the characteristics is present in the application. However, more features can still be added to the gamified application in the future.
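The total can be checked directly from the rubric values listed above:

```python
# The ten characteristics and the scores reported above (0 = low, 2 = high).
rubric = {
    "open decision space": 2, "challenge": 1, "learning by trial and error": 2,
    "progress assessment": 2, "feedback": 1, "randomness": 1, "discovery": 1,
    "emotional entailment": 1, "playfulness enabled": 1, "automation": 2,
}
print(sum(rubric.values()), "of", 2 * len(rubric))  # -> 14 of 20
```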
The Diabetes Gamified Application
The gamified application was designed by the developers (the authors) following the requirements collected in phases 1 and 2. The application interfaces were designed to be user-friendly. The application emphasized certain gamification features that were specifically designed for the users to take advantage of.
In the gamified diabetes application, several functions enable a person to manage their condition. The application requires a person to be registered. Once registered, they need to input and set the necessary information, such as their medication, appointments, tasks related to health targets, and other related treatment. Personal information and health-related data were also needed, for example, emergency contact details, physician details, allergies, and other co-morbid medical conditions. The application also implemented the concept of a personal dashboard, which was designed with the element of progression. This element shows the percentages completed monthly for each component. Visually displaying individual progress at a particular stage makes patients aware of their health status, particularly how well they are coping with their blood sugar control and current existing condition.
In the personal information feature, an element of badges and missions is included. A person receives a badge when he/she has completed or reached 100% on a particular component of the application. Meanwhile, the mission is another game element through which a person can track their health goals. This element of missions is also associated with the element of badges. For example, one individual health mission is to maintain the average HbA1c reading at the desired level over three consecutive months. From the recorded results, the application rewards the individual with a badge if he/she manages to achieve the mission. Another game element to be implemented in future designs is the element of points, which are received and collected from playing mini games. By playing a series of such games, a person can learn about their condition and obtain points, which can later be used to redeem rewards and items to customize their avatar. Figure 3 illustrates all the functions of the gamified application.
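As an illustration of the mission/badge rule described above, a minimal sketch follows; the HbA1c target value and the readings are hypothetical, and only the "three consecutive months" logic comes from the text:

```python
HBA1C_TARGET = 7.0  # %, an assumed target, not taken from the paper

def hba1c_mission_achieved(monthly_averages: list[float]) -> bool:
    """True if any three consecutive monthly averages stay within target."""
    return any(
        all(v <= HBA1C_TARGET for v in monthly_averages[i:i + 3])
        for i in range(len(monthly_averages) - 2)
    )

readings = [7.4, 6.9, 6.8, 6.7, 7.2]  # illustrative monthly averages
if hba1c_mission_achieved(readings):
    print("Badge awarded: HbA1c mission complete")
```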
Based on the illustrated functions in Figure 3, the gamified application has three main sections: the user profile, self-management functions, and mini games. Users can manage their basic information via their user profile and add an emergency contact number and medical information (health condition) (refer to Figure 4). Self-management functions are presented through the dashboard (refer to Figure 5a). Users must manage their medication, appointments, and tasks related to their condition (refer to Figure 5b-d). Following the user design phase, three mini games will be installed, consisting of a memory game, an action game, and a role-playing game. The memory game involves memorizing matching pictures about food intake, the essential tools for diabetes, and healthy activity (refer to Figure 6a). Meanwhile, in the action game, users play an adventure activity in a given environment in which they have to collect essential items for a person with diabetes (refer to Figure 6b). For the role-playing game, a rogue-like game will be installed.
Software Testing Method
In this study, white box testing was conducted for each function in the gamified application. The testing began with unit testing, followed by integration testing. Once each module had been completely developed, the developer generated automated PHP tests. By generating the tests, developers could ensure all units were programmed accordingly and, more importantly, that no errors occurred in the application. The program codes could be verified as practical during the testing, thus minimizing the usage of computer memory resources during operation (run time).
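The paper's automated tests were generated in PHP; purely as an illustration of the unit-testing idea (not the authors' code), an analogous check in Python's unittest might look like this, with completion_percentage a hypothetical dashboard helper:

```python
import unittest

def completion_percentage(done: int, total: int) -> float:
    """Hypothetical dashboard helper: percentage of completed monthly tasks."""
    if total <= 0:
        raise ValueError("total must be positive")
    return round(done / total * 100, 1)

class TestCompletionPercentage(unittest.TestCase):
    def test_typical_value(self):
        self.assertEqual(completion_percentage(7, 10), 70.0)

    def test_zero_total_rejected(self):
        with self.assertRaises(ValueError):
            completion_percentage(1, 0)

if __name__ == "__main__":
    unittest.main()
```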
Empirical Research Method
Apart from the automated software tests, preliminary user testing was also conducted to ensure the programs ran as designed and planned, and that all transactions were successfully made without error. This testing was deemed necessary before the researcher could conduct acceptance testing with the potential users (diabetics). In this testing context, errors were identified from users' misconceptions in determining the system flows. Feedback was also collected regarding the application interface and the way the system worked. For this purpose, the researcher utilized the established Software Usability Scale [22] to determine user perspectives from their use of the application. All the testing processes are illustrated in Figure 7.
Participants and Research Design
The study recruited 20 individuals for the preliminary user testing. These were randomly selected based on their level of familiarity with technology. They had to be familiar with online applications and have high internet usage in their day-to-day activities. This number of participants was considered plausible to enable the identification of a reasonable proportion of usability problems in heuristic evaluation [23]. Participation was voluntary, and no compensation was given for involvement. The participants were of mixed backgrounds and included persons with and without diabetes.
The testing was designed to be conducted by the users themselves. A call for participation was made via social media and the project website. Interested participants were randomly selected and formally emailed to gain their consent and provide them with participation details. Following the first email, a second email was sent to the participants giving detailed instructions about the testing. The instructions included the step-by-step process for conducting the test and the documentation needed for the testing. The testing was undertaken individually using online resources, and the testers had to refer to the given test cases and complete the testing within the allotted time.
Results and Discussion
Twenty individuals participated in the testing. The demographics of the participants are summarized in Table 1. Among the 20 participants, 16 were female and four were male. Their ages ranged between 30 and 40. Five of them had diabetes and 15 did not; however, their diabetes condition was controllable and not severe. These participants were also categorized as being familiar with technology and spent more than three hours per day doing online activities. The results of the testing are presented according to the testing activities conducted. There are two parts, the automated system testing and the preliminary user testing. The automated unit testing showed no significant errors. The application performance based on the server-side evaluation can be credibly interpreted as successful and suitable for the application environment. With ten simultaneous users, only 1% of the 512 MB server memory was utilized, with an average response time of 0.335 s. Based on the results, the application performance was manageable. Users could use the application widely with minimum delay, subject to their network and server performance. Table 2 shows the results of the application server-side performance. The system testing was conducted with 20 participants, and according to the test cases, all functions worked accordingly. The participants were able to follow the system's flow correctly. Thus, no system errors were found during the testing. Apart from conducting the test, these 20 participants also provided additional responses that reflected their opinions of the application. The responses were based on the given questionnaire adopted from the Software Usability Scale (SUS). The questionnaire used a Likert scale of 1 (Strongly Disagree) to 5 (Strongly Agree). There were ten questions, and the results of the mean and standard deviation (SD) of each question are shown in Table 3. The perceived usability of the gamified application was found to be highly reliable (10 items, α = 0.98). Based on the results of the responses, as shown in Table 3, the positive questions (Q1, Q3, Q5, Q7, Q9) received a mean value of 4.0 and above. Furthermore, the negative questions (Q2, Q4, Q6, Q8, Q10) received a mean value of 2.5 and below. Additionally, based on the SUS score interpretation, the total scores for each participant were multiplied by 2.5 to convert the scores into a 0-100 range. Scores above 68 were considered above average, indicating acceptable usability. Users rated the gamified application as very positive, with an average score of 76.87, above the average SUS score. The SUS scores recorded a median of 76.25, a minimum of 70, and a maximum of 85. Figure 8 shows the boxplot of the SUS scores for all users.
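The SUS arithmetic described above can be made concrete; a sketch of the standard scoring rule [22], with an illustrative response vector (not a participant's actual answers):

```python
def sus_score(responses: list[int]) -> float:
    """Standard SUS scoring [22]: odd-numbered (positive) items contribute
    (r - 1), even-numbered (negative) items contribute (5 - r); the sum of
    the ten contributions is scaled by 2.5 onto a 0-100 range."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum(r - 1 if i % 2 == 0 else 5 - r  # i = 0 corresponds to Q1
                for i, r in enumerate(responses))
    return total * 2.5

# Illustrative participant: agrees with positive items, disagrees with negative.
print(sus_score([4, 2, 4, 2, 5, 1, 4, 2, 4, 2]))  # -> 80.0
```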
The results in Table 3 and Figure 8 show that all participants agreed with the functions provided in the application, that the application was not complicated, and that learning to use the application was easy without the need of a technical support person. Thus, the users gained a reasonable level of confidence in using the gamified application. The users understood the process and were willing to use it further. Therefore, we assumed that the application was ready for actual user acceptance testing (tested by a diabetes patient). Although a direct comparison with the previous study in [10] may not be applicable, due to the different focus of the diabetes self-management implementation, the findings show that the gamified application implemented in this study is systematic, with consistent but not complex features. A comparison with the outcome of previous studies in [9,24] reveals a similar pattern in the need for a simple application whose functions are not overly difficult and whose design is relevant in its gamification elements and techniques. Hence, this finding has established the underpinning concepts of applying gamification for health self-management.
Nevertheless, certain limitations of this study were noted. First, the conducted testing was a self-regulated activity, which was conducted online due to the pandemic situation and movement restriction order. This resulted in a limited level of observation of user behavior during the activity. Moreover, the testing activity was conducted over a short period. Thus, future work should consider longer experimental periods and evaluate a person's improvement in their health condition when the application is used. Second, one mini-game (the role-play) still needs further improvement, as it received several comments during the testing. Most comments concerned the patient's avatar (role-play character). It was suggested that the avatar should reflect the level of a patient's condition in the gameplay and gradually change as the condition of the patient improves. This suggestion is in line with the avatar implementation in a previous study [5], in which changes to the avatar were shown to have a positive effect on the user's engagement in the gameplay. Meanwhile, other comments were directed toward the interface designs, which have been altered by the developers. Nevertheless, any comments and suggestions on the current functionalities could inform further improvement.
Conclusions and Future Work
The application of gamification for diabetes mellitus is gradually receiving attention as a tool and part of an individual's daily life activities. Providing an application that could help individuals learn more about their health condition indirectly teaches and encourages them to self-care and self-manage. Providing such an application also allows individuals with diabetes to adapt to their daily routine by themselves. However, individuals with little or no familiarity with using the Internet and technology find such applications challenging to use. This scenario could occur with older adults who are more accustomed to manual book records, nurse call reminders, and the physical diabetes awareness program. Nevertheless, personalized healthcare monitoring, such as the developed gamified application presented in this study, has been created for anyone who requires assistive tools in self-managing their diabetes condition.
This research reports the development and testing work related to the completion of a gamified application. The work was grounded in the RAD methodology, with the requirement and design phases having been completed. The developed application underwent preliminary user testing to assess the application's usability with the System Usability Scale (SUS), and the results were encouraging. The results from the usability study show that the gamified application is generally easy and practical to use whether the individual is living with or without diabetes. The users also indicated that they would like to use the application frequently. However, currently, there is no proof that the system could improve a person's health condition. This should be taken into consideration in future studies. Therefore, in future work, the researchers will conduct acceptance testing and assess the application's effectiveness for prospective users. A longitudinal study inspecting how a person could benefit from the gamified application, as well as how the application could affect the condition of a person's diabetes, will be further researched. The longitudinal study is considered necessary to measure any medical impact on a person when using a particular system application. Assessing the system's effectiveness requires time and a monitoring method, such as a diary, to acquire comprehensive results. In the interim, suggestions for application improvements will be put into action. Meanwhile, other application improvements, such as developing a mobile app version and adding more mini-games, will be considered for future work.
Generally, the developed gamified application in this study can be considered a possible future solution for modern healthcare services. The application is an open platform, which currently involves diabetes as the subject of interest. Applying other health conditions as subjects of a gamified application can also be further explored.
"Medicine",
"Computer Science"
] |
Neurophysiological evidence of sensory prediction errors driving speech sensorimotor adaptation
The human sensorimotor system has a remarkable ability to quickly and efficiently learn movements from sensory experience. A prominent example is sensorimotor adaptation, learning that characterizes the sensorimotor system's response to persistent sensory errors by adjusting future movements to compensate for those errors. Despite being essential for maintaining and fine-tuning motor control, the mechanisms underlying sensorimotor adaptation remain unclear. A component of sensorimotor adaptation is implicit (i.e., the learner is unaware of the learning process), which has been suggested to result from sensory prediction errors, the discrepancies between the predicted sensory consequences of motor commands and the actual sensory feedback. However, to date no direct neurophysiological evidence that sensory prediction errors drive adaptation has been demonstrated. Here, we examined prediction errors via magnetoencephalography (MEG) imaging of the auditory cortex (n = 34) during sensorimotor adaptation of speech to altered auditory feedback, an entirely implicit adaptation task. Specifically, we measured how speaking-induced suppression (SIS), a neural representation of auditory prediction errors, changed over the trials of the adaptation experiment. SIS refers to the suppression of the auditory cortical response to speech onset (in particular, the M100 response) during self-produced speech, compared to the response during passive listening to identical playback of that speech. SIS was reduced (reflecting larger prediction errors) during the early learning phase compared to the initial unaltered feedback phase. Furthermore, the reduction in SIS positively correlated with behavioral adaptation extents, suggesting that larger prediction errors were associated with more learning. In contrast, such a reduction in SIS was not found in a control experiment in which participants heard unaltered feedback and thus did not adapt. In addition, in some participants who reached a plateau in the late learning phase, SIS increased (reflecting smaller prediction errors), demonstrating that prediction errors were minimal when there was no further adaptation. Together, these findings provide the first neurophysiological evidence for the hypothesis that prediction errors drive human sensorimotor adaptation.
Introduction
The sensorimotor system shows a remarkable ability to quickly and efficiently learn movements based on sensory feedback. Soon after perceiving sensory errors that arise from movements, the system updates future movements to compensate for the errors, a phenomenon called sensorimotor adaptation. What drives such an elegant learning process? Previous studies suggested that adaptation can be driven by both task errors (i.e., discrepancy between the action and the goal) and sensory prediction errors (i.e., mismatches between the actual sensory consequences of a movement and those predicted from the motor commands driving that movement).

In the speech domain, however, multiple lines of evidence suggest that speech sensorimotor adaptation to altered auditory feedback is implicit (i.e., participants are unaware of the learning), which is thought to be driven mainly by sensory prediction errors (Mazzoni & Krakauer, 2006). In previous adaptation studies, participants showed no difference in the amount of learning in response to formant-perturbed auditory feedback when instructed to compensate, to ignore the feedback, or to avoid compensating (Keough et al., 2013; Munhall et al., 2009). Although behavioral studies have suggested that this unconscious minimizing of auditory prediction errors is the signal that drives speech sensorimotor adaptation, direct neurophysiological evidence of this process has not been demonstrated.

One neural representation of auditory prediction errors is speaking-induced suppression (SIS) of the auditory cortex. Studies have reported that the auditory responses to self-produced speech are smaller (i.e., suppressed) than the responses to playback of the same speech sound, consistent with the idea that auditory responses arise from auditory prediction errors, which are small in the self-produced case (i.e., auditory feedback is predictable) and large in the passively heard case (i.e., auditory feedback is unpredictable). Thus, SIS demonstrates that, during speaking, the auditory system predicts and anticipates the arrival of auditory feedback of speech onset, resulting in a suppressed feedback comparison response, as compared to auditory responses during passive listening to playback, when speech onset cannot be predicted or anticipated. Consistent with this idea, SIS was reduced when participants spoke with pitch-perturbed auditory feedback (e.g., Behroozmand & Larson, 2011; Chang et al., 2013) or voice-manipulated auditory feedback ("alien voice", e.g., Heinks-Maldonado et al., 2005, 2006; Houde et al., 2002). Importantly, this reduction in the suppression of auditory areas in response to perturbed auditory feedback is not unique to human speech, as it has also been observed in marmoset monkey vocal production (e.g., Eliades & Tsunada, 2018).

Previously, a reduction in a similar suppression effect (i.e., suppressed neural response in active movements compared to passive movements) has been found in the rhesus monkey cerebellum during sensorimotor adaptation (Brooks et al., 2015), but no such evidence has been documented in humans to date. One previous study that examined SIS during adaptation to first formant frequency shifts via electroencephalography (EEG) reported that SIS amplitude in the learning phase (i.e., during perturbed first formant) was not reduced compared to the pre-adaptation baseline (Sato & Shiller, 2018). However, this negative finding could result from masking of SIS changes across all 80 feedback perturbation trials, as opposed to changes that may have occurred in early trials (e.g., the initial 20 to 40 feedback perturbation trials) when most adaptation occurs (e.g., Kim & Max, 2021). Here, we used magnetoencephalography (MEG) imaging during repeated speech adaptation sessions to test the hypotheses that (1) SIS is reduced during early phases of speech sensorimotor adaptation, and (2) the early SIS reduction may be distinct from SIS changes found in later phases of adaptation.
Results
Participants lay supine on the scanner bed of a whole-head, 275-channel biomagnetometer system (MEG; Omega 2000, CTF, Coquitlam, BC, Canada) for a total of four sessions (first and second speaking sessions, first and second listening sessions). During the first two sessions, participants were asked to read "Ed," "end," or "ebb" (60 trial blocks for 3 different words = 180 total trials) as the words appeared on the screen. During these speaking sessions, participants heard their speech with the first formant frequency (Formant 1 or F1) shifted upward for some trials, which made their speech sound like "add," "and," and "abb," respectively. Specifically, after the first 20 trial blocks (i.e., baseline), which had no perturbation, the 150 Hz up-shift perturbation was present from trial block 21 to 50. We categorized the first 15 trial blocks of the perturbed trials (21-35) as the early learning phase and the second 15 trial blocks (36-50) as the late learning phase.

After the first session, participants were given a break lasting a few minutes that included conversations with the experimenter, which allowed additional exposure to their unaltered auditory feedback (Figure 1). We then asked participants to complete another speaking session. The rationale for this repeated session was that most adaptation occurs quickly, often in the first 10-30 trials of the perturbation phase, but such a low number of trials does not provide enough power for the evoked potential analyses. Thus, to ensure an adequate number of trials for the early and late learning phases, an additional session was recorded. After completing the two speaking sessions, participants were asked to listen to their speech recorded in the first two speaking sessions across the subsequent two sessions (i.e., listening sessions). During the listening sessions, participants saw the same stimuli (i.e., words) that they saw in the speaking sessions (see Methods for more details). We averaged the acoustic and MEG data across the repeated sessions. As shown in Figure 2, source localization of trial-averaged data for each condition (speak, listen) and phase (baseline, early learning, and late learning) was conducted to determine the peak activity (M100) location within the auditory cortex. We then computed the M100 amplitude differences between the listen and speak sessions to determine SIS for each condition and phase (see Methods for more details).
SIS reduction was positively correlated with early adaptation
Nearly all participants adapted in both speaking sessions (Fig. 3A), except for three participants who adapted in only one of the two sessions. Given that there was no evidence of savings (i.e., changes in the baseline or learning behavior from repeating the task, see Supplementary Information 1), these participants were included in the analyses. The SIS analyses revealed that there was no right hemisphere SIS (see Supplementary Information 2), which is known to be variable across tasks and individuals (see Discussion for more details). On the other hand, most participants showed a clear suppression of left auditory activity in the speaking condition (compared to the listening condition) during the baseline phase (Fig. 3B, left). Hence, SIS refers to suppression of left auditory activity hereafter unless specified otherwise.

We also found that the SIS response changed across the baseline, early, and late learning phases (Fig. 3B, middle and right), F(2, 44) = 4.788, p = 0.013. The post-hoc pairwise comparison test indicated that the SIS response in the early learning phase did not differ from the baseline (Fig. 3C, left), t(46.1) = 1.829, p_adj = 0.171. Interestingly, we found that the 14 participants who adapted more than 10 Hz significantly reduced SIS in the early learning phase, t(13) = -2.903, p_adj = 0.025, whereas the 8 participants who adapted less than 10 Hz showed no reduction in SIS, t(7) = -0.478, p_adj = 0.647. Indeed, the correlation coefficient between the amount of SIS reduction in the early learning phase and the amount of learning (in the early learning phase) across participants was significant, r(20) = 0.621, p = 0.002 (Fig. 3C, middle).
Further SIS reduction was positively correlated with (additional) late learning
The SIS amplitude in the late learning phase was also significantly reduced compared to the baseline (Fig. 3C, left), t(46.1) = 4.339, p_adj = 0.002. We found that the SIS reduction from the baseline was not significantly correlated, though trending, with the final amount of adaptation in the late learning phase, r(20) = -0.416, p = 0.054 (see Supplementary Information 3). This result was somewhat consistent with our hypothesis that most learning typically occurs in the early phase, and thus the late-phase SIS reduction from baseline would not be able to capture most of the adaptation extent. Rather, late SIS reduction that accounts for early SIS changes (i.e., additional late SIS reduction from early SIS) is likely a predictor of late (additional) learning behaviors. Indeed, we found that additional SIS reduction in the late learning phase (i.e., late SIS relative to the early SIS) was significantly correlated with additional late adaptation (i.e., late adaptation relative to early adaptation), r(20) = 0.644, p = 0.001.

Another related finding is that there were 9 participants whose late learning SIS was not reduced compared to early SIS, but rather increased in the late learning phase, t(8) = 5.539, p = 0.001. This SIS increase resulted in a near-complete SIS recovery to the baseline level (i.e., the late learning SIS response did not differ from the baseline SIS response), t(8) = -0.804, p = 0.445. Importantly, these participants also did not show a significant amount of additional learning in this late learning phase, t(8) = -1.425, p = 0.192, even though adaptation remained largely incomplete in these participants (i.e., they adapted 13.6% of the perturbation size). Consistent with this view, the 13 participants whose late learning SIS was reduced compared to early SIS (t(12) = 3.869, p = 0.002) showed continual adaptation (i.e., late learning significantly different from early learning), t(12) = -4.627, p = 0.001.

Taken together, the relationship between additional SIS reduction and adaptation in the late learning phase followed the same trend found in the early learning phase. That is, individuals who showed more reduction in SIS also tended to show more learning, suggesting that larger adaptation was associated with larger prediction errors. In contrast, less learning or no learning behavior (e.g., reaching a plateau) was associated with smaller prediction errors (i.e., increases in SIS). To ensure that SIS reduction was related to learning behaviors, we designed a control experiment in which there was no auditory perturbation (and thus no learning was expected).
SIS remained unchanged when there was no learning
Here, participants also completed two speaking and two listening sessions. Other than the absence of the perturbation, the experimental setup and the analysis methods were identical to the main experiment. We found that participants did not adapt (Fig. 4A) and SIS reduction also did not occur (i.e., SIS amplitudes did not change across the phases), F(2, 24) = 0.211, p = 0.812. Therefore, SIS remained unchanged when there was no learning.
Discussion
We used magnetoencephalography (MEG) imaging to examine auditory prediction errors during speech auditory-motor adaptation. Specifically, we measured speaking-induced suppression (SIS), the suppression of auditory responses to self-produced speech compared to the responses to passively heard speech, which is thought to represent auditory prediction errors.

To fully capture SIS changes in the early learning phase, during which most adaptation typically occurs, we analyzed the early learning and late learning phases separately.
Neurophysiological evidence that auditory prediction errors drive implicit adaptation
SIS was significantly reduced in the early learning phase, during which adaptation occurred. In contrast, in a control experiment in which there was no perturbation (and thus no adaptation), such a SIS reduction was not found. In addition, the amount of SIS reduction was positively correlated with the amount of adaptation, delineating a direct link between prediction errors (i.e., more SIS reduction) and adaptation. Furthermore, in the late learning phase, SIS increase, instead of SIS reduction, was associated with adaptation reaching an asymptote (i.e., absence of further learning). Hence, it is unlikely that SIS change arises simply because of adaptive behavior (i.e., lower F1 productions). Rather, SIS reduction likely reflects prediction error signals that lead to adaptation. In sum, our results suggest that auditory prediction errors drive speech auditory-motor adaptation.

Our findings are consistent with previous reports of speech adaptation being entirely implicit (e.g., Kim & Max, 2021; Lametti et al., 2020), which is thought to be driven by prediction errors (Haith & Krakauer, 2013; Mazzoni & Krakauer, 2006). In addition, speech adaptation also seems to be sensitive to auditory feedback delays (i.e., a 100 ms delay can eliminate adaptation), which highlights the importance of prediction errors that require a temporally precise comparison of the prediction and the actual feedback (Max & Maffett, 2015; Shiller et al., 2020).

More recently, a computational model, Feedback-Aware Control of Tasks in Speech (FACTS, Parrell et al., 2019), also generated simulations of adaptation driven by auditory prediction errors (K. S. Kim et al., 2023). Recently, Tang et al. (2023) showed that SIS was changed after exposure to auditory perturbation that manipulated participants' perceived variability. Given that this type of learning, during which participants change production variability (Tang et al., 2022; Wang & Max, 2022), likely involves prediction errors as in the current study, the results of Tang et al. (2023) are consistent with our own findings.

To date, only one other study examined SIS during speech auditory-motor adaptation, but it reported no SIS changes during adaptation (Sato & Shiller, 2018). Although this finding may seem contradictory to the current study at first glance, it should be noted that in the previous study SIS amplitudes across the whole learning phase (80 trials) were averaged and analyzed together, which likely included the SIS recovery response found in the current study's late learning phase. Hence, it is possible that SIS reduction was present in the early learning phase, but such an effect may have been weakened by the late perturbation data.

It should be noted that our findings do not necessarily reject the notion that task errors may also drive implicit speech adaptation. In upper limb visuomotor rotation, recent studies have demonstrated that task errors contribute to implicit adaptation (Albert et al., 2022; H. E. Kim et al., 2019; Leow et al., 2018, 2020; Miyamoto et al., 2020; Morehead & Xivry, 2021). Although it remains possible that other types of errors (in addition to prediction errors) may also influence speech adaptation, such evidence has not been documented (also see "What does SIS reflect?" below).

Broadly, our findings provide the first neurophysiological evidence that sensory prediction errors drive implicit adaptation in humans. A similar suppression effect has been previously documented in the cerebellum of the rhesus monkey during head movement adaptation (Brooks et al., 2015). In that study, cerebellar neuron activities, which are typically suppressed during voluntary movements compared to passive movements, much like SIS, did not differ between the two conditions (voluntary vs. passive) during adaptation. Remarkably, this reduced suppression also recovered (i.e., suppression increased) towards later learning trials, directly in line with our result. Here, we expanded the previous finding by demonstrating that the extent of such suppression reduction (or recovery) was closely associated with implicit adaptation across individuals.
Adaptation plateaus when prediction errors are minimal
Another interesting finding of the current study concerns a potential mechanism that causes adaptation to halt. In the past, several explanations for why adaptation is incomplete have been put forth, especially for speech adaptation, which often plateaus around 20-40% (see Kitchen et al., 2022 for detailed discussion). Some studies have demonstrated that speech adaptation accompanies changes in perceptual boundaries, which may contribute to incomplete adaptation (Lametti et al., 2014; Shiller et al., 2009), but perceptual auditory targets do not seem to change throughout adaptation (K. S. Kim & Max, 2021), and preventing perceptual target shifts by playing back the participants' baseline productions did not increase adaptation. Others argued that a conflict between unperturbed somatosensory feedback and perturbed auditory feedback may lead to limited adaptation, but this account also lacks supporting evidence. In fact, preliminary data from our laboratory show that even when somatosensory feedback is made unreliable by oral application of lidocaine, adaptation behavior does not increase, suggesting that somatosensory feedback may not be a reason for incomplete adaptation.

One idea consistent with previous studies in upper limb reaching adaptation is that the consistency of errors modulates error sensitivity, which results in limited adaptation (e.g., Albert et al., 2021). This idea has not been directly examined in the context of speech adaptation, but it is plausible that the overall size of prediction errors may be modulated by feedback (or perturbation) consistency. Some studies have found that individuals with high perceptual (auditory) acuity, measured by psychometric functions, had a larger extent of adaptation (e.g., Daliri & Dittman, 2019), which may suggest a potential link between error sensitivity and adaptation. However, several other studies failed to find such a relationship (e.g., Abur et al., 2018; Alemi et al., 2021; Feng et al., 2011; Lester-Smith et al., 2020). Another potential explanation is that adaptation is halted by prediction errors, which quickly decrease throughout adaptation because of both the motor output changes and sensory prediction updates, an idea put forth by a computational model, FACTS (K. S. Kim et al., 2023).

In these simulations, the adaptive motor output produced a lower F1 in response to the F1 upshift perturbation, causing the perturbed sensory feedback to become more like the baseline sensory feedback (i.e., lower perturbed feedback in F1). Interestingly, the simulations showed that the sensory prediction was also updated to predict the perturbed auditory feedback (i.e., higher prediction in F1). Thus, prediction errors, the difference between the lower perturbed feedback in F1 and the higher prediction in F1, became minimized throughout adaptation, eventually becoming small enough that they could no longer induce adaptation.

Empirical evidence for the idea that minimal prediction errors may result in halting adaptation can be found in head movement adaptation of rhesus monkeys (Brooks et al., 2015).

In that study, cerebellar neuron responses to voluntary head movement became more suppressed (compared to passive movement) as adaptation plateaued. Critically, the authors argued that the neural response becoming more suppressed (or less "sensitive") throughout learning demonstrates that the sensory prediction was being rapidly updated to predict the unexpected (perturbed) sensory feedback.

In the current study, late learning phase SIS increased (i.e., minimal prediction errors) in multiple participants who also showed plateaued adaptation in that phase (i.e., no additional learning), which is directly in line with the previous findings. Furthermore, the observation that adaptation plateaued even though adaptation was largely incomplete (i.e., 14.88% of the perturbation size) can be best explained by the idea that sensory forward model updates (i.e., prediction updates) may have occurred throughout adaptation, minimizing prediction errors. Thus, our findings corroborate the notion that incomplete adaptation may result from not only motor output changes but also sensory prediction updates, which together minimize prediction errors.
What does SIS reflect?
SIS is typically viewed as a measure that reflects prediction errors, given that SIS is reduced upon unexpected auditory feedback (e.g., pitch perturbation, alien voice, Heinks-Maldonado et al., 2005). This view is also shared by other studies examining suppression of motor-evoked auditory responses (i.e., a finger pressing a button to generate a tone), which is also reduced or absent for deviant (i.e., unpredicted) sounds (Aliu et al., 2009; Knolle et al., 2013). In contrast to this view, a previous study from our laboratory argued that the SIS response may instead reflect target errors, discrepancies between an intended auditory target and auditory feedback (Niziolek et al., 2013). In that study, Niziolek and colleagues found that the more the onset formants deviated from the median formants, the more SIS was reduced. Additionally, this reduction in SIS correlated with the amount of subsequent within-utterance formant change that reduced variance from the median as the utterance progressed ("centering"). Under the assumption that the median formants are close to the intended auditory target (i.e., an ideal production), it can be argued that SIS reflects target errors. However, our finding that SIS increased in 9 participants during the late learning phase cannot be easily explained by this account. Due to the SIS recovery, their late learning phase SIS response, which did not differ from their baseline SIS response, would be interpreted as minimal or no target errors according to the target error explanation of SIS. Nonetheless, these participants compensated for only 13.6% of the perturbation on average, presumably leaving a considerable discrepancy between any fixed auditory target and auditory feedback. While previous studies have reported perceptual boundaries shifting towards the direction of perturbation during adaptation, which may reduce target errors (Lametti et al., 2014; Shiller et al., 2009), it has also been suggested that auditory targets, as opposed to perceptual boundaries, do not change throughout adaptation (K. S. Kim & Max, 2021). In fact, a recent study has demonstrated that playing back the median production (i.e., the assumed auditory target) to participants throughout adaptation did not affect learning (LeBovidge et al., 2020), raising questions about whether auditory targets change during adaptation.

Alternatively, if SIS indeed reflects prediction errors rather than target errors, this view offers a different interpretation of Niziolek et al. (2013). According to this view, reduced SIS in productions with greater deviations from the median production may have resulted from large signal-dependent noise that stemmed from both the lower neural and muscular motor systems (Harris & Wolpert, 1998; Houde et al., 2014; Jones et al., 2002). Because such noise cannot be predicted by cortical areas, the observed auditory feedback would not match the auditory prediction, leading to large auditory prediction errors. Hence, it is plausible that the reduced SIS found in those productions reflects larger prediction errors. This view would also imply that centering (i.e., subsequent within-utterance formant change) minimized prediction errors, rather than target errors.
Neural correlates of auditory prediction errors
In the current study, we estimated auditory prediction errors from activities in the auditory cortex. Given that auditory areas receive corollary discharge from speech motor areas during speech production (Khalilian-Gourtani et al., 2022), it is possible that auditory prediction errors may be computed in the auditory cortex. However, a large body of evidence suggests that the cerebellum may be a neural substrate for forward models that generate sensory predictions (e.g., Blakemore et al., 1999, 2001; Imamizu & Kawato, 2012; Kawato et al., 2003; Pasalar et al., 2006; Shadmehr, 2020; Shadmehr & Krakauer, 2008; Skipper & Lametti, 2021; Therrien & Bastian, 2019; Wolpert et al., 1998). Studies have also documented evidence that the cerebellum may compute sensory prediction errors (e.g., Blakemore et al., 2001; Brooks et al., 2015; Cullen & Brooks, 2015). Alternatively, it has also been hypothesized that the cerebellum may work in concert with cortical areas to generate sensory prediction mechanisms and prediction errors (Blakemore & Sirigu, 2003; Haar & Donchin, 2020). In fact, the cerebellum is known to modulate activities in different cortical areas during active movements (e.g., the somatosensory cortex, Blakemore et al., 1999). Additionally, the cerebellum's projection to the posterior parietal cortex (Clower et al., 2001) has been implicated in generating sensory predictions (e.g., Della-Maggiore et al., 2004; Desmurget & Grafton, 2000; also see Blakemore & Sirigu, 2003 for a detailed review).

Is it possible that the cerebellum works in concert with the auditory cortex to compute auditory prediction errors? The cerebellum is certainly known for its involvement in auditory processing (e.g., Aitkin & Boyd, 1975, 1978; Ohyama et al., 2003), including speech perception (Ackermann et al., 2007; Mathiak et al., 2002; Schwartze & Kotz, 2016; Skipper & Lametti, 2021). It is also known that the cerebellum projects to the medial geniculate body (MGB), and the resulting inhibition and/or potentiation of MGB neurons may lead to rapid plasticity of the receptive fields of the primary auditory cortex, modulating auditory inputs (e.g., McLachlan & Wilson, 2017; Weinberger, 2011). Such rapid plasticity of the response fields may prepare the primary auditory cortex for discriminating different sounds (David et al., 2012), a function that may be involved in computing auditory prediction errors. Indeed, both right cerebellar areas and bilateral superior temporal cortex were found to be active during speech responses to unexpected auditory error (i.e., under the presence of auditory prediction errors, Tourville et al., 2008).

While some studies have suggested that there is no direct projection from the primary auditory area to the cerebellum in primates (e.g., Schmahmann & Pandya, 1991) and mice (e.g., Henschke & Pakan, 2020), others have reported auditory fibers projecting from the superior temporal gyrus and higher-order auditory regions to the cerebellum in primates (e.g., Brodal, 1979). In addition, it is also known that cortical auditory areas project to the cerebellar hemisphere through the cerebro-pontine pathways in some mammals, including humans (e.g., Glickstein, 1997; Pastor et al., 2008). Collectively, while exactly how neurons in auditory regions compute auditory prediction errors remains unclear, it is likely that they are estimated through several pathways incorporating multiple cortical and cerebellar areas.

It is also noteworthy that baseline SIS activities were found to be most pronounced in the left auditory cortex, in line with the notion that the left hemisphere is dominant in speech and language perception (Curio et al., 2000; Houde et al., 2002). In this study, we found SIS reduction in the left auditory cortex alone, in line with a previous study that found a prediction-related SIS effect only in the left hemisphere (Niziolek et al., 2013). One discrepancy between the current study and Niziolek et al. (2013) is that we did not find a significant SIS effect in the right hemisphere even during the baseline phase (see Supplementary Information 2). Given that right hemisphere SIS is known to be highly variable across tasks and individuals (K. X. Kim et al., 2023), the discrepancy may have been due to a sampling issue.
Limitations and future directions
The current study examined speaking-induced suppression during auditory-motor adaptation and found evidence consistent with the notion that sensory prediction errors drive implicit adaptation.
However, some limitations must also be noted. First, we excluded two participants who showed "following" behaviors, given that the scope of the paper was to examine SIS during adaptation (see Methods for more details). Nonetheless, a recent study points out that such behavior may represent the tail of a unimodal distribution across participants (Miller et al., 2023). In light of this new finding, future studies should examine adaptation-related SIS reduction in a large number of participants for higher statistical power, rather than excluding "following" responses.

Another limitation of the study is that we repeated adaptation sessions to acquire an adequate number of trials for the SIS analyses, which introduces noise in the data given that adaptation behaviors may differ in the two sessions (see Supplementary Information 1). With continual advancements in developing powerful source reconstruction algorithms, it may be possible to acquire enough trials from a single adaptation session in future SIS adaptation studies.

Additionally, previous studies in monkeys (Eliades & Wang, 2017) and humans (Chang et al., 2013) reported that different neurons in the auditory areas behave differently during speaking (or vocalization) vs. listening conditions. Future investigations of which types of these neurons in the auditory cortex are ultimately responsible for SIS reduction during adaptation will be critical for understanding how sensory prediction errors are generated in the central nervous system.
Lastly, future studies should examine these questions in more ecologically valid contexts given the recent finding of SIS (Kurteff et al., 2023) and adaptation (Lametti et al., 2018) in more naturalistic tasks such as sentences.
Participants
Participants who were native speakers of American English with no known communication, neurological, or psychological disorders were recruited for both MEG and MRI recording sessions. Thirty participants participated in the adaptation experiment, but 8 were excluded from analyses for various reasons. One participant's source could not be reliably localized, and four participants could not finish the task due to fatigue. Two participants showed "following" non-adaptive behavior (i.e., a change of 15 Hz or more in the direction of the perturbation in the late learning phase), and one participant had an atypical (outlier) SIS response in the baseline (SIS < -10 z).

Here, we report adaptation experiment results from the remaining 22 participants (mean age = 30.9 years, SD = 8.8; 12 females). For the control experiment, 12 participants (mean age = 31.6 years, SD = 8.0; 5 females) participated. It should be noted that 5 of these participants also participated in the adaptation experiment, with the visits 1-2 months apart. The order was pseudorandomized (i.e., three of the five participants did the adaptation experiment first). All participants in the study had pure-tone hearing thresholds of ≤ 20 dB HL for the octave frequencies between 500 and 4,000 Hz, except one participant in the adaptation experiment whose threshold was 30 dB in the right ear and 40 dB in the left ear at 4 kHz.
Adaptation experiment
During MEG data collection in the first two sessions, participants were asked to read "Ed," "end," or "ebb" (60 trial blocks for 3 different words = 180 total trials) as the words appeared on the screen. During these speaking sessions, participants heard their speech with the first formant frequency (Formant 1 or F1) shifted upward for some trials (trial blocks 21 to 50, see below), which made their speech sound more like "Add," "And," and "Ab," respectively. The auditory perturbation, a 150 Hz upshift, was applied through Feedback Utility for Speech Processing (FUSP, Kothare et al., 2020), and the total feedback latency (i.e., hardware + software, K. S. Kim et al., 2020) was estimated to be about 19 ms.

During the speaking sessions, the first 20 trial blocks (i.e., baseline) had no perturbation, while blocks 21 through 50 had a 150 Hz up-shift perturbation in the auditory feedback. We categorized the first 15 trial blocks of the perturbed trials (21-35) as the early learning phase and the second 15 trial blocks (36-50) as the late learning phase. In the passive listening condition, participants heard the same auditory feedback that they received during the speaking condition (including the perturbed sounds) through the earphones. With a mean interstimulus interval of 3 s and short breaks (roughly 20 seconds) every 30 utterances, the duration of each session was approximately 10-12 minutes.
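The block structure just described can be summarized in a short sketch. The phase boundaries and the 150 Hz shift follow the text; the function names are illustrative, and blocks 51-60, which are not described above, are labeled as a post phase only by assumption.

```python
# Illustrative reconstruction of the per-block perturbation schedule;
# not the authors' experiment code.
PERTURB_HZ = 150

def f1_shift(block):
    """F1 feedback shift (Hz) for a 1-based trial-block index."""
    return PERTURB_HZ if 21 <= block <= 50 else 0

def phase(block):
    """Experimental phase of a 1-based trial-block index (1-60)."""
    if block <= 20:
        return "baseline"   # blocks 1-20: unaltered feedback
    if block <= 35:
        return "early"      # blocks 21-35: first 15 perturbed blocks
    if block <= 50:
        return "late"       # blocks 36-50: second 15 perturbed blocks
    return "post"           # blocks 51-60: not described in the text (assumed)
```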
Participants performed two speaking sessions, after which they performed two listening sessions.
Between each session, participants were given a break lasting a few minutes that included conversations with the experimenter, which allowed additional exposure to their unaltered auditory feedback. Given that the adaptation task (i.e., speaking session) was repeated, we also checked whether there was any savings effect and found that there was no consistent effect of repeating adaptation (see Supplementary Information 1).
Control experiment
We also designed a control experiment in which we applied 0 Hz perturbation (instead of 150 Hz perturbation) during early and late "learning" phases.All other details of the task remained identical to the adaptation experiment.
MRI
On a separate day, participants also underwent an MRI scan, during which a high-resolution T1-weighted anatomical MRI was acquired for each participant for source reconstruction.
MEG acquisition
Participants were placed in a 275-channel, whole-head biomagnetometer system (Omega 2000, CTF, Coquitlam, BC, Canada; sampling rate 1200 Hz; acquisition filtering 0.001-300 Hz) for a total of four sessions (two speaking and two listening sessions). Participants heard auditory feedback (or recorded auditory feedback during the listening condition) through ER-3A ear-insert earphones (Etymotic Research, Inc., Elk Grove Village, IL), and a passive fiber-optic microphone (Phone-Or Ltd., Or-Yehuda, Israel) was placed about an inch in front of their mouths to record speech responses. All stimulus and response events were integrated in real time with the MEG time series via analog-to-digital input to the imaging acquisition software.

Each participant lay supine with their head supported inside the helmet along the center of the sensor array. Three localizer coils affixed to the nasion, left peri-auricular, and right peri-auricular points determined head positioning relative to the sensor array both before and after each block of trials. We ensured that participants' head movements were smaller than 5 mm in every session. Co-registration of MEG data to each individual's MRI image was performed using the CTF software suite (MISL Ltd., Coquitlam, BC, Canada; ctfmeg.com; version 5.2.1) by aligning the localizer coil locations to the corresponding fiducial points on the individual's MRI. MRI images were exported to Analyze format and spatially normalized to the standard T1 Montreal Neurological Institute (MNI) template via Statistical Parametric Mapping (SPM8, Wellcome Trust Centre for Neuroimaging, London, UK).
First formant frequency (F1)
The first formant frequency (F1) of each speech production was extracted using custom MATLAB software, Wave Viewer (Raharjo et al., 2021). We extracted F1 from the vowel midpoint (40% to 60% into the vowel) and averaged it for each utterance. In the case of missing trials, we replaced the data point by interpolating from the four nearest neighboring trials, as described in Kitchen et al. (2022). We replaced about 2.96% and 2.88% of the data for the adaptation and control experiments, respectively. We normalized the data by subtracting the baseline F1 from the data (i.e., baseline = 6th to 20th trial blocks). The amount of learning in each phase was assessed by averaging the last 5 trial blocks (31st to 35th blocks for early learning and 46th to 50th blocks for late learning).
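A rough sketch of this F1 post-processing pipeline is given below. The actual formant tracking was done in Wave Viewer (MATLAB); the Python functions and the array layout here are illustrative assumptions.

```python
# Sketch of the F1 post-processing described above; assumes one (possibly
# NaN) F1 value per trial block, in block order. Not the authors' code.
import numpy as np

def midpoint_f1(f1_track):
    """Average F1 over the vowel midpoint (40%-60% into the vowel)."""
    n = len(f1_track)
    return float(np.mean(f1_track[int(0.4 * n):int(0.6 * n)]))

def fill_missing(f1_by_block):
    """Replace missing blocks with the mean of the four nearest valid ones."""
    f1 = np.asarray(f1_by_block, dtype=float)
    valid = np.flatnonzero(~np.isnan(f1))
    for i in np.flatnonzero(np.isnan(f1)):
        nearest = valid[np.argsort(np.abs(valid - i))[:4]]
        f1[i] = f1[nearest].mean()
    return f1

def learning_scores(f1_by_block):
    """Baseline-normalize (blocks 6-20), then average the last 5 blocks of
    the early (31-35) and late (46-50) learning phases."""
    f1 = fill_missing(f1_by_block)
    f1 -= f1[5:20].mean()        # blocks 6-20 (0-based indices 5..19)
    return f1[30:35].mean(), f1[45:50].mean()   # blocks 31-35, 46-50
```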
Speaking-induced suppression
We first corrected distant magnetic field disturbances by calculating a synthetic third-order gradiometer, removed the DC offset across whole trials (detrending), and then filtered the sensor data (4th-order Butterworth, bandpass 4 to 40 Hz). In the sensor data of two participants (one of whom participated in both experiments, hence three datasets), considerable (>10 pT) sensor noise caused by dental artifacts, verified through visual inspection, was denoised using dual signal subspace projection (DSSP, Cai, Kang, et al., 2019; Cai, Xu, et al., 2019). After preprocessing the sensor data, separate datasets were created with trials from the baseline, early learning, and late learning phases for the speak and listen conditions. In these datasets, trials exceeding a 2 pT threshold at any timepoint were rejected. In two participants' data, three channels were removed prior to threshold-based artifact rejection. The data were then averaged across all remaining channels. For the adaptation experiment, 5.47% of the speak session trials and 4.70% of the listen session trials were removed. For the control experiment, 5.68% and 6.14% of the trials were removed for the speak and listen sessions, respectively.
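The filtering and threshold-based trial rejection steps lend themselves to a brief sketch; the gradiometer synthesis and DSSP denoising are toolbox-specific and omitted. The data shape and the use of a zero-phase filter are our assumptions, not details given in the text.

```python
# Sketch of band-pass filtering and 2 pT artifact rejection, assuming sensor
# data shaped (trials, channels, samples) at 1200 Hz; not the authors' code.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1200.0
B, A = butter(4, [4, 40], btype="bandpass", fs=FS)  # 4th-order, 4-40 Hz

def preprocess(trials, reject_threshold=2e-12):
    """Remove DC offset, band-pass filter, and drop trials exceeding 2 pT."""
    x = trials - trials.mean(axis=-1, keepdims=True)  # per-trial DC removal
    x = filtfilt(B, A, x, axis=-1)                    # zero-phase filtering
    keep = np.abs(x).max(axis=(1, 2)) < reject_threshold
    return x[keep]
```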
For each participant, a single-sphere head model was derived from the individual's co-registered T1 structural MRI using the CTF software suite (MISL Ltd., Coquitlam, BC, Canada; ctfmeg.com; version 5.2.1). Using the Champagne algorithm (Owen et al., 2012) and a lead field of 8 mm resolution on the baseline listen data, we generated whole-brain evoked activity between 75 ms and 150 ms (after the auditory feedback onset) and determined the MNI coordinate with the most pronounced M100 response (i.e., the highest amplitude) in the left and right auditory areas for each participant. Although we only report the results from the left auditory area in the main text, the results for the right hemisphere can be found in Supplementary Information 2.
The median MNI coordinates across the adaptation and control experiments were [x = -56, y = -24, z = 8] and [x = 48, y = -24, z = 8] for the left and right auditory areas, respectively. We then used a Bayesian adaptive beamformer (Cai et al., 2023) to extract time-series source activity focused on the obtained MNI coordinate across all phases (i.e., baseline, early, and late). From the final z-scored time-series data, we measured the M100 peak by finding the maximum value between 75 and 150 ms after the auditory signal. We then computed the M100 amplitude difference between the listen and speak sessions to determine SIS: SIS = M100_listen - M100_speak.
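The M100 and SIS computation just described reduces to a few lines; the sketch below assumes z-scored source waveforms and a time vector in seconds, both hypothetical names.

```python
# Sketch of the M100 peak and SIS computation described above.
import numpy as np

def m100_peak(source_z, times):
    """Maximum of the z-scored source waveform 75-150 ms after onset."""
    window = (times >= 0.075) & (times <= 0.150)
    return float(source_z[window].max())

def sis(listen_z, speak_z, times):
    """SIS = M100_listen - M100_speak; larger values = more suppression."""
    return m100_peak(listen_z, times) - m100_peak(speak_z, times)
```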
Statistical analysis
A linear mixed effects model was constructed for SIS with the adaptation phases as fixed effects and participants as a random effect, using the lme4 package in R (Bates et al., 2015).
The Tukey test was used for post-hoc pairwise comparisons via the emmeans package in R (Lenth, 2022). A Pearson's correlation was used to examine relationships between the amount of adaptation and the SIS amplitudes.
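The authors fit this model in R; for readers working in Python, a rough analogue of the mixed model and the correlation can be sketched with statsmodels and scipy. The data-frame columns are assumptions, not the authors' variable names, and the Tukey-corrected post-hocs from emmeans have no drop-in statsmodels equivalent, so they are omitted here.

```python
# Rough Python analogue of the R analysis (lme4: sis ~ phase + (1|participant)).
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import pearsonr

def fit_sis_model(df: pd.DataFrame):
    """df columns: 'sis' (float), 'phase' (baseline/early/late), 'participant'."""
    model = smf.mixedlm("sis ~ phase", df, groups=df["participant"]).fit()
    print(model.summary())
    return model

def sis_adaptation_correlation(per_subject: pd.DataFrame):
    """per_subject: one row per participant, columns 'sis_reduction' and
    'adaptation'; returns Pearson's r and its p-value."""
    return pearsonr(per_subject["sis_reduction"], per_subject["adaptation"])
```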
Figure 1. Participants were asked to read words during the first two sessions ("speak"). In these sessions, a 150 Hz upshift perturbation was present from trial block 21 to 50. We categorized the first 15 trial blocks of the perturbed trials (21-35) as the early learning phase and the second 15 trial blocks (36-50) as the late learning phase. After the first session, we asked participants to repeat another speaking session following a break lasting a few minutes.
Figure 2. A representative participant's source localization. NUTMEG (Hinkley et al., 2020) identified a few MNI coordinates that showed a clear M100 response, shown in the coronal (A), sagittal (B), and transverse (C) planes. The MNI coordinate of the voxel with the most power in the auditory areas in each hemisphere was selected for analyses. D: The same participant's selected left auditory area coordinate shown in a surface-based rendering (BrainNet Viewer, Xia et al., 2013).
Figure 3. A: The group average speech auditory-motor adaptation, in which participants lowered their first formant frequency (F1) in response to the 150 Hz upshift F1 perturbation (left), and each participant's early and late adaptation; some participants continued to adapt after the early phase, but others plateaued (right). B: The left auditory cortex responses (M100) in the listen and speak conditions demonstrate that the amount of speaking-induced suppression (i.e., listen (black) minus speak (orange)) is reduced during early learning (Early) compared to the baseline (Base). C: SIS was significantly reduced in the early and late learning phases compared to the baseline (left). The amount of SIS reduction in the early learning phase was significantly correlated with the amount of early adaptation (middle, r(20) = 0.621, p = 0.002). The amount of additional SIS reduction in the late learning phase was also significantly correlated with the additional amount of adaptation in that phase (right, r(20) = 0.644, p = 0.001).
Figure 4. A control experiment in which no auditory perturbation was applied. As expected, participants did not show any changes in Formant 1, exhibiting no learning (left). There was also no SIS change across the different phases (right).
"Linguistics",
"Psychology"
] |
Embodiment and sense-making in autism
Traditional functionalist approaches to autism consider cognition, communication, and perception separately, and can only provide piecemeal accounts of autism. Basing an integrative explanation on a single cause or common factor has proven difficult. Traditional theories are also disembodied and methodologically individualistic. In order to overcome these problems, I propose an enactive account of autism. For the enactive approach to cognition, embodiment, interaction, and personal experience are central to understanding mind and subjectivity. Enaction defines cognition as sense-making: the way cognitive agents meaningfully connect with their world, based on their needs and goals as self-organizing, self-maintaining, embodied agents. In the social realm, when we interactively coordinate our embodied sense-making, we participate in each other's sense-making. Thus, social understanding is defined as participatory sense-making. Applying the concepts of enaction to autism, I propose that 1) sensorimotor particularities in autism translate into a different sense-making and vice versa; autistic behaviors, e.g. restricted interests, will have sensorimotor correlates, as well as specific significance for autistic people in their context; and 2) socially, a reduced flexibility in interactional coordination can lead to difficulties in participatory sense-making. At the same time, however, seemingly irrelevant autistic behavior can be quite attuned to the interactive context. I illustrate this complexity in the case of echolalia. An enactive account of autism starts from the embodiment, experience, and social interactions of autistic people. Enaction brings together the cognitive, social, experiential, and affective aspects of autism in a coherent framework based on a complex, non-linear multi-causality. On this foundation, bridges can be built between autistic people and their often non-autistic context, and quality of life prospects can be improved.
INTRODUCTION
Autism is primarily seen as a combination of social, communicative, and cognitive deficits. However, there is growing awareness that autism is also characterized by different ways of perceiving and moving, as well as particular emotional-affective aspects. Evidence ranges from hypo- and hyper-sensitivities, through difficulties with the timing, coordination, and integration of movement and perception, painfulness of certain stimuli, muscle tone differences, rigid posture, and problems with movement, attention, and saliency, to differences in bodily coordination during social interactions.

If the social, communicative, and cognitive deficits were already difficult to pull together under one explanation, now, as the many and divergent aspects of what we may call autistic embodiment gain interest, an integrative explanation seems even further off.
In this paper, I explain why three of the main autism theories [theory of mind (ToM), weak central coherence (WCC), and executive function (EF)] are inherently piecemeal, and why this is a problem.
Then, I sketch a proposal to bring the many aspects of autism together, by doing justice to the experience of autism. The proposal is based on the enactive approach to cognition, which uses the notion of sense-making to define cognition as the meaningful way in which an agent connects with her world. It brings a dimension of personal significance right to the core of cognition. Sense-making is based in the inherent needs and goals that come with being a bodily, self-organizing, self-maintaining, precarious being with a singular perspective on the world. Sense-making plays out and happens through the embodiment and situatedness of the cognitive agent: her ways of moving and perceiving, her affect and emotions, and the context in which she finds herself, all determine the significance she gives to the world, and this significance in turn influences how she moves, perceives, emotes, and is situated.
The social side of this-important in cognition in general, and also for understanding autism-is captured by the notion of participatory sense-making, which describes how individual sense-making is affected by inter-individual coordination. If sense-making is a thoroughly embodied activity, and we can coordinate our movements, perceptions, and emotions in interactions with each other, then, in social situations, we can literally participate in each other's sense-making. This notion brings the dynamics of interactive encounters into the foreground and provides novel elements for the study of autism, such as the idea of the rhythmic capacity (discussed below). The notion connects ways of measuring coordination (using dynamical systems tools) with the investigation of the 1st and 2nd person experience of autism.
These are the central items that I apply to autism research in order to uncover the relation between what I call "autistic embodiment" and "autistic psychology." On the basis of empirical research, I show that autism is characterized by a different embodiment, and propose hypotheses based on the dimensions of significance that are inherent in sense-making. I suggest that the great attention to detail, preference for repetition and sameness, and restricted interests of people with autism may be inherently meaningful for them, and not just, as they have often been conceived, inappropriate behaviors to be treated away. In the social and communication realm, I suggest that social interaction difficulties are not to be considered exclusively as individually based, but that the patterns in the interaction processes that autistic people engage in play an important role in them. Evidence shows that people on the spectrum have difficulties with temporal coordination in social interactions, but also unexpected capacities in this area. I propose that people with autism are less flexible in dealing with the wide range of interactional styles that characterize social life, but that how they can deal with this depends not just on individual capacities, but also on the interactions they engage in. Different measurable aspects of the dynamics of interactions involving people with autism illustrate this. Finally, I discuss some implications for diagnosis, remediation, integration, and quality of life.
CURRENT UNDERSTANDING OF AUTISM

THREE THEORIES
Autism is most often seen as a combination of social, communicative, and cognitive deficits. The three explanatory theories addressing these aspects are ToM, WCC, and EF (Baron-Cohen, 2003; Frith, 2003).
ToM theory aims to explain non-autistic social cognition in functional/computational terms. Underlying it is the assumption that what other people think and feel is internal and hidden from us, and the only clue we have to go on is their perceptible behavior. From this, we supposedly infer their intentions, using dedicated neural and/or cognitive mechanisms. People with autism are thought to have more trouble than usual doing this, and to find it difficult to "read other people's minds," or to imagine what they are thinking or feeling. The suggestion is that autistic people lack or have a broken "ToM" (the purported neural or cognitive device that computes others' underlying intentions from their perceived behaviors), or have difficulties with "mindreading" or "mentalizing" (Baron-Cohen et al., 1985, 1986; Baron-Cohen, 1995; Goldman, 2012). This proposal underlies much of the traditional understanding of autism, and has been very fruitful in terms of research output. It has been around since the 1970s, and many studies today, not just of autism but of social cognition generally, are still built on its basis (see e.g., Sterck and Begeer, 2010), although more recent findings also suggest that people with autism tend to be better at mindreading than previously thought (see e.g., Begeer et al., 2010; Lombardo et al., 2010; Roeyers and Demurie, 2010; see also Happé, 1994).
WCC theory (Frith, 1989; Shah and Frith, 1993; Frith and Happé, 1994; Happé, 1994) suggests that people with autism focus on piecemeal information and have difficulty integrating what they perceive, as well as perceiving things in context. This difficulty is manifested at different levels, from perceiving whole objects to grasping the gist of a story. For example, research shows that people with autism find it difficult to read homographs correctly in context (Frith and Snowling, 1983; Happé, 1997; López and Leekam, 2003). Francesca Happé, Uta Frith, and others also call WCC a cognitive style (Happé, 1999; Frith, 2003). Neurotypicals 1 tend to prefer processing the overall meaning of a scene, while autistics focus on details. WCC research has generated interest in remarkable aspects of autistic perception, and has given attention to what can be seen not just as deficits, but as cognitive capacities and advantages (Happé, 1999; Frith, 2003; Happé and Frith, 2009; Mottron, 2011). Some of the more striking such capacities are doing jigsaw puzzles upside down or without the picture (Frith and Hermelin, 1969; Frith, 1989, 2003), or finding hidden figures, e.g., triangles, in a drawing of an object like a house or baby cot (Shah and Frith, 1983).
The EF theory proposes that people with autism lack control over their actions and attention, associated with activity in the frontal lobes. This would explain, for instance, problems with the inhibition of behavior, the strong need for routines and structure, narrow interests, repetitive and stereotypic movements and thought processes, and a need for sameness (Ozonoff et al., 1991; Russell et al., 1991; Russell, 1998). It predicts that people with autism have difficulties with, for instance, the Stroop test, which assesses inhibition, and the Tower of London test, which evaluates planning capacities (Robinson et al., 2009).
PROBLEMS WITH THE THEORIES
These theories are not without problems. For instance, Boucher (2012) argues that ToM is too focused on high-level capacities, while it is not clear what could underlie them. Also, while some people with autism do pass ToM tests, some people with other disabilities (and not autism) fail high-level ToM tests, raising the question of whether ToM deficits reliably pick out autism in particular (see e.g., Happé, 1994; Boucher, 2012). If language abilities and higher-order reasoning are closely intertwined with ToM (Sigman et al., 1995; for a discussion, see Malle, 2002), maybe autism is rather a problem with language and reasoning? Or could it even be that people on the spectrum, good as they seem to be at literal reasoning and the strict application of structures and rules, are in fact the ones who do use an explicit ToM? As Sigman et al. (1995) found, there may be a connection between high reasoning capacities and good scores on ToM tests in people with autism, because they can "calculate" ToM-like inferences and explanations of behavior. Despite this, such calculations seem to have a limited effect, since teaching people with autism the "rules" of social interaction and perception does not necessarily lead to greater social fluency (Ozonoff and Miller, 1995).
WCC has been criticized for being overly focused on a deficit at the level of contextual, global processing, while there is also research showing a preference for local processing, with global processing sometimes intact (see e.g., Plaisted et al., 1999; Mottron et al., 2000). The theory is also questioned on the basis of how central the drive for central coherence really is, i.e., whether a deficit in integrated information processing spans all levels of cognitive processing (López et al., 2008).
Regarding EF, as with ToM, it is not clear that the proposed deficit is specific to autism and not to other disorders, whether all people with autism have executive function deficits, or precisely how such deficits develop (Hill, 2004a,b).
Perhaps more important than the specific criticisms is the fact that none of the theories suffices on its own to explain autism as a whole. While ToM explains the so-called triad of symptoms (social, communicative, and cognitive deficits; Wing and Gould, 1979; American Psychiatric Association, 2000), WCC addresses the non-triad symptoms (narrow attention to detail, islets of ability, and context-insensitivity), and EF deals with the repetitive behaviors (Baron-Cohen, 2003; Frith, 2003). Frith argues that autism is such a complex phenomenon that it needs all these theories (Frith, 2003). She proposes to unify them by searching for the common denominator in the key symptoms of autism, which she suggests is an "absent self" or a lack of top-down control. Frith invokes the age-old idea of the homunculus to explain this. The homunculus, Latin for "little man," is a kind of controller in the brain, who views what comes in from the sense organs, interprets the situation using these signals, and then sends commands to the muscles and executive organs, so that the human can react appropriately. The idea has a troubled history in philosophy and psychology, and many reject it altogether (Bennett and Hacker, 2003). One of the reasons is that another "little man" inside the first one would be needed to control his brain states, and then another one inside that one, and so on, ad infinitum (see, for instance, Dreyfus, 1992). While recognizing this problem, but also that the idea of a homunculus is indeed taken for granted in much of neuroscience and psychology, Frith suggests that maybe there is an ultimate homunculus, one behind or inside of which there is no further one. She proposes that this final homunculus is self-awareness or the ultimate controller, and that this is what people with autism lack (Frith, 2003). How this might be possible is not explained. Major theoretical difficulties aside, the evidence supporting such an idea is anything but conclusive. And even then, it is not clear how this lack would explain all the aspects of autism (Frith, 2008).
DISEMBODIED AND ISOLATED
Taking a step back and looking at these theories from a distance, we notice in all of them two important under-considered elements. First, they show little concern for the embodiment and situatedness of the autistic person; second, even in the investigation of social deficits, interactive factors play no explanatory role. The theories are disembodied and methodologically individualistic.
The domination of functionalist explanations of autism, at least in the Anglo-Saxon research world, has left other significant aspects of autism all but ignored (or at best informally recognized but never making an impact on research, which amounts to the same). Lately, however, there is increasing interest in the different ways of moving, perceiving, and emoting of autistic people. There is more and more research on autistic perception, hypo- and hyper-sensitivities, movement, and emotional specificities (Gepner et al., 1995, 2001; Baranek, 2002; Gepner and Mestre, 2002a; Rogers and Ozonoff, 2005; Fournier et al., 2010; Whyatt and Craig, 2012; Donnellan et al., 2013; Smith and Sharp, in press).
The embodied turn in cognitive science urges us to take the role of the body in subjectivity and cognition seriously (see Brooks, 1991; Varela et al., 1991; Dreyfus, 1992; Lakoff and Johnson, 1999; O'Regan and Noë, 2001; Gallagher, 2005; Thompson, 2007; Gallagher and Zahavi, 2008, etc.). Embodied approaches agree that the body plays a crucial role in cognition and emotion. They vary, however, as to the roles for and notions of the body they propose. For extended functionalism, the body primarily simplifies cognitive information processing, "offloading" it from brain to muscles (Clark and Chalmers, 1998; Wheeler, 2010). For the sensorimotor approach, perceptual experience and cognition are grounded in the mastery of regularities in sensorimotor activity that depends on bodily structures and habits (O'Regan and Noë, 2001). For enaction, the body may play the above roles, but it is in addition an organic, precarious, self-sustaining system with needs, and this is why embodied creatures care about their world in the first place: they have their own perspective of significance, which is rooted in the body (Thompson, 2007; Di Paolo et al., 2010). It is this approach that forms the basis for the view on autism that I take here.
Furthermore, the trio of theories, while centrally concerned with autism's most striking difficulty (its social and communicative aspects), do not do justice to the possible roles played by social interaction processes (Gallagher, 2001, 2004a; McGeer, 2001). The study of social interaction processes has recently become prominent in social neuroscience, psychology, and developmental psychology (Reddy et al., 1997; Reddy and Morris, 2004; Sebanz et al., 2006; De Jaegher et al., 2010; Dumas et al., 2010; Schilbach, 2010; Di Paolo and De Jaegher, 2012; Pfeiffer et al., 2013; Schilbach et al., in press). Proponents of ToM will say that social interaction is of course central to their theory (Michael, 2011). But this is not so obvious. ToM is certainly concerned with social interaction, but only as an input to or an end-product of individual, brain-based, high-level cognitive processes, not as complex processes in their own right or in any of their relevant dynamic features. None of the mainstream theories provides an account of the role that interaction processes as such play in how autism manifests, develops, and affects the people on the spectrum as well as those around them.
What is an "interaction as such"? Let me illustrate it with some examples from everyday life. There is a way in which the interactions we engage in can take on a life of their own. This happens, for example, when we feel "in sync" with someone, or when two people speaking on the phone cannot seem to hang up, even if they both feel this is the end of the conversation (Torrance and Froese, 2011), or in cases of interactions that time and again manifest a certain atmosphere, e.g., of animosity, or of flirting-even if each participant clearly wants and even tries to avoid this dynamic (see also Granic, 2000). In these examples, the interaction process, in its extra-individual dimension, influences, modifies, and in part creates the intentions of those engaged in it (De Jaegher and Di Paolo, 2007;De Jaegher, 2009;Fuchs and De Jaegher, 2009;Gallagher, 2009;De Jaegher et al., 2010). Although this plays a great role in everyday life, and also in autism, none of it is accounted for or even considered by ToM, WCC, EF, or any combination of them.
I claim that an integrated theory of autism cannot ignore embodiment and social interaction processes. They are key elements of the enactive account I propose here.
LIMITS OF PIECEMEAL FUNCTIONALISM
There is another common ill that the three theories suffer from. Given their cognitivist and functionalist background, it is no surprise that they consider perception, action, and cognition as relatively separate domains that can be investigated practically in isolation (Frith, 2003). The overall approach is piecemeal, and the hope is that the insights and explanations will eventually be put together. How is another matter. In a way, this is a kind of "weak coherence" view of mind. Or, in the words of Baron-Cohen (though he does not apply this term to autism theories), it is a systemizing way of thinking, which he associates with male thinking and with autism (Baron-Cohen, 2002), and which is also associated with standard reductionist views of science (see e.g., Polanyi, 1958). Piecemeal approaches can generate partial knowledge, but they face a number of problems when putting the pieces together, especially when the various elements bear intricate relations to each other, as is the case in autism.
First we can ask: what precisely is the link between the different aspects of the "autistic mind"? In general, the aim is for a unified account based on a single causal mechanism or underlying deficit (Volkmar et al., 2004, 2005; though some argue against a single underlying deficit or theory). Functionalism's answers to the question of integration are limited to a linear strategy, in which everything is reduced either to a common root, often a neural function (e.g., Frith's ultimate homunculus), or to a common higher cognitive capacity (e.g., Frith's metaphor of the absent self). But seeking an integrative view of autism does not necessarily imply adopting a monocausal approach. It can also mean adopting a framework in which as many factors as possible cohere, even in the presence of multiple causal elements that relate non-linearly. The analytic, systemizing approach in much of cognitive science and autism research has delivered worthwhile insights, but something remains unclear, something that can only be grasped when we look at all the issues through a synthetic lens too. This something, I suggest, is central to what makes autistic people, and others, relate in meaningful ways with the world. We come back to it below.
We can also ask how the elements of autism are related in specifically developmental terms. The deficits proposed by ToM, WCC, and EF are relatively high-level, and several researchers have pointed out that something is likely to go wrong earlier in development, in so-called precursors to, for instance, a full-blown ToM mechanism (Hobson, 1991, 1993; Klin et al., 1992; Hendriks-Jansen, 1997; Gallagher, 2001, 2004a; McGeer, 2001; Hutto, 2003; Zahavi and Parnas, 2003). Often, within these theories, development is thought of as the straight temporal sequence between a set of precursors and their concomitant trait. But, as dynamical systems researchers argue, a genuinely developmental approach is one that accounts for change over time, i.e., one that "sees capacities and deficits as not just following each other, but following from each other" (Hendriks-Jansen, 1997, p. 383, emphasis in original; see also Fogel, 1993; Thelen and Smith, 1994; Lewis and Granic, 2000; Shanker and King, 2002; Shanker, 2004). On Frith's account of autism, all the problems are tethered to a common anchor, the ultimate self-awareness, which, however, "only gradually emerges in older children and adolescents" (Frith, 2003, p. 209). The fact that the proposed central traits or deficits of autism are relatively high-level makes it difficult to see the developmental trajectory from one symptom to another, let alone how they are meaningfully connected. One keeps wondering: why are the symptoms connected in this way? Another way to put this is that, even though research overwhelmingly focuses on children with autism 2 , its main explanations are adultist (Sheets-Johnstone, 1999a). That is, they posit adult capacities, or rather deficits in adult capacities, and then work their way down from there. In this way, it has been hard to imagine that sensory and motor difficulties could be basic to autism, because traditionally it has been hard to imagine the embodied aspects of social reasoning, integrative information processing, planning, or inhibition. The same point can be made about the developmental neglect of social interaction.
If, as I suggest, autism is characterized by differences in embodiment, the question is not just how we connect the higher-level psychological functions and traits, but how we connect all of this with the differences in perceiving, moving, and emoting. What are the binding factors between autistic embodiment and autistic psychology?
TOWARD AN ENACTIVE ACCOUNT OF AUTISM: EMBODIMENT, INTERACTION, AND EXPERIENCE
Certainly, the criticisms laid out here are all directed at the "dry" theories. This does not preclude scientists, researchers, practitioners, clinicians, teachers, people with autism, and those closest to them from recognizing, dealing with, and using in their daily practices the elements that I suggest these theories lack. In fact, these people often have a sophisticated, intuitive practical understanding of autistic embodiment, behavior, sociality, affect, and experience. However, as long as scientific theories do not describe or explain this know-how, these issues remain poorly understood, poorly connected with each other, and difficult to systematically link with practice. Most people who deal with autism in some way or another, whether as researchers, practitioners, or personally affected, mean the best, and do their utmost to make life as good as possible with the current knowledge available. But a lot of improvement is still possible and needed, as shown by the fact that even for some of the most integrative and dynamic intervention programs, it is still difficult to bring them to those who need them, or to say why they work (see e.g., Gutstein and Sheely, 2002; Greenspan and Wieder, 2006). Such integrative, holistic programs can use the help of a comprehensive, coherent theory to back them up and provide insight into why certain practices work 3 and, in turn, the practical know-how of these programs can illuminate and inform theoretical and empirical work.
In sum, I suggest that to understand autism we should avoid piecemeal functionalist pitfalls and their reductionistic demands, while taking stock of the insights that established theories have brought us. An approach that integrates the cognitive, social, communicative, embodied, interactive, experiential, and affective aspects of autism is possible. I propose that this account, based on a coherent and comprehensive view of embodiment, subjectivity, and mind, is enaction. In this paper, I can only sketch its potential for understanding autism, and I hope at least to establish that an integrative understanding of autism, one in which its various elements cohere, requires an account of the embodiment, social interaction processes, and experience of autism.
ENACTIVE COGNITIVE SCIENCE
This section provides a necessarily quick introduction to the central concepts of enactive theory. These concepts are applied to autism below, and I introduce them here with a view to this task. I build up the enactive story around two of its main concepts: sense-making, the enactive notion of cognition in general, and participatory sense-making, enactive social cognition. Along the way, important concepts to pick up are autonomy (as applied both to individuals and to social interaction processes), embodiment, experience, coordination, and the rhythm capacity.
SENSE-MAKING
Enactive cognitive science attempts to answer fundamental questions such as: what is an agent, what is autonomy, why do cognizers care about their world, why does anything mean something to someone? Enaction is a non-reductive naturalistic approach that proposes a deep continuity between the processes of living and those of cognition. It is a scientific program that explores several phases along this life-mind continuum, based on the mutually supporting concepts of autonomy, sense-making, embodiment, emergence, experience, and participatory sense-making (Thompson, 2005, 2007; De Jaegher and Di Paolo, 2007; Di Paolo et al., 2010).
The organizational properties of living organisms make them paradigmatic cases of cognizers (Varela, 1997; Thompson, 2007). One of these properties is the constitutive and interactive autonomy of living systems. This autonomy lies in the fact that they self-generate, self-organize, and self-distinguish. That is, living systems are networks of dynamical processes (metabolic, immune, neural, sensorimotor, etc.) that generate their own identity by self-sustaining and distinguishing themselves from their environment, while at the same time constantly exchanging matter and energy with the environment. An autonomous system is composed of several processes that actively generate and sustain an identity under precarious conditions. In short, living systems are constantly producing themselves physically and regulating their interactions with the world to satisfy the needs created by their precarious condition. Constitutive and interactive properties like these have been proposed to emerge at different levels of identity-generation apart from the metabolic level, including sensorimotor and neuro-dynamical forms of autonomy (Varela, 1979, 1997; Moreno and Etxeberria, 2005; De Jaegher and Di Paolo, 2007; Thompson, 2007; Di Paolo et al., 2010).
Enactive researchers propose that autonomy is also what makes living systems cognizers (Varela, 1997; Weber and Varela, 2002; Di Paolo, 2005; Thompson, 2007). This view rejects the traditional idea that cognizers passively respond to environmental stimuli or satisfy internal demands. Instead, the organism is a center of activity in the world, spontaneously generating its own goals as well as responding to the environment (McGann, 2007). Novel identities emerge, and the coupling between the emergent processes and their context constrains and modulates the operation at underlying levels (Thompson and Varela, 2001; Thompson, 2007; Di Paolo et al., 2010). Actions and their consequences constantly shape the underlying processes and modulate autonomy such that intentions, goals, norms, and significance in general change as a result. The significant world of the cognizer is therefore not pre-given but largely enacted, shaped as part of its autonomous activity.
Taking seriously a principle of emergence and mutual constraining between various levels (e.g., personal and subpersonal) makes the enactive approach very skeptical about the localization of function at one level in specific components at a lower level (exemplified in the idea of the homunculus that Frith would like to revive). It rejects "boxology" as a valid method to address the "how does it work" question (De Jaegher and Di Paolo, 2007; Di Paolo, 2009).
For the enactive approach, the body is more than just anatomical or physiological structures and sensorimotor strategies. It is a precarious network of various interrelated self-sustaining identities (organic, cognitive, social), each interacting with the world in terms of the consequences for its own viability. This makes cognition inherently embodied (Sheets-Johnstone, 1999b).
The same applies to experience, which is both methodologically and thematically central for enaction. Experience is not, as it is for cognitivism, an epiphenomenon or a puzzle. It is essentially intertwined with being alive and enacting a meaningful world. Therefore, experience also forms part of the enactive method. It is not just data to be explained, but becomes a guiding force in a dialogue between phenomenology and science, resulting in an ongoing pragmatic circulation and mutual illumination between the two (Varela, 1996, 1999; Gallagher, 1997; van Gelder, 1999).

All these ideas together help us to understand the enactive characterization of cognition as sense-making: a cognizer's adaptive regulation of its states and interactions with the world, with respect to the implications for the continuation of its own autonomous identity. In other words, sense-making is concerned acting and interacting, and the concern comes directly from the sense-maker's self-organization under precarious circumstances. Unlike functionalist approaches, enactivism provides an operational definition of cognition. An organism casts a web of significance on its world. It regulates its coupling with the environment because it maintains a self-sustaining identity or identities that initiate that very same regulation. This establishes a non-neutral perspective on the world. This perspective comes with its own normativity, which is the counterpart of the agent being a center of activity in the world (Varela, 1997; Weber and Varela, 2002; Di Paolo, 2005; Thompson, 2007; Di Paolo et al., 2010). Exchanges with the world are inherently significant for the cognizer. Thus, cognition or sense-making is the creation and appreciation of meaning in interaction with the world. Sense-making is a relational and affect-laden process grounded in biological organization (Jonas, 1966; Varela, 1991, 1997; Weber and Varela, 2002; Di Paolo, 2005; Thompson, 2007). This is why and how things matter to embodied cognizers.
PARTICIPATORY SENSE-MAKING
"Social cognition" understood in enactive terms is better captured by the notion of "intersubjectivity," which is the meaningful engagement between subjects (Reddy, 2008). Three aspects here are crucial: engagement, meaning, and subject. In the section above, I explained what enactive subjects are, in their inherently meaningful, cognitive-affective interactions with the world. Here, we focus on the encounters between such sense-making subjects.
In order to explain participatory sense-making for understanding autism, we need the concepts of (the autonomy of) the social interaction process, engagement, coordination dynamics, and social skills (De Jaegher and Di Paolo, 2007; Fuchs and De Jaegher, 2009; McGann and De Jaegher, 2009; Di Paolo and De Jaegher, 2012), all of which are operational, as I will explain now.
Social interactions are complex phenomena involving verbal and nonverbal behavior, varying contexts, numbers of participants, and forms of technological mediation. Interactions impose timing demands, involve reciprocal and joint activity, exhibit a mixture of discrete and continuous events at different timescales, and are often robust against external disruptions. Essential to interaction is that it involves engagement between agents. Engagement (Reddy and Morris, 2004; Reddy, 2008) captures the qualitative aspect of social interactions once they start to "take over" and acquire a momentum of their own. Experientially, engagement consists of the fluctuating feelings of connectedness with another person, including the feeling of being in the flow of an interaction, as well as its tensions.
We define social interaction on the basis of the autonomy of the interaction process and that of the individuals involved. Thus, a social interaction process is "a co-regulated coupling between at least two autonomous agents, where: (1) the co-regulation and the coupling mutually affect each other, constituting an autonomous self-sustaining organization in the domain of relational dynamics and (2) the autonomy of the agents involved is not destroyed (although its scope can be augmented or reduced)" (De Jaegher et al., 2010, pp. 442-443; also De Jaegher and Di Paolo, 2007, p. 493).
Each agent involved in such a coupling contributes to its co-regulation, but the interaction process also self-organizes and self-maintains. This means that it sometimes continues in a way that none of its participants intends. To illustrate this, think of encountering someone in a narrow corridor. Sometimes, as you meet, in order to avoid bumping into each other, you both step in front of each other a few times, each moving to the same side at the same time, when all you both wanted was to continue on your way. This is a very simple example of the interaction process becoming, for a brief while, autonomous. We defined autonomy above as a self-distinguishing network of processes that sustain themselves under precarious conditions (Varela, 1997; Di Paolo, 2005; Thompson, 2007). The individual participants as interactors are also autonomous (point 2). If one of them loses their autonomy, for the other it would be like interacting with an object or a tool (De Jaegher and Di Paolo, 2007).
This has a resonance for those with experience of autism. Sometimes a person with autism will take another person by the hand and direct her to something that is out of reach for him. This can feel strange and alienating. The feeling makes sense because, following the definition, this situation would not count as a social interaction, and there would only be a shallow kind of engagement, or none at all. One person determines the situation (or at least attempts to do so). To neurotypicals, this can feel both uneasy and uncanny, because they generally expect even rather instrumental interactions to have some element of mutuality. When this is absent, it is experienced as somehow wrong.

While they last, interactions self-organize and self-maintain through processes of coordination, including its breakdowns and repairs (De Jaegher and Di Paolo, 2007; Di Paolo and De Jaegher, 2012). Coordination is "the non-accidental correlation between the behaviors of two or more systems that are in sustained coupling, or have been coupled in the past, or have been coupled to another, common, system" (De Jaegher and Di Paolo, 2007, p. 490). Coordination is typically easily achieved by simple mechanical means and, when cognitive systems are involved, it does not necessarily require cognitively complicated skill. Coordination can happen at multiple timescales (Winfree, 2001). Temporal coordination is not the only kind; appropriately patterned behaviors, such as mirroring, anticipation, imitation, and so on, are all forms of coordination according to the definition given here. Coordination does not have to be absolute or permanent. There are degrees of coordination, and coupled systems may undergo changes in the level of coordination over time (Tronick and Cohn, 1989; Kelso, 1995; Oullier et al., 2008).
Analyses of social interactions and conversations show that participants unconsciously coordinate their movements and utterances (Condon, 1979; Scollon, 1981; Davis, 1982; Kendon, 1990; Grammer et al., 1998; Issartel et al., 2007; Richardson et al., 2007). For instance, listeners coordinate their movements, however small, with the changes in speed, direction, and intonation of the movements and utterances of the speaker (Bavelas et al., 2002). Studies of the way musicians play together also show this (see for instance Maduell and Wing, 2007; Moran, 2007). These findings suggest that interactors' perception-action loops are coupled and interlaced with each other (Marsh et al., 2006; Fuchs and De Jaegher, 2009). This includes processes of synchronization and resonance, in-phase or phase-delayed behavior, rhythmic co-variation of gestures, facial or vocal expression, etc. This complexity of interpersonal coordination is already present in early development (Condon and Sander, 1974; Tronick and Cohn, 1989; Malloch, 1999; Jaffe et al., 2001; Stern, 2002/1977; Trevarthen and Malloch, 2002; Malloch and Trevarthen, 2009).
We coordinate in different modalities (movements of different parts of our bodies, gestures, language, thoughts, etc.). We can distinguish a range of different kinds of coordination, such as pre-coordination, one-sided coordination, and bi-directional coordination (Fuchs and De Jaegher, 2009). Patterns of interpersonal coordination can directly influence the continuing disposition of the individuals involved to sustain or modify their encounter (De Jaegher and Di Paolo, 2007; Oullier et al., 2008). This is due to the fact that the interactors, generally, are highly plastic and susceptible to being affected by the history of coordination. When this double influence is in place (from the coordination onto the unfolding of the encounter, and from the dynamics of the encounter onto the likelihood to coordinate), we are dealing with a social interaction. This emerging level is sustained and identifiable as long as the processes described (or some external factor) do not terminate it.
With the concept of coordination and other dynamical systems tools, interaction dynamics can be measured (see e.g., Kelso, 2009a,b). Moreover, they can be related to neural activity. The field of second-person neuroscience is growing (Schilbach et al., in press), and the investigation of people interacting live has produced interesting results (e.g., Lindenberger et al., 2009; Dumas et al., 2010, 2012; Cui et al., 2012; Konvalinka and Roepstorff, 2012). This is a welcome development, and we have formulated enactive proposals of what taking the interaction process seriously means for understanding the brain mechanisms involved in social interactions (Di Paolo and De Jaegher, 2012).
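To make the idea of measuring interaction dynamics concrete, the following is a minimal sketch of one standard dynamical-systems measure of interpersonal coordination: the relative phase between two movement time series. It is illustrative only, not the pipeline of any study cited here; the signals, sampling rate, and lag are synthetic stand-ins for real motion-capture data.

```python
# Minimal sketch: relative phase and phase-locking between two movement
# signals, e.g., hand velocities of two interactors (synthetic data here).
import numpy as np
from scipy.signal import hilbert

fs = 100.0                           # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)         # 30 s of "interaction"

# Partner B roughly follows partner A with a small lag plus noise.
a = np.sin(2 * np.pi * 1.0 * t) + 0.2 * np.random.randn(t.size)
b = np.sin(2 * np.pi * 1.0 * t - 0.4) + 0.2 * np.random.randn(t.size)

# Instantaneous phase of each signal via the analytic (Hilbert) signal.
phase_a = np.unwrap(np.angle(hilbert(a)))
phase_b = np.unwrap(np.angle(hilbert(b)))
rel_phase = phase_a - phase_b        # relative phase over time

# Phase-locking value: 1 = perfect coordination, 0 = none.
plv = np.abs(np.mean(np.exp(1j * rel_phase)))
print(f"mean relative phase: {np.mean(rel_phase):+.2f} rad, PLV: {plv:.2f}")
```

A near-constant relative phase (high phase-locking value) indicates sustained coordination; a drifting relative phase indicates its absence, which is what makes such measures usable for comparing interactions.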
The consequence of these developments for social understanding (and here we come to the concept of participatory sense-making) is that, when we engage in interaction, not only the participants but also the interaction process as such modulates the sense-making that takes place. This means that intentions can be truly understood as generated and transformed interactionally. Sometimes it is impossible to say who is the "author" of an intention, whether it be an emotion, a thought, a belief, or something else. Interacting with each other thus opens up new domains of sense-making that we would not have on our own. There are, moreover, degrees of participation; we sometimes participate a lot (joint meaning-making) and sometimes minimally (one-sided coordination, where, for instance, we point out an object or an idea to someone).
With this view comes a particular approach to social skill (McGann and De Jaegher, 2009). Social skill is evidenced in interactive performance that cannot be conceived purely as an individual feat. Social skill is the flexibility to deal with the regularities (and irregularities) of the social domain provided by the actions of others. This flexibility, though partly determined individually, is also determined by the process of interaction. Moreover, social skills involve "acting through socially constructed norms and practices" (ibid. p. 430). These societal norms and practices are coordinated and negotiated in interaction with others, "rather than simply acted out without sensitivity to the actions of the other" (ibid. p. 431).
Specifically, as regards timing and coordination, one aspect of social skill is what we call the rhythm capacity (De Jaegher, 2006). This is the skill to flexibly switch between different interaction rhythms, or "a mastery of mutual coordination" (McGann and De Jaegher, 2009, p. 431). The notion of social skill can be applied to an individual interactor by considering his performance along a certain scale of interest across different interactions, and it can be tested, for example, by investigating the range of flexibility.
The rhythm capacity is only manifested in interaction processes. It is always also dependent on other interactors' behaviors and on the dynamics of the interaction processes. In contrast to an individual skill like typewriting, the rhythm capacity depends on a relation of mutuality and coherence with the social skill of the other interactors involved. It is impossible to test it in the absence of another person who also brings their own rhythm capacity, and of the interaction between them. The performance of the rhythm capacity is partly determined by the interaction process. It can be empirically measured in terms of the frequencies and timescales of recoveries from coordination breakdowns (e.g., infrequent breakdowns and/or fast recoveries would be indicative of a high rhythm capacity).
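One hypothetical way to operationalize this (not a published method, and with thresholds and window length chosen purely for illustration) is to compute a windowed phase-locking value over a relative-phase series like the one in the earlier sketch, count the runs of windows where it drops below a threshold (breakdowns), and take each run's duration as a recovery time:

```python
# Hypothetical quantification of the rhythm capacity: count coordination
# breakdowns in a relative-phase series and measure recovery times.
import numpy as np

def breakdowns_and_recoveries(rel_phase, fs, win_s=1.0, plv_thresh=0.7):
    """Windowed phase-locking value (PLV); a breakdown is a run of
    consecutive windows below plv_thresh, its recovery time the run's
    duration in seconds. Threshold and window are illustrative choices."""
    win = int(win_s * fs)
    plv = np.array([
        np.abs(np.mean(np.exp(1j * rel_phase[i:i + win])))
        for i in range(0, len(rel_phase) - win + 1, win)
    ])
    recoveries, run = [], 0
    for below in plv < plv_thresh:
        if below:
            run += 1                     # breakdown continues
        elif run:                        # coordination regained
            recoveries.append(run * win_s)
            run = 0
    return len(recoveries), recoveries   # an unfinished run is ignored

# Synthetic stand-in: 20 s of tight phase-locking with a ~3 s breakdown.
fs = 100.0
rel_phase = 0.1 * np.random.randn(int(20 * fs))         # locked phase
rel_phase[800:1100] = np.cumsum(np.random.randn(300))   # drifting phase
print(breakdowns_and_recoveries(rel_phase, fs))         # e.g., (1, [3.0])
```

On this kind of operationalization, frequent breakdowns with slow recoveries would suggest a lower rhythm capacity in that particular interaction, while always keeping in mind that the measure characterizes the dyad's dynamics, not one individual in isolation.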
In short, the argument for participatory sense-making is this: if, as indicated above, we make sense of the world by moving around in and with it (sense-making is thoroughly embodied), and we coordinate our movements with others when interacting with them, then we can coordinate our sense-making activities, affecting how we make sense not only of the world but also of others and of ourselves. That is, we literally participate in each other's sense-making. We generate and transform meaning together, in and through interacting.
SENSE-MAKING AND PARTICIPATORY SENSE-MAKING IN AUTISM
The enactive approach to autism considers the particular difficulties of sense-making and participatory sense-making in autism. Underlying sense-making in general are a person's organismic self-organization, embodiment, needs, skills, and situation. In section "Evidence for a Different Sense-Making in Autism," we delve into aspects of sense-making in autism, on the basis of evidence from studies of autistic perception, movement, and affect. Differences in these domains, I propose, correspond with a different enactment and understanding of the world. In section "Participatory Sense-Making in Autism," we discuss how this works in the social realm, where a further important factor is the interaction process, and take a look at participatory sense-making in autism. In each area, I propose novel hypotheses for further research.
EVIDENCE FOR A DIFFERENT SENSE-MAKING IN AUTISM
Here, I review evidence for what I call autistic embodiment, i.e., the particular ways in which the biological, neurophysiological, affective, and sensorimotor structures and skills of people with autism differ from those of non-autistics.
Current research investigates "autistic embodiment" as if it consisted of distinct parts. Perception is mostly studied separately from movement and affect, and different presupposed sub-aspects of each (e.g., feature detection, categorization, pattern recognition; movement planning and execution; expression and recognition of emotion) are investigated in isolation from each other (see e.g., Rinehart et al., 2001, 2006; Gowen and Hamilton, 2013). Questions that dominate research on sensorimotor aspects of autism are: which kind of processing is primary (e.g., "low-level" vs. "high-level"); what are the differences between autistic and non-autistic perception, movement, and sensation; are we dealing with underperformance or with superior performance; and is the connection between motor/perceptual particularities and the social/emotional aspects of autism one of correlation, precedence, causation, or amplification (e.g., Happé, 1999; Mottron et al., 2006; Papadopoulos et al., 2012). There is no agreement on whether people with autism are indeed "differently embodied," and if so, precisely how, but research on these matters is on the rise (Leary and Donnellan, 2012; Donnellan et al., 2013).
Often, the particularities of the ways in which people with autism behave are seen as disturbed or disruptive and consequently as "to be treated away." Two questions not generally asked in current research are: why do people with autism move and perceive in the way that they do, and what does this have to do with how they engage with and understand the world, others, and themselves? If we consider embodiment and sense-making as fundamentally interwoven, these questions are basic. When a person with autism moves, perceives, or emotes differently, this relates inextricably to how he understands the world. This fact is under-recognized in research that considers perceptual, motor, and affective behaviors in view of their role in the functional whole of cognition, instead of in relation to what matters to the person. We need to find out the precise link between sensorimotor-affective characteristics of autism and the way in which autistic people make sense of their world (Savarese, 2010;Robledo et al., 2012;Torres, 2012;Donnellan et al., 2013).
I propose that the notion of sense-making, integrating as it does perceptual meaning and affective value, is particularly well placed to interpret the wide-ranging evidence on the sensorimotor-affective aspects of autism. The concept of sense-making may also help integrate the evidence into a comprehensive, coherent framework that can generate further refined research hypotheses 4 .
Perception, movement, and affect in autism
Autistic perception was a legitimate area of study in the 1960s (see e.g., Rimland, 1964; O'Connor, 1965, 1970; Frith and Hermelin, 1969). In 1987, Frith and Baron-Cohen asserted that there were no low-level perceptual problems in autism, and that perceptual differences were due to cognitive processing deficits (Frith and Baron-Cohen, 1987). This is also a basic assumption of the WCC theory, which Frith proposed a few years later in recognition of those aspects of autism that could not be easily explained by ToM, like the islets of ability or the attention to detail (Frith, 1989). While WCC inspired a shift in research focus toward autistic perception, including investigations of so-called low-level perception (Happé, 1996), it considers perception as regulated, top-down, by cognitive processes, and thus treats these cognitive processes as central (Happé and Frith, 2006). Therefore, even if WCC put autistic perception on the research map, its focus remains on cognitive processing.
While sensory and perceptual differences are not considered centrally in the main explanatory theories of autism introduced above, they feature prominently in many autobiographical accounts (Williams, 1992; Grandin, 1995; Sacks, 1995; Gerland, 1996; Chamak et al., 2008; Robledo et al., 2012). Everyday sensations that non-autistics are generally not aware of, like the touch of the fabric of a new pair of trousers on the skin, can hurt people with autism. Some loud noises, especially sudden ones, may be unpleasant, while others are pleasurable. Autistic people may not notice other people talking to or touching them, thus being hypo-sensitive to particular events. There are no general patterns of hyper- or hypo-sensitivity; sensory responses vary greatly across the spectrum, and manifest toward both social and non-social stimuli (Baranek, 2002; Rogers and Ozonoff, 2005; Kern et al., 2006). Sensory sensitivity has been linked to problems with attention and attention-shifting (Liss et al., 2006). Attention-shifting has been found to be slower in autism than in the non-autistic population (Casey et al., 1993; Courchesne et al., 1994; Townsend et al., 1996), and Liss et al. (2006) hypothesize that hyper- and hypo-sensitivity are due to a decreased ability to modulate attention (see also Landry and Bryson, 2004). Hypo-sensitivity would therefore seem to be a kind of strategy to deal with overstimulation (Markram et al., 2007; Markram and Markram, 2010).
Research suggests that children with autism perceive visual motion differently. Gepner et al. (1995), for instance, found that children with autism have a weaker postural response to the perception of movement compared to non-autistic children, especially when the movement is very fast (Gepner and Mestre, 2002a). Gepner and Mestre (2002b) also propose that there is a "rapid visual motion integration deficit" in autism, manifesting, for instance, in rapid blinking or looking at things while moving the fingers rapidly in front of the eyes (see also Williams, 1992). Gepner and Mestre propose that the "world moves too fast" for children with autism, and that this is why they need to "slow it down" by exploring it in ways like those just mentioned. One of their experiments suggests that the rapid, rhythmic, involuntary eye-movements evoked by perceiving fast-moving objects (optokinetic nystagmus, which happens, for instance, when looking outside from a fast train) are weaker in autistic than in non-autistic children (Gepner and Massion, 2002; Gepner and Mestre, 2002b). Furthermore, people with autism find it easier to perceive emotion in moving displays of faces when the images are shown slowed down 5 (Gepner et al., 2001). Research suggests that autistic people have a higher threshold for perceiving motion coherence (Milne et al., 2002), direction of motion (Bertone et al., 2003), and biological motion (Blake et al., 2003; Klin et al., 2009). Gepner and Mestre (2002b) also propose possible underlying neurological mechanisms, mainly involving the cerebellum 6 . The research by Gepner and colleagues combines insights into autistic movement (e.g., postural reactions) with the perception of movement, and thus integrates some aspects of autistic embodiment that also fit together on an enactive logic.

Mari et al. (2003) suggest that movement problems should be considered basic to autism. They investigated the "reach-to-grasp movement" and found that children with autism had more difficulties in planning and execution than the non-autistic control group. Leary and Hill (1996), in their review article on movement disturbances in autism, also argue that movement difficulties should be seen as core to the condition and that they are at the basis of the social difficulties of the people affected. According to them, movement difficulties in autism include problems of movement function such as posture, muscle tone, non-goal-directed movements such as nervous tics and action-accompanying movements, and difficulties with voluntary movements, which implicate language and movement planning. Papadopoulos et al. (2011, 2012) and Bhat et al. (2011) provide recent supporting evidence.
There is no real agreement on the extent and kinds of sensorimotor disturbances in autism. Several kinds of impairments have been found, and a variety of causes indicated (Vilensky et al., 1981; Jones and Prior, 1985; Bauman, 1992; Hallet et al., 1993; Gepner et al., 1995; Haas et al., 1996; Rapin, 1997; Ghaziuddin and Butler, 1998; Teitelbaum et al., 1998, 2004; Turner, 1999; Brasic, 2000; Müller et al., 2001; Rinehart et al., 2001, 2006; Gepner and Mestre, 2002b; Schmitz et al., 2003; Martineau et al., 2004; Bhat et al., 2011; Dowd et al., 2012). In contrast to this, Minshew and her colleagues did not find low-level sensorimotor deficits in autism (Minshew et al., 1997, 1999). Fournier et al. (2010) recently reviewed the literature on motor coordination deficits, and conclude that they are "a cardinal feature of ASD" (p. 1227). Other research suggests that people with autism have difficulty combining tasks that require perceiving and moving in different modalities at the same time (Bonneh et al., 2008; Hill et al., 2012). Mottron et al. (2006; Mottron and Burack, 2001) propose that there is an enhanced perceptual functioning in autism.

5 This, on a cognitivist account, could be said to be because they have an explicit ToM approach to emotions, i.e., because they have to think about and infer what the emotions are. The argument would be that this is a slower process than emotion recognition in neurotypicals, and that this is the reason why it is easier like this for them. An enactive account would conjecture that they do not have the interactive experience, and that this is why, indeed, they may have to "figure out" the emotions, rather than relate to them via connection, interaction processes, "direct perception" (Gallagher, 2008), and participatory sense-making.

6 The role of the cerebellum is very relevant, and a possibly fruitful topic for future research, as it is implicated in movement and timing.
From embodiment to sense-making
Since embodiment and sense-making are intrinsically connected, the body partly determines how we interact with the world. "The world," moreover, is that of a specific agent, not that of an external observer. That is, in the way you relate to the world, you construct and pick up as relevant that which is meaningful to you, but not necessarily to someone else. Sensory hyper- and hypo-sensitivities and particular patterns of moving, emoting, and perceiving influence autistic sense-making, and vice versa. In general, the sensorimotor and affective aspects of autism can be seen as alternative ways of perceiving the world, or also as strategies to cope with it, for instance in order to slow down the world, or to avoid or modulate stimuli that switch quickly in rhythm and pattern.
Sense-making is a narrowing down of the complexity of the world. Non-autistic sense-making often ignores certain details and jumps to a particular significance (I'm thirsty, I want water, I get it, but hardly care about whether the glass is tall or short, transparent or opaque, etc.). People with autism often perceive more detail, but at the cost of not perceiving quickly enough what is more salient in a non-autistic context (for instance, when a person with autism grabs someone else's glass of water and drinks from it, not noticing whether this is appropriate in the social context; Vermeulen, 2001).
If autistic embodiment is intrinsically linked with autistic sense-making, we can hypothesize that many autistic people will find joy or significance in behaviors and embodied styles of sense-making that are considered "autistic." An often-ignored factor in perception is the aesthetic element. There may be a value to some autistic sense-making which is simply that of enjoying or remarking on patterns: patterns in space, in ideas, in numbers, in size, in time. Rich patterns exist everywhere in the world, and many autistic people value them, care about them, even enjoy them. This makes ignoring the pattern or the detail doubly difficult. Not only do people with autism not spontaneously perceive holistic meaning without prompt or necessity, but they may also feel that they will lose something salient if they (are made to) try to capture the gist of something.
The enactive approach conceives of the way people with autism perceive, make sense, move, and emote as intrinsically meaningful to them. In this, autistic people are no different from other people. An easy way to test this idea is to see whether persons with autism enjoy or suffer from that which they do and which seems strange to non-autistics. For instance, in relation to their compulsion for detail, we can ask whether people with autism are, in general, at ease with their disposition for piecemeal processing. Do they regret missing the holistic sense, or pity non-autistics for not enjoying detailed patterns? If the hypothesis is true, people with autism can be properly described as having a different conception of wholeness, one that has to do with order, patterns, exceptions, and perceptual richness. Anecdotal evidence for this idea comes from aesthetic appreciation, savant skills, and creativity in autistic people (see e.g., Sacks, 1985; Happé and Frith, 2009). Stronger evidence for WCC having a potential value or significance for people with autism is harder to find. WCC has been described positively as a cognitive style (Happé, 1999), and Happé and Frith (2009, p. 1348) suggest that there is a "rage to learn" and an intrinsic motivation in special talents, indicating that the special skills, as well as the learning of them and of certain information, can be interesting in their own right.
However, savant skills and high creativity are not representative of the whole autistic population (Hacking, 2009). Also, most of this research is concerned with how the processing style relates to other isolated aspects of the functioning of the person with autism, not with their personal significance or more general value. What enaction predicts goes beyond the cognitivist conception in which functioning and adaptation are considered as adequate fit to the non-autistic context. Enaction is concerned with functioning as valued and significant from the perspective of the person herself, in her context. Cognitive, perceptual, sensorimotor, and affective styles should in the first instance be approached from the point of view of the situated self-organizing sense-maker, not just that of an "objective" observer. What is such an observer objective about if he studies cognition but misses the meaning for the subject whose cognition he is studying?
An area in which there is evidence that people with autism derive pleasure from their specialized activities or thinking styles is that of restricted interests and repetitive behaviors. Circumscribed interests are highly frequent in autism, with 75-88% of the autistic population engaging in them (Klin et al., 2007; Spiker et al., 2012). In direct support of the enactive hypothesis, repetitive activities in autism, unlike obsessions and compulsions in obsessive-compulsive disorder, have been found to be "beloved activities apparently associated with great positive valence" (Klin et al., 2007, p. 97; see also Baron-Cohen, 1989; Klin et al., 1997). It has been found that circumscribed interests are highly motivating for children with autism, and that allowing them to engage in these behaviors can help them produce appropriate behaviors (Hung, 1978) and increase social interactions with non-autistic peers and with siblings (Baker et al., 1998; Baker, 2000). Lovaas also considers repetitive interests to be intrinsically motivating for the perceptual reinforcement and self-stimulation that they provide, even connecting this to the sensory joys of gourmet food, art, recreational drugs, and smoking (Lovaas et al., 1987).
In a qualitative interview study assessing how people with autism and their siblings and parents experience the restricted interests, Mercier et al. (2000) found that these interests "provide a sense of well-being, a positive way of occupying one's time, a source of personal validation, and an incentive for personal growth" (p. 406). The interviewees also recognized negative aspects of repetitive and circumscribed activities, such as their invasiveness, the amount of time they occupy, and (fear of) potentially socially unacceptable behaviors they may provoke. One of the participants sums up the tension between the positive and negative aspects as follows: "Basically, what others will tell me is that I monopolize time that could have been used for better things. But sometimes I can't think of better things to do when I have my free time" (p. 414).
In contrast to Mercier et al.'s subject-oriented approach, a recent study attempted to show a link between anxiety and restricted interests, based on the assumption that restricted interests are a (maladaptive) way of coping with distress (Spiker et al., 2012). The study found that particular kinds of restricted interests were associated with anxiety, while others were not. However, the kind that was associated with anxiety, viz. "symbolically enacted restricted interests," is not defined or even described in the paper. Moreover, the authors themselves say that it might be that "symbolically enacted RI [restricted interests] only appear coupled to anxiety in children with high functioning ASD because these problems have overlapping behavioral manifestations, such that RI-related behaviors may be misinterpreted as anxiety-related behaviors" (ibid. p. 316). Furthermore, unlike in Mercier et al.'s (2000) study, the nature and incidence of restricted interests were gathered from interviews with the parents, not with the children themselves, and all the children involved in the study had been diagnosed with an anxiety comorbidity (thus biasing the answer to the question of a relation between anxiety and restricted interests in the cases studied).
Restricted interests, focus on detail, and other autistic sensorimotor and affective particularities often interfere with everyday life, and this can make them difficult to deal with, both for the person with autism and for their social and familial environment. However, this does not imply that they could not in themselves be relevant, salient, or significant for the person with autism. It might be that these behaviors are disruptive as a consequence of their manifesting in a context that cannot or will not accommodate them. This is not to suggest that such behaviors should simply be accepted. Rather, it is to suggest that dealing with them should also start from the meaning they have for the person with autism, not just from the question of whether they are appropriate. The interviews conducted by Mercier and colleagues show that doing this can help find suitable ways to deal with the restricted, repetitive behaviors, even to the point of converting them into acceptable activities or extinguishing them (Mercier et al., 2000).
PARTICIPATORY SENSE-MAKING IN AUTISM
Participatory sense-making relies on the capacity to flexibly engage with your social partner from moment to moment, where this engagement involves emotion, knowledge, mood, physiology, background, concepts, language, norms, and, crucially, the dynamics of the interaction process with its coordinations and breakdowns. I have conjectured that a sensorimotor interactional coordination ability is at the basis of this connection. We have seen that sensorimotor differences imply a different sense-making in autism. Sensorimotor differences, especially those involving temporal aspects of perception and movement, will affect interaction and coordination in social encounters, and therefore introduce systematic differences in participatory sense-making. This is true the other way around as well: if social connection is basic to individual cognitive/emotional development (Hobson, 2002), embodiment and sense-making will be influenced by a history of interactive engagements. In the following, I paint an increasingly inter-individual picture of (social) sense-making in autism and its problems.
A differently salient social world
Different aspects of the social environment are relevant to people with autism than to non-autistics. Ami Klin suggests that autistic people experience the world, including and especially the social world, as differently salient (Klin et al., 2003). Using an eye-tracker, Klin and colleagues analyzed the way persons with autism scan film scenes in comparison with neurotypicals. Autistic people looked significantly less at socially salient aspects, like the eyes and mouths of protagonists or the object of a pointing gesture, than non-autistic controls (Klin et al., 2002). It also seems that children with autism do not spontaneously pay attention to social stimuli that are salient to typically developing children, such as human sounds and faces (Klin et al., 2003; Shic et al., 2011). Furthermore, they seem to prefer to attend to inanimate objects over other humans (Klin et al., 2003; Jones et al., 2008). Not only is the preference different; autistic people also seem less sensitive to biological motion, an aspect of the recognition of the motion of other humans (Blake et al., 2003).
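For readers unfamiliar with this kind of eye-tracking work, the following is a minimal, illustrative sketch (not Klin et al.'s actual pipeline) of the analysis behind such findings: the proportion of gaze samples falling inside socially salient areas of interest (AOIs). The AOI boxes and gaze samples are hypothetical placeholders.

```python
# Illustrative sketch: proportion of gaze samples inside each area of
# interest (AOI), e.g., the eyes and mouth regions of a face on screen.
from typing import Dict, List, Tuple

Box = Tuple[float, float, float, float]   # (x_min, y_min, x_max, y_max)

def aoi_proportions(gaze: List[Tuple[float, float]],
                    aois: Dict[str, Box]) -> Dict[str, float]:
    """Fraction of gaze samples inside each AOI."""
    counts = {name: 0 for name in aois}
    for x, y in gaze:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[name] += 1
    n = max(len(gaze), 1)
    return {name: c / n for name, c in counts.items()}

# Hypothetical example: screen coordinates in pixels.
aois = {"eyes": (400, 200, 600, 260), "mouth": (440, 320, 560, 370)}
gaze = [(450, 230), (500, 240), (510, 350), (100, 500)]  # fake samples
print(aoi_proportions(gaze, aois))  # -> {'eyes': 0.5, 'mouth': 0.25}
```

Group differences in such proportions are then what license claims like "autistic viewers looked less at the eyes and mouths of protagonists," while saying nothing yet about the interaction processes those viewers take part in.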
Even though Klin and his colleagues emphasize the anchoring of cognition in embodiment and the developmental process of acquiring social cognition, their work still has an individualistic flavor. They hit the nail on the head when they say that "the (nonautistic) child "enacts the social world," perceiving it selectively in terms of what is immediately essential for social action," but when they consider the work for this to be done by "perceptually guided actions" (Klin et al., 2003, p. 349), they fall short of the logical next step. They are rightly convinced that social interaction is the basis of social cognition, and they study social capacities from an embodied perspective. The next thing to put up for investigation is the interaction process.
Interpersonal engagement in autism
On the enactive account, crucial for social understanding is the capacity to connect. This capacity is relevant both during actual interactions and during non-interactive social situations where social understanding is more observational (Di Paolo and De Jaegher, 2012). If people on the autism spectrum have difficulty connecting, we need to study the social interaction processes they engage in (or fail to engage in).
Peter Hobson argues that, generally, "a conceptual grasp of the nature of 'minds' ... is acquired through an individual's experience of affectively patterned, intersubjectively co-ordinated relations with other people" (Hobson, 1993, pp. 4-5, emphasis in original). In other words, social cognition is based in "interpersonal engagement" (Hobson, 2002). With regard to autism, he makes the conjoined claims that what underlies the deficits of autism is a hampered "intersubjective engagement" with social partners from very early in life, and that these engagements are the foundation of flexible and creative thought. Therefore, a deficit in this area would at once explain the problems with social interaction and communication of individuals with autism and the particularities of their ways of thinking (especially literal and decontextualized thinking, well-known to anyone who regularly interacts with people with autism, see also Vermeulen, 2001).
Hobson probes autistic social interactions as they are experienced, to find out how they differ from neurotypical interactions. In this way, he investigates the qualities of relatedness and connectedness. In several imitation studies, he shows that even though children on the spectrum are able to copy actions, they generally do not copy the way an action is performed, for instance, whether it was performed harshly or gently (Hobson and Lee, 1999), or directed at the experimenter himself or the child (self-or other-directedness, Meyer and Hobson, 2005). For Hobson and his colleagues, these findings indicate that children with autism identify with others less than typically developing children do: "the autistic individuals were not so much abnormal in their attempts to imitate the actions modeled, but instead were abnormal in their attempts to imitate the person who modeled" (Hobson and Lee, 1999, p. 657, emphasis in original). What is missing is an imitation of the "expressive quality of another person's behavior" (ibid.).
Interestingly, Hobson also investigated the other side of this: what it is like to interact with someone with autism, in a study called "Hello and Goodbye" (Hobson and Lee, 1998). As the title says, this study analyzes the greetings and farewells of children with autism, compared with a control group of children with learning difficulties. The children were brought into a room to perform a task at a table with an experimenter (Hobson himself), who sat opposite them. The task was no more than a pretext for creating the opportunities for greetings and farewells. Upon entering the room, the children were introduced to Hobson by his colleague. The videotaped episodes of introduction, greeting and farewell were rated by independent judges naïve to the aim of the study, who counted the amount of smiling, nodding, waving and vocalizing of each participant. The hypothesis was supported: children with autism showed fewer greeting and farewell behaviors than the control group, and also combined them less. This is not so surprising given that this result bears out the diagnostic criteria for autism. However, the judges were also given a more subjective item to rate, namely how much interpersonal engagement there was between the participant and the experimenter. They judged that, in the interactions with the participants with autism, there was much less intersubjective engagement at the different stages of the interaction than in those with the non-autistic group.
In a description of this same study in his book The Cradle of Thought, Hobson relates something that is not reported in the paper: that, from the videotapes, one could have the impression that, regarding Hobson's own behavior as the interactor, "there was a deliberateness to my own gestures and actions [and that] I was less outgoing and more hesitant in my efforts to make contact, and my 'Goodbye' seemed forced. It was clear that I was doing my best to be relaxed and engaging, but I did a poor job when I did not have an engaging partner." He adds: "The lesson is: interpersonal engagement is just that-interpersonal" (Hobson, 2002, pp. 50-51). For a similar point, made through a study of sharing humor and laughter in autism, see Reddy et al. (2002).
The central issue here, which remains insufficiently investigated, is the interaction process as such. If there are sensorimotor and coordination differences in autism, and we take the embodied interaction process as defined in section "Participatory Sense-Making" as central to social understanding, then we can suspect that the interaction process will be hampered in autism. Is this the case?
Interaction rhythm and rhythmic capacity in autism
People with autism often seem awkward in the way they coordinate with others in interactions. Some studies suggest, however, that children with autism have more mastery of the basics of interactional capacity than previously thought. Dickerson et al. (2007), for instance, argue that persons with autism can place their interventions at temporally appropriate moments in social encounters. They investigated interactions between two autistic children and their tutors during question-and-answer sessions involving answer cards, in which both children tapped the answer cards, a seemingly meaningless action. However, Dickerson and colleagues found that the tapping was placed temporally just after the tutor asked the question and before the child started answering, continuing sometimes into the answer of the child. This suggests, first, that the tapping displayed engagement, an engagement that could also have been shown through eye contact, something known to be difficult for people with autism (American Psychiatric Association, 2000). And second, it suggests that the tapping indicated that the child was about to answer the question, i.e., the tapping was "projecting a relevant forthcoming response on the part of the child" (Dickerson et al., 2007, p. 297). Similar findings were made in relation to gaze (Dickerson et al., 2005). Interesting in this research is that the actions of all interaction partners are being investigated, including those of the non-autistic participants. This makes it possible to query the experience (cf. Hobson above), as well as the perceived appropriateness of the behavior. The tutors in the tapping study, for instance, took the behavior as interactionally relevant and appropriate (Dickerson et al., 2007).
Other research suggests that people with autism have timing differences. In a study in which participants were asked to tap in synchrony with an auditory stimulus, Sheridan and McAuley (1997) found that the autistic participants' tapping was more variable than that of the non-autistic group (see also Isenhower et al., 2012, for a similar result in an intra-individual bi-manual drumming task). Trevarthen and Daniel (2005) report on interactional timing and rhythmic difficulties in autism in a study of the interactions between a father and his twin daughters, one of whom was later diagnosed with autism (see also St. Clair et al., 2007). With this twin, the father was unable to engage in rhythmic interaction. This is reminiscent of Hobson's Hello and Goodbye study, which also showed that an interaction partner is less able to engage with a partner who is less rhythmically able. Again, it becomes apparent that social capacity is interactional and not just individual.
Another set of investigations centers around the contingency detection hypothesis (Watson, 1979; Gergely and Watson, 1999; Nadel et al., 1999). Gergely (2001) hypothesized that, in normal development, there is a transition from an expectancy of perfect contingency to one of less than perfect contingency. Before they are 3 months old, Gergely conjectures, infants expect to perceive effects of their actions that immediately follow those actions. These are found mostly in their own actions (what Piaget (1936) calls "circular reactions"). Around 3 months, infants start to search for "high-but-imperfect" contingency, which is found in games with other people and in effects of the infant's actions on the environment. With this shift the infant supposedly starts to engage in interactions with the social world. With regard to autism, Gergely reckons that this shift does not take place, or not fully. As a result, the child with autism would continue to seek perfect contingency throughout life. There is no direct evidence for this theory yet, even though it is an interesting hypothesis. Jacqueline Nadel, who has also worked on contingency detection in children both with and without autism, found that children with autism do not spontaneously detect and expect social contingency, although they can learn to do it after an experimental phase in which the adult experimenter has imitated them (see Nadel et al., 2000; Field et al., 2001).
While there is a general and rather vague idea that people with autism are "awkward" in their interactions, until we investigate those interactions, we do not know what this means or entails. If interactional timing is awkward, and one or both partners do not have the flexibility to adapt to the other's timing, the rhythmic capacities (see above) will be of a low quality, and this will result in interactional problems. Although further research is needed, the evidence points to various problems with interaction timing in autism, but also to unexpected capabilities. From an enactive perspective, both of these will impact on the dynamics of social interaction, specifically on the quality of coordination, the frequency of coordination breakdowns, the ability to repair them, and the experience of the interactors with and without autism, supporting Hobson's observations. Interactions involving people with autism do not fully lack flexibility, but its scope is reduced due to motor and timing differences. This can be both the cause and the symptom of difficulties with connecting. Findings like the ones reported allow us to keep searching for and refining hypotheses about what precisely characterizes "autistic interactions." One way in which to test the rhythmic capacity and other interactional capacities of and with people with autism is to study how often breakdowns occur, as well as how easily they are recovered from. Dynamical measures of coordination can be used to construct an index of how quickly the pair achieves coordination again after breakdown (see e.g., Kelso, 1995, 2009a; van Orden et al., 2003, 2005; Riley et al., 2011). Immediate or fast recovery would indicate a high rhythmic capacity, and slow, absent, "jumpy," or unclear recoveries would indicate a lower or narrower rhythmic capacity, i.e., little interactional flexibility overall. The prediction is that interactions of people with autism show a marked reduction in rhythmic capacity compared to those of non-autistics. Recently, Marsh and colleagues tested this in a study of unconscious rocking (in rocking chairs) between children with and without autism and their parents, finding that children with autism had a lower tendency to rock in symmetrical timing with their parents (Marsh et al., 2013; see also Schmidt and O'Brien, 1997). A similar difference is expected between interactions of people with autism who do or do not have an interaction history with each other (i.e., whether they have interacted before, and how much). The case of interactions between people with autism who have an interaction history is especially interesting, because it brings several predictions together. We predict both that people who have interacted before will have a smoother rhythmic capacity, and that people with autism will have a more reduced rhythmic capacity. If these two elements come together, i.e., in an interaction between two autistic people with a long interaction history between them, this will have its own specific rhythmic characteristics. So far, we have discussed interactional capacities, but what about participatory sense-making?
What is participatory sense-making like in autism?
Penny Stribling and her colleagues have studied the behavior and speech of autistic children in an interactional context, using conversation analysis. One of their studies evaluates instances of echolalia produced by a boy with autism in a single session of play with a robot (Stribling et al., 2005/2006). Echolalia is the repetition of utterances (one's own or another's), and is often considered meaningless and uncommunicative, and the general advice is to ignore it. However, Stribling demonstrates that the repeated utterances of the boy had an interactional function. He repeated a phrase that seemed communicationally irrelevant because of its literal content, yelling 'spelling assertions' such as "please has got an A in it!" By taking a panoramic view of the situation, i.e., by studying the utterance in its interactive context, as well as its prosodic characteristics, Stribling et al. found that the boy's supposedly irrelevant utterances were in fact a protest at losing control over the robot, and an attempt to regain it. They suggest this because, first, all the instances of the echoed utterance that they recorded happened when another person was starting to play with the robot, and second, the way the utterances were made had strong prosodic similarities to how a protest generally sounds (rising loudness and emphasis). Further to their explanation, we can add that the utterance could also have an intrinsic meaning. From the enactive point of view, in which a cognizer self-maintains and self-organizes, it can be proposed that the boy is self-affirming his place in an interaction in which he feels that something is taken away from him, by uttering knowledge that he has. These utterances could be a way of maintaining individual autonomy in an interactional situation. This possibility can be further researched using the notions of self-organization and individual and interactional autonomy as conceptual tools for deepening the understanding of phenomena like echolalia.
Difficulties with coordinating and interacting in autism will lead to hampered participatory sense-making because, as we have seen, participatory sense-making is the inter-individual coordination of embodied and situated sense-making. As regards the new domains of sense-making that are generated in interaction, it is clear that, if there are such difficulties in autistic interaction as I have just described, the range of orientations, from one-sided (or instructive) coordination of a person in their individual cognitive domain to closely coupled mutual orientation of sense-making, will be difficult to achieve. Additionally, because of the experience of negative affect that results from more frequent coordination breakdowns, social interaction may be less often sought by people with autism, resulting in fewer opportunities to engage in participatory sense-making.
One of Hobson's proposals is that flexible thinking develops from affective interpersonal engagement, and that, in autism, hampered interpersonal relating throughout development leads to the cognitive problems of autism, which are characterized by inflexibility of thinking, lack of creativity, and literal and decontextualized understanding (Vermeulen, 2001; Hobson, 2002). Similarly, if, as proposed by Reddy (2003), complex self-conscious emotions develop out of infants' early interactive experiences (in particular the awareness of being the object of another's attention), then a history of non-fluid interactions must impact on the development and understanding of social emotions, such as embarrassment, pride, and shame.
On the present proposal, if the developmental trajectory of participatory sense-making is hindered in specific ways, among others in the area of interactional coordination, this will reinforce a lack of flexibility in thinking and in dealing with self-conscious emotions. In order to specify in detail why this is the case, the present work needs to be extended with a developmental strand. For now, we can conclude that, if there is less flexibility in social interactional timing and coordination, the creation of new domains of sense-making that rely on participation by others is impeded. It is likely that flexibility in both of these areas is strongly related, especially if there is such a strong developmental interaction between them. Further research is needed to find out the precise relationship between interactional flexibility and flexibility in thinking and emoting.
Some implications for intervention and diagnosis
Underlying the interactional difficulties of people with autism we could find neurological and/or sensorimotor differences, but such individual differences do not suffice to explain where specific autistic ways of making sense of the world come from. Social understanding is a constitutive aspect of cognition in general, and it is at its basis truly inter-individual (even the personal skills that permit remote observational social understanding, I propose, are dependent on interactive skills and experiences, see Di Paolo and De Jaegher, 2012). Therefore, interventions for autism (with respect to social difficulties, cognition, affect, and sensorimotor capacities) need to pay special attention to interactional coordination, rhythmic capacity and participatory sense-making (this is the basis of, for instance, music therapy, and dance and body movement interventions; Wigram and Gold, 2006; Samaritter and Payne, 2013). This is the context that affords the best interpretation of neurological and other individual factors.
Putting things in the appropriate rhythmic and interactive context is not a novelty for many parents, caregivers, teachers, and friends who successfully motivate, adapt to, and engage autistic partners. Such is the case with approaches like Relationship Development Intervention (Gutstein and Sheely, 2002) or intensive interaction (Caldwell, 2006) and similar ones. The gist of these approaches is to gently introduce the child to flexible interactions with both the social and the "non-social" world in playful settings. At the heart of Relationship Development Intervention sits the idea that people with autism have problems with dynamic, but not with static intelligence. The suggestion has been made before that people with autism are good at scientific-style cognition, but have less adaptive, engaged, know-how intelligence (Kanner, 1973; Baron-Cohen, 2002). The development of flexibility in interaction can aid the development of flexibility and creativity in behavior and thinking in general, as the present work also predicts, in line with Hobson's ideas (Hobson, 2002), and enhance daily support, friendships, and love relationships.
CONCLUSION
In this paper, I have looked at autism through an enactive lens in order to help integrate the diverse aspects of autism that have up to now been examined in isolation. Unlike the search for a common root or key causal factors, enaction strives for a coherent picture of autism, while embracing a complex, non-linear multicausality. In this effort, two elements that I aimed to do justice to are the experience of autism-both that of people with autism and that of those interacting with them-and the differences in embodiment that seem present in autism. I suggest that people with autism make sense of the world differently, and that, in the social realm, they are differently able to participate in sense-making with others.
This leads to the following methodological considerations. If we base autism research on the question of why something means something for someone, we can connect autistic styles of sense-making with particular ways of moving, perceiving, and emoting. Hypotheses based in a subject-oriented approach to cognition and mind in autism will be better able to connect the elements that up to now have remained disconnected. For instance, I proposed that restricted interests and repetitive behaviors, if given a place in the actions and interactions of people with autism, can help them, among other things, to improve their social flexibility. I suggested that a surprising blind spot in autism research, the social interaction process itself, needs focused treatment. Once we do that, we will be better able to understand both the difficulties and the capacities that people with autism have in this domain. Behaviors that seem irrelevant can acquire significance from the context of the social interaction. To understand this, we must abandon disembodied individualism.
I have hinted at the possible developmental questions that may arise from considering both subjective and interactive factors. This is one of the directions where further work is needed. Another such open direction is to draw further implications for diagnosis, therapy, and interventions.
Ethically, the approach put forward here is not one of laissez faire. On the contrary, it is one that starts from also taking seriously the perspective and subjectivity of people with autism themselves, in a principled, coherent, and comprehensive way. It is then that we can expect to be able to build bridges that are well-informed by both autistic and non-autistic experience.
"Psychology",
"Philosophy"
] |
Directional wave buoy data measured near Campbell Island, New Zealand
The New Zealand Defence Force (NZDF) has established a permanent wave observation station near Campbell Island, south of New Zealand (52° 45.71′ S, 169° 02.54′ E). The site was chosen for logistical convenience and its unique location adjacent to the highly energetic Southern Ocean, allowing instrumentation typically deployed on the continental shelf to be used in this rarely observed southern environment. From February 2017, a Triaxys Directional Wave Buoy was moored in 147 m depth, some 17 km to the south of the island, with satellite telemetry of the 2D wave spectra at 3-hourly intervals. To date there have been three deployments at this location, yielding some 784 days of data. Validation of the measured significant wave height against co-located satellite altimeter observations suggests that waves from the predominant directions are not attenuated by the island. The data provide a valuable record of the detailed wave spectral characteristics from one of the least-sampled parts of the Global Ocean.
Background & Summary
The energetic nature of the ocean to the south of New Zealand is well known to mariners. An almost unlimited circumpolar fetch combined with persistent strong winds creates a climate with frequent storms that occur throughout the year 1. The swell waves generated in this southern basin propagate throughout the Indian and Pacific Oceans 2 and make a significant contribution to the wave climate of the northern hemisphere as well 3. Despite occupying almost a quarter of the world's sea surface and having high importance to the global wave climate and the planetary ocean-atmosphere gas fluxes, the Southern Ocean is still the least studied of all the world's ocean areas.
There are good reasons why few in situ wave measurements exist for the Southern Ocean. Aside from the rough conditions, the distances from land are vast and the water is deep, which makes measurement campaigns very expensive. With the advent of satellite remote sensing, altimeter data now provide reasonably high spatial and temporal coverage for estimates of the non-directional wave height 4. However, such data do not provide the precise spectral information that can be obtained from a reference wave-measuring buoy.
Until recently, the most relevant campaign with spectral observations was the Southern Ocean Flux Station (SOFS); a deep-water mooring located some 500 km south-west of Tasmania for a 24-month period (spread over three deployments from 2012-2015). A Triaxys motion response unit was fitted to the mooring's surface buoy to allow wave observations to be included in the experiment. The data are reported by Rapizo et al. 5 and were at the time the southernmost spectral dataset in publication (i.e., latitude 47° S).
The New Zealand Defence Force (NZDF) has recognised the absence of detailed wave spectral information for the extensive Southern Ocean areas where the country has marine search and rescue obligations, as well as sovereign and operational patrol responsibilities. Indeed, the current ship class rules for this area specify a design wave case based on northern hemisphere spectra that is transposed onto the southern hemisphere conditions. The consequence of designing and certifying naval ships based on an unvalidated spectral shape could be severe, which has warranted the involvement of the NZDF in a targeted wave data collection program. Aside from critical ship design information, acquisition of detailed spectral data was seen as having benefit in fundamental research of the wave generation and dissipation processes in the Southern Ocean 6, as well as facilitating general improvements in numerical wave modelling for operational hindcasting and forecasting. These latter benefits are directly addressed in a separate programme 7.
Accordingly, on 8 February 2017 an exploratory observational program was initiated at one of the few exposed locations in the Southern Ocean with a continental shelf, allowing a highly responsive spherical instrument to be used. The HMNZS OTAGO deployed a Triaxys Directional Wave Buoy near Campbell Island, which is approximately 600 km south of New Zealand (Fig. 1). The buoy coordinates were latitude 52° 45.71′ S, longitude 169° 02.54′ E and the local depth was 147 m. Since deployment, the buoy reliably transmitted spectral data at 3-hourly intervals with a 93% transmission success rate, including a storm event with a maximum individual wave height of 19.4 m 8. However, on 27 July 2017, after 172 days on location, the buoy broke its mooring line, drifted eastwards, and was not recovered.
Analysis of the measurements from this initial deployment led to the decision to establish ongoing wave observations at this location. On 2 March 2018, the HMNZS WELLINGTON deployed a replacement Triaxys Directional Wave Buoy, with a revised mooring design to improve the fatigue resistance in these highly energetic waters. This buoy provided data until 19 June 2019; some 474 days with a transmission rate of 91%. The principal transmitted wave parameters (see Table 1) are defined as follows:

HM0 (m): significant wave height as estimated from the spectral moment m0, Hm0 = 4.0 * SQRT(m0), where m0 is the integral of S(f) df from f = F1 to F2 Hz.

Mean Theta (degrees): overall mean wave direction in degrees, obtained by averaging the mean wave angle θ over all frequencies with weighting function S(f); θ is calculated by the KVH method.

Sigma Theta (degrees): overall directional spreading width in degrees, obtained by averaging the spreading width σθ over all frequencies with weighting function S(f); σθ is calculated by the KVH method.
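As a minimal numerical illustration of the Hm0 definition above, the following Python sketch integrates a hypothetical 1D spectrum over the Triaxys frequency range; the spectrum values are invented for illustration only.

```python
import numpy as np

# Triaxys onboard processing resolves 65 frequency bins from 0.05 to 0.38 Hz.
f = np.linspace(0.05, 0.38, 65)

# Hypothetical 1D wave spectrum S(f) in m^2/Hz (illustrative values only).
S = 8.0 * np.exp(-((f - 0.10) / 0.03) ** 2)

m0 = np.trapz(S, f)        # zeroth spectral moment: integral of S(f) df
hm0 = 4.0 * np.sqrt(m0)    # significant wave height estimate
print(f"Hm0 = {hm0:.2f} m")
```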
Methods
Triaxys Directional Wave Buoys are being used for the programme, and the standard onboard processing regime used by the manufacturer was adopted. The buoy samples raw data at 4 Hz over 20-minute bursts at 3-hourly intervals, and the onboard spectral processing applies the Maximum Entropy Method to resolve 65 frequency increments (range 0.05 to 0.38 Hz), with directional resolution of 3 degrees (i.e., 121 directional increments). Further information on the Triaxys data processing methodology and instrument validations may be found in the manufacturer's technical library 9. A manual inspection of the resultant spectral estimates was undertaken before upload to the data portals for dissemination. Note that raw and processed spectral estimates are stored on-board for manual download at the annual servicing, while the 1D and 2D spectral files are transmitted by Iridium telemetry in near real-time.
For the first deployment (08/02/2017-27/07/2017) a factory-supplied 15 m rubber compliant section was attached to the buoy, with 185 m of 12 mm Dyneema rope (and midwater buoyancy units) used to anchor the buoy to 600 kg of 32 mm stud-link chain. The mooring revision for the second (02/03/2018-19/06/2019) and third (25/11/2019-25/04/2020) deployments removed the compliant section, as this was the likely failure point under repeated fatigue. Instead, a 220 m length of 12 mm Dyneema rope was used, with midwater buoyancy set at 100 m above the seabed to create a false bottom, and improved chain-weight damping at the seabed to create elasticity.
Data records
The data records are organised by deployment and labelled Southern Ocean Wave Buoy Data: Deployment 1, Deployment 2 and Deployment 3. Complete data records have been archived with Marine Data Archives (MDA) 10 in the same Triaxys file format as received from the buoy, which includes the 2D wave spectra and Fourier coefficients. The processed wave spectral estimates have been archived with the Australian Ocean Data Network 11 (AODN) in the standard Triaxys parameter convention. A list of the Triaxys parameters is provided in Table 1, including a brief description of each. For both data archives, the time stamp is UTC and magnetic correction has not been applied to directions in the data.
Researchers seeking standard wave spectral estimates are encouraged to access files via the AODN. However, if detailed spectra or an alternative processing of spectra is required, then users should access the files from the MDA. For processing of spectra, the open-source code Wavespectra is recommended as this will directly read the native Triaxys files. Wavespectra can be found at GitHub (https://github.com/wavespectra/wavespectra) and Zenodo 12 .
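For readers processing the native files, a minimal sketch of the recommended Wavespectra workflow is shown below; the file name is a placeholder, and the reader function and `spec` accessor are assumed to behave as described in the Wavespectra documentation.

```python
from wavespectra import read_triaxys

# Read a native Triaxys file into a dataset of spectra (file name is hypothetical).
dset = read_triaxys("NONDIRSPEC_2018_03_02.DAT")

# Integrated parameters via the wavespectra `spec` accessor.
hs = dset.spec.hs()   # significant wave height from the spectral moments
tp = dset.spec.tp()   # peak wave period
print(hs.values, tp.values)
```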
Time series plots of the measured significant (HM0) and maximum (HMAX) wave height are provided in Fig. 2 for the 784 days of data from the three deployments. Statistics from the data set are presented in Tables 2-4, providing a summary of the monthly wave height values, the annual joint probability distribution of significant wave height and wave direction, and significant wave height and peak wave period. Annual and monthly roses are presented in Fig. 3. Note that magnetic correction for direction has been applied in these tables and figures, using the methodology provided in Guedes et al. 12.

Table 4. Joint probability distribution of observed significant wave height and peak wave period.
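A joint distribution such as Table 4 can be tabulated from the buoy time series by two-dimensional binning; the sketch below uses invented sample values and illustrative bin edges.

```python
import numpy as np

# Hypothetical co-sampled significant wave heights (m) and peak periods (s).
hs = np.array([2.1, 3.4, 5.0, 3.8, 2.7, 4.4, 6.2, 3.1])
tp = np.array([9.5, 11.2, 13.0, 12.1, 10.4, 14.2, 12.8, 9.9])

# 2D occurrence counts over illustrative Hs/Tp bins.
counts, hs_edges, tp_edges = np.histogram2d(
    hs, tp, bins=[np.arange(0, 11, 1), np.arange(6, 22, 2)]
)
joint_prob = counts / counts.sum()   # normalise to a probability per bin
print(joint_prob.round(3))
```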
Technical Validation
The buoy data described in the previous sections were compared against concurrent satellite altimeter passes to verify the quality of the observations. The colocation was calculated as the average of all altimeter measurements inside a circle of 0.5-degree radius within a 1-hour window centred on the timestamp of the buoy measurement. Note this validation must consider the expected uncertainties in the reference altimetry data, as well as differences caused by the different spatio-temporal characteristics of buoy and altimeter measurements 13.

Significant wave height measured by the moored wave buoy compares well against satellite observations, as shown in Fig. 4. The satellite datasets used were SARAL, Jason2 and Jason3 (from AVISO), Cryosat2 (from Globwave) and Sentinel3A (from Copernicus). The overall RMSD was 0.427 m and the bias was 0.243 m, corresponding to a normalised bias of 0.063. Note that the satellite footprint included the nearby island, which can affect wave height estimates from altimeters.
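The quoted validation statistics follow from standard definitions; the sketch below computes RMSD, bias, and normalised bias on invented co-located pairs.

```python
import numpy as np

buoy = np.array([3.2, 4.1, 5.6, 2.8, 4.9])  # hypothetical buoy Hm0 (m)
alt = np.array([3.5, 4.3, 6.0, 3.0, 5.2])   # hypothetical co-located altimeter Hs (m)

diff = alt - buoy
rmsd = np.sqrt(np.mean(diff ** 2))   # root-mean-square difference
bias = np.mean(diff)                 # mean difference (altimeter minus buoy)
norm_bias = bias / np.mean(buoy)     # bias normalised by the mean buoy value
print(rmsd, bias, norm_bias)
```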
The wave rose (Fig. 3) shows that the predominant directions of approach for waves are the west and southwest sectors. This quadrant is not affected by the presence of the island to the north of the buoy. We note that wave energy arriving from the north sector will likely be attenuated by the island to some degree; however, this directional sector is neither common nor particularly energetic compared with the west and southwest sectors. Our directional observations are in qualitative agreement with Rapizo et al. 5, who noted a predominance of southwest sector waves and showed that the north through east sector waves were very infrequent and typically of low energy. The mean significant wave height from the SOFS programme was 4.09 m, while the mean value observed here was 3.75 m. Both sites are dominated by peak spectral wave periods in the range 10-14 s.
Usage Notes
The standard wave spectral estimates may be downloaded from the AODN, while the raw files including 2D spectra are available from the MDA. Note that this is an ongoing observation programme and updates will be made to both repositories at annual intervals.

Fig. 4. Scatter plot of wave buoy significant wave heights vs the satellite altimetry derived values during the three deployment periods. Co-located data within 0.5-degree radius and 1-hour were sourced from SARAL, Jason2, Jason3, Cryosat2 and Sentinel3A.
"Geology"
] |
CREATING 3D INDOOR FIRST RESPONDER SITUATION AWARENESS IN REAL-TIME THROUGH A HEAD-MOUNTED AR DEVICE
Emergency operations are a key example of the need for digital twins, as they are complex, urgent and uncertain. First, the process is complex, as many organizations are involved. Second, it is urgent, as most damage is done in the first moments of an emergency. Third, it is uncertain, as situational conditions tend to change quickly. For outdoor operations, spatial information systems help in creating an overview of the situation, for example by displaying positions of first responder units involved with the incident. However, spatial data of indoor environments is scarce. Static information about the building, such as floor plans, is often outdated or non-existent. Dynamic operational data, such as positions of first responders within the building, are only available in a very limited way as well, and often without visual representation. To create situation awareness of indoor first responder operation environments, this paper proposes and demonstrates a proof of concept with two objectives. First, the proof of concept collects spatial environment data in the form of mapping and tracking data by using a Microsoft HoloLens. This means the geometry of the building is collected, together with traversed routes within the building. Second, the data are streamed and displayed to a remote first responder coordinator in real-time to create a common operational picture. This enables the coordinator to quickly build situation awareness of the operation environment, improving the quality of decisions and thereby first responder performance. The proof of concept showed that situation awareness on all three levels increases with the real-time availability and visualisation of 3D indoor environments. This concept needs to be tested further on usability and performance.
INTRODUCTION
This paper is structured in seven sections. First, we give context to key concepts in the introduction. Second, we explain this context from the perspective of related academic works. Third, we explain how we implemented lessons learned from other academics in a proof of concept (PoC). This PoC demonstrates the feasibility of real-time 3D spatial data acquisition and presentation in emergency operations. Fourth, we present results of the PoC regarding mapped environments. Fifth, we discuss the performance of the PoC in terms of accuracy, precision, robustness and added value. Sixth, we present the conclusion, answering to what extent real-time 3D spatial data acquisition and presentation is possible in emergency operations, making use of different levels of situation awareness. Finally, suggestions for future work are presented.
Emergency response, operations, and spatial data
First responders, or emergency responders, are defined as the organizations and individuals who are responsible for protection and preservation of life, property and the environment in the early stages of an accident or disaster (Prati and Pietrantoni, 2010). The nature of these organizations can differ across publications, although a general understanding of first responders seems to be a combination of fire departments, paramedics and police departments (Prati and Pietrantoni, 2010;Dilo and Zlatanova, 2011). If the scale of disaster increases, other organizations such as (paramilitary) defense units might be recognized as first responders as well (Dilo and Zlatanova, 2011).
Emergency operations are complex, urgent and uncertain (Kapucu and Garayev, 2011). First, the process is complex, as many organizations are involved. Second, it is urgent, as most damage is done in the first moments of an emergency (Dilo and Zlatanova, 2011). Third, it is uncertain, as situational conditions tend to change quickly (Dilo and Zlatanova, 2011; Kapucu and Garayev, 2011). For outdoor operations, spatial information systems help in creating an overview of the situation, for example by displaying positions of first responder units involved with the incident (Seppänen and Virrantaus, 2015). However, spatial data of indoor environments is scarce (Rantakokko et al., 2011; van der Meer et al., 2018). Static information of the building, such as floor plans, are often outdated or non-existent. Dynamic operational data such as positions of first responders within the building are only available in a very limited way as well, and often without visual representation.
The nature of situation awareness
Situation awareness (SA) is a concept that describes to which extent someone is aware of what is happening in a situation, while using that information for gaining an understanding of what that information means in the present or in the future (Endsley, 2016). Next to this, SA is goal-oriented, meaning the awareness can be described in the added value of the information for a specific goal or operation. This paper adopts the formal definition of SA as stated by Endsley (1988), being: "The perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future". From the definition, three levels of SA can be distilled: perception of data elements, comprehension of the data elements and projection of their status.
Levels of situation awareness
Having SA of an environment means understanding what is going on in that environment (Endsley, 2016). This awareness can be described in three levels. The first level is the lowest level of SA, while the third level describes the highest extent to which SA can be reached. The first level of SA (perception) relies on the availability of data elements that describe a situation. The second level of SA (comprehension) is the transformation and filtering of raw data elements into information that is useful to the system operator. An aspect of this is the combination of raw data elements into a combined interpretation of the data. Finally, the third level of SA is projection. The third level takes the interpretation process of a situation one step further, by stating that not only current, but also future states of the situation should be able to be perceived and comprehended. Therefore, it is necessary that the system handles temporal components and uses them to project future states of the situation. Such a projection requires a highly developed mental model of the situation and requires significant mental resources. Any of the three levels of situation awareness can only be reached if there are enough mental resources left to understand the situation (Endsley, 2016). Therefore, a user-centered design is imperative, as it takes into account the information needs of users and the system requirements for preventing information overload (Endsley, 2016).
RELATED WORK
For the extent to which SA can be built, the reliability of the information sources is of the essence (Endsley, 2016). The confidence that an operator has in the different information inputs plays a large part in the trust that he has for depending on the data, which is assessed by personal experience of working with data sources and by defined metadata specifications (Seppänen and Virrantaus, 2015). Incomplete or unreliable data sources therefore hurt the creation of SA. In many domains, the collection of the data to reach level one SA is therefore already challenging (Endsley, 2016). This holds true for first responder operations, as for example the presence of smoke may obstruct visual data collection.
First responder context
The modus operandi of first responder organizations relies to a large extent on well-established procedures (Zlatanova, 2010). As the organizational structure for emergency response differs across organizations and between countries, depending on the vulnerability and preparedness of an organization for disasters, the procedures are tailored to the specific needs of an organization. For example, a country that deals with frequent earthquakes may have different disaster procedures compared to a country in which earthquakes are rare. This causes differences in the way in which disasters are handled by different organizations and in different countries (Zlatanova, 2010). First responders are usually not geo-spatial specialists, and therefore they often lack a deep understanding of the terminology and structures used for spatial data (Zlatanova, 2010). Proper filter techniques should be applied to (spatial) data support systems, as they tend to create an information overflow rather easily instead of increasing SA (Endsley, 2016). Information overflow is often described in a first responder context, especially in the case of using spatial information under pressure and within stressful conditions (Zlatanova, 2010). A solution that can be applied to battle information overflow is to offer information in different levels of detail, customized to different tasks or user groups (Zlatanova, 2010; Endsley, 2016).
First responder spatial information availability
If an incident occurs, a process is initiated to mitigate the effects of the incident. Procedures exist for response to many types of incidents, but we also know that the way in which these procedures are executed can change a lot per situation (Dilo and Zlatanova, 2011; van der Meer et al., 2018). Dilo and Zlatanova (2011) discern between two types of information at the base of emergency response operations: dynamic and static information. Static data holds information that is not likely to change during an incident, such as managerial and administrative data, and risk maps. An officer on duty of a fire brigade, for example, requests several pieces of information such as topographic maps, a map of water resources, optimal route information and risk maps of the area (Zlatanova, 2010). Dynamic data is volatile in nature and is collected during an emergency operation. Within this category, operational and situational data are identified. Operational data describes data about the operation, including information about the ongoing processes such as responsible departments and persons, together with their roles. Situational data describes the incident itself and the impact of the incident on its environment, such as the type of the incident, the affected area and the number of trapped, missing or injured people (Dilo and Zlatanova, 2011).
To create SA and prevent information overload in first responder operations, we should be aware of the tasks and data requirements for successful response to an incident. Dilo and Zlatanova (2011) provide a data model of Dutch first responder incident response operations, independent of disaster type.
Spatial data collection
For indoor operations, first responder organizations are often forced to gather data themselves, as spatial information is seldom readily available (van der Meer, Verbree and Oosterom, 2018). Risk maps are drawn up by the safety regions for vulnerable buildings in the preparation phase, in which the basic geometry of ground floors is depicted. This information is enriched by an indoor exploration of a repressive team, to identify the fire source, to determine attack routes and to check if the fire can be extinguished with resources present in the building (van der Meer, Verbree and Oosterom, 2018). The identification of the data sources by exploration is mainly a manual workflow. A real-time indoor application adds to the way dynamic information is shared: by using updated and accurate information about indoor environments, a command and control unit has a better base to make decisions on (Basilico and Amigoni, 2011). A 3D depiction of an environment has an advantage over 2D visualizations in this sense-making step, as users are able to view an indoor environment in one complete model instead of viewing it in separate floor plans. Furthermore, 3D data enables users to zoom in to a point of interest to observe it more closely, while a 2D view enables a user to zoom out and still oversee the whole situation (van der Meer, Verbree and Oosterom, 2018). A 2D/3D switch can thereby help in deconflicting visualisation of situation environments. To collect 3D data, this research uses a depth camera, a lower-cost alternative to LiDAR sensing. With a depth sensor, distances to surfaces are sensed in a pixel-wise ranged manner (Khoshelham, Tran and Acharya, 2019). An infrared light is emitted from the sensor, and reflectivity is mapped in a pixel matrix. As we can measure the time it takes for the infrared light to reflect, we can measure distance using the time of flight (Hübner et al., 2020). The range of this method is smaller than that of LiDAR scans, but data can be acquired in real-time, as will be shown in this research.
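As a toy illustration of the time-of-flight principle just described (the round-trip time is an invented example value), distance follows from half the round-trip travel time of the emitted infrared pulse:

```python
C = 299_792_458.0  # speed of light (m/s)

def tof_distance(round_trip_time_s: float) -> float:
    """Distance to a surface from the round-trip time of an emitted pulse."""
    return C * round_trip_time_s / 2.0

# A pulse returning after 20 nanoseconds corresponds to a surface ~3 m away.
print(f"{tof_distance(20e-9):.2f} m")
```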
SLAM: Basics
While explaining several mapping methods in the section above, the relation between device pose and the mapped environment was introduced. For this research, the estimation of pose (tracking) and the estimation of the geometry of the environment (mapping) are treated as a single problem. In the literature, this problem is called Simultaneous Localization and Mapping (SLAM) or Concurrent Mapping and Localization (CML) (Durrant-Whyte and Bailey, 2006). SLAM algorithms refer to a variety of algorithms that enable mobile simultaneous mapping and tracking for a wide range of mobile devices (Rantakokko et al., 2011). Among these devices are, for example, backpack systems, handheld sensors, trolley systems and head mounted devices (Khoshelham, Tran and Acharya, 2019; Nikoohemat et al., 2020). For a detailed explanation of the concept, Durrant-Whyte and Bailey (2006) can be consulted.
SLAM research gap for first responders
Although SLAM algorithms enable systems to map and track efficiently within indoor environments, most systems do not deliver their results until completion of the scan. Sometimes, even additional postprocessing is needed to generate a reliable 3D representation of the environment (Luhmann et al., 2013). This delay of processing is not a problem for many SLAM use cases, as processes such as 'scan to BIM' (Wang, Cho and Kim, 2015) or the generation of high quality navigation graphs (Staats et al., 2017; Flikweert et al., 2019; Nikoohemat et al., 2020) do not require instant accessibility. First responders, however, need the resulting data as soon as possible due to the dynamic environment of first responder operations (Kapucu and Garayev, 2011; Seppänen and Virrantaus, 2015). This calls for more research into the added value of real-time SLAM systems in the first responder context (Rantakokko et al., 2011; Khoshelham, Tran and Acharya, 2019), to which this research aims to make a contribution.
Coordinator application
The focus of the PoC is to facilitate the coordinator of the operation with real-time environment data to create SA. The coordinator application runs on a laptop and presents mapping and tracking information of the operation environment to the system operator simultaneously with the explorer process, described below, with the goal of creating SA. The operator is able to change the way in which the data elements are visualized and to interact with the data. An example of such interaction is enabling/disabling specific floor levels of the building. This provides an operational picture for the coordinator which unfolds in real time with the action elsewhere.
Explorer application
As mapping and tracking data of the operation environment is not readily available, we have to collect reliable data as quickly as possible. The data is collected by a first responder who is sent into the building: the 'explorer'.
The explorer is equipped with a Microsoft HoloLens, and is thereby capable of collecting accurate spatial information within indoor environments (Hübner et al., 2020). To do this, the Microsoft HoloLens is equipped with a depth camera and 4 tracking cameras. Furthermore, the Microsoft HoloLens can transfer data by using Bluetooth and WLAN connections. Additionally, the Microsoft HoloLens is head mounted, leaving the hands of the explorer free. The explorer is therefore more flexible in climbing over or removing obstacles and rubble, or even helping victims of the incident. Finally, the holographic screen offers the explorer the opportunity to receive visual instructions: it is able to give visual feedback of the mapped environment, display menus for interaction with the application and receive instructions from the coordinator. Two kinds of spatial information will be collected simultaneously. First, mapping information will be collected, meaning geometric measurements of the indoor environment in the form of a spatial mesh. This spatial mesh will represent the environment in a 3D model. Second, tracking information of the explorer will be collected. This information will represent the pose (orientation + position) of the explorer over time within the mapped environment.
Proof of concept development
The PoC is developed in C# in Unity3D. This is a game engine, making it suitable for fast processing and visualization of 3D features. Furthermore, it enables us to deploy interaction methods more easily compared to developing the middleware ourselves. Finally, Microsoft has released the second version of the Mixed Reality ToolKit (MRTK v2), which provides developers with easy access to the Microsoft HoloLens hardware from C# code.
Mapping module
At the explorer side, priority is given to the spatial mapping capability of the system. As there is generally little information about indoor geometry, an environment should be scanned before the position of the explorer within that environment can be displayed. The objective of the mapping module is to capture the indoor 3D geometry, which is done by using the built-in mapping capability of the Microsoft HoloLens. The Microsoft HoloLens is equipped with a time-of-flight depth sensor. Surfaces are scanned by the Microsoft HoloLens depth sensor and the environment cameras, resulting in a point cloud. The Mixed Reality ToolKit transforms this point cloud into a spatial mesh with a set level of detail (Hübner et al., 2020). At the coordinator side, the mapping information should be presented in real-time to the system operator in a way that it is easily interpreted. The mapping information is collected in the form of a spatial mesh. To prevent information overload and make interpretation easy, the spatial mapping data is visualized in attribute space, object space, and temporal space (Kraak, Ormeling and Ormeling, 2013). When visualized in attribute space, the 3D model does only take geometric aspects such as the normal and the relative height of a surface into account. It is therefore a very robust method, as there are few rules applied to how the spatial mesh is visualized. When visualized in object space, a higher level of interpretation is applied by the system. If this visualization method is used, the system tries to separate floors, walls, obstructive objects, stairs and ceilings in a visual way by depicting them in different colors. The translation from the raw geometry data into features should help to form a mental model of the situation and, through this process, SA (Endsley, 2016). Finally, when visualized in temporal space, a time component is added to the attribute visualization. First responder operations are very dynamic, as environment conditions tend to change quickly. Therefore, it is beneficial to know when a certain part of an environment was scanned. You could say the reliability of a mapped environment 'decays' over time, which is represented by adding other colors to the attribute visualization. Various services can be built upon the collected 3D model, such as indoor navigation. We can ask the system operator to interpret whether a space is navigable. However, this is assumed to be a difficult and time consuming task that can be automated (Rantakokko et al., 2011; Seppänen and Virrantaus, 2015). Therefore, this research will explore if calculation of navigable space is possible in real-time for the spatial mesh created by the Microsoft HoloLens by using 'navigation meshes', first introduced by Snook (2000).
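A language-agnostic sketch (in Python for brevity, not the PoC's C#) of the object-space idea described above: classifying mesh triangles as floor, ceiling, or wall from their surface normals. The threshold value is an illustrative assumption, not the PoC's actual parameter.

```python
import numpy as np

def classify_triangle(normal, up=(0.0, 1.0, 0.0), tol=0.8):
    """Assign a rough semantic label to a mesh triangle from its normal."""
    n = np.asarray(normal, dtype=float)
    d = float(np.dot(n / np.linalg.norm(n), up))
    if d > tol:       # normal points up: walkable floor candidate
        return "floor"
    if d < -tol:      # normal points down: ceiling
        return "ceiling"
    return "wall"     # near-vertical surface

print(classify_triangle((0.05, 0.99, 0.0)))  # -> "floor"
```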
Recognizing floor levels
Next to the visualization of the mapped features, one should consider multi-floored buildings. Although a complete 3D model of a building has its purposes, one may want to zoom in to an overview of a separate level (van der Meer, Verbree and Oosterom, 2018). This may be useful for viewing the position of the explorer without clutter from other floors, and may also be used to follow the explorer on a (2D) map. The application will enable the coordinator to show or hide floors with a single press of a button. Therefore, spatial meshes need to be segregated based on floor level. The PoC uses a method inspired by Díaz-Vilariño et al. (2017) to recognize floors, which observes that the scan trajectory and scanned surfaces are related to each other. This research will use timestamped explorer positions to relate surfaces to a floor level, utilizing the position of the explorer device at the time of observing a spatial mesh. If a spatial mesh is created or updated, it is always observed from a certain point in space: the position of the explorer device. As the explorer moves around space, the explorer is always standing on navigable space when observing a spatial mesh. This means that an offset between the floor height and the height of the explorer device can be determined.
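A sketch (Python for brevity, not the PoC's C#; the head-height offset and tolerance are illustrative assumptions) of this floor-recognition logic: each observed mesh patch is assigned to the floor height implied by the device position at observation time.

```python
HEAD_HEIGHT = 1.7  # assumed offset between floor and head-mounted device (m)
FLOOR_TOL = 1.0    # device heights within this distance share a floor (m)

def assign_floor(device_y, floor_levels):
    """Return the floor index implied by a device height, creating a new
    floor level if no known level is close enough."""
    floor_y = device_y - HEAD_HEIGHT
    for i, level in enumerate(floor_levels):
        if abs(level - floor_y) < FLOOR_TOL:
            return i
    floor_levels.append(floor_y)
    return len(floor_levels) - 1

levels = []
for y in [1.75, 1.68, 5.20, 5.30]:   # hypothetical device heights (m)
    print(assign_floor(y, levels), levels)
```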
Tracking module
The explorer will be tracked by the system in terms of a series of 'poses'. The pose is collected by the Microsoft HoloLens and is a combination of a position (x, y and z coordinates in a Cartesian system) and an orientation (roll, pitch and yaw angles of the device). As the Microsoft HoloLens is head mounted, we do not distinguish between the pose of the device and the pose of the head of the explorer. The Microsoft HoloLens uses a SLAM algorithm to correct for pose drift (Khoshelham et al., 2019). Although the HoloLens SLAM algorithm itself is largely unpublished due to the proprietary rights of Microsoft, the mixed reality documentation enables researchers to use it and to estimate what is going on. This conceptual model is enhanced by literature on the likely predecessor of the Microsoft HoloLens SLAM algorithm: KinectFusion (Khoshelham et al., 2019). This enables the Microsoft HoloLens to track itself with an accuracy of 2 centimeters (Hübner et al., 2020).
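A minimal sketch (Python for brevity, not the PoC's C#; field names are illustrative, not the PoC's actual data model) of such a timestamped pose record:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    timestamp: float  # seconds since the start of the operation
    x: float          # position in the world frame (m)
    y: float
    z: float
    roll: float       # orientation of the device (degrees)
    pitch: float
    yaw: float

# The track of the explorer is the time-ordered series of poses.
track = [Pose(0.0, 0.0, 1.7, 0.0, 0.0, 0.0, 0.0),
         Pose(0.5, 0.3, 1.7, 0.1, 0.0, 2.0, 5.0)]
```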
Tracking loss
Although a SLAM algorithm is implemented by the MRTK to estimate poses in the real world, it is possible the device loses its reference system. After tracking loss, all content becomes pose-locked instead of world-locked, and all spatial meshes will be removed from view if tracking is regained. This is identified as a threat for first responder applications, as it would render the system temporarily useless. According to the documentation, tracking loss can be experienced especially if the following (combination of) aspects are apparent:
- lighting conditions are too bright, too dark, or change too suddenly;
- a room with strongly reflective surfaces;
- landmark-poor environments, such as a hall without a lot of distinctive features;
- places that look similar, such as office spaces with the same interior for every floor;
- movement in place, for example in crowded areas;
- rooms without Wi-Fi connections, as Wi-Fi fingerprinting enables the device to recognize spatial anchors (reference points) more quickly.
Preventing tracking loss on a device level would require improving the SLAM algorithm of the Microsoft HoloLens. This is out of scope, as it would require a more low-level approach to the device. Therefore, the limitations stated above should be considered when scanning. Furthermore, tracking loss will be an important aspect of the reliability tests discussed later in this research.
Communicating module
The transfer of spatial data elements from the explorer to the coordinator device happens via a Microsoft Azure Queue. By choosing this communication protocol, the data is routed through the Azure service, which acts as a bridge: the data is first sent from the explorer application to the Azure service, after which the coordinator application polls for new messages and retrieves them from the server when they are available. It is assumed that a continuous WLAN connection is troublesome in first responder operation environments. To prevent connection problems, a mobile phone connected to a fourth-generation (4G) mobile network is carried by the explorer. The smartphone is used as a hotspot, thereby connecting the Microsoft HoloLens indirectly to the mobile network.
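For illustration, the exchange could look as follows with the azure-storage-queue Python SDK (the PoC itself runs on the HoloLens/Unity side, so this is only a sketch; the connection string and queue name are placeholders).

```python
"""Sketch of the explorer-to-coordinator relay over an Azure Queue."""
import json
from azure.storage.queue import QueueClient

# Placeholder credentials: not values from the paper.
queue = QueueClient.from_connection_string(
    conn_str="<storage-connection-string>", queue_name="spatial-data")

# Explorer side: push a serialized data element (mesh patch or pose).
queue.send_message(json.dumps({"type": "pose", "x": 1.2, "y": 0.0, "z": 3.4}))

# Coordinator side: poll for new data elements and consume them.
for msg in queue.receive_messages():
    element = json.loads(msg.content)
    queue.delete_message(msg)  # remove once rendered in the 3D view
```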
Situation Awareness Testing
To discuss the outcome of the PoC, three meetings with first responders were organized. Both the requirements evaluation and the first responder meetings are used to indicate whether the PoC is able to create a certain level of SA as described by Endsley (2016): perception (level 1), comprehension (level 2) or projection (level 3).
Mapping, Tracking, and Communicating
Every 0.5 seconds, a depth image is captured and processed into a mesh with a maximum size of 500 triangles. That mesh is subsequently sent to the coordinator application. With these settings, the explorer can walk and look around in an environment while simultaneously capturing it in a spatial mesh. The testing environment used to illustrate the process is an office environment. Office environments are often a blind spot for first responders, as data about the interior is often unavailable or outdated. Furthermore, the environment depicted in this research was chosen for its distinctive furniture and subspaces: the two left spaces of the room contain two tables, while the right space contains a number of chairs facing a presentation stand. Both mapping data and tracked poses were collected without problems. Every time a mapping or tracking data element was created by the explorer application, it was transferred within a second to the coordinator application. Therefore, real-time mapping, tracking and data transfer are found to be possible.
Capturing indoor points of interest
By capturing indoor geometry, surfaces like walls, floors and stairs are mapped. A highly requested feature of the involved stakeholders was to add objects such as exit signs, victims and light switches to the spatial mapping mesh as well. For this purpose, an explorer menu was developed for use in augmented reality. With this menu, objects can be pinpointed in space with a coloured sphere, which is sent to the coordinator together with the geometry.
Created situation awareness
The mapping module describes the collection of raw data within the explorer application. The collected data has to be presented in the coordinator application in a way that allows the coordinator to make sense of the 3D model. This interpretation from raw data into information should require a minimal amount of mental resources, so that the comprehension and projection SA levels are reached more easily (Endsley, 2016). As stated in the methodology, three visualization perspectives are used: Geometry focused, Time focused, and Object focused.
Geometry focused
The first objective was to make a geometry-focused representation of the mesh. This visualization takes only the geometric aspects of the mesh into account, such as the connectedness of the mesh vertices, the global height and the normals within the spatial mesh. Interpretation of this model is done solely on the basis of geometric features of the spatial mesh collected by the Microsoft HoloLens. The geometry-focused visualization can be seen in Figure 1. Ceilings are collected for some parts of the structure, but as the model is observed from above, the ceilings are rendered fully transparent. The same applies to the walls on the side of the observer, at the southern side of the model. As can be observed from the coloured normal visualization, walls have a colour distinct from the floor: because walls have a horizontal normal, all walls are coloured distinctly from the vertical normals. However, as the walls are placed almost perpendicular to each other, each wall also gets a colour different from the other walls. This is an unnecessary overload of information, as we are only interested in whether a surface is a wall or not. Therefore, the colours are harmonized to the verticality of the normal, so that the horizontal direction of a normal no longer matters.
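A minimal sketch of this harmonized colouring is given below, assuming a z-up axis convention (Unity itself is y-up) and illustrative colour values that are not taken from the paper.

```python
"""Colour a mesh face only by how vertical its normal is (z-up assumed)."""
import numpy as np

def verticality_colour(normal):
    """All walls share one colour regardless of their horizontal direction."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    v = abs(n[2])  # 1 = horizontal surface (floor/ceiling), 0 = vertical wall
    # Blend between an assumed wall colour and an assumed floor colour.
    wall, floor = np.array([0.8, 0.3, 0.2]), np.array([0.2, 0.5, 0.8])
    return (1 - v) * wall + v * floor
```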
Time focused
As first responder operation environments tend to change quickly, the extent to which spatial information represents the true state of an environment is bound to time. The reasoning here is that relatively old data is less reliable than newer data. This temporal component is added to the basic geometry by adding colour from a separate variable: the last update time. If a spatial mesh has not been updated for a set amount of time, a colour is added to the current representation of the mesh (Figure 3).

Object focused

Generally, the object-focused presentation is received as the best visualization of the spatial mesh by first responders. Interpretation of objects, floors and walls is easy. However, due to the many rules applied to the spatial mesh, it is also easy to make mistakes in the classification of spatial features. It is therefore important that system operators are able to switch quickly from an object-focused representation to a geometry-based representation. The geometry-based representation is more reliable than the object-focused representation, as it depends on fewer and more robust rules.
Extracting navigable space
From the mapping information, navigable space can be extracted by fitting the shape of an 'agent' into the model: if the agent fits into the model at a certain space, that space is navigable. This navigable space can be extracted for different agents with different specifications. Figure 4 shows the extracted navigable space in a Unity3D NavMesh. By using the navigation mesh, agents are able to get a route from one position to another. Practical use of this functionality is not implemented in the PoC, as the extracted navigable space is deemed too rough: not all navigable spaces are connected to each other while they should be.
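The agent-fitting idea can be illustrated with a toy test on a 2D height grid; the Unity3D NavMesh used by the PoC is far more sophisticated, and the agent dimensions below are assumed values.

```python
"""Toy agent-fitting test: a cell is navigable if the agent's height fits
between floor and ceiling and a neighbouring cell is within step height."""
import numpy as np

def navigable(floor_z, ceiling_z, agent_height=1.8, step_height=0.3):
    """floor_z, ceiling_z: 2D arrays of heights per grid cell (metres)."""
    clearance = (ceiling_z - floor_z) >= agent_height  # vertical fit
    reachable = np.zeros_like(clearance)
    # Reachable if some 4-neighbour differs by at most step_height
    # (np.roll wraps at grid edges; ignored here for brevity).
    for axis in (0, 1):
        for shift in (1, -1):
            dz = np.abs(floor_z - np.roll(floor_z, shift, axis=axis))
            reachable |= dz <= step_height
    return clearance & reachable
```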
Tracking Presentation
In Figure 5 the tracking component is visualized within the object-focused spatial mesh representation. The explorer is represented by a blue dot (in the middle of the figure). From this blue dot, a blue track follows the explorer, changing from blue to white to black over one second of time. Because of the time-based gradient, the track is continually moving over time, making it easier to distinguish from the background. As human vision is drawn to motion, this makes it easy to follow the explorer while moving through the scene.

Figure 5: Tracking representation. The left part of the model has been scanned, while the right part of the model is not yet scanned.
User Interface
In Figure 6 the user interface of the coordinator application is shown. As can be observed, the object-oriented spatial mesh visualization is used in this interface. Several items are displayed:
(1): Object menu, displaying all data elements. Spatial mesh elements are categorized by floor, making it possible to enable or disable the visualization of floor levels.
(2): Navigation menu, where navigation settings such as step height and agent size can be altered.
(3): Function menu, where functionality such as a specific visualization space can be selected.
(4): The scene view, which is the main screen of the coordinator. This is a full 3D view of the environment in which the coordinator can zoom, select and rotate the contents.
On the right, three virtual cameras move along with the explorer, displaying a side view (5), a first-person view (6), and a top-down view (7).

Figure 6: User interface. Left: conceptual overview. Right: the system as it is in use.
PERFORMANCE
Reliability has been defined in the theoretical framework as a construct consisting of accuracy, precision and robustness. The construct is used to describe to what extent measurements reflect the 'real-world' situation. Data elements must be reliable to be used in the creation of SA: the system operator must be able to trust the data received from the system. Therefore, this section provides test results on the reliability of the data elements.
Accuracy
The accuracy tests describe errors in the way objects are represented by the PoC at the right position: they test whether the measurements are actually where they should be, or whether the mapping information has drifted. From Khoshelham, Tran and Acharya (2019), we already know that the local accuracy of the Microsoft HoloLens spatial meshes is about 5 centimetres from the real-world representation. Hübner et al. (2020) show that the local tracking capabilities of the device are around 2 centimetres. Therefore, we will not further discuss the accuracy of the Microsoft HoloLens on single floors. Both studies, however, only consider single floors, and there is reason to suspect differences between horizontal and vertical accuracy, as the HoloLens SLAM did not perform well on staircases.
Precision
Precision measures the consistency of the measurements, often described by the resolution of the data (Luhmann et al., 2013). Precision translates to the level of detail of the PoC. The level of detail of the spatial mapping system has been set to 'low', which limits the precision that can be expected. To evaluate the precision of the spatial mapping, two scans are compared visually. A visual comparison between the spatial mesh and the point cloud can be observed in Figure 7. We can distinguish desks (orange), chairs and people (black/orange), floors (red), walls (black), and columns (black); the columns are in fact window frames. The geometry in the two images looks the same. However, the geometry of the window frames and the chairs in the PoC scan is often jagged, so the scan appears imprecise. Precision could be increased within the current setup of the PoC at the cost of scan update frequency. The SA tests will cover whether the low precision of the data is a problem for the feasibility of gaining SA from the data.
Robustness
Robustness is the third component of reliability. While accuracy and precision give information about the quality of the measurements, they give no information about the continuity of the measurements. If the PoC fails, for whatever reason, to report environment data in real-time to the coordinator application, this is regarded as a failure of the robustness of the system. In the development and testing phase of the PoC, a couple of aspects have been identified that are important for the robustness of the system. They are explained in the subsections below.
Reliability of tracking
In general, the Microsoft HoloLens is able to track its position within an environment well, especially if a user respects the limitations of the system. For example, tracking was lost only once while tracking the visually feature-rich environment of the large office space, whereas in a visually feature-sparse environment tracking was lost twice. If tracking is not lost at a staircase, the SLAM algorithm is almost always able to restore a consistent mapped image once tracking is regained.
Added value as defined by first responders
Three meetings with first responders have been organized to discuss the added value of the PoC for indoor first responder operation SA.
The first meeting set the requirements for the PoC. Two officers on duty of a Dutch Safety Region were involved. They agreed with the statement that basic, geometric collection of mapping and tracking data would greatly benefit indoor SA. They stressed that the data should be presented in an interpretable way: displaying raw geometry was not simple enough. Furthermore, indoor navigation should be a focus of the eventual system. Finally, besides capturing indoor geometry, important objects such as fire hydrants should be added to the 3D model as well.
The second meeting was organized with two groups of first responders from different organizations at a Dutch event on 3D data use for first responder operations. Here, the progress of the PoC was discussed with about twenty first responders. The participants noted they were impressed with the transfer of the mapping and tracking data. Although it was visualized in the basic geometry-focused presentation (see section 4.3.1), the first responders stated they could interpret the data fairly easily. They also focused on the importance of time (in both mapping and tracking data) and on the ability to add objects such as exit signs and victims, which was only partly implemented at the time.

Finally, a third meeting was organized at a Dutch Safety Region to discuss the added value of the PoC for the creation of indoor SA. A demo was given to the 13 participants, among whom were officers on duty and indoor map makers. Next to the provision of mapping and tracking data, the potential of extracting navigable space for future navigation and evacuation simulation purposes was highly valued. The data was understood and comprehended into mission-critical information, as stated by the first responders. This statement is, of course, subjective in nature, but it still supports the conclusion that the creation of level two SA is possible with the PoC. Furthermore, the temporal component of both mapping and tracking data was presented, of which one officer on duty stated that it enabled him to follow and predict the states of the situation environment.
CONCLUSION
This original research creates SA from real-time shared 3D mapping data. It shows that a common operational picture for a first responder commander or control room coordinator can not only be realized, but also helps to improve SA. The PoC creates both mapping and tracking data elements in an indoor first responder operation environment and is able to transfer these data elements to a remote coordinator in real-time.
The collected data elements fit the spatial data requirements for first responder operations. We know it is important for first responders to know to what degree they can trust the system. We have seen that the data elements are reliable in the sense that they are accurate (below 10 centimetres deviation from TLS results), relatively precise (features are jagged, but can be interpreted) and robust under set circumstances (the PoC delivers continuous streams of data elements in real-time as long as tracking is not lost).
To aid the interpretation process, mapping elements can be visualized in attribute, temporal, and object space as explained in section 3.4. Attribute and temporal space visualizations are the most reliable, as they are based on few rules. Object space is easier to interpret, as elements are separated into floors, walls, stairs, and obstructive objects, giving more meaning to the 3D model compared with the raw geometry of the maps. For object space, ease of interpretation comes at the cost of more complex visualization rules, making the visualization less reliable. Therefore, we recommend using object space as the default visualization method, with attribute space visualization as a fallback option. The mapping information can be extended with named objects, such as fire hydrants. The tracking information is integrated with the mapping information, enabling operators to quickly review the spatial data of the operation environment.

The findings can be attributed to levels of SA. We have seen that the PoC fulfils the data requirements of first responders regarding mapping and tracking. This fulfils the requirements for the first level of SA: perception. Furthermore, we have seen that the mapping and tracking data can be integrated into one common operational picture, combining elements in an interpretable way. This leads to the second level of SA. Nevertheless, this is a theoretical conclusion based on subjective testing together with first responders. Concerning projection (the third level of SA), it is concluded that by showing the age of data elements, the model not only gives a comprehensive idea of when a certain area was scanned, but also assures the user that the most current situation is always displayed. This might be used to project change over time.
Connecting multiple explorer devices
The research problem of this topic has a different nature: different explorers might explore different parts of operation environments. The resulting models are collected separately from each other and therefore remain isolated. However, if two explorers have mapped the same space, this should be recognized by the application. Spatial anchors could offer a solution for recognizing surfaces and spaces. A spatial anchor is a fingerprint that combines visual features, geometry and radio signals into one distinctive hash. These hashes can be stored in the cloud, for example by using Azure Spatial Anchors. Ideally, both models would be joined into one large model of the operation environment, which would likely require fewer mental resources of a coordinator than two smaller models.
Dividing mapped and unmapped space
We only know which surfaces within an indoor environment have been scanned; we do not know which surfaces have not been scanned. This seems obvious, but has large consequences. As the time-of-flight scanner of the Microsoft HoloLens only has a range of approximately 3 meters, surfaces are easily missed in the mapping process. Future research could map this unknown space by combining the known device poses with the mapping information. From all poses, a spatial model can be created by taking the space that should have been mapped from every pose: a frustum can be calculated from each pose, indicating the space that should have been seen. For example, we know the HoloLens can map for 3 meters within a certain angle of view; if no object is in the way, this whole frustum can be regarded as 'mapped'. An overview can be kept in a voxel dataset by creating an empty voxel grid around the initial position of the PoC. While the PoC runs, the voxels within the dataset are classified as 'mapped empty', 'mapped surface', or the default state 'unmapped'.
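A minimal sketch of this voxel bookkeeping is given below, assuming a fixed sensor range, a sampled set of view rays per pose, and a user-supplied surface test; all parameters are illustrative, not values from the paper.

```python
"""March along view rays of each pose and label voxels by mapping state."""
import numpy as np

UNMAPPED, MAPPED_EMPTY, MAPPED_SURFACE = 0, 1, 2

def update_voxels(grid, origin, directions, is_surface, voxel=0.25, rng=3.0):
    """grid: dict keyed by integer voxel coordinates; origin: pose position;
    directions: unit view rays sampled inside the pose frustum."""
    for d in directions:
        for t in np.arange(0.0, rng, voxel):
            idx = tuple(np.floor((origin + t * d) / voxel).astype(int))
            if is_surface(idx):
                grid[idx] = MAPPED_SURFACE  # ray stops at the first surface
                break
            # Everything in front of the first surface is seen-through space.
            grid[idx] = max(grid.get(idx, UNMAPPED), MAPPED_EMPTY)
```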
Improving navigable space extraction
Navigable space is not always continuous in the models generated by the PoC. The cause is that the Microsoft HoloLens can only scan 3 meters ahead: if a user looks up, a part of the floor might be missed while it is passed. Careful mapping is therefore required to scan a complete area. Future research could focus on filling such gaps by post-processing the spatial mesh of the model. If combined with a mapped/unmapped space division, such post-processing could be more aggressive for parts that have not been scanned but are likely to be connected to mapped space. | 9,620.8 | 2021-06-17T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
The Pathogen Aeromonas salmonicida achromogenes Induces Fast Immune and Microbiota Modifications in Rainbow Trout
Environmental stressors can disrupt the relationship between the microbiota and the host and lead to the loss of its functions. Among them, bacterial infection caused by Aeromonas salmonicida, the causative agent of furunculosis, results in high mortality in salmonid aquaculture. Here, rainbow trout were exposed to A. salmonicida achromogenes, and its effects on the taxonomic composition and structure of the microbiota were assessed on different epithelia (gills, skin, and caudal fin) at 6 and 72 h post-infection (hpi) using sequencing of the V1–V3 region of the 16S rRNA gene. Moreover, infection by the pathogen and immune gene responses were evaluated in the head kidney by qPCR. Our results suggested that the microbiota was highly diverse but dominated by a few taxa, while β-diversity was affected very early by infection, in the gills after 6 h, subsequently affecting the microbiota of the skin and caudal fin. A dysbiosis of the microbiota and an increase in genera known to contain opportunistic pathogens (Aeromonas, Pseudomonas) were also identified. Furthermore, an increase in pro-inflammatory cytokines and virulence array protein A (vapa) expression was observed in the trout head kidney as soon as 6 hpi and remained elevated until 72 hpi, while the anti-inflammatory genes seemed repressed. This study suggests that infection by A. salmonicida achromogenes can alter the fish gill microbiota within a few hours post-infection. This result can be useful to develop non-invasive techniques to prevent disease outbreaks in aquaculture.
Introduction
Epidermal surfaces of metazoans are the interfaces between the host and the environment and are covered by mucus layers. These mucus layers are colonized by microorganisms that exploit these particular ecological niches and provide many benefits for hosts such as fish, where the microbiota is known to provide protection against pathogen infection [1]. Indeed, this microbiota is involved in the maturation of innate and adaptive immunity, its stimulation, and the defense against pathogens by avoiding their colonization, acting as a passive/non-passive barrier [2][3][4]. Moreover, in rainbow trout (Oncorhynchus mykiss) skin, commensal lactic bacteria were shown to prevent the colonization of Lactococcus garvieae, the etiological agent of lactococcosis [5], by occupying its ecological niches [6]. In salmonids, Robertson et al. highlighted that bacteria from the Carnobacterium genus were able to inhibit the growth and proliferation of several pathogenic strains such as Aeromonas hydrophila and Aeromonas salmonicida [7]. Boutin et al. (2012) showed that indigenous bacteria from the skin of unstressed brook charr (Salvelinus fontinalis) can exert an antagonistic effect against Flavobacterium infection [8]. Therefore, it is essential for the host to maintain successful communication with its microbiota through appropriate immunity. However, this tight cross-talk can be disrupted by environmental factors such as diet, pollutants, or infection [9] and lead to dysbiosis, defined as a disturbance in microbial composition or a disruption of its functions.
Fish Rearing
Bacterial challenge and fish sampling were approved by the local Ethics Committee for Animal Research of the University of Namur, Belgium (Protocol number: 16272 KE). Rainbow trout juveniles were obtained from the commercial Hatrival pisciculture (Hatrival, Belgium) and were randomly distributed into 100 L glass tanks (20 fish/tank) in a recirculating system. The average fish length was 19.7 ± 3.5 cm from head to tail and the average weight was 92.86 ± 22.86 g. Fish were allowed to acclimate to the new standard conditions for 1 month, i.e., 13.5 °C, 8 mg/L of O2, with controlled levels of ammonium and nitrite (<0.75 and 0.01 mg/L, respectively) and under a photoperiod of 12 h light:12 h dark (12L:12D). Fish were fed twice a day, 6 days per week, with commercial food (Coppens). No mortalities were reported during this acclimatization.
In Vivo Experiment: Bacterial Challenge and Fish Sampling
The Aeromonas salmonicida achromogenes strain used for the infection was provided by the Centre d'Economie Rurale (CER-group, Marloie, Belgium) and was originally isolated from an infected fish that exhibited furunculosis. The strain was identified as A. salmonicida achromogenes using the Biolog system and was deposited in the BCCM collection (Belgian coordinated collection of microorganisms) under code LMG p-31558. The virulence and LD50 were assessed during a preliminary test infection, in which various bacterial doses were applied to determine the LD50 of the targeted rainbow trout population (LD50 = 3.1 × 10⁷ CFU/100 g fish body weight) [25]. Bacteria were isolated from a unique colony on sterile brain heart infusion (BHI) solid medium (Sigma Aldrich, Saint-Louis, MO, USA) and then incubated at 22 °C for 18 h. Bacteria were cultured until reaching 3.1 × 10⁹ CFU/mL just before the start of the experiment. Prior to this bath infection, 10 fish were euthanized with an overdose of buffered MS222 (200 mg/L); gill, caudal fin, and skin mucus were sampled with individual swabs (Copan, Italy) for microbiota analyses, and the head kidney was kept for immune gene expression measurement. Then, fish were exposed to the A. salmonicida achromogenes strain for 1 h by bath infection at a final concentration of 10⁶ CFU/mL. Afterwards, the exposed fish were rinsed with clean water and placed in glass tanks (20/tank) corresponding to the different sampling time points (6, 24, and 72 h post-infection (hpi)) to avoid disruption of the microbiota that might be due to sampling at a previous time point. As sampling at different time points can induce stress due to net chasing, which in turn could affect the analyzed microbiota, only one tank per time point was used. Because each tank was part of a recirculating system, water from each tank was collected through a single filter and returned to the entire system, leading to identical water in each tank. The analysis of the water commensal microbiota did not show any significant differences in microbial diversity, confirming the absence of a tank effect between time points. Using the same protocol as for the fish euthanized before bacterial infection, ten fish were randomly sampled at 6 and 24 hpi, and 6 fish were sampled at 72 hpi. Microbial swabs and tissues were stored at −80 °C prior to DNA and mRNA extraction. For each time point, water samples (500 mL) were also collected in triplicate and centrifuged for 30 min at 21,000× g, and the pellets were stored at −80 °C to ensure that changes in individual microbiota were not due to changes in the bacterial community of the water after the bath infection. A control group that was not exposed to the bacteria was also followed, using three rainbow trout (skin and gill mucus), to monitor the changes that might occur naturally in the different microbiotas within 72 h. Throughout the experiment, fish welfare was evaluated, and fish that had reached the humane endpoints were euthanized; in particular, fish exhibiting several symptoms and advanced furunculosis were euthanized. All sampled fish were tested for the presence of A. salmonicida achromogenes using expression of its specific vapa gene in head kidney samples.
DNA Extraction, PCR and 16S Amplicon Sequencing
DNA extractions were performed using a modified protocol of the QIAGEN DNeasy Blood and Tissue Kit (Hombrechtikon, Switzerland) for swabs and pellets from water samples. The manufacturer's protocol was followed using twice the suggested solution volumes (except for the elution steps) and introducing a lysis step with lysozyme at 20 mg/mL (30 min at 37 °C) prior to the proteinase K lysis step (56 °C overnight). DNA was eluted in two steps in 50 µL of DNase-free water. Samples were quantified using the Qubit method prior to the first PCR step. The V1–V3 region of the 16S rRNA gene was amplified using the V1 forward primer (27F: 5′-AGAGTTTGATCMTGGCTCAG-3′) and the V3 reverse primer (534R: 5′-ATTACCGCGGCTGCTGG-3′), which contained an overhang adapter for the second PCR step. PCR products were synthesized by thermocycling with 2.5 µL of genomic DNA, 5 µL of amplicon PCR forward primer (1 µM), 5 µL of amplicon PCR reverse primer (5 µM), and 12.5 µL of 2× KAPA HiFi HotStart Ready Mix (Kapa Biosystems, Wilmington, MA, USA), starting with an initial step at 95 °C for 3 min, followed by 25 cycles of 95 °C for 30 s, 55 °C for 30 s, and 72 °C for 30 s, and a final extension step at 72 °C for 5 min. PCR amplicons were subsequently purified using the MSB Spin PCRapace kit (Invitek, Berlin, Germany) and the concentration was checked using the Qubit protocol. Samples were then sent to the GenomicsCore (KU Leuven, Belgium) to attach the dual indexes and Illumina sequencing adapters (San Diego, CA, USA) using the Nextera XT index kit (Juno Beach, FL, USA). The second PCR was performed by thermocycling with 5 µL of DNA, 5 µL of Nextera XT Index Primer 1, 5 µL of Nextera XT Index Primer 2, 10 µL of PCR-grade water, and 25 µL of 2× KAPA HiFi HotStart Ready Mix, starting with an initial step at 95 °C for 3 min, followed by 8 cycles of 95 °C for 30 s, 55 °C for 30 s, and 72 °C for 30 s, and a final extension step at 72 °C for 5 min. PCR products were purified using AMPure XP beads. Sequencing was performed on an Illumina HiSeq 2500 (250 + 50 bp).
Bioinformatics Pipeline and Microbiota Analysis
The sequencing process generated 34,309,182 reads. The sequence data were processed using different independent software tools. First, low-quality sequences were removed (no ambiguous bases, a length between 400 and 580 bp, a minimum overlap of 15 bp, and a quality threshold of 0.8 out of 1) using PANDAseq 2.10. After this process, 30,100,580 reads were retained. Sequences were then processed using the Quantitative Insights Into Microbial Ecology 1.9.1 (QIIME 1.9.1) pipeline [26]. De-noising and chimera detection were performed prior to the clustering step using USEARCH [27], after which 29,046,778 reads remained. Afterwards, the sequences were clustered at 97% sequence similarity using UCLUST in a de novo way. The representative sequence of each cluster was given a taxonomic assignment using the SILVA database and aligned on the representative set with PyNAST [28], and finally a phylogenetic tree was built with FastTree. The microbiota analyses were carried out with Phyloseq [29], an R package. Prior to any analysis, the data were transformed using CSS normalization to reduce the effect of heterogeneous library sizes, using the metagenomeSeq package [30]. For α-diversity, the Chao1 (richness estimator), Shannon, and Inverse Simpson indexes were calculated with Phyloseq tools, and generalized linear models (GLMs) were used to highlight statistical differences. The microbiome package was used to analyze the core microbiota, and this method provided a good representation of the different microbiotas analyzed. The core microbiota was defined as the bacteria present in all but one of the samples from the same tissue (i.e., present in 90% of the samples of each group), with a relative abundance of at least 0.5%. For β-diversity analyses, Permanova was performed to test differences between groups of samples with the vegan package [31]. Bray–Curtis [32] and Weighted Unifrac distances were used as quantitative methods and Unweighted Unifrac as the qualitative index [33]. Prior to Permanova, 'rare' OTUs (<0.1%) were removed from each sample in order to reduce noise in the dataset [34]. Nonmetric multidimensional scaling (NMDS) was used to visualize differences in the bacterial community based on the same distances discussed above. Negative binomial GLMs were applied to determine the OTUs that were differentially abundant across tissues throughout the bacterial infection. For these GLMs, we trimmed OTUs that were not present in at least 75% of the samples within the experimental group and with an abundance >0.1%. Models and p values for the multivariate generalized linear models were obtained using mvabund (R package) [35] and corrected by the likelihood ratio test for multivariate analysis. The inferences were performed using 999 bootstrap resamplings. The correlation network analysis was carried out in R based on correlations between the abundances of the OTUs for each microbiota separately. Significant Spearman correlations (|r| > 0.75 and corrected p value < 0.001) were calculated with the R packages Hmisc, psych, and igraph. Network layouts are based on the Fruchterman–Reingold algorithm, which helps to reveal clusters since it tends to place correlated OTUs next to each other.
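For illustration, the edge-selection step of such a correlation network could be sketched in Python as follows (the study itself used the R packages Hmisc, psych, and igraph); the multiple-testing correction applied in the study is omitted here for brevity.

```python
"""Build correlation-network edges from OTU abundance profiles."""
from itertools import combinations
from scipy.stats import spearmanr

def correlation_edges(otu_table, r_min=0.75, p_max=0.001):
    """otu_table: dict mapping OTU id -> list of abundances across samples.

    Keeps an edge when |Spearman r| > r_min and p < p_max, as in the study
    (which additionally corrected the p values for multiple testing).
    """
    edges = []
    for (a, xa), (b, xb) in combinations(otu_table.items(), 2):
        r, p = spearmanr(xa, xb)
        if abs(r) > r_min and p < p_max:
            edges.append((a, b, r))  # sign of r kept for plotting
    return edges
```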
The functional and metabolic profiles of our 16S amplicons were also predicted using Tax4Fun [36] at the most accurate level. Then, Permanova with the Bray–Curtis distance was performed to detect differences between the experimental groups.
Immune Gene Expression Analysis
The immune response of fish throughout the bacterial infection was assessed via immune gene expression in the head kidney. Total RNA was extracted from the head kidney using Tri-Reagent solution (Ambion, Thermo Fisher Scientific, Waltham, MA, USA) according to the manufacturer's instructions. The pellet was dried and resuspended in 100 µL of RNase-free water. The total RNA concentration was determined with a NanoDrop-2000 spectrophotometer (Thermo Scientific) and the integrity was checked using the Bioanalyzer 2100 (Agilent, Santa Clara, CA, USA) and on a 1% agarose gel. Genomic DNA was digested for 15 min at 37 °C with 1 U of rDNase I (Thermo Fisher Scientific). Then, 1 µg of total RNA was reverse-transcribed using the RevertAid RT kit according to the manufacturer's instructions (Thermo Fisher Scientific).
The relative expression of 9 coding transcripts was assessed by qPCR. Two housekeeping genes (β-actin and 18S rRNA) were used to normalize the expression of the target genes. BestKeeper results showed a coefficient of correlation of 0.926 (p < 0.001) with a standard deviation of 0.19 Cq for 18S rRNA, and of 0.941 (p < 0.001) with a standard deviation of 0.37 Cq for β-actin [37]. The gene expression of pro-inflammatory cytokines (il-1β, tnfα), antibacterial proteins (lysozyme (lyso), C3 protein of the complement system (c3-4)), myeloperoxidase from neutrophils (mpo), and the anti-inflammatory response (il-10, tgfβ) was evaluated. In addition, the relative expression of the transcript virulence array protein A (vapa), which is involved in the infection and virulence of A. salmonicida, was measured. The list of specific primers used for gene expression analysis is given in Table 1. qPCR reactions were carried out with SsoAdvanced™ Universal SYBR® Green Supermix (Bio-Rad Laboratories, Hercules, CA, USA) using a 1:100 dilution of the cDNA for target and reference genes. Primers for target genes were used at a final concentration of 0.5 µM. The thermal conditions were 3 min at 95 °C for the initial step, followed by 40 cycles of 95 °C for 30 s and 60 °C for 30 s. qPCR analyses were performed with a StepOnePlus device (Applied Biosystems). The relative gene expression was calculated according to the relative standard curve method based on the geometric mean of the housekeeping genes [38]. Mean Ct results are available in the Supplementary Materials S1.
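A minimal sketch of the relative standard curve method in Python, assuming per-gene standard curve parameters (slope and intercept of Ct versus log10 quantity) that are placeholders rather than values from the study:

```python
"""Relative standard curve quantification with two reference genes."""
from math import sqrt

def quantity(ct, slope, intercept):
    """Invert the standard curve Ct = slope * log10(Q) + intercept."""
    return 10 ** ((ct - intercept) / slope)

def relative_expression(ct_target, ct_ref1, ct_ref2, curves):
    """Normalize the target quantity by the geometric mean of both
    housekeeping genes, as done in the study; 'curves' maps each gene to
    its (slope, intercept) pair from a dilution series."""
    q_t = quantity(ct_target, *curves["target"])
    q_r = sqrt(quantity(ct_ref1, *curves["18S"]) *
               quantity(ct_ref2, *curves["actb"]))  # geometric mean
    return q_t / q_r
```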
Statistical Analysis
All statistical analyses for the immune assessments were performed in R using GLMs with the appropriate distribution in order to obtain normality and homoscedasticity of residuals. We tested the effect of the mucus location, the time after bath infection, and their interaction. Post-hoc analyses were performed with the multcomp package using Tukey tests.
In Vivo Experiment
The health of trout exposed to A. salmonicida achromogenes in each tank was monitored throughout the experiment. Several signs of hemorrhage at the base of the fins, small ulcers on the sides of the fish, and mortalities were observed. These symptoms are characteristic of the furunculosis induced by this pathogen (see Baset (2022) for more information on the description and visualization of the symptoms). No symptoms of disease or mortality were observed in the control fish.
The results for the skin and gill microbiota of the control fish confirmed the stability of the microbiota over time, meaning that the alpha diversity remained stable.
Bioinformatics Analysis
After quality trimming, chimera removal, and OTU assignment filtering, between 150,000 and 672,000 sequences per sample were obtained. These sequences were clustered into 3022 OTUs, 313 genera, 191 families, 103 orders, 52 classes, and 30 phyla.
α-Diversity Analysis
The evenness of the different microbiotas was calculated through the Shannon and Inverse Simpson indexes, and the richness was estimated through the Chao1 index; values are given in Table 2. α-diversity measurements (Shannon, Inverse Simpson, and Chao1) were evaluated using GLMs with a Gaussian distribution. The Shannon and Inverse Simpson GLMs revealed that the bacterial infection did not influence α-diversity, but that each mucosal epithelium differed depending on the measurement used (p value < 0.001 for both the Shannon and Inverse Simpson indexes). Shannon measurements indicated that every microbiota differed from the others, with water showing the highest diversity, followed by gill mucus, then fin mucus, and finally skin mucus. In contrast, the Inverse Simpson index showed that gill mucus and water did not differ, and that the microbiotas of the gill mucosa and the caudal fin were not significantly different. Chao1 richness measurements did not highlight any significant effect (p value > 0.05), but the results followed the same pattern as observed with the Shannon and Inverse Simpson indexes.
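For reference, the formulas behind these three measurements can be computed directly from a vector of OTU counts (the study used phyloseq; the bias-corrected Chao1 variant is used below):

```python
"""Shannon, Inverse Simpson, and Chao1 from one sample's OTU counts."""
import numpy as np

def alpha_diversity(counts):
    counts = np.asarray([c for c in counts if c > 0], dtype=float)
    p = counts / counts.sum()
    shannon = -(p * np.log(p)).sum()            # H' = -sum p_i ln p_i
    inv_simpson = 1.0 / (p ** 2).sum()          # 1 / sum p_i^2
    f1, f2 = (counts == 1).sum(), (counts == 2).sum()
    # Bias-corrected Chao1: S_obs + F1(F1-1) / (2(F2+1))
    chao1 = len(counts) + f1 * (f1 - 1) / (2 * (f2 + 1))
    return shannon, inv_simpson, chao1
```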
Structure of the Rainbow Trout Core Microbiota throughout Bacterial Infection
The composition of the microbiota during bacterial infection was analyzed (Figure 1). The core microbiota represented 61.5, 61.8, and 59.4% of the OTUs present in the water microbiota at pre-infection, 6 hpi, and 72 hpi, respectively; 50.3, 55.9, and 69.5% for the gill mucus; 92.8, 92.7, and 92% for the skin; and 76.9, 79.2, and 80.7% for the caudal fin. The core bacterial community of the water did not seem to be impacted by the bacterial infection, except for Flavobacteriaceae flavobacterium, which slightly increased 72 h post-infection.
The gill microbiota appeared to be directly impacted by the infection, since an increase in Burkholderiaceae polynucleobacter and Pseudomonaceae pseudomonas was observed 6 h after infection. Moreover, at 72 hpi, F. flavobacterium rose to 45% of the core microbiota composition. In contrast, a decrease in Comamonadaceae as well as in B. polynucleobacter was highlighted (from 21 to 13% for the Comamonadaceae and from 46 to 13% for B. polynucleobacter).
Regarding the core microbiota of the skin, no impact of the bacterial exposure was observed during the first 6 h, even though there was a slight increase in the B. polynucleobacter population coupled with a decrease in F. flavobacterium. The core microbiota of the mucosal skin was much more impacted 72 h post-infection: we observed a large decline in the B. polynucleobacter population along with a large increase in the F. flavobacterium population. We also highlighted a notable increase in the Oxalobacteraceae undibacterium population, even though its percentage remained quite low.
The core microbiota of the mucosal fin showed a pattern similar to that of the skin, i.e., no real difference in the first 6 h followed by a significant impact at 72 hpi. However, the decrease in the B. polynucleobacter population seemed to be correlated with an increase in the Comamonadaceae population. An increase in the Chromatiaceae rheinheimera, P. pseudomonas, and O. undibacterium populations was also noted at 72 hpi. Despite the bacterial infection by A. salmonicida achromogenes, this bacterium was not detected within the core microbiota of the different epithelia.
β-Diversity Analysis
Phylogenetic diversity (Weighted Unifrac), phylogenetic richness (Unweighted Unifrac), and Bray–Curtis diversity were also assessed to compare the bacterial communities. Permanova using these indexes indicated a strong effect of the mucus location as well as of the bacterial infection by A. salmonicida (Mucus_location × Time post-infection; Weighted Unifrac: F statistic 3.813, R² 0.0924, p value 0.001; Unweighted Unifrac: F statistic 1.3697, R² 0.06384, p value 0.001; Bray–Curtis: F statistic 3.888, R² 0.1055, p value 0.001). These effects were therefore explored through pairwise Permanova comparisons using a Benjamini–Hochberg correction [35], which controls the false discovery rate. All results are presented in Supplementary Materials S2–S4. Bray–Curtis and Weighted Unifrac Permanovas suggested that the mucus location has a strong influence on the bacterial communities, and showed that the gill and skin microbiota at 6 and 72 h post-infection were significantly different from the mucus before the infection, whereas the fin microbiota was only modified at 72 hpi. Furthermore, pairwise comparisons using the Unweighted Unifrac distance suggested that there was no significant difference between the different mucus locations, but a strong effect of the bacterial infection on the bacterial communities.

Differentially abundant taxa and correlation networks were used to detect OTU variations caused by the bacterial infection. For these GLMs, OTUs that were not present in at least 75% of the samples were trimmed, leaving 176, 218, and 141 OTUs for the gill, skin, and fin microbiotas, respectively. Among those OTUs, 44 were shared across all microbiotas. Models and p values for the multivariate generalized linear models were obtained using mvabund (R package) [32] and corrected using the likelihood ratio test for multivariate analysis. The overall microbial composition showed a significant impact of the bacterial infection (analysis of deviance); OTUs denovo 802505, 376261, 786594 and 1638662, and five OTUs from the caudal fin core microbiota (denovo 2455659, 802505, 2594052, 115553 and 955978), were impacted by the bath infection. The taxonomy related to these OTUs can be seen in Figure 3 at the heatmap level.
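As a reminder of the quantitative index used in these comparisons, the Bray–Curtis dissimilarity between two abundance profiles can be computed as:

```python
"""Bray-Curtis dissimilarity between two samples' OTU abundance vectors:
BC = sum|x_i - y_i| / sum(x_i + y_i)."""
import numpy as np

def bray_curtis(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.abs(x - y).sum() / (x + y).sum()
```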
A correlation network approach was also developed to assess the dysbiosis induced by the bath infection for each microbiota, using the time post-infection as a variable. In the gill correlation network (Supplementary Materials S5), three complex networks can be distinguished, with the largest in the center. This large network seems to be symmetrically divided into two subnetworks by bacteria that negatively influence the other subnetwork. Among these bacteria, some were found to be modified by the bath infection (C. rheinheimera, X. arenimonas, Rhizobiaceae, M. mycobacterium, and Saprospiraceae). To this extent, when the abundance of these bacteria is altered, the other part of the network would be expected to be altered in the opposite way. However, this is not the case, and thus the bacterial challenge created an imbalance in the composition and correlations of the microbiota. At the bottom, a smaller network contains several putative pathogenic bacteria (Pseudomonas, Acinetobacter, and Flavobacterium), and some of the bacteria present in this network increased dramatically after the bacterial infection.
In the skin correlation network (Supplementary Materials S6), one large network can be distinguished, mostly constituted by Polynucleobacter bacteria (110 out of the 184 OTUs present in this subnetwork), and most of the edges represent positively correlated bacteria. Furthermore, the bacteria that changed significantly following the bacterial infection belonged only to this subnetwork; the other networks did not show any changes following the bath infection.
In the caudal fin correlation network (Supplementary Materials S7), four complex networks were observed, three of which were slightly modified after the bacterial challenge. The impact of the bath infection on the bacterial network of each mucus location was also quantified: the number of edges (bacterial correlations) linked to nodes (OTUs) that changed due to the bacterial infection was counted and divided by the total number of correlations in the network. The resulting ratio, corresponding to the percentage of the bacterial network impacted by A. salmonicida, was termed the dysbiosis ratio. For the gill microbiota network, the dysbiosis ratio is 14.53% (42 edges impacted out of 289); for the skin microbiota network, 23.95% (358 edges impacted out of 1495); and for the fin microbiota network, the dysbiosis ratio is the highest at 62.18% (171 edges impacted out of 275).
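The dysbiosis ratio described above can be sketched as follows, assuming the network is held as a list of edges and that the set of significantly changed OTUs comes from the negative binomial GLMs:

```python
"""Fraction of network edges touching at least one infection-changed OTU."""

def dysbiosis_ratio(edges, changed_otus):
    """edges: list of (otu_a, otu_b, r) tuples; changed_otus: set of OTU ids
    flagged as differentially abundant after infection."""
    impacted = sum(1 for a, b, _ in edges
                   if a in changed_otus or b in changed_otus)
    return impacted / len(edges)
```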
Functional Analysis through Tax4FUN
The functional profiles of the different microbiotas were assessed using Tax4Fun [33], with 65.3% (±25.5%) of all 16S rRNA reads mapped to KEGG organisms in the analyses. A total of 6324 KEGG pathways at the highest level were found and then regrouped into 11 groups of basic KEGG pathways. Permanova highlighted a significant effect of the interaction between the time post-infection and the mucus location (p value < 0.001), which is illustrated by the NMDS in Figure 4a.
Evaluation of the Immune Status of the Fish
First, the virulence of the bacterial strain used was assessed via vapa gene expression; the result is presented in Figure 5. The expression level of vapa was significantly higher in fish from the 6, 24, and 72 hpi groups compared with the pre-infection group, in which no expression was detected (p value < 0.001). The expression levels of the transcripts coding for pro-inflammatory (il1β and tnfα) and anti-inflammatory (il10 and tgfβ) cytokines and antimicrobial compounds (c3, mpo, and lyso) are presented in Figure 5. il1β gene expression was significantly enhanced in fish at 24 and 72 hpi. tnfα expression increased significantly at 6 hpi compared with the pre-infection group but was already decreasing by 24 hpi, as this group no longer differed from the pre-infection group (p value < 0.01). For the anti-inflammatory genes, there was no significant effect of the infection compared with the pre-infection group. The expression of the antimicrobial compound genes was significantly higher in the 24 hpi group than in the pre-infection group (c3, mpo, and lyso: p value < 0.001, p value < 0.01, and p value < 0.001, respectively).

Figure 5. Evaluation of the immune status of rainbow trout through qPCR on the head kidney before infection and after bacterial infection (6, 24, and 72 hpi). The graphs represent vapa, il1β, tnfα, il10, tgfβ, c3-4, mpo, and lyso mRNA expression. Letters (a and b) at the top of the plots indicate significance between groups, and the red dots in the boxplots represent the mean of the associated group. Black dots represent outlier fish.
Discussion
In this study, we evaluated the impact that an infection by Aeromonas salmonicida achromogenes can induce on the microbiota of different tissues through 16S rRNA sequencing.
Because the mucus layer is the primary interface between the environment and the host, fish have developed lymphoid tissues associated with this mucus layer. Although some studies highlight the impact of parasitic infections on salmonid alpha diversity [13,39], in this work no significant difference due to the bacterial infection was observed. Reid et al. [39] also studied the impact of another pathogen, the salmonid alphavirus (SAV), on the bacterial community of salmon skin, where they did not detect significant differences due to high within-group variation. Those studies focused on the impact of parasites and viruses, and a bacterial infection could therefore have affected bacterial communities differently, but this was not the case.
However, strong differences in diversity between body sites were observed in the bacterial community, with higher diversities for water and gills. Several studies have examined the diversity between different body sites in fish. Lowrey et al. [40] performed a topographic mapping of the different tissues of rainbow trout and found that the skin microbiota has a higher species diversity than the gill microbiota, which contradicts the present results. Another study conducted on Atlantic salmon (Salmo salar) [41] found high species diversity in the water and lower diversity in the skin microbiota, with values similar to ours. By combining the richness (Chao1) and diversity (Shannon and Inverse Simpson) indices calculated in the present study, we can suggest that the composition of the different microbiotas is highly diverse but dominated by a few taxa.
In addition to the differences in alpha diversity between body sites (higher in the gills and lower for the skin and fin), we revealed a change in β-diversity as well. We showed that the communities at different body sites differ from each other, suggesting that all these sites represent specialized ecological niches with their own communities. Moreover, an impact of the bacterial infection on the composition of the different communities was highlighted, but at different times depending on the body site. The early changes in the gill bacterial community may be an indication that this body site is a major site of contact and entry for A. salmonicida during infection. The gills are a highly vascularized organ due to their role in the uptake of dissolved oxygen from the water and may therefore be targeted by the pathogen for infection [24]; the gill surface is primarily covered by a single layer of epithelium, unlike the skin, which has a much thicker epithelium [42]. Such primary infection of the gills was observed for Yersinia ruckeri in rainbow trout [17]. Ohtani et al. [18] showed similar results using IHC and optical projection tomography, but also proposed the skin as a secondary entryway for the bacterial infection. These results are in accordance with our findings, which show that the gill microbiota is the first to be impacted by the bacterial infection.
Moreover, the bacterial community in the water was not altered by the changes in fish microbiota due to the bacterial infection and the potential release of the pathogen into the water. Qualitatively, no differences in microbiota between body sites were shown, but rather an effect of the bacterial infection alone. This suggests that the bacterial compositions of the body sites were similar but differed in abundance, and that the bacterial infection caused some bacteria that were initially absent from the microbiota to appear and others to disappear. Therefore, we can state that the bacterial infection caused by A. salmonicida achromogenes resulted in dysbiosis. Bottleneck effects caused by the bacterial infection were also observed, leading to a reduction in inter-individual bacterial diversity in each microbiota. This loss of bacterial diversity may be the starting point for infection by opportunistic pathogens, as they have much more space to grow and expand. Overall, the core microbiotas of the gills and skin are largely dominated by the genus Polynucleobacter, a bacterium of the Proteobacteria phylum known to be highly present in freshwater environments [43]. Polynucleobacter is also abundant in the fin microbiota but less dominant, since bacteria from the Comamonadaceae family (Proteobacteria phylum) take a bigger share. Interestingly, we first observed an increase in the Proteobacteria phylum 6 h after the bacterial infection; however, its relative abundance dropped drastically 72 h after infection. In a recent study, Zhan et al. showed that rainbow trout infected by infectious hematopoietic necrosis virus also exhibited a decrease in Proteobacteria accompanied by an increase in Actinobacteria [44].
Here, the decrease in Proteobacteria relative abundance 72 h after the bath infection was mainly due to the decrease in bacteria from the Polynucleobacter genus and the Comamonadaceae family. Although the Proteobacteria phylum encompasses many different bacteria with different lifestyles, it is recognized that a decrease in the proportion of Proteobacteria can lead to dysbiosis [45,46]. The core microbiota of the caudal fin is mainly dominated by Comamonadaceae and Flavobacteriaceae. The decrease described above is coupled with a large increase in the Flavobacterium and Pseudomonas genera, which are well known to contain opportunistic pathogens such as Flavobacterium psychrophilum and F. columnare, and Pseudomonas putida, P. aeruginosa, and P. fluorescens [47,48]. Moreover, an increase in the Aeromonas genus was observed in the core microbiota, concomitantly with the expression of the vapa gene in the head kidney, which confirmed the presence and virulence of A. salmonicida achromogenes. These observations are supported by a previous study highlighting that A. salmonicida infection is associated with an increase in opportunistic pathogens on the skin of Atlantic salmon [49]. Such an expansion of opportunistic pathogens in the microbiota has been described following infection by a virus [39] but also following a parasitic infection [13]. Moreover, we observed an increase in O. undibacterium in every microbiota after the bacterial challenge. Undibacterium has been described as a biomarker of pH stress in tambaqui [45] and has also been reported to increase in the feces of rats with experimental autoimmune encephalomyelitis (EAE) [50]. Therefore, Undibacterium may be an interesting candidate for following the stress status of fish while studying the microbiota.
Although the impact of the bacterial infection seems less obvious on the caudal fin and skin microbiota networks, the changes observed in the gill microbiota correlation network are clearer. The observed dysbiosis is not only due to an increase in opportunistic pathogens but also to changes in the relationships within the community: many bacteria belonging to the different core microbiotas increased or decreased after infection with A. salmonicida achromogenes. These observations were reinforced by the dysbiosis ratios, as up to half of a bacterial network or more was modified, showing a huge effect of A. salmonicida achromogenes. It has already been shown that stress [16] or pathogen infection [44,46] induces multiple modifications in the relationships between bacterial populations. Understanding the dynamics within a bacterial community while the host is undergoing infection is crucial for the development of pro-/pre-/synbiotics in aquaculture.

Tax4Fun was used to assess the gene functions associated with the bacterial communities of the different microbiota locations. First, this method confirmed that the caudal fin, gill, and skin microbiotas, as well as the bacterial community of the water, constitute specialized ecological niches. The most prominent differences are between the functional profile of the skin, with higher percentages of amino acid and energy metabolism, and that of the fin, which has a higher percentage of carbohydrate metabolism. However, unlike the previous analyses, only the gill microbiota was impacted by the bacterial challenge, while the microbiotas of the other organs were not modified. These microbiota modifications are therefore in relation with the observed changes in immune status.

The expression levels of pro-inflammatory cytokines (il-1β and tnf-α) increased 6 h after the bacterial infection, indicating that the fish were in a state of stress and trying to cope with the infection, and expression never returned to control levels during the experiment. This suggests that the bacterial infection induced strong inflammatory responses, combined with an inability of the fish to exit this state by producing anti-inflammatory cytokines (il-10 and tgf-β). This hypothesis is supported by another study [51] that highlighted a strong expression of il-1β and tnf-α early in infection by A. salmonicida achromogenes. Schwenteit et al. [52] showed an increase in pro-inflammatory cytokines in the head kidney of the Arctic charr (Salvelinus alpinus, L.), a species related to rainbow trout, 3 days post-infection by A. salmonicida. Il-1β and tnf-α are from two different cytokine families with overlapping functions, notably the induction of other cytokines from activated lymphocytes [53].
Regarding the regulatory mechanisms, we did not observe an effect of the bacterial infection on the anti-inflammatory cytokines in the head kidney. Brietzke et al. [51] reported a result similar to ours, with tgf-β remaining unmodified after bacterial infection by A. salmonicida. Interestingly, A. salmonicida exerts strong immunosuppressive effects through il-10 expression in cultured head kidney leukocytes [54]. This strong immunosuppression through il-10 over-expression was not confirmed by our in vivo experiment.
The expression of antimicrobial compound (c3, mpo, and lysozyme) mRNAs showed a similar pattern, namely a delayed response at 24 h post-infection followed by a return to the control state, except for mpo.
Conclusions
In this study, a bath infection with A. salmonicida achromogenes significantly altered the gill, skin, and caudal fin microbiota in rainbow trout. In particular, beta diversity measurements showed that the gill microbiota was altered only 6 h after infection, with the skin and caudal fin microbiota affected subsequently, indicating that the gills may be the gateway for this infection. Negative binomial GLMs, associated correlation networks, and dysbiosis ratios clearly revealed significant changes in the different microbiota. Along with the dysbiosis, the immune system attempted to cope with the infection by producing proinflammatory cytokines, reflecting high stress levels in the fish. A delayed response in the expression of antimicrobial compounds was also observed 24 h after infection. Therefore, we demonstrated that furunculosis not only impacts the immune system but also induces dysbiosis in a tissue- and time-dependent manner post-infection.

Author Contributions: B.R. carried out the molecular genetic studies, participated in the sequence alignment, carried out the bioinformatics analyses, and wrote the manuscript. V.C. helped design the study, carried out the molecular genetic studies, and wrote the manuscript. N.D. participated in the design of the study and in the bioinformatic training of B.R., and helped to draft the manuscript. P.K. participated in the design of the study and helped to draft the manuscript. All authors have read and agreed to the published version of the manuscript.
Funding: This research was supported by the Fonds de la Recherche Scientifique (FNRS) within the framework of Baptiste Redivo's thesis, which was funded by a FRIA grant.

Data Availability Statement: Please contact the authors for data requests.
"Biology",
"Environmental Science"
] |
Silicon-based fluorescent platforms for copper(II) detection in water
The potential of silicon-based fluorescent platforms for the detection of trace toxic metal ions was investigated in an aqueous environment. To this aim, silicon chips were first functionalized with amino groups, and fluorescein organic dyes, used as sensing molecules, were then covalently linked to the surface via the formation of thiourea groups. The obtained hybrid heterostructures exhibited high sensitivity and selectivity towards copper(II), a limit of detection compatible with the recommended upper limits for copper in drinking water, and good reversibility using a standard metal-chelating agent. The fluorophore-analyte interaction mechanism at the basis of the reported fluorescence quenching, as well as the potential for performance improvement, were also studied. The herein presented sensing architecture allows, in principle, tailoring of the selectivity towards other metal ions by proper fluorophore selection, and provides a favorable outlook for the integration of fluorescent chemosensors with silicon photonics technology.
Introduction
Several techniques are currently employed for detecting toxic metal ions at low concentrations in aqueous and biological environments, including Inductively Coupled Plasma Atomic Emission Spectrometry (ICP-AES),1 Inductively Coupled Plasma Mass Spectrometry (ICP-MS)2 and Atomic Absorption Spectrometry (AAS).3 These techniques provide excellent detection limits but involve high-cost instrumentation, time-consuming sample preparation and highly trained personnel. Colorimetric and fluorescence sensor devices are fast-growing technologies showing remarkable advantages over conventional techniques, such as fast response times, non-destructive analysis and remote operation, while attaining competitive performances in terms of detection limits, sensitivity, selectivity and reversibility by rational device design.[4][5][6][7][8] Fluorescent probes, which have found widespread use in biomedical applications,9,10 are being increasingly studied for real-time and remote environmental monitoring such as the detection of toxic metals in water and biological media.[11][12][13][14][15] In a molecular-type approach, a fluorescent probe, which represents the sensitive part of the fluorescent device, basically consists of a recognition (binding) moiety linked to or included in a fluorescent species (usually an organic dye) whose emission properties, such as peak wavelength and quantum yield, are modified by the interaction with the analyte. The chemical nature of both the binding and fluorescent units can be tailored to optimize the probe for any specific analyte and/or application.
Most fluorescent probes are designed to work in solution. However, depending on the application, an improvement in the sensing performance can be obtained by covalently anchoring fluorescent probes to different substrates, including biomolecules,16,17 silica-based nanocomposites[18][19][20][21] and metal-organic frameworks.[22][23][24] Grafting fluorescent probes onto silicon-based planar platforms represents a promising route towards sensor integration with silicon photonics technology and the realization of Lab-on-Chip devices.25,26 To this aim, the sol-gel technique based on organosilane derivatives27,28 is one of the most suitable strategies for fabricating robust, low-cost and functional silicon-based platforms. In particular, the use of 3-aminopropyltriethoxysilane (APTES) as the organosilane derivative makes it possible to introduce amino groups capable of covalently linking fluorophores to silicon surfaces, via the formation of amides, ureas, thioureas and imines as linking functional groups.[29][30][31][32] Although step-by-step silanization of silicon substrates has already been investigated for the fabrication of fluorescent platforms,33-35 the potential of these platforms for the detection of trace metal ions in an aqueous environment is yet to be demonstrated.
A fundamental study of silicon-based fluorescent platforms for the detection of trace metal ions in an aqueous environment is herein presented. Fluorescein isothiocyanate (FITC) was covalently linked to APTES-prefunctionalized silicon substrates via thiourea formation to realize silicon-based on-chip devices integrating both recognition and fluorescent units. On the one hand, FITC is highly suitable to serve as a prototypical fluorescent sensing unit because of its remarkable photophysical properties;36-38 on the other hand, thiourea groups have been demonstrated to be effective in binding metal cations such as Cu(II), Cd(II), and Hg(II) in water, in fact enabling metal-ion fluorescent sensors with competitive performances.22,[39][40][41][42][43] Selectivity to copper(II), high sensitivity and a low limit of detection were reported. Successful surface regeneration was achieved using the ethylenediaminetetraacetic acid (EDTA) chelating agent, yielding insight into the mechanism responsible for metal binding.
Among heavy metal ions, copper plays a crucial role in different biological processes,44 being essential for the functioning of many enzymes such as cytochrome c oxidase (COX) and dopamine β-hydroxylase.45 Its accumulation in the human body can be harmful and even lead to genetic disorders such as Wilson's disease, and to neurodegenerative disorders including Alzheimer's and Parkinson's disease.[46][47][48] Consequently, the EPA (US Environmental Protection Agency) and the WHO (World Health Organization) have set the maximum copper level in drinking water to 1.3 and 2 mg L⁻¹, respectively. Therefore, the search for innovative metal-ion sensing devices still represents a challenge in the environmental field.
Results and discussion
Reagents and silicon surface modification steps are illustrated in Fig. 1. (100)-oriented silicon chips were first coated with ultrathin silica layers by (i) the sol-gel technique using tetraethoxysilane (TEOS) as the silica precursor (Scheme S1, ESI†) and (ii) the spin-coating deposition method (Step 1). Subsequently, amino groups were introduced on the Si@SiO2 surface by using APTES (Step 2). Finally, FITC was covalently linked to the Si@SiO2@APTES surface via thiourea formation (Step 3).
Freshly prepared samples were investigated following a multivariate approach, namely contact-angle (CA) measurements, atomic-force microscopy (AFM), and reflectance and fluorescence spectroscopy. In this way, it was possible to monitor how each functionalization step modified the morphological and optical properties of the silicon surface.49 Surface wettability tests were performed by water CA measurements. The data revealed a decreasing surface wettability upon amino-modification by APTES and subsequent FITC linkage. In fact, the mean CA, which was found to be 38.3(0.5)° on the Si@SiO2 chip, increased up to 54.4(0.7)° after silanization with APTES50 and then up to 72.1(0.9)° upon functionalization with FITC51 (Fig. 2). These results could be easily rationalized as a decreased surface hydrophilicity due to the functionalization of the silica layer (bearing hydroxy groups, -OH) with the amino (-NH2) groups of APTES and then with FITC, which contains hydrophobic aromatic rings (Fig. 1b). Measurements carried out on different points of the sample surface confirmed the homogeneous deposition of the layers. This conclusion was supported at the nanoscale by AFM height images, showing a lack of topographic reliefs and root mean square (RMS) surface roughness values lower than 1 nm (Fig. S1, ESI†).
Vis reflectance spectroscopy was used to evaluate the thickness of the layers deposited through the various functionalization steps. Measurements were performed in hemispherical geometry to collect both specular and diffuse reflectance. Results are summarized in Fig. 3. The spectral reflectance ratio R_i(λ)/R_{i−1}(λ) was calculated after fabrication Step i. Silica deposition resulted in a 10-15% decrease in reflectance across the visible spectrum (Fig. 3a), as the silica layer acted as an antireflection coating on silicon. Knowing the index of refraction of silica from previous work done with similar sol-gel precursor concentrations and physical deposition parameters,52 it was possible to calculate the thickness of the silica layer with a high degree of accuracy. The estimated value of 28(1) nm implies that the reflectance was very sensitive to any increase in thickness caused by subsequent functionalization steps. In the actual experimental setup, this sensitivity turned out to be ≈1 nm. In the inset of Fig. 3a, the reflectance ratio R/R0, where R (R0) is the reflectance of the Si@SiO2 structure (bare silicon chip) at 700 nm, is plotted against the SiO2 layer thickness.
[Figure 1 caption: Step 1: surface silica layer deposition bearing hydroxy groups via the sol-gel method using TEOS (left); Step 2: amino-functionalization of the silica surface with APTES (centre, amino groups highlighted in blue); Step 3: functionalization with FITC (right), leading to the formation of thiourea groups (highlighted in red).]
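The antireflection behavior and the quoted ≈1 nm thickness sensitivity can be illustrated with the standard single-film (Airy) reflectance formula. The sketch below is not the authors' analysis code; the refractive indices used (n ≈ 1.46 for sol-gel silica, n ≈ 3.8 for silicon near 700 nm) are assumed typical values, not values from the paper.

```python
import numpy as np

def reflectance(d_nm, lam_nm, n0=1.0, n1=1.46, n2=3.8):
    """Normal-incidence reflectance of a single film (index n1, thickness d)
    on a substrate (index n2), via the standard Airy single-layer formula."""
    r01 = (n0 - n1) / (n0 + n1)          # air/film Fresnel coefficient
    r12 = (n1 - n2) / (n1 + n2)          # film/substrate Fresnel coefficient
    phase = np.exp(2j * 2 * np.pi * n1 * d_nm / lam_nm)
    r = (r01 + r12 * phase) / (1 + r01 * r12 * phase)
    return abs(r) ** 2

R0 = reflectance(0.0, 700.0)    # bare silicon chip
R = reflectance(28.0, 700.0)    # 28 nm silica coating
print(f"R/R0 at 700 nm: {R / R0:.3f}")                       # < 1: antireflection effect
print(f"change in R/R0 per extra nm: {(reflectance(29.0, 700.0) - R) / R0:.4f}")
```

The slope of R/R0 versus thickness around 28 nm is what makes a monolayer-scale (≈1 nm) increase detectable in the ratiometric measurement.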
The high sensitivity of the ratiometric reflectance to the thickness of the APTES and FITC layers was readily deduced from the slope of the black line at the intersection with the blue and red vertical bars, depicting the APTES and FITC layers, respectively. Fig. 3b and c show that the reflectance remained practically unchanged upon functionalization with APTES and FITC, allowing us to conclude that these functionalization steps were well-controlled and involved the deposition of monolayers. Only a very small spectral dip was observed in the reflectance ratio measured after FITC deposition (Fig. 3c). This feature was found to be compatible with the optical absorption of a dense monolayer of FITC in the visible.
Furthermore, combined continuous-wave/time-resolved fluorescence spectroscopy was applied to check for surface functionalization with FITC and to gain information on the xanthene-centred fluorescence efficiency upon FITC linkage to the silicon surface. The fluorescence spectrum of the sensing device is shown in Fig. 4a. The main emission peak, centred at 540 nm (red curve), was readily assigned to the FITC xanthene unit, while comparative measurements on the Si@SiO2 bare chip (grey curve) allowed for attributing the short-wavelength shoulder, centred at 420 nm, to the silica fluorescence spectrum. For reference, emission spectra were also collected for FITC dissolved in water and ethanol (Fig. 4b). FITC grafting resulted in a loss of fluorescence efficiency, as deduced from the stretched exponential decay of the fluorescence intensity with reduced lifetime as compared to the nearly monoexponential decay observed in solution (Fig. 4c).
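A stretched-exponential (Kohlrausch) fit of the type used to describe the grafted-FITC decay can be sketched as follows; the synthetic trace, the 180 ps seed value, and the stretching exponent below are illustrative placeholders, not the measured streak-camera data.

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, a, tau, beta):
    """Kohlrausch decay; beta = 1 recovers a monoexponential."""
    return a * np.exp(-(t / tau) ** beta)

# Synthetic trace standing in for the streak-camera data (time in ps)
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1000.0, 200)
trace = stretched_exp(t, 1.0, 180.0, 0.6) + rng.normal(0.0, 0.01, t.size)

(a, tau, beta), _ = curve_fit(stretched_exp, t, trace, p0=(1.0, 200.0, 0.8))
print(f"tau = {tau:.0f} ps, beta = {beta:.2f}")
```

A fitted beta well below 1 signals a distribution of decay rates, as expected for fluorophores experiencing a range of local environments on the surface.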
A preliminary screening of the sensing performance was carried out on several metal cations (Al(III), Cu(II), Zn(II), Ag(I), Cd(II), Hg(II), Pb(II)) in MOPS buffer solutions at pH = 7.2, at a fixed molar concentration of 1.3 × 10⁻⁴ mol L⁻¹ (Fig. 5a). Remarkably, among the investigated ions, only Cu(II) resulted in a neat fluorescence turn-off behaviour (see also the fluorescence spectra in Fig. 5b), with a significant response on a short time scale of the order of tens of seconds. Sensitivity toward Cu(II) was determined from fluorescence-intensity titration curves as a function of Cu(II) concentration. In the low-concentration regime below 5 ppm, the decrease in fluorescence intensity measured for increasing concentration was fitted with a linear decay function, yielding a signal loss coefficient of 0.15 ppm⁻¹ or, equivalently, a 15% reduction at the 1 ppm level (Fig. 5c). Stabilization of the fluorescence quenching was observed for Cu(II) concentrations larger than 15 ppm. Finally, regeneration tests carried out in EDTA water solution showed stabilization of the fluorescence signal recovery level at ≈60% already after the second regeneration cycle (Fig. 5d, where F0 refers to the initial fluorescence intensity, at 0 ppm, before the regeneration test started), thereby demonstrating good reversibility of the surface-analyte interaction.
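As a sketch of how the signal loss coefficient follows from a titration, a linear fit over the low-concentration regime can be done as below; the data points are hypothetical values consistent with the reported 0.15 ppm⁻¹ slope, not the measured curve.

```python
import numpy as np

# Hypothetical low-concentration titration points (not the measured dataset)
c = np.array([0.0, 1.0, 2.1, 3.0, 4.2])        # Cu(II) concentration, ppm
f = np.array([1.00, 0.85, 0.69, 0.55, 0.37])   # normalized fluorescence F/F0

slope, intercept = np.polyfit(c, f, 1)          # linear decay fit below 5 ppm
print(f"signal loss coefficient: {-slope:.2f} ppm^-1")
```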
Several Cu(II) sensing mechanisms could in principle be envisaged for the present hybrid platforms. Fluorescence sensing of Cu(II) by a rhodamine B derivative linked to silica nanoparticles through a thiourea group was previously shown to yield Cu(II) complexes exhibiting an optical absorption band partly overlapping with the fluorophore emission band, thus acting as a fluorescence quencher via photoinduced Förster resonance energy transfer (FRET).18 However, no additional absorption bands were detected by ratiometric reflectance after dipping the device into a Cu(II) aqueous solution (Fig. 3d).
Redox processes involving Cu(II) reduction to Cu(I), thiourea oxidation to a disulfide moiety and the concomitant formation of Cu(I)-thiourea complexes were also demonstrated experimentally in solution.[53][54][55] In the present fluorescent platforms, however, surface oxidation with the formation of disulfide (-S-S-) bonds would presumably cause irreversible modification of the surface, in contrast with the regeneration tests using EDTA. Thiourea-induced copper(II) reduction was therefore discarded as a possible metal-ion sensing mechanism.
Thiourea groups may indeed not be directly involved in Cu(II) sensing. Experiments performed on dilute FITC solutions in ethanol/water, where thiourea groups are not present (Fig. S2, ESI†), showed that Cu(II) can selectively interact with FITC, leading to a decrease in both absorbance and fluorescence intensity without the formation of new optical absorption bands. FRET-type processes were therefore ruled out. A new FITC-Cu(II) interaction mechanism was in turn suggested, which involves the formation of a fluorescein lactonic species in which the interruption of π-conjugation leads to the vanishing of the xanthene-centred optical transition moment56 and, hence, to fluorescence quenching. This mechanism (Scheme S2, ESI†) was further supported by mass and tandem mass spectrometry data (Fig. S3, ESI†). A model-based analysis of the fluorescence response for varying Cu(II) concentration is also reported (Fig. S4, ESI†).
It is worth discussing the limit of detection (LoD) of the reported fluorescent chips. The LoD is defined as the analyte concentration that produces a response signal equal to a given threshold level, usually set to three standard deviations (σ) of the signal (F/F0) from its mean value. Although the step-by-step characterization confirmed the good quality of the planar platforms, surface disorder appeared to result in non-negligible sample-to-sample fluctuations of the fluorescence intensity (Fig. 5b), whereas the statistical noise and readout noise of the photodetection apparatus were found to be negligible. The resulting standard error bars of the data points in Fig. 5c (3σ = 0.63 at [Cu(II)] = 2.1 ppm) currently set the Cu(II) LoD to 4.1 ppm, a value close to current limits for copper in drinking waters. The large sensitivity value of 0.15 ppm⁻¹ could be exploited to greatly improve the LoD of the fluorescent silicon platforms by reducing chip-to-chip signal fluctuations.18

Another point of interest is the fluorescence efficiency that can be achieved upon fluorophore linkage to the silicon substrate. The strong reduction of the FITC fluorescence lifetime upon surface grafting (180 ps against the 2.6 ns value found in water solution, Fig. 4c) hints at a strong sensitivity of the FITC fluorescence to surface loading, possibly arising from the sizable absorption-emission spectral overlap, which causes fluorescence self-quenching through FITC-to-FITC photoinduced energy transfer.37 This issue, often encountered in bioanalytical applications of fluorescence, points to the use of organic dyes with a shorter critical (Förster) energy transfer distance and, hence, less sensitivity to fluorescence self-quenching.
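The 3σ-based LoD quoted above follows from one line of arithmetic, shown below; both input numbers are taken from the text.

```python
# 3-sigma limit of detection: smallest concentration whose mean signal change
# exceeds three standard deviations of F/F0
slope = 0.15        # signal loss coefficient, ppm^-1 (from the titration fit)
three_sigma = 0.63  # 3 s.d. of F/F0, dominated by sample-to-sample fluctuations
print(f"LoD = {three_sigma / slope:.1f} ppm")   # ~4 ppm, near the reported 4.1 ppm
```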
Conclusions
Silicon-based fluorescent chips for the detection of metal ions in water were reported, in which the xanthene-based fluorophore FITC was covalently linked to an amino-silanized silicon surface via thiourea formation. These solid-state hybrid platforms exhibited selectivity towards copper(II), with a good detection limit, competitive sensitivity, and regeneration capability using a metal-chelating agent. An original FITC-Cu(II) reaction mechanism involving the formation of a lactonic fluorescein species, exhibiting the disappearance of the fundamental optical transition moment, is proposed as a possible cause of the fluorescence quenching. The sensing platform architecture, where the recognition/fluorescent units are integrated on a silicon chip via a layer-by-layer functionalization approach, is very versatile owing to the possibility of tuning the selectivity to other metal ions or different types of analytes by changing the fluorophore or the anchoring group. This represents a viable strategy towards silicon-integrated fluorescent devices for the remote detection of a range of metal ions in water. In the present solid-state planar platforms with high dye loading, the spatial uniformity and quantum efficiency of the fluorophore are important issues that require further investigation, before matrix effects, such as the role of pH and interfering ions, are determined in real water samples.
Silica synthesis and deposition
The silica sol precursor was prepared by mixing TEOS, EtOH and distilled water under stirring at room temperature (RT).
Subsequently, HCl was added to the sol and the mixture was kept under stirring at 60 °C overnight. The molar ratio of the starting solution was TEOS : H2O : EtOH : HCl = 1 : 5 : 6 : 0.065. A diluted solution was prepared by mixing freshly prepared TEOS solutions with a proper amount of EtOH (volume ratio 1 : 10) in a closed vessel at RT. Silica films were deposited on (100)-oriented silicon substrates cut into ≈15 × 15 mm² pieces for morphological and optical characterization, and ≈7 × 7 mm² pieces for sensing measurements. Prior to coating, the silicon substrates were cleaned with soapy water (in an ultrasonic bath at 50 °C for 15 min), distilled water and acetone, and finally rinsed with isopropanol and dried in a nitrogen flow. Ultrathin (≈30 nm) silica films were obtained by spin coating at 7000 rpm for 30 s. After deposition, the samples were dried at RT for 24 h.
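To make the recipe concrete, the molar ratio can be converted into weighable amounts. The sketch below uses standard molar masses; the 10 mmol TEOS batch size is an arbitrary assumption for illustration only and is not taken from the paper.

```python
# TEOS : H2O : EtOH : HCl = 1 : 5 : 6 : 0.065 molar ratio, expressed as masses
molar_mass = {"TEOS": 208.33, "H2O": 18.02, "EtOH": 46.07, "HCl": 36.46}  # g/mol
ratio = {"TEOS": 1.0, "H2O": 5.0, "EtOH": 6.0, "HCl": 0.065}

n_teos = 0.010  # mol of TEOS in the batch (hypothetical batch size)
for species, x in ratio.items():
    grams = n_teos * x * molar_mass[species]
    print(f"{species}: {n_teos * x * 1e3:6.2f} mmol = {grams:6.3f} g")
```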
Silica@APTES functionalization
The functionalization of the ultrathin silica films on silicon substrates with amino groups was done by immersing the substrates in a 1 mM APTES/hexane solution for 10 s. The films were then rinsed with hexane followed by acetone to remove the excess APTES.
Silica@APTES@FITC functionalization
A FITC/EtOH mixture was prepared by dissolving 8 mg of FITC in 10 mL of EtOH. The Silica@APTES films on silicon substrates, vide supra, were then immersed in the FITC solution for 30 min. The reaction was performed at RT in dark conditions. Upon completion of the reaction, the FITC-functionalized silicon substrates were rinsed with EtOH and acetone to remove the excess FITC, and then kept in 20 mL of Milli-Q water for 20 min before the characterization measurements.
Morphological characterization
An NT-MDT Solver-Pro atomic-force microscopy (AFM) instrument was used to study the topography of the sample surface. AFM measurements were performed at a 0.5-1 Hz scan speed in semicontact mode in air. Topography image analysis and the calculation of surface roughness were performed using the WSxM 5.0 Develop 3.2 software.57 The measurements were performed on at least three different points of the same sample to assess the uniformity of the layers.
Surface wettability experiments
Water contact angle (CA) measurements were performed at 22 °C using a Kruss drop shape analyzer (DSA 30S) and analyzed with the Kruss Advance software. The sessile drop method was used to measure the contact angle of a 1 μL distilled water drop. The measured CA was the average between the left and right contact angles. The measurements were performed on at least three different points of the same sample to assess the uniformity of the layers.
Optical spectroscopy and fluorescence lifetime measurements
Reflectance spectroscopy measurements were performed under direct sample illumination in a dual-beam spectrophotometer (Agilent Technologies Cary 5000 UV-Vis-NIR) equipped with a diffuse reflectance accessory. Fluorescence spectra for samples in air and in aqueous 3-(N-morpholino)propanesulfonic acid (MOPS) buffer in a quartz cuvette were measured under 355 nm irradiation of the samples by a passively Q-switched powerchip laser (Teem Photonics PNV-M02510) operating in the pulsed regime (350 ps pulses, 1 kHz repetition rate). The fluorescence was spectrally resolved using a single-grating spectrometer (Princeton Instruments Acton SpectraPro 2300i) and acquired by a thermoelectrically cooled Vis CCD camera (Andor Newton EM). Fluorescence lifetimes were measured in air under sample irradiation at 350 nm by an optical parametric amplifier (Light Conversion TOPAS-C) pumped by a regenerative Ti:Sapphire amplifier (Coherent Libra-HE), delivering 200 fs-long pulses at a 1 kHz repetition rate. The fluorescence was spectrally dispersed using a single-grating spectrometer (Princeton Instruments Acton SpectraPro 2300i) and acquired by a Vis streak camera (Hamamatsu C1091). The pump fluence was kept below 50 μJ cm⁻² per pulse in all fluorescence experiments to prevent sample degradation.
Sensing measurements
Sensing performances towards the different metal cations were assessed by fluorescence spectroscopy, with the silicon-chip sensors placed in a 1 cm-thick quartz cuvette. The cuvette was then filled with 1500 μL of aqueous MOPS buffer (70 mM, pH = 7.2), prepared using fresh distilled water purified by a Milli-Q system (Millipore), and 60 μL of a 0.1 mM water solution of the various perchlorate salts (Al(III), Cu(II), Zn(II), Ag(I), Cd(II), Hg(II), Pb(II)) was added. Sensitivity towards Cu(II) ions was evaluated through fluorescence titration experiments in the 0-120 μL range. A waiting time of 3 min was set before each measurement to ensure that the surface-analyte interaction had reached equilibrium. Surface regeneration after the sensing tests was accomplished by sample sonication in 3 mL of EDTA solution (0.1 M) for 10 min. Three regeneration cycles were performed on each sample.
Mass spectrometry
Mass spectra were recorded using a triple quadrupole QqQ Varian 310 MS mass spectrometer with the atmospheric-pressure electrospray ionization (ESI) technique. FITC solutions (20 μL) were injected into the ESI source by a Rheodyne® model 7125 injector connected to an HPLC Varian 212 LC pump, with a 50 μL min⁻¹ methanol flow. Experimental conditions: dwell time 2 s, needle voltage 3000 V, shield voltage 600 V, source temperature 60 °C, drying gas pressure 20 psi, nebulizing gas pressure 20 psi, detector voltage 1600 V. Mass spectra were recorded in the 100-600 m/z range. Collision-Induced Dissociation (CID) tandem mass (MS/MS) experiments were performed using argon as the collision gas (1.8 psi). The collision energy was varied from 20 to 40 eV.
"Environmental Science",
"Chemistry",
"Materials Science"
] |
Numerical Study of a Customized Transtibial Prosthesis Based on an Analytical Design under a Flex-Foot ® Variflex ® Architecture
Featured Application: A potential application of this work is an ergonomic method for designing personalized hiking and jogging prostheses for use in rehabilitation after amputation.
Abstract: This work addresses the design, analysis, and validation of a custom transtibial prosthesis. The methodology consists of the use of videometry to analyze the angular relationships between joints, moments, and reaction forces in the human gait cycle. The customized geometric model of the proposed prosthesis was defined by considering healthy feet for the initial design. The prosthesis model was developed by considering the Flex-Foot® Variflex® architecture as a design basis. By means of the analytical method, the size and material of the final model were calculated. The behavior of the prosthesis was evaluated analytically by a curved-elements analysis and the Castigliano theorem, and numerically by the Finite Element Method (FEM). The outcome shows the differences between the analytical and numerical methods for the final prosthesis design, with an error rate no greater than 6.5%.
Introduction
The main cause of lower limb amputations is car accidents, followed by diabetes and related vascular diseases. According to the National Health and Nutrition Examination Survey (NHANES) 2012, about 6.4 million people have diabetes in Mexico [1]. Mexico currently ranks eighth worldwide in terms of the prevalence of diabetes [2], and projections in international reports estimate that, by 2025, Mexico will rank sixth or seventh, with 11.9 million Mexicans with diabetes. A prosthesis, together with the amputee, forms a complex biomechanical system whose behavior is influenced by various design factors. To determine the operating conditions to which the prosthesis will be subjected, it is necessary to analyze the gait cycle [3]. The operating conditions are the reaction times and forces, as well as the angles between each joint during the running cycle. In terms of biomechanical design, the main factors to consider are the mechanical properties [4], the length of the prosthesis [5,6], and the weight of the prosthetic components [7]. Most studies in the literature have focused on assessments of kinematic and kinetic gait [8] and foot plantar pressure [9]. The kinematic method [10,11] describes body movements and the relative movements of body parts during the different gait phases; for instance, the study of the angular relationships between lower limb segments during the gait cycle is a kinematic method.
In recent decades, numerical methods have gained popularity, especially when detailed and complex information is needed about the behavior of the prosthesis or the interaction between the amputee and the prosthesis [12][13][14]. Through the Finite Element Method (FEM), the stress and strain distribution in a prosthesis can be determined, which is not viable using experimental or analytical methods. Despite the many articles dealing with FEM modeling and the interaction of limbs, only a few have explored the behavior of the components of lower limb prostheses [15][16][17]. FEM modeling of a SACH®-type foot was performed to study the effect of viscoelastic heel performance [18]. Additionally, [19] presents a complete analysis of a prosthetic foot, validated through FEM and an experimental analysis, to improve footwear testing.
Nevertheless, to the best of the authors' knowledge at the time of writing, there are no published works describing the behavior of the entire prosthesis. Some approaches can be found in studies examining the monolimb® prosthesis model [20]. In cases where a prosthetic foot is included, the ground reaction force (GRF) is considered as a load for the FEM analyses. However, the GRF has usually been applied directly to the model nodes or surfaces of the prosthetic foot, despite the geometric nonlinearity of the prosthesis design [18][19][20]. This assumption has been reported as the most likely source of error, so incorporating contact elements and a friction-based floor-contact model in future works could appreciably reduce this error [18].
This work has three main goals: to define the operating conditions of lower limbs during a human gait cycle at a medium-high speed; to design a transtibial prosthesis for medium-high impact activities, through an analytical method, supported by a well-defined methodology; and, finally, to validate the obtained results through analytical methods and numerical simulations.
Materials and Methods
This work heavily relies on the guidelines of the general method for solving biomechanical problems proposed by Özkaya and Nordin [21]. We propose the construction of a transtibial prosthesis with a Flex-Foot® architecture, using the methodology shown in Figure 1. These prostheses are designed for persons engaged in medium-high impact activities, because they allow vertical shock absorption, reducing the trauma in the residual limb, joints, and lower back during daily activities. In addition, they provide energy storage and return. The weight exerted on the heel becomes energy that drives the step, imitating the driving force of a normal foot, offering greater flexibility and a design that is easy to cover and adapt aesthetically.
Biomechanical and Anatomical Considerations
The most important parameters employed to determine the gait cycle behavior of a patient are age, weight, and height. Table 1 shows the physiological characteristics of the patient and the test subject, which enable the videometric analyses.
Conditions of the Lower Limbs during the Gait Cycle
The method used for the gait test was videometry analysis. One of the most important parameters to be considered in the study of the human gait cycle is the speed. As the design of the custom prosthesis is geared towards daily activities, gait speeds were chosen within the medium-to-fast range; a mean speed of 2.4 m/s was selected. For the videometry analysis, a single camera in a transversal position relative to the test subject was used. Circular markers were employed for the video capture system, in order to recognize specific points in the gait cycle and determine the angles formed at the hip, knee, and ankle. The employed camera has a video capture rate of 50 fps (frames per second), and it always remained at the same distance from the subject. In Figure 2, a complete motion cycle is shown relative to the sagittal plane. This work only considers two joints as study points for the gait analysis.
Kinematics
The information obtained from videometry analysis was processed using Tracker ® software, which allows the coordinates of reference points, as well as the elapsed time from a sequence of frames obtained from a video, to be acquired.
To obtain the relationships between reference points (joints), it is necessary to calibrate the distances between such points. In Figure 3, the calibration in Tracker® was performed for the test subject from the anatomical data. To determine the angular relationships between joints, we used the triangular geometric relationships formed at the leg joints, applying the law of cosines to obtain the subtended angles for each phase of the gait cycle (Figure 4). Once the system had been calibrated (Figure 3) and the coordinates of the assigned points had been obtained, the distances between each pair of reference points were obtained through the Cartesian distance equation in the plane (Equation (1)), d = √((x₂ − x₁)² + (y₂ − y₁)²).
An example of this calculation is shown in the following section: to obtain the distance between the hip joint and the knee joint, the X1 and Y1 values are taken as the fixed joint, while X2 and Y2 represent the values of the movable joint.
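A minimal sketch of how the joint angles follow from the tracked coordinates, combining the distance equation (Equation (1)) with the law of cosines, is given below; the marker coordinates used are hypothetical, not values from the actual videometry.

```python
import numpy as np

def joint_angle(p_prox, p_joint, p_dist):
    """Included angle at p_joint from three marker positions, via the law of
    cosines: cos(theta) = (a^2 + b^2 - c^2) / (2ab)."""
    a = np.linalg.norm(np.subtract(p_prox, p_joint))   # proximal segment length
    b = np.linalg.norm(np.subtract(p_dist, p_joint))   # distal segment length
    c = np.linalg.norm(np.subtract(p_prox, p_dist))    # side opposite the joint
    return np.degrees(np.arccos((a**2 + b**2 - c**2) / (2 * a * b)))

# Hypothetical hip, knee, and ankle coordinates (m) from a single video frame;
# knee flexion is 180 degrees minus the included angle.
theta = joint_angle((0.00, 0.90), (0.05, 0.48), (0.02, 0.05))
print(f"included knee angle: {theta:.1f} deg, flexion: {180 - theta:.1f} deg")
```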
Kinetics
Moments and reaction forces are important parameters for the march, as these indicate the loads to be applied in prosthesis modeling. In order to obtain the resulting moments of mass centers, Dempster diagrams were employed [22]. In Figure 5, the percentage ratios of the mass centers are shown.
In Figure 5, Wa = 61.247 kg is the total weight of the body down to the hip, Wb = 7.227 kg is the thigh's weight, Wc = 68.474 kg is the total weight of the body down to the knee (note that Wc = Wa + Wb), and Wd = 3.431 kg is the weight of the leg.
Approximation of the Geometric Model of the Prosthesis
The Flex-Foot ® system with a Variflex ® architecture is the basis for the customized design model of the transtibial prosthesis. Such a prosthesis consists of four essential components: A foot module, heel module, male pyramid adapter, and clamping elements. The architecture of the Flex-Foot ® Variflex ® system mainly consists of two fundamental elements: foot and heel modules. To define the prosthesis architecture and dimensions, a healthy foot was taken as a starting point. In Figure 6, the first approximation of the geometric model of the prosthesis is shown, from which two main modules were taken: Module Foot (MF) and Module Heel (MH).
Analogy between the Human Body Parts and Mechanical Elements
The first geometric model approach (Figure 7) allowed us to define the longitudinal dimensions, so it was then necessary to define the proper thickness for each section of the prosthesis. The initial model consisted of curved sections with different dimensions, which could be related to the concept of curved beams, simplifying the analysis. In the first stage, the theory of Timoshenko and Goodier was used to obtain the stresses; then, the energy method through the Castigliano theorem allowed us to obtain the strains [23]. The proper thickness of each section of the prosthesis model was determined through four study cases, in which the initial conditions are represented in red and the boundary conditions in gray. The stress on the curved beams was determined by analyzing each study case. The ideal thickness of each prosthesis module was obtained through Equation (2), which gives the circumferential stress distribution, where σ is the circumferential stress, A is the cross-sectional area, r is the beam radius, N represents the axial forces, M_x represents the bending moments, and Lmax is the length of the lever arm. The expressions for A, R, and A_m follow from the geometry of the cross section. It should be noted that the International System of Units was adopted for all calculations; the units used in all calculations and simulations were newtons (N), pascals (Pa), meters (m), and kilograms (kg). Taking study case 1 for MF, different thicknesses were analyzed, producing the corresponding stresses in different fibers. The maximum stresses in the cross section of the curved zone of MF (Figure 8) were determined by Equation (5), which gives the circumferential stress distribution on curved beams; Figure 8 shows an example of the application of Equation (5) to study case 1 by the analytical method.
The maximum stress in sections of the curved beam, as a function of its angle, can be calculated from the equilibrium equations (Equation (5)), where ΣFr is the sum of radial forces, ΣFθ is the sum of circumferential forces, σ is the circumferential stress, ΣM0 is the sum of moments, N is the axial force, V is the shear force, and M_x represents the bending moments.
Material Selection
Once the principal stress σ1 is known from Equation (2), the equivalent stress is easily determined by the von Mises criterion. For MH in study case 1, the maximum tensile stress was 936.355 MPa, while the maximum compressive stress was −898.998 MPa. Therefore, the resulting von Mises stress was 1.589 GPa. Using a safety factor of two, the required elastic limit was calculated to be 3.2 GPa. A carbon fiber was selected because its elastic limit lies within the calculated values, which allows a greater deformation of the prosthetic element. In addition, a High Resistance (HR) carbon fiber allows for greater energy storage.
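The reported equivalent stress can be verified from the two principal stresses using the plane-stress von Mises expression; the short check below reproduces the quoted 1.589 GPa and the 3.2 GPa requirement for a safety factor of two.

```python
import math

# Plane-stress von Mises equivalent from the two principal stresses reported
# for study case 1 (MPa)
s1, s2 = 936.355, -898.998
von_mises = math.sqrt(s1**2 - s1 * s2 + s2**2)
print(f"von Mises stress: {von_mises / 1e3:.3f} GPa")          # ~1.589 GPa
print(f"with safety factor 2: {2 * von_mises / 1e3:.1f} GPa")  # ~3.2 GPa
```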
Analytical Behavior of the Prosthesis
To illustrate the applied analytical method, we present its application to study case 1 (sum of forces and moments shown in Figure 9). As the Castigliano theorem determines the total deformation of an element from the strain energy of each element independently, it is necessary to identify the independent sections; Figure 9 shows the division into independent elements of MF for study case 1.
Case 1: Summation of Forces on the Foot Module
The energy generated in the foot module must be analyzed to determine the energy taken up by the articulated area. This is achieved through a strain energy analysis based on the Castigliano theorem and the free-body diagram of the foot module.
The strain energy generated in the curved zone is calculated from the strain-energy equations and the first Castigliano theorem (Equation (7)), where U is the deformation energy, F is the reaction force obtained from the videometry analysis, A is the cross-sectional area, Am is the distance from the center of the circumference of the curved beams, r is the beam radius, N is the axial force, V is the shear force, M_x represents the bending moments, k is the shear correction coefficient, E is the Young's modulus, and G is the shear modulus.
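For reference, the standard form implied by these definitions is sketched below for a slender member; for thick curved beams, the bending term is corrected using A, the eccentricity e, and the radius r rather than the second moment I, and the naming of the theorem (first or second Castigliano theorem) varies between textbooks.

```latex
U=\int \frac{N^{2}}{2EA}\,ds+\int \frac{k\,V^{2}}{2GA}\,ds+\int \frac{M_{x}^{2}}{2EI}\,ds,
\qquad \delta=\frac{\partial U}{\partial F}
```

That is, the deflection at the point of application of F, in the direction of F, is the partial derivative of the total strain energy with respect to F.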
Case 4: Articulated Section
The calculation of the articulated section is important because part of the energy generated in the module foot (MF) is accumulated and released in this area, and it is thus necessary to determine the right dimensions. In Figure 10 the articulated area is shown.
Case 4: Summation of forces and moments on the foot module, curved Section 2, where ΣFr1 and ΣFr2 are the sums of radial forces, ΣFθ1 and ΣFθ2 are the sums of circumferential forces, ΣM01 and ΣM02 are the sums of moments, N1 and N2 are the axial forces, V1 and V2 are the shear forces, and M_x represents the bending moments. To calculate the strain energy generated in the curved zone, we used the strain energy equations defined by the Castigliano theorem. Substituting the values of the straight section into curved Sections 1 and 2 produces the total strain energy (Equation (11)). The maximum average energy generated in MF was then used to determine the variables of the articulated section. For this, the Castigliano theorem considers the energy U generated by an element as the area under the stress-strain curve of the material under linear loading.
Numerical Prosthesis Assessment
The final geometry of the transtibial prosthesis model based on the Flex-Foot® Variflex® architecture was drawn in AutoCAD® and exported to the ANSYS® program generator (Figure 11). To compare the analytical and numerical results, the prosthesis was divided into two main modules, analogous to the division made for the analytical method. Then, three critical study cases were defined: MF, MH, and the complete prosthesis model. Figure 12 shows these critical cases in red. The properties of the HR carbon fiber employed for the numerical analyses were as follows: modulus of elasticity E = 3.9 × 10⁵ N/mm²; yield stress σy = 3.5 × 10³ N/mm²; Poisson's ratio ν = 0.3. After exporting the prosthesis geometric model into ANSYS®, it was meshed using the SOLID186 element, chosen because it has intermediate nodes, which yields increased numerical accuracy. The numerical loads applied to the MF model were taken from the kinetic analysis.
Case 3: Final Prosthesis Model
In order to assess the final prosthesis model, it ought to be divided into two subcases, because each sub-model (MF and MH) is exposed to two different forces, depending on the gait phase. Figure 13 shows each subcase to be analyzed:
Lower Limb Operating Conditions during a Gait Cycle at a Medium Speed
The angular relationships, angular velocities, resulting moments, and reaction forces of the joints presented in the following figures are plotted against the human motion cycle to determine the positions in the gait cycle where the greatest angular relationships, angular speeds, resulting moments, and reaction forces are achieved. These values are considered for the analytical design of the prosthesis. Figure 14 shows the angular relationships in the knee and ankle joints obtained from the triangular geometric relationships (law of cosines). Figure 15 shows the vertical displacements obtained from the coordinates and distances between joints; maximum vertical displacements of 7.755 cm (hip), 7.476 cm (knee), and 12.009 cm (ankle) can be observed. Although the test subject's mass center does not follow a straight line during the gait cycle, the mean vertical displacement in the adult male is, on average, 8 cm, causing the center of gravity to move very smoothly without abrupt deflection changes, minimizing the energy spent. The parameters obtained for both the knee and ankle flexion angles are shown in Figure 16. In this work, a knee joint flexion between 4.7° and 50.5° was determined, while that of the ankle joint lay between 10° and 19.7°. Furthermore, it should be remarked that the obtained behavior of the angles during the gait cycle is consistent with previous work [12][13][14][15][16][17][18][19][20]. The fact that the minimum flexion angle at the ankle joint is slightly above the average reported values might be because the employed camera has a lower resolution than those used in other works. As for the resulting moments and reaction forces, the literature reports that the moments resulting at the knee have a maximum of 0.45-0.5 Nm and a minimum of 0.01-0.05 Nm [12][13][14], while, for the ankle, they have a maximum of 0.70-0.9 Nm and a minimum of 0.01-0.03 Nm [15][16][17][18]. In this work, resulting moments for the knee joint of 0.035 Nm and 0.487 Nm were obtained as the minimum and maximum values, respectively, while for the ankle joint, values of 0.0164 Nm and 0.703 Nm were obtained analogously. These values are in good agreement with previously published results. With respect to the reaction forces, the knee joint was the only joint considered, because this work focuses on the design of a transtibial prosthesis. The reaction forces generated in the knee can be observed particularly in the tibiofemoral joint. This is mainly because the prosthesis under design is intended for low-medium impact activities, such as climbing stairs, smooth jogging, or even a fall, which do not entail double support; the reaction-force values obtained in this work are therefore lower than those published elsewhere for conditions involving double support. Figure 16 shows the resulting moments obtained from Equation (7) in the knee and ankle joints, where the ankle angle is represented with a red line and the ankle reaction moment with a blue line in Figure 16A,B, while Figure 17 shows the reaction forces in these joints.
Analytical Assessment of Prosthesis Behavior
In this section, the stresses and associated deformations for different thicknesses of the cross section are analytically analyzed at the most critical points of the prosthesis. Stresses and deformations in flexion and compression were obtained for the cross sections in cases 1, 2, and 3 (Table 2) by means of Equations (3)-(5). To assess the behavior of the fibers in the cross section of the curved beam, the thickness was divided into fibers along its length. For the stress analysis, the angles selected for each study case are those at which the greatest stresses were achieved. Once the maximum deformations had been determined for each case, they were compared to determine the case producing the maximum values, which turned out to be case 3 (see Table 3). Therefore, case 3 was the most critical point in the prosthesis. In this case, the HR carbon fiber had a maximum elongation of 2%. In Figure 18, the deformation energy of the cross section of MF is shown for study case 3 (calculated from Equation (6)). Figure 18b shows the computed value of 3.925 cm for the base of the hinged section of the final prosthesis model.
Numerical Assessment of Prosthesis Behavior
Numerical analysis of the geometric models allows a realistic and efficient simulation of the working conditions, in order to assess the proposed Flex-Foot® prosthesis design. Figures 19 and 20 show the stresses and deformations obtained after analyzing the study cases 1 and 2 stated previously. Figure 19 shows case 1, where a maximum stress of 382.812 N/mm² and a maximum deformation of 0.998 mm were obtained. Figure 20 shows case 2, where a maximum stress of 639.467 N/mm² and a maximum deformation of 2.58192 mm were obtained. The loads applied to MF and MH were taken from the most critical reactions obtained from the geometry analysis. For MF, the toe-off position was selected, while for MH, the heel-strike condition was imposed (see Figure 13). Figure 23 shows case 3.2, where a maximum stress of 675.236 N/mm² and a maximum deformation of 7.27703 mm were obtained. The loads applied to MF and MH were taken from the most critical reactions obtained from the videometry analysis. For case 3.1, the heel-strike phase was selected, while for case 3.2, the toe-off phase load was imposed (see Figure 13).
Discussion
The results obtained by the analytical method could be validated by comparing them with numerical results. Figure 23 compares the maximum stresses and deformations obtained in MF and MH. Tables 4 and 5 compare the most critical stresses and deformations obtained using both methods. Figure 24 compares the maximum numerical stresses and deformations obtained for the complete prosthesis model for cases 3.1 and 3.2, with respect to the maximum ones that can be borne by the selected material. It can be appreciated that the maximum stresses and deformations exhibited by the complete model do not exceed the material yield stress and its allowed deformations. Furthermore, the slippage suffered in the modules of the prosthesis due to the contact is very low, ensuring that the prosthesis does not undergo abrupt geometric changes, providing prosthesis integrity and patient safety.
Conclusions
In this work, a methodology to design a personalized transtibial prosthesis using an analytical method was developed. Furthermore, the analytical results were validated by means of finite element analysis, obtaining errors no greater than 6.5%. The main difference between commercial and customized prostheses is the efficiency and comfort they provide to the patient. The methodology provided herein allows the geometric model of the prosthesis to be approximated, so as to be able to determine stresses and deformations associated with its operating conditions. Therefore, the selection of adequate construction materials is easier, achieving an operational design.
Furthermore, the usage of the videometry technique has been proven to be a successful method for studying the biomechanics of the human gait, simplifying the process due to the ease of implementation. Of interest is the analysis of the articulated area in the prosthesis design, which allows the impact produced in the foot during the contact phase of the walking cycle to be partially absorbed. The weight of the prosthesis is also considered in order to provide comfort and adaptation for the patient. In this sense, the designed prosthesis has a smaller weight than the patient's healthy foot, which allows the weight in the socket to be adjusted, maintaining the stability of both lower limbs.
This paper presents limitations in four important respects. Firstly, in terms of methodological limitations, the sample comprised only one test subject; however, the test subject had a 50th percentile anatomy, which allows us to estimate that 50% of the population could use this research to design a custom Flex-Foot® prosthesis. Secondly, the software used to capture the test subject's gait could present errors due to shutter delay, so it is necessary to carry out many trials until the repeatability of the experiment is greater than 90%. Thirdly, the lack of previous research on this subject prevents a comparison of the obtained results; nevertheless, this research supports the use of analytical and numerical methods for prosthesis design. Finally, one of the greatest limiting factors is the longitudinal effects that occur in the prosthesis when it is used by a patient with a lower limb amputation. The time available to investigate a problem and measure change or stability over time is, in most cases, very limited; it was therefore not possible to estimate how the subject's anatomy evolves while using the prosthesis. These limitations may inspire other researchers to return to custom prosthesis design.
"Engineering"
] |
High optical spin-filtering in antiferromagnetic stanene nanoribbons induced by band bending and uniaxial strain
Non-equilibrium spin-polarized transport properties of antiferromagnetic stanene nanoribbons are theoretically studied under the combined effect of a normal electric field and linearly polarized irradiation, based on the tight-binding model at room temperature. Due to the spin-orbit coupling in the stanene lattice, applying a normal electric field splits the degeneracy of the spin-resolved energy levels in the conduction and valence bands. Furthermore, inequivalent absorption of the polarized photons at the two valleys, which is attributed to the antiferromagnetic exchange field, results in unequal spin-polarized photocurrents for the spin-up and spin-down components. Interestingly, in the presence of band bending induced by edge potentials, an acceptable quantum efficiency occurs over a wider wavelength region of the incident light. Notably, varying the exchange magnetic field generates spin-semiconducting behavior in the bent band structure. Moreover, it is shown that an optical spin-filtering effect is obtained under the simultaneous effect of uniaxial strain and a narrow edge potential.
Methods
The proposed spin optoelectronic device is designed based on a ZSNR in the presence of external fields: a normal electric field, an antiferromagnetic exchange field, and a linearly polarized light field. The performed simulations are divided into two self-consistent computations: the first part calculates the spin-dependent electronic features of the ZSNR, and the second part solves the quantum transport equations for the interaction of light with matter by employing the NEGF approach. It is worth mentioning that, due to the absence of impurities and electron-phonon interaction in the present study, spin-flip mechanisms are neglected [33]. Consequently, the transport equations have been solved for the spin-up and spin-down components individually.
The total Hamiltonian of the proposed nanodevice is divided as follows:

H = H_L + H_R + H_LC + H_RC + H_C,  (1)

where the first two contributions in Eq. (1) are the Hamiltonians of the semi-infinite left and right leads, respectively. H_LC and H_RC describe the coupling between the scattering region and the left and right leads. H_C represents the Hamiltonian of the scattering region:

H_C = H_0 + H_eγ.  (2)

H_0 is the tight-binding model in the presence of the antiferromagnetic field and the normal electric field [34-36]:

H_0 = −t Σ_{⟨i,j⟩,α} c†_{iα} c_{jα} + i (λ_so/(3√3)) Σ_{⟨⟨i,j⟩⟩,α,β} ν_{ij} c†_{iα} σ^z_{αβ} c_{jβ} + ℓ E_z Σ_{i,α} ξ_i c†_{iα} c_{iα} + M_AF Σ_{i,α,β} ξ_i c†_{iα} σ^z_{αβ} c_{iβ},  (3)

where t is the usual nearest-neighbor hopping in the scattering region, with a value of 1.3 eV. c_{iα} (c†_{iα}) annihilates (creates) an electron with spin polarization α at atom i, and ⟨i,j⟩ (⟨⟨i,j⟩⟩) denotes the sum over nearest (next-nearest) Sn-Sn pairs. The second term in Eq. (3) accounts for the effective spin-orbit coupling, with a coupling strength of λ_so = 100 meV; σ = (σ_x, σ_y, σ_z) are the Pauli spin matrices, and ν_ij = −1 (+1) for clockwise (anticlockwise) next-nearest-neighbor hopping. The third contribution denotes the effect of a perpendicular electric field E_z, where ℓ is the buckling height and ξ_i equals +1 (−1) for the upper (lower) sublattice. The last term is related to the antiferromagnetic exchange field [37,38]. The antiferromagnetic region can be realized with two ferromagnetic layers oriented antiferromagnetically on the two sides of the sample, for example CrI3 [39-42], or even in a four-layer configuration of two ferromagnetic layers [43]. H_eγ = (e/m_e) A·P indicates the electron-photon interaction and is treated as a first-order perturbation Hamiltonian; m_e is the electron mass, and A and P represent the time-dependent electromagnetic vector potential and the electron momentum, respectively [44,45].
After computing the Hamiltonians of the scattering region and the left and right leads, the retarded Green's function of the nanodevice in the presence of light radiation can be written as

G^r_σ(E) = [(E + iη)I − H_C − Σ_{L,σ} − Σ_{R,σ} − Σ_{γ,σ}]⁻¹,  (4)

where η and I are an infinitesimal broadening and the identity matrix, respectively, and Σ_{L(R),σ} is the retarded self-energy due to the presence of the left and right leads. While the self-energies of the left and right contacts were computed by the Sancho iterative approach [46,47], the electron-photon scattering is included as an additional self-energy term Σ_{γ,σ} (Eq. (5)), built from the lesser and greater self-energies of the electron-photon scattering, Σ^<_{γ,σ} and Σ^>_{γ,σ}. These couple the Green's functions evaluated at the shifted energies E± = E ± ℏω, with weights set by N_γ, the number of photons with energy ℏω [48]. M_γ is the electron-photon coupling matrix arising from the perturbation Hamiltonian H_eγ; each of its elements is proportional to A_0, the amplitude of the electromagnetic vector potential, whose direction is determined by the light polarization (ê_p). G^<_σ is the electron correlation function [49], obtained from the Keldysh relation G^<_σ = G^r_σ Σ^<_σ G^a_σ, and the hole correlation function G^>_σ is obtained analogously from Σ^>_σ. In Eqs. (9) and (10), Γ_{L(R)} represents the broadening function of the left (right) electrode. Σ_{γ,σ} is determined self-consistently by iteration. Once convergence is obtained, one can calculate the spin photocurrent across the system from the first blocks of the hole (electron) correlation functions [50,51]. The spin-dependent quantum efficiency can be written as QE_σ = (I_σ/e)/(I_w A_D/E_ph), where A_D = L_ch W_ch and E_ph are the cross section of the central channel and the photon energy, respectively. Also, the spin polarization is defined as P = (I_↑ − I_↓)/(I_↑ + I_↓).
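As a numerical illustration of the quantum-efficiency definition, the sketch below evaluates QE from a photocurrent and the incident photon flux; only the 100 kW/cm² intensity is taken from the text, while the remaining numbers are hypothetical placeholders.

```python
# Quantum efficiency as photo-excited electrons per photon incident on the
# channel: QE = (I_ph / e) / (I_w * A_D / E_ph)
e = 1.602e-19              # elementary charge, C
E_ph = 3.0 * e             # photon energy, J (3.0 eV, inside the active window)
I_w = 100e3 * 1e4          # intensity, W/m^2 (100 kW/cm^2, as stated)
A_D = 50e-9 * 5e-9         # channel cross section L_ch * W_ch, m^2 (hypothetical)
I_ph = 1.0e-9              # spin-resolved photocurrent, A (hypothetical)

photon_flux = I_w * A_D / E_ph        # photons per second hitting the channel
qe = (I_ph / e) / photon_flux
print(f"QE = {qe:.3f}")
```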
Results and discussion
Spin-polarized photocurrent. The spin photocurrent across antiferromagnetic single-layer zigzag stanene nanoribbons was simulated through the tight-binding approximation and the NEGF formalism, considering the combined effect of the normal electric field and the linearly polarized light. The incident light is monochromatic, with a constant intensity of I_w = 100 kW/cm², radiated normally on the top of the central channel. In this study, the scattering region has a constant length of 120 unit cells and is sandwiched between two semi-infinite left and right leads. In addition, the scattering region and the left and right leads have the same structure.
In the first instance, the spin transport properties of a ZSNR with N = 10 zigzag chains and 20 atoms in the unit cell are investigated. To this end, the spin-resolved band structures of the antiferromagnetic ZSNR for various strengths of E_z are shown in Fig. 1. These results are similar to those of a previous study [52]. As can be seen in Fig. 1a, in the absence of E_z the band structure is twofold spin degenerate. Applying the external electric field breaks inversion symmetry, and this symmetry breaking, in combination with the large spin-orbit coupling, lifts the band degeneracy in stanene. It is also found that switching the direction of the normal electric field reverses the spin polarization of the band structure and, hence, the sign of the spin-polarized photocurrent. Moreover, a finite energy band gap emerges between spin-up and spin-down energy levels (Fig. 1b,c). As depicted in these figures, the magnitude of the band gap differs between spin-up and spin-down levels; in the absence of the antiferromagnetic term, the gap would be equal for both spin states. Physically, because the antiferromagnetic term affects the on-site potential energies of the two spin states differently, an unequal displacement of spin-up and spin-down energy levels is expected. Accordingly, the presence of the antiferromagnetic exchange field leads to an asymmetric spin-dependent band gap and the breaking of time-reversal symmetry [38].
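A band structure of this kind follows from the Bloch Hamiltonian of the ribbon. The sketch below is an illustration under the assumption that the intra-cell block H00 and the inter-cell coupling H01 have been assembled separately (e.g., with a routine like build_h0 above applied per unit cell); it diagonalizes H(k) = H00 + H01 e^{ik} + H01† e^{−ik} on a k-grid:

```python
import numpy as np

def ribbon_bands(H00, H01, nk=201):
    """Spin-sector band structure of a 1D-periodic ribbon.

    H00: 20x20 intra-cell block for 10ZSNR; H01: coupling to the next
    unit cell along the ribbon axis (lattice constant set to 1).
    """
    ks = np.linspace(-np.pi, np.pi, nk)
    bands = np.empty((nk, H00.shape[0]))
    for m, k in enumerate(ks):
        Hk = H00 + H01 * np.exp(1j * k) + H01.conj().T * np.exp(-1j * k)
        bands[m] = np.linalg.eigvalsh(Hk)      # real eigenvalues, sorted
    return ks, bands
```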
The spin-polarized quantum efficiency of the 10ZSNR with M_AF = 0.025t versus photon energy, under linearly polarized light and for different strengths of E_z, is plotted in Fig. 2. As presented explicitly in the inset of Fig. 1b, the asymmetry of the band gap at the K and K′ valleys results in unequal absorption of light, which leads to a spin population imbalance in these valleys. These valleys in the ZSNR are approximately equivalent to the valleys of the infinite stanene sheet around the Dirac points [53]. The spin splitting of the energy band structure also gives rise to spin-dependent absorption when the linearly polarized illumination is shed normally on the top of the central region. Moreover, because of the relatively large spin relaxation time in stanene [33], the electron preserves its spin in this photo-excitation process: spin-polarized carriers are excited from spin-up (spin-down) valence sub-bands to spin-up (spin-down) conduction sub-bands by photons of appropriate energy. Based on these considerations, the different occupation numbers of the K and K′ (K′ = −K) valleys induced by the antiferromagnetic exchange field lead to unequal spin-polarized photocurrents for the spin-up and spin-down components. It should be mentioned that the normal electric field and the antiferromagnetic exchange field are applied to the scattering region. As can be observed in Fig. 2, an acceptable quantum efficiency is obtained for photon energies within the range 2.5 eV < E_ph < 3.56 eV. The results in Fig. 2 show considerable variations in the spin optoelectronic properties of the ZSNR over the whole allowable photon energy range. In addition, the position and magnitude of the most probable optical transitions vary with the strength of E_z; in particular, the electric field considerably changes the magnitude of the first optical absorption.
For further clarification, the optical spin polarizations of the ZSNR for various strengths of E_z are displayed in Fig. 3. The spin polarization diagrams for the different magnitudes of the electric field show nearly similar qualitative behavior.
Moreover, the spin optoelectronic properties of other narrow ribbons with various widths have been studied. The results reveal similar qualitative behavior, with differences only in the magnitudes and positions of the highest quantum efficiency peaks and in the acceptable range of the photoresponsivity. These results demonstrate the robustness of the obtained results with respect to ribbon size. It is worth mentioning that the temperature enters only through the Fermi-Dirac distribution function, set to room temperature; other temperature-dependent phenomena have been ignored. Of course, the effect of temperature in the form of phonon scattering could modify the obtained results.
Band bending. In this section, the effect of band bending on the spin transport properties of a single-layer antiferromagnetic ZSNR is investigated. A previous study on the band structure and conductance of a zigzag silicene nanoribbon showed that band bending can be created and controlled by edge potentials, and that the bending near the valleys can be realized through the edge states [54]. Here, we proceed further by exploiting both the 2D buckled structure and antiferromagnetic spintronics in order to harness spin-dependent transport in narrow ZSNRs in the presence of band bending. To this end, the band structures of the 10ZSNR in the presence of an edge electric field are first analyzed (Fig. 4). In Fig. 4a, the edge field is applied to N = 8 zigzag chains, such that the two chains located at the center of the ribbon are not affected by E_z; in Fig. 4b, the edge potential is applied to N = 4 zigzag chains, and the six chains at the center of the ribbon are not affected by E_z. It should be noted that the edge fields applied at the two edges of the ribbon are symmetric. Owing to the buckled structure of stanene, the edge potentials can significantly affect the edge states and the band structure. Figure 4a,b indicates that as the edge-field region narrows, the band bending gradually increases, in good agreement with the previous report [54]. In Fig. 4c, the electronic band structure of the 10ZSNR with a narrow edge field and M_AF = 0.03t is displayed. By increasing the magnitude of the antiferromagnetic exchange field from M_AF = 0.025t to M_AF = 0.03t, in combination with the edge field applied to N = 4 chains, half-metallic behavior emerges in the band structure, as shown in Fig. 4c: for spin-down electrons the gap is closed and metallic behavior is observed, while spin-up carriers still display semiconducting properties.
Quantum efficiency as a function of photon energy for the spin-photovoltaic device based on the antiferromagnetic 10ZSNR, with the edge field applied to N = 8 chains, is presented in Fig. 5 (here, eℓE_z = 0.09t and M_AF = 0.025t). In comparison with Fig. 2a, it is clear that, in general, the quantum efficiency is enhanced for both spin states in the presence of the edge potential (note that in Fig. 2 the electric field is applied to the whole area of the central region). Furthermore, the acceptable range of the photoresponsivity is broadened under the edge potential. In this case, the maximum magnitude of the spin polarization, 95.94%, is obtained at E_ph = 4.04 eV. Figure 6a,b shows the quantum efficiency and the spin polarization, respectively, for the edge field applied to N = 4 chains; the other parameters are eℓE_z = 0.09t and M_AF = 0.025t. By comparing Fig. 6a with Fig. 5a, it is clear that the allowable range of the photoresponsivity is broadened as the edge field narrows. The quantum efficiency shows the same qualitative behavior, although the positions and heights of the spin-dependent optical transition lines differ, reflecting the difference in the spin-resolved band structures for spin-up and spin-down states. Evaluating the spin polarization diagrams in Figs. 5b and 6b shows that the spin polarization is improved by narrowing the edge potential; the highest spin polarization peak, 99.43%, appears at E_ph = 4.26 eV. The effect of increasing the antiferromagnetic exchange field strength on the spin optoelectronic behavior is investigated in Fig. 7, where the edge potential is applied to N = 4 zigzag chains with eℓE_z = 0.09t and M_AF = 0.03t. Evidently, in Fig. 7a the quantum efficiency of the spin-down carriers is higher than that of the spin-up carriers over a broad range of photon energies, in contrast to the overall trend observed for the quantum efficiency in the presence of the edge field (Figs. 5a, 6a). This distinctive behavior can be attributed to the half-metallic character of the spin-down component. Moreover, as can be observed in Fig. 7b, perfect (100%) optical spin polarization is obtained at E_ph = 3.98 eV.
Strain.
In the crystal structure of an unstrained single-layer stanene at equilibrium, each atom of the upper sublattice is connected to its neighboring atoms in the lower sublattice through three bond vectors, where a = 4.7 Å is the lattice constant of stanene and ϕ = 107.1°. The presence of an external tension T, defined as T = T cos θ ê_x + T sin θ ê_y, transforms these vectors. The longitudinal expansion of the stanene layer arising from the tension is determined by ε = (a′ − a)/a. The tight-binding approximation is a standard approach for describing the electronic properties of nanodevices, and one of its important benefits is that the effect of strain can be taken into account simply by modifying the tight-binding parameters. Under a uniaxial strain in stanene, the equilibrium distance R_0 = a/(√3 sin ϕ) is distorted, and thereby the hopping energy is changed. Calculations demonstrate that under strain the hopping coefficients are modified as in [55], where β is the Grüneisen parameter; in our simulation we take β = 1.95 [56,57].
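As an illustration (the details below are assumptions of this sketch, not statements of the paper), the snippet rescales nearest-neighbor hoppings under a uniaxial in-plane strain of magnitude ε along direction θ, using the common exponential bond-length rule t′ = t·exp[−β(R/R₀ − 1)] and neglecting the Poisson contraction (ν = 0):

```python
import numpy as np

def strained_hoppings(bond_vectors, t=1.3, beta=1.95, eps=0.1, theta=0.0, nu=0.0):
    """Rescaled NN hoppings under uniaxial strain eps along angle theta.

    bond_vectors: unstrained in-plane bond vectors (2-vectors); nu is the
    Poisson ratio of the lattice, set to 0 here for simplicity.
    """
    c, s = np.cos(theta), np.sin(theta)
    # in-plane uniaxial strain tensor in the (x, y) frame
    eps_t = eps * np.array([[c * c - nu * s * s, (1 + nu) * c * s],
                            [(1 + nu) * c * s, s * s - nu * c * c]])
    hoppings = []
    for d in bond_vectors:
        r0 = np.linalg.norm(d)
        r_new = np.linalg.norm(d + eps_t @ d)
        # exponential (Harrison/Grueneisen-type) bond-length scaling
        hoppings.append(t * np.exp(-beta * (r_new / r0 - 1.0)))
    return hoppings
```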
To study the effects of a uniaxial strain on the performance of the proposed spin-optoelectronic device based on antiferromagnetic ZSNRs, the spin-dependent band structure and the quantum efficiency under various strains are inspected. Experimentally, local strains in 2D materials are generated by depositing them on prestretched elastomeric [58] or rough [59] substrates. The results in this subsection are for the case in which the spin-optoelectronic device is based on the antiferromagnetic 10ZSNR with the edge potential applied to N = 4 zigzag chains, the six chains at the center of the ribbon being unaffected by E_z. As depicted in Fig. 8, applying the uniaxial strain shifts the spin-resolved sub-bands around the Fermi energy and therefore changes the population of the states. As a consequence, by employing strain one can control the accumulation of spin-polarized carriers in the scattering region. Here, the strain-induced variation of the kinetic energy is included through the hopping term of the Hamiltonian; this may alter the magnitude and also the number of peaks corresponding to the spin-up and spin-down components. Figure 9 presents the numerical results for the spin-dependent quantum efficiency of the 10ZSNR for different applied tensions. As can be seen in this figure, the different tensions reveal spin-optoelectronic features in the energy range 2.2 eV < E_ph < 4.28 eV. In addition, the spin-dependent quantum efficiency diagrams show similar qualitative behavior under the different strains. Nevertheless, owing to the variation of the spin-resolved energy levels and the different band gaps for spin-up and spin-down states under different strains, the locations and heights of the spin-dependent optical transition peaks vary. Moreover, comparing Fig. 9 with Fig. 6a shows that in the presence of strain the magnitude of the quantum efficiency is strongly decreased for the spin-down component and moderately increased for the spin-up component. Accordingly, applying strain is expected to produce a spin-filtering effect in the spin-optoelectronic device.
To obtain a more precise picture of the strain-induced spin-filtering effect, the optical spin polarization of the ZSNR as a function of photon energy is shown in Fig. 10 for different strengths of the strain. As mentioned earlier, the spin splitting of the electronic band structure, attributed to the simultaneous effect of the vertical electric field and the large spin-orbit coupling, leads to the appearance of spin polarization and, in particular, of a fully spin-polarized photocurrent for only one spin state. Furthermore, the spin-filtering photoresponsivity induced by strain occurs over a wide photon energy range, from approximately 2.84 to 4.28 eV. Strain thus improves the optical spin-filtering property of the stanene lattice, considerably broadening the photon energy range of full spin polarization.
Conclusion
In summary, the spin-polarized photocurrent in an antiferromagnetic single-layer stanene nanoribbon under a perpendicular electric field is theoretically investigated. The optical response of the spin-optoelectronic nanodevice under linearly polarized illumination is calculated by means of the self-consistent nonequilibrium Green's function approach together with the tight-binding model. In the antiferromagnetic ZSNR, the twofold spin degeneracy of the energy levels is split by the combined effect of the vertical electric field and the large spin-orbit interaction of the stanene lattice. The robustness of the spin-polarized photocurrent has also been demonstrated when the electric field is applied only at the two edges; interestingly, in the presence of the edge potential, the spin-polarized current is generated over a wide wavelength region of the incident light. The results further indicate that band bending enhances the spin polarization of the device. In particular, it is shown that in the presence of the narrow edge potential, varying the antiferromagnetic exchange field yields spin-semiconducting behavior in the stanene nanoribbon, a consequence of the large spin-orbit coupling of the stanene lattice. Furthermore, it is found that the spin-resolved photocurrent can be engineered by external strain, and nearly full optical spin filtering is obtained in the antiferromagnetic stanene lattice under the combined effect of strain and the narrow edge potential. The results of this study may be useful for developing stanene-based nanodevices such as spin photodetectors, spin photodiodes, and full spin filters.
Figure 1. The band structure of antiferromagnetic 10ZSNR subject to a perpendicular electric field and λ_so = 100 meV: (a) E_z = 0, (b) eℓE_z = 0.09t, and (c) eℓE_z = 0.12t. Blue line denotes spin down and dashed red line denotes spin up.
Figure 2. Quantum efficiency as a function of the photon energy for the spin-photovoltaic device based on antiferromagnetic 10ZSNR with M_AF = 0.025t, under linearly polarized illumination with I_w = 100 kW/cm² and λ_so = 100 meV: (a) eℓE_z = 0.09t, (b) eℓE_z = 0.12t.
Figure 4. The band structure of antiferromagnetic 10ZSNR subjected to an edge electric field and λ_so = 100 meV: (a) M_AF = 0.025t and edge field applied to N = 8 zigzag chains, (b) M_AF = 0.025t and edge field applied to N = 4 zigzag chains, (c) M_AF = 0.03t and edge field applied to N = 4 zigzag chains. Blue line denotes spin down and red line denotes spin up. Here, eℓE_z = 0.09t.
Figure 6. (a) Quantum efficiency and (b) spin polarization as a function of the photon energy for the spin-photovoltaic device based on antiferromagnetic 10ZSNR under linearly polarized illumination with I_w = 100 kW/cm² and the edge field applied to N = 4 chains. Other parameters: M_AF = 0.025t, λ_so = 100 meV, and eℓE_z = 0.09t.
Figure 7. (a) Quantum efficiency and (b) spin polarization as a function of the photon energy for the spin-photovoltaic device based on antiferromagnetic 10ZSNR under linearly polarized illumination with I_w = 100 kW/cm² and the edge field applied to N = 4 chains. Other parameters: M_AF = 0.03t, λ_so = 100 meV, and eℓE_z = 0.09t.
Figure 8. The band structure of antiferromagnetic 10ZSNR subjected to the combined effect of strain and a narrow edge potential, with the edge field applied to N = 4 chains (λ_so = 100 meV): (a) ε = 0.2, θ = 0° and (b) ε = −0.2, θ = 90°. Blue (red) line denotes spin down (spin up); dashed blue (dashed red) line denotes spin down (spin up) in the absence of strain.
Figure 9. Spin-dependent quantum efficiency of 10ZSNR for different applied tensions.
Figure 10. Spin polarization as a function of the photon energy for different strengths of strain. Blue line denotes spin down and dashed red line denotes spin up.
| 5,077.8 | 2023-08-08T00:00:00.000 | ["Physics"] |
Geometry of Thermodynamic Processes
Since the 1970s contact geometry has been recognized as an appropriate framework for the geometric formulation of the state properties of thermodynamic systems, without, however, addressing the formulation of non-equilibrium thermodynamic processes. In Balian & Valentin (2001) it was shown how the symplectization of contact manifolds provides a new vantage point, enabling one, among other things, to switch between the energy and entropy representations of a thermodynamic system. In the present paper this is continued towards the global geometric definition of a degenerate Riemannian metric on the homogeneous Lagrangian submanifold describing the state properties, which overarches the locally defined metrics of Weinhold and Ruppeiner. Next, a geometric formulation is given of non-equilibrium thermodynamic processes, in terms of Hamiltonian dynamics defined by Hamiltonian functions that are homogeneous of degree one in the co-extensive variables and zero on the homogeneous Lagrangian submanifold. The correspondence between objects in contact geometry and their homogeneous counterparts in symplectic geometry, as already largely present in the literature, appears to be elegant and effective. This culminates in the definition of port-thermodynamic systems, and the formulation of interconnection ports. The resulting geometric framework is illustrated on a number of simple examples, already indicating its potential for analysis and control.
Introduction
This paper is concerned with the geometric formulation of thermodynamic systems. While the geometric formulation of mechanical systems has given rise to an extensive theory, commonly called geometric mechanics, the geometric formulation of thermodynamics has remained more elusive and restricted.
Despite this increasing interest, the current geometric theory of thermodynamics still poses major challenges. First, most of the work is on the geometric formulation of the equations of state, through the use of Legendre submanifolds [1-3,5,8], while less attention has been paid to the geometric definition and analysis of non-equilibrium dynamics. Secondly, thermodynamic system models commonly appear both in energy and in entropy representation, while in principle this corresponds to contactomorphic, but different, contact manifolds. This is already demonstrated by rewriting Gibbs' equation in energy representation, dE = TdS − PdV, with intensive variables T, −P, into the entropy representation, dS = (1/T)dE + (P/T)dV, with intensive variables 1/T, P/T. Thirdly, for reasons of analysis and control of composite thermodynamic systems, a geometric description of the interconnection of thermodynamic systems is desirable, but currently largely lacking.
A new viewpoint on the geometric formulation of thermodynamic systems was provided in [21], by exploiting the well-known result in geometry that odd-dimensional contact manifolds can be naturally symplectized to even-dimensional symplectic manifolds with an additional structure of homogeneity; see [22,23] for textbook expositions. While the classical applications of symplectization are largely confined to time-dependent Hamiltonian mechanics [23] and partial differential equations [22], the paper [21] argued convincingly that symplectization provides an insightful angle to the geometric modeling of thermodynamic systems as well. In particular, it yields a clear way to bring together energy and entropy representations, by viewing the choice of different intensive variables as the selection of different homogeneous coordinates.
In the present paper, we aim at expanding this symplectization point of view towards thermodynamics, amplifying our initial work [24,25]. In particular, we show how the symplectization point of view not only unifies the energy and entropy representation, but is also very helpful in describing the dynamics of thermodynamic processes, inspired by the notion of the contact control system developed in [11-13,17-19]; see also [16]. Furthermore, it yields a direct and global definition of a metric on the submanifold describing the state properties, encompassing the locally-defined metrics of Weinhold [26] and Ruppeiner [27], and providing a new angle to the equivalence results obtained in [3,5,7,10]. Finally, it is shown how symplectization naturally leads to a definition of interconnection ports; thus extending the compositional geometric port-Hamiltonian theory of interconnected multi-physics systems (see, e.g., [28-30]) to the thermodynamic realm. All this will be illustrated by a number of simple, but instructive, examples, primarily serving to elucidate the developed framework and its potential.
Thermodynamic Phase Space and Geometric Formulation of the Equations of State
The starting point for the geometric formulation of thermodynamic systems throughout this paper is an (n + 1)-dimensional manifold Q_e, with n ≥ 1, whose coordinates comprise the extensive variables, such as volume and mole numbers of chemical species, as well as entropy and energy [31]. Emphasis in this paper will be on simple thermodynamic systems, with a single entropy and energy variable. Furthermore, for notational simplicity, and without much loss of generality, we will assume: Q_e = Q × R × R, (1) with S ∈ R the entropy variable, E ∈ R the energy variable, and Q the (n − 1)-dimensional manifold of remaining extensive variables (such as volume and mole numbers).
In composite (i.e., compartmental) systems, we may need to consider multiple entropies or energies, namely one for each of the components. In this case, R × R is replaced by R^{m_S} × R^{m_E}, with m_S denoting the number of entropies and m_E the number of energies; see Example 3 for such a situation. This also naturally arises in the interconnection of thermodynamic systems, as will be discussed in Section 5.
Coordinates for Q e throughout will be denoted by q e = (q, S, E), with q coordinates for Q (the manifold of remaining extensive variables). Furthermore, we denote by T * Q e the (2n + 2)-dimensional cotangent bundle T * Q e without its zero-section. Given local coordinates (q, S, E) for Q e , the corresponding natural cotangent bundle coordinates for T * Q e and T * Q e are denoted by: (q e , p e ) = (q, S, E, p, p S , p E ), (2) where the co-tangent vector p e := (p, p S , p E ) will be called the vector of co-extensive variables.
Following [21], the thermodynamic phase space P(T * Q e ) is defined as the projectivization of T * Q e , i.e., as the fiber bundle over Q e with fiber at any point q e ∈ Q e given by the projective space P(T * q e Q e ). (Recall that elements of P(T * q e Q e ) are identified with rays in T * q e Q e , i.e., non-zero multiples of a non-zero cotangent vector.) The corresponding projection will be denoted by π : T * Q e → P(T * Q e ).
It is well known [22,23] that P(T*Q_e) is a contact manifold of dimension 2n + 1. Indeed, recall [22,23] that a contact manifold is a (2n + 1)-dimensional manifold N equipped with a maximally non-integrable field of hyperplanes ξ. This means that ξ = ker θ ⊂ TN for a, possibly only locally-defined, one-form θ on N satisfying θ ∧ (dθ)^n ≠ 0. By Darboux's theorem [22,23], there exist local coordinates (called Darboux coordinates) q_0, q_1, · · · , q_n, γ_1, · · · , γ_n for N such that, locally: θ = dq_0 − γ_1 dq_1 − · · · − γ_n dq_n. (3) Then, in order to show that P(T*M) for any (n + 1)-dimensional manifold M is a contact manifold, consider the Liouville one-form α on the cotangent bundle T*M, expressed in natural cotangent bundle coordinates for T*M as α = ∑_{i=0}^n p_i dq_i. Consider a neighborhood where p_0 ≠ 0, and define the homogeneous coordinates: γ_i = −p_i/p_0, i = 1, · · · , n, (4) which, together with q_0, q_1, · · · , q_n, serve as local coordinates for P(T*M). This results in the locally-defined contact form θ as in (3) (with α = p_0 θ). The same holds on any neighborhood where one of the other coordinates p_1, · · · , p_n is different from zero, in which case division by the non-zero p_i results in other homogeneous coordinates. This shows that P(T*M) is indeed a contact manifold. Furthermore [22,23], P(T*M) is the canonical contact manifold in the sense that every contact manifold N is locally contactomorphic to P(T*M) for some manifold M. Taking M = Q_e, it follows that coordinates for the thermodynamic phase space P(T*Q_e) are obtained by replacing the coordinates p_e = (p, p_S, p_E) for the fibers T*_{q_e} Q_e by homogeneous coordinates for the projective space P(T*_{q_e} Q_e). In particular, assuming p_E ≠ 0, we obtain the homogeneous coordinates: γ = −p/p_E, γ_S = −p_S/p_E, (5) defining the intensive variables of the energy representation. Alternatively, assuming p_S ≠ 0, we obtain the homogeneous coordinates (see [21] for a discussion of p_S, or p_E, as a gauge variable): γ = −p/p_S, γ_E = −p_E/p_S, (6) defining the intensive variables of the entropy representation.
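As a small sanity check (an illustration, not part of the paper), one can verify symbolically for n = 1 that on the chart p_0 ≠ 0 the Liouville form indeed factors as α = p_0 θ with γ = −p_1/p_0:

```python
import sympy as sp

q0, q1, p0, p1 = sp.symbols('q0 q1 p0 p1', nonzero=True)
gamma = -p1 / p0                       # homogeneous coordinate on the chart p0 != 0

# represent one-forms by their coefficient vectors in the basis (dq0, dq1)
alpha = sp.Matrix([p0, p1])            # Liouville form: p0*dq0 + p1*dq1
theta = sp.Matrix([1, -gamma])         # local contact form: dq0 - gamma*dq1

assert sp.simplify(alpha - p0 * theta) == sp.zeros(2, 1)   # alpha = p0 * theta
```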
Example 1. Consider a mono-phase, single-constituent gas in a closed compartment, with volume q = V, entropy S, and internal energy E, satisfying Gibbs' relation dE = TdS − PdV. In the energy representation, the intensive variable γ is given by the pressure −P, and γ_S is the temperature T. In the entropy representation, the intensive variable γ is equal to P/T, while γ_E equals the reciprocal temperature 1/T.
In order to provide the geometric formulation of the equations of state on the thermodynamic phase space P(T * Q e ), we need the following definitions. First, recall that a submanifold L of T * Q e is called a Lagrangian submanifold [22,23] if the symplectic form ω := dα is zero restricted to L and the dimension of L is equal to the dimension of Q e (the maximal dimension of a submanifold restricted to which ω can be zero).
Definition 1.
A homogeneous Lagrangian submanifold L ⊂ T*Q_e is a Lagrangian submanifold with the additional property that: (q_e, p_e) ∈ L ⇒ (q_e, λp_e) ∈ L, for every λ ≠ 0, λ ∈ R. In Appendix A, cf. Proposition A2, homogeneous Lagrangian submanifolds are geometrically characterized as submanifolds L ⊂ T*Q_e of dimension equal to dim Q_e, on which not only the symplectic form ω = dα but also the Liouville one-form α is zero.
Importantly, homogeneous Lagrangian submanifolds of T*Q_e are in one-to-one correspondence with Legendre submanifolds of P(T*Q_e). Recall that a submanifold L of a (2n + 1)-dimensional contact manifold N is a Legendre submanifold [22,23] if the locally-defined contact form θ is zero restricted to L and the dimension of L is equal to n (the maximal dimension of a submanifold restricted to which θ can be zero). Proposition 1 (cf. [23], Proposition 10.16). Consider the projection π : T*Q_e → P(T*Q_e). Then, L ⊂ P(T*Q_e) is a Legendre submanifold if and only if L := π^{−1}(L) ⊂ T*Q_e is a homogeneous Lagrangian submanifold. Conversely, any homogeneous Lagrangian submanifold L is of the form π^{−1}(L) for some Legendre submanifold L.
In the contact geometry formulation of thermodynamic systems [1-3,5], the equations of state are formalized as Legendre submanifolds. In view of the correspondence with homogeneous Lagrangian submanifolds, we arrive at the following.
Definition 2.
Consider Q e and the thermodynamical phase space P(T * Q e ). The state properties of the thermodynamic system are defined by a homogeneous Lagrangian submanifold L ⊂ T * Q e and its corresponding Legendre submanifold L ⊂ P(T * Q e ).
The correspondence between Legendre and homogeneous Lagrangian submanifolds also implies the following characterization of generating functions for any homogeneous Lagrangian submanifold L ⊂ T*Q_e. This is based on the fact [22,23] that any Legendre submanifold L ⊂ N can, in Darboux coordinates q_0, q_1, · · · , q_n, γ_1, · · · , γ_n for N, be locally represented as in (8), for some partitioning I ∪ J = {1, · · · , n} and some function F(q_I, γ_J) (called a generating function for L); conversely, any submanifold L as given in (8), for any partitioning I ∪ J = {1, · · · , n} and function F(q_I, γ_J), is a Legendre submanifold. Given such a generating function F(q_I, γ_J) for the Legendre submanifold L, we now define, assuming p_0 ≠ 0 and substituting γ_J = −p_J/p_0, the function G(q_0, · · · , q_n, p_0, · · · , p_n) of (9). Then a direct computation shows (10), implying, in view of (8), the representation (11). In turn, this implies that G as defined in (9) is a generating function for the homogeneous Lagrangian submanifold L = π^{−1}(L). If, instead of p_0, another coordinate p_i is different from zero, then division by this p_i ≠ 0 yields a similar generating function. This is summarized in the following proposition.
Proposition 2. Any homogeneous Lagrangian submanifold L can be locally represented as in (11), with generating function G of the form (9); conversely, for any such G, the submanifold (11) is a homogeneous Lagrangian submanifold.
Note that the generating functions G as in (9) are homogeneous of degree one in the variables (p 0 , · · · , p n ); see the Appendix A for further information regarding homogeneity.
The simplest instance of a generating function for a Legendre submanifold L and its homogeneous Lagrangian counterpart L occurs when the generating function F as in (8) only depends on q_1, · · · , q_n. In this case, the generating function G is given by: G(q_0, · · · , q_n, p_0, · · · , p_n) = −p_0 F(q_1, · · · , q_n), (12) with the corresponding homogeneous Lagrangian submanifold L = π^{−1}(L) locally given by (13). A particular feature of this case is that exactly one of the extensive variables, here q_0, is expressed as a function of all the others, i.e., q_1, · · · , q_n. At the same time, p_0 is unconstrained, while the other co-extensive variables p_1, · · · , p_n are determined by p_0, q_1, · · · , q_n. For a general generating function G as in (9), this is not necessarily the case. For example, if J = {1, · · · , n}, corresponding to a generating function −p_0 F(γ), then q_0, · · · , q_n are all expressed as functions of the unconstrained variables p_0, · · · , p_n.
Remark 1.
In the present paper, crucial use is made of homogeneity in the co-extensive variables (p, p S , p E ), which is different from homogeneity with respect to the extensive variables (q, q S , q E ), as occurring, e.g., in the Gibbs-Duhem relations [31].
The two most important representations of a homogeneous Lagrangian submanifold L ⊂ T*Q_e, and of its Legendre counterpart L ⊂ P(T*Q_e), are the energy representation and the entropy representation. In the first case, L is represented, as in (12), by a generating function of the form: −p_E E(q, S), (14) yielding the representation: L = {(q, S, E, p, p_S, p_E) | E = E(q, S), p = −p_E ∂E/∂q (q, S), p_S = −p_E ∂E/∂S (q, S)}. (15) In the second case (the entropy representation), L is represented by a generating function of the form: −p_S S(q, E), (16) yielding the representation: L = {(q, S, E, p, p_S, p_E) | S = S(q, E), p = −p_S ∂S/∂q (q, E), p_E = −p_S ∂S/∂E (q, E)}. (17) Note that in the energy representation, the independent extensive variables are taken to be q and the entropy S, with the energy variable E expressed as a function of them; in the entropy representation, the independent extensive variables are q and the energy E, with S expressed as a function of them. Furthermore, in the energy representation the co-extensive variable p_E is "free", while in the entropy representation the co-extensive variable p_S is free. In principle, other representations could be chosen as well, although we will not pursue this. For instance, in Example 1, one could consider a generating function −p_V V(S, E) in which the extensive variable V is expressed as a function of the other two extensive variables S, E.
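As an illustration of switching between the two representations (not from the paper; it assumes a monatomic ideal gas with all additive constants dropped), the following sympy sketch recovers the intensive variables of Example 1 in both representations:

```python
import sympy as sp

E, V, S, cv, Rg = sp.symbols('E V S c_v R', positive=True)

# entropy representation: S = S(q, E) with q = V; intensive variables 1/T, P/T
S_of = cv * sp.log(E) + Rg * sp.log(V)          # ideal-gas entropy (constants dropped)
inv_T = sp.diff(S_of, E)                        # gamma_E = 1/T = c_v / E
P_over_T = sp.diff(S_of, V)                     # gamma   = P/T = R / V

# energy representation: invert to E = E(q, S)
E_of = sp.solve(sp.Eq(S, S_of), E)[0]           # E = exp(S/c_v) * V**(-R/c_v)
T = sp.simplify(sp.diff(E_of, S))               # gamma_S = T
minus_P = sp.simplify(sp.diff(E_of, V))         # gamma   = -P

assert sp.simplify(T * inv_T.subs(E, E_of) - 1) == 0               # T * (1/T) = 1
assert sp.simplify(minus_P + (P_over_T * T).subs(E, E_of)) == 0    # -P + P = 0
```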
As already discussed in [1,2], an important advantage of describing the state properties by a Legendre submanifold L, instead of by writing out the equations of state, is in providing a global and coordinate-free point of view, allowing for an easy transition between different thermodynamic potentials. Furthermore, if singularities occur in the equations of state, L is typically still a smooth submanifold. As seen before [21], the description by a homogeneous Lagrangian submanifold L has the additional advantage of yielding a simple way for switching between the energy and the entropy representation.
Remark 2.
Although the terminology "thermodynamic phase space" for P(T * Q e ) may suggest that all points in P(T * Q e ) are feasible for the thermodynamic system, this is actually not the case. The state properties of the thermodynamic system are specified by the Legendre submanifold L ⊂ P(T * Q e ), and thus, the actual "state space" of the thermodynamic system at hand is this submanifold L; not the whole of P(T * Q e ).
A proper analogy with the Hamiltonian formulation of mechanical systems would be as follows. Consider the phase space T * Q of a mechanical system with configuration manifold Q. Then, the Hamiltonian H : T * Q → R defines a Lagrangian submanifold L H of T * (T * Q) given by the graph of the gradient of H. The homogeneous Lagrangian submanifold L is analogous to L H , while the symplectized thermodynamic phase space T * Q e is analogous to T * (T * Q).
The Metric Determined by the Equations of State
In a series of papers starting with [26], Weinhold investigated the Riemannian metric that is locally defined by the Hessian matrix of the energy expressed as a (convex) function of the entropy and the other extensive variables. (The importance of this Hessian matrix, also called the stiffness matrix, was already recognized in [31,32].) Similarly, Ruppeiner [27], starting from the theory of fluctuations, explored the locally-defined Riemannian metric given by minus the Hessian of the entropy expressed as a (concave) function of the energy and the other extensive variables. Subsequently, Mrugała [3] reformulated both metrics as living on the Legendre submanifold L of the thermodynamic phase space and showed that actually, these two metrics are locally equivalent (by a conformal transformation); see also [9]. Furthermore, based on statistical mechanics arguments, [7] globally defined an indefinite metric on the thermodynamical phase space, which, when restricted to the Legendre submanifold, reduces to the Weinhold and Ruppeiner metrics; thus showing global conformal equivalence. This point of view was recently further extended in a number of directions in [10].
In this section, crucially exploiting the symplectization point of view, we provide a novel global geometric definition of a degenerate pseudo-Riemannian metric on the homogeneous Lagrangian submanifold L defining the equations of state, for any given torsion-free connection on the space Q e of extensive variables. In a coordinate system in which the connection is trivial (i.e., its Christoffel symbols are all zero), this metric will be shown to reduce to Ruppeiner's locally-defined metric once we use homogeneous coordinates corresponding to the entropy representation, and to Weinhold's locally-defined metric by using homogeneous coordinates corresponding to the energy representation. Hence, parallel to the contact geometry equivalence established in [3,7,10], we show that the metrics of Weinhold and Ruppeiner are just two different local representations of this same globally-defined degenerate pseudo-Riemannian metric on the homogeneous Lagrangian submanifold of the symplectized thermodynamic phase space.
Recall [33] that an (affine) connection ∇ on an (n + 1)-dimensional manifold M is defined as an assignment (X, Y) ↦ ∇_X Y for any two vector fields X, Y, which is R-bilinear and satisfies ∇_{fX} Y = f ∇_X Y and ∇_X(fY) = f ∇_X Y + (Xf) Y for any function f on M. This implies that ∇_X Y(q) only depends on X(q) and on the value of Y along a curve tangent to X at q. In local coordinates q for M, the connection is determined by its Christoffel symbols Γ^a_{bc}(q), a, b, c = 0, · · · , n, defined by ∇_{∂/∂q^b}(∂/∂q^c) = Γ^a_{bc}(q) ∂/∂q^a. The connection is called torsion-free if ∇_X Y − ∇_Y X = [X, Y] for any two vector fields X, Y, or equivalently if its Christoffel symbols satisfy the symmetry property Γ^a_{bc}(q) = Γ^a_{cb}(q), a, b, c = 0, · · · , n. We call a connection trivial in a given set of coordinates q = (q_0, · · · , q_n) if its Christoffel symbols in these coordinates are all zero.
As detailed in [34], given a torsion-free connection on M, there exists a natural pseudo-Riemannian ("pseudo" since the metric is indefinite) metric on the cotangent bundle T*M, given in cotangent bundle coordinates (q, p) for T*M in terms of the Christoffel symbols. Let us now consider for M the manifold of extensive variables Q_e = Q × R² with coordinates q_e = (q, S, E) as before, where we assume the existence of a torsion-free connection that is trivial in the coordinates (q, S, E), i.e., whose Christoffel symbols are all zero. Then, the pseudo-Riemannian metric I on T*Q_e takes the form (with products of one-forms denoting symmetric products): I = 2(dp dq + dp_S dS + dp_E dE). Denote by G the pseudo-Riemannian metric I restricted to the homogeneous Lagrangian submanifold L describing the state properties. Consider the energy representation (15) of L, with generating function −p_E E(q, S). It follows that ½G equals (in shorthand notation) −p_E W, where W, the Hessian matrix of E expressed as a (strongly convex) function of q and S, is recognized as Weinhold's metric [26].
On the other hand, in the entropy representation (17) of L, with generating function −p_S S(q, E), an analogous computation shows that ½G is given as p_S R, with R the Ruppeiner metric [27]: minus the Hessian of S expressed as a (strongly concave) function of q and E. Hence, we conclude that G = −2p_E W = 2p_S R; since on L we have p_S = −p_E ∂E/∂S = −p_E T, this is basically the conformal equivalence between W and R found in [3]; see also [7,10]. Summarizing, we have found the following. Theorem 1. Consider a torsion-free connection on Q_e, with coordinates q_e = (q, S, E), in which the Christoffel symbols of the connection are all zero. Then, by restricting the pseudo-Riemannian metric I to L, we obtain a degenerate pseudo-Riemannian metric G on L, which in a local energy representation (15) of L is given by −2p_E W, with W the Weinhold metric (24), and in a local entropy representation (17) by 2p_S R, with R the Ruppeiner metric (25).
We emphasize that the degenerate pseudo-Riemannian metric G is globally defined on L, in contrast to the locally-defined Weinhold and Ruppeiner metrics W and R; see also the discussion in [3,5,7,9,10]. We refer to G as degenerate, since its rank is at most n instead of n + 1. Note furthermore that G is homogeneous of degree one in p e and hence does not project to the Legendre submanifold L.
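To make the conformal equivalence concrete (an illustration under the assumption of a monatomic ideal gas, not a computation from the paper), the sympy sketch below verifies that the Weinhold metric equals T times the Ruppeiner metric once the latter is pulled back to the (V, S) coordinates along E = E(V, S), in accordance with G = −2p_E W = 2p_S R and p_S = −p_E T:

```python
import sympy as sp

S, V, E, cv, Rg = sp.symbols('S V E c_v R', positive=True)

E_of = sp.exp(S / cv) * V ** (-Rg / cv)         # ideal-gas E(V, S), constants dropped
S_of = cv * sp.log(E) + Rg * sp.log(V)          # its inverse S(V, E)
T = sp.diff(E_of, S)                            # temperature on L

W = sp.hessian(E_of, (V, S))                    # Weinhold metric in coordinates (V, S)
Rupp = -sp.hessian(S_of, (V, E))                # Ruppeiner metric in coordinates (V, E)

# pull Rupp back to (V, S) along the map (V, S) |-> (V, E_of(V, S))
J = sp.Matrix([[1, 0],
               [sp.diff(E_of, V), sp.diff(E_of, S)]])
Rupp_VS = J.T * Rupp.subs(E, E_of) * J

assert sp.simplify(W - T * Rupp_VS) == sp.zeros(2, 2)   # W = T * R (conformal factor T)
```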
While the assumption of the existence of a trivial connection appears natural in most cases (see also the information geometry point of view exposed in [35]), all of this extends directly to any non-trivial torsion-free connection ∇ on Q_e. For example, consider the following situation. For ease of notation, denote q_S := S, q_E := E, and correspondingly write (q_0, q_1, · · · , q_{n−2}, q_S, q_E) := (q, S, E). Take any torsion-free connection on Q_e given by symmetric Christoffel symbols Γ^c_{ab} = Γ^c_{ba}, with indices a, b, c = 0, · · · , n − 2, S, E, satisfying Γ^c_{ab} = 0 whenever one of the indices a, b, c equals E. Then the indefinite metric I on T*Q_e takes an analogous shorthand form, and the resulting metric ½G on L is given by a matrix in which the (n × n)-block multiplying −p_E is the globally defined geometric Hessian matrix (see, e.g., [36]) with respect to the connection on Q × R corresponding to the Christoffel symbols Γ^c_{ab}, a, b, c = 0, · · · , n − 2, S.
Dynamics of Thermodynamic Processes
In this section, we explore the geometric structure of the dynamics of (non-equilibrium) thermodynamic processes; in other words, geometric thermodynamics. By making crucial use of the symplectization of the thermodynamic phase space, this will lead to the definition of port-thermodynamic systems in Definition 3; allowing for open thermodynamic processes. The definition is illustrated in Section 4.2 on a number of simple examples. In Section 4.3, initial observations will be made regarding the controllability of port-thermodynamic systems.
Port-Thermodynamic Systems
In Section 2, we noted the one-to-one correspondence between Legendre submanifolds L of the thermodynamic phase space P(T * Q e ) and homogeneous Lagrangian submanifolds L of the symplectized space T * Q e . In the present section, we start by noting that there is as well a one-to-one correspondence between contact vector fields on P(T * Q e ) and Hamiltonian vector fields X K on T * Q e with Hamiltonians K that are homogeneous of degree one in p e (see the Appendix A for further details on homogeneity).
Here, Hamiltonian vector fields X_K on T*Q_e with Hamiltonian K are given in cotangent bundle coordinates (q_e, p_e) = (q_0, · · · , q_n, p_0, · · · , p_n) by the standard expressions, while contact vector fields X_K on the contact manifold P(T*Q_e) are given in local Darboux coordinates (q_e, γ) = (q_0, · · · , q_n, γ_1, · · · , γ_n) as in [22,23], for some contact Hamiltonian K(q_e, γ). Indeed, consider any Hamiltonian vector field X_K on T*Q_e, with K homogeneous of degree one in the co-extensive variables p_e, or equivalently (see Appendix A, Proposition A1) L_{X_K} α = 0, with L denoting the Lie derivative. It follows, cf. Theorem 12.5 in [23], that X_K projects under π : T*Q_e → P(T*Q_e) to a vector field π_* X_K satisfying an invariance condition for the contact structure, with some conformal factor ρ, for all (locally-defined) expressions of the contact form θ on P(T*Q_e). This exactly means [23] that the vector field π_* X_K is a contact vector field, with contact Hamiltonian given by (32). Conversely [22,23], any contact vector field X_K on P(T*Q_e), for some contact Hamiltonian K, can be lifted to a Hamiltonian vector field X_K on T*Q_e with homogeneous K. In fact, for K expressed in Darboux coordinates for P(T*Q_e) as K(q_0, q_1, · · · , q_n, γ_1, · · · , γ_n), the corresponding homogeneous function K is given as, cf. [23] (Chapter V, Remark 14.4), K(q_0, · · · , q_n, p_0, · · · , p_n) = p_0 K(q_0, · · · , q_n, −p_1/p_0, · · · , −p_n/p_0), (33) and analogously on any other homogeneous coordinate neighborhood of P(T*Q_e). This is summarized in the following proposition. (N.B.: for brevity, we will from now on refer to a function K(q_e, p_e) that is homogeneous of degree one in the co-extensive variables p_e as a homogeneous function, and to a Hamiltonian vector field X_K on T*Q_e with K homogeneous of degree one in p_e as a homogeneous Hamiltonian vector field.)
Proposition 3.
Any homogeneous Hamiltonian vector field X K on T * Q e projects under π to a contact vector field X K on P(T * Q e ) with K locally given by (32), and conversely, any contact vector field X K on P(T * Q e ) lifts under π to a homogeneous Hamiltonian vector field X K on T * Q e with K locally given by (33).
Recall, and see also Remark 2, that the equations of state describe the constitutive relations between the extensive and intensive variables of the thermodynamic system, or said otherwise, the state properties of the thermodynamic system. Since these properties are fixed for a given thermodynamic system, any dynamics should leave its equations of state invariant. Equivalently, any dynamics on T * Q e or on P(T * Q e ) should leave the homogeneous Lagrangian submanifold L ⊂ T * Q e , respectively, its Legendre submanifold counterpart L ⊂ P(T * Q e ), invariant. (Recall that a submanifold is invariant for a vector field if the vector field is everywhere tangent to it; and thus, solution trajectories remain on it.) Furthermore, it is natural to require the dynamics of the thermodynamic system to be Hamiltonian; i.e., homogeneous Hamiltonian dynamics on T * Q e and a contact dynamics on P(T * Q e ).
In order to combine the Hamiltonian structure of the dynamics with invariance, we make crucial use of the following properties.
1.
A homogeneous Lagrangian submanifold L ⊂ T * Q e is invariant for the homogeneous Hamiltonian vector field X K if and only if the homogeneous K : T * Q e → R restricted to L is zero.
2.
A Legendre submanifold L ⊂ P(T * Q e ) is invariant for the contact vector field X K if and only if K : P(T * Q e ) → R restricted to L is zero.
3.
The homogeneous function K : T * Q e → R restricted to L is zero if and only if the corresponding function K : P(T * Q e ) → R restricted to L is zero.
Item 2 is well known [22,23], and Item 1 can be found in [23,25], while Item 3 directly follows from the correspondence between K and K in (32) and (33).
Based on these considerations, we define the dynamics of a thermodynamic system as being generated by a parametrized homogeneous Hamiltonian K := K_a + K_c u, u ∈ R^m, with K_a restricted to L zero, and K_c an m-dimensional row of functions K_{c_j}, j = 1, · · · , m, all of which are also zero on L. Then, the resulting dynamics is given by the homogeneous Hamiltonian dynamics on T*Q_e, ẋ = X_{K_a}(x) + X_{K_c}(x) u, (35) restricted to L. (In [24,25], (35) was called a homogeneous Hamiltonian control system.) By Proposition 3, this dynamics projects to contact dynamics corresponding to the contact Hamiltonian K = K_a + K_c u on the corresponding Legendre submanifold L ⊂ P(T*Q_e). The invariance conditions on the parametrized Hamiltonian K defining the dynamics on L and L take the following explicit form. Since K is homogeneous of degree one, we can write, by Euler's homogeneous function theorem (Theorem A1), K = p f + p_S f_S + p_E f_E + (p g + p_S g_S + p_E g_E) u, (36) where the functions f, f_S, f_E, as well as the elements of the m-dimensional row vectors of functions g, g_S, g_E, are all homogeneous of degree zero. Now, recall the energy representation (15) of the Lagrangian submanifold L describing the state properties of the system, as given in (37). By substitution of (37) in (36), it follows that K restricted to L is zero for all u if and only if the corresponding combinations of f, f_S, f_E and of g, g_S, g_E vanish on L for all p_E. This leads to the following additional requirements on the homogeneous function K_a. The first law of thermodynamics ("total energy preservation") requires that the uncontrolled (u = 0) dynamics preserves energy, implying that f_E restricted to L is zero. Furthermore, the second law of thermodynamics ("increase of entropy") leads to the following requirement: writing out K|_L = 0 in the entropy representation (17) of L and plugging in the requirement f_E|_L = 0 found above, the condition that for u = 0 the entropy is non-decreasing amounts to the additional requirement that f_S restricted to L is non-negative. All of this leads to the following geometric formulation of a port-thermodynamic system.
Definition 3 (Port-thermodynamic system).
Consider the space of extensive variables Q_e = Q × R × R and the thermodynamic phase space P(T*Q_e). A port-thermodynamic system on P(T*Q_e) is defined as a pair (L, K), where the homogeneous Lagrangian submanifold L ⊂ T*Q_e specifies the state properties. The dynamics is given by the homogeneous Hamiltonian dynamics with parametrized homogeneous Hamiltonian K := K_a + K_c u : T*Q_e → R, u ∈ R^m, in the form (36), with K_a, K_c zero on L, and the internal Hamiltonian K_a satisfying (corresponding to the first and second law of thermodynamics): f_E|_L = 0, f_S|_L ≥ 0, (44) which can be written out explicitly both in the energy representation (15) and in the entropy representation (17). Furthermore, the power-conjugate outputs y_p of the port-thermodynamic system (L, K) are defined as the row-vector y_p := g_E|_L. Since, by Euler's theorem (Theorem A1), all expressions f, f_S, f_E, g, g_S, g_E are homogeneous of degree zero, they project to functions on the thermodynamic phase space P(T*Q_e); hence, the dynamics and the output equations are equally well-defined on the Legendre submanifold L ⊂ P(T*Q_e). Note that, as a consequence of the above definition of a port-thermodynamic system, d/dt E = y_p u, expressing that the increase of the total energy of the thermodynamic system is equal to the power supplied to it by the environment.
Remark 3.
In case f , f S , f E , g, g S , g E do not depend on p e (and therefore, are trivially homogeneous of degree zero in p e ), they actually define vector fields on the space of extensive variables Q e (since they transform as vector fields under a coordinate change for Q e ). In this case, the dynamics on T * Q e and L is equal to the Hamiltonian lift of the dynamics on Q e ; see, e.g., [37].
Remark 4.
Whenever the dynamics on L is given as the Hamiltonian lift of dynamics on Q e (see the previous Remark), the properties (44) can be enforced by formulating the dynamics on Q e as the sum of a Hamiltonian vector field with respect to the energy E and a gradient vector field with respect to the entropy S, in such a way that S is a Casimir of the Poisson bracket and E is a "Casimir" of the symmetric bracket; see, e.g., [38,39]. The extension of this to the general homogeneous setting employed in Definition 3 is of much interest.
Remark 5. Definition 3 is generalized to the compartmental situation
corresponding, respectively, to total energy conservation and total entropy increase; see already Example 3.
Remark 6.
An extension to Definition 3 is to consider a non-affine dependence of K on u, i.e., a general function K : T * Q e × R m → R that is homogeneous in p e . See already the damper subsystem in Example 7 and the formulation of Hamiltonian input-output systems as initiated in [40] and continued in, e.g., [37,41,42].
Defining the vector of outputs as power-conjugate to the input vector u is the most common option for defining an interaction port (in this case properly called a power-port) of the thermodynamic system. Nevertheless, there are other possibilities as well. Indeed, a port representing the rate of entropy flow is obtained by defining the alternative output y_re := g_S|_L, which is entropy-conjugate to the input vector u. This leads instead to the rate of entropy balance d/dt S = y_re u + f_S|_L, (49) where the second, non-negative, term on the right-hand side is the internal rate of entropy production.
Remark 7.
From the point of view of dissipativity theory [43,44], this means that any port-thermodynamic system, with inputs u and outputs y p , y re , is cyclo-lossless with respect to the supply rate y p u and cyclo-passive with respect to the supply rate y re u.
Finally, it is of interest to note that, as illustrated by the examples in the next subsection, the Hamiltonian K generating the dynamics on L is dimensionless; i.e., its values do not have a physical dimension. Physical dimensions do arise by dividing the homogeneous expression by one of the co-extensive variables.
Examples of Port-Thermodynamic Systems
Example 2 (Heat compartment). Consider a simple thermodynamic system in a compartment, allowing for heat exchange with its environment. Its thermodynamic properties are described by the extensive variables S (entropy) and E (internal energy), with E expressed as a function E = E(S) of S. Its state properties (in energy representation) are given by the homogeneous Lagrangian submanifold corresponding to the generating function −p_E E(S). Since there is no internal dynamics, K_a is absent. Hence, taking u as the rate of entropy flow corresponds to the homogeneous Hamiltonian K = K_c u with K_c = p_S + p_E E′(S), which is zero on L. This yields on L the dynamics Ṡ = u, Ė = E′(S) u (entailing both the entropy and energy balance), with power-conjugate output y_p equal to the temperature T = E′(S). Defining the homogeneous coordinate γ = −p_S/p_E leads to the contact Hamiltonian K_c = E′(S) − γ on P(T*R²) and the corresponding Legendre submanifold L. The resulting contact dynamics on L equals the projected dynamics π_* X_K = X_K, in which the third equation corresponds to the energy balance in terms of the temperature dynamics. Note that E″(S) = T/C, with C the heat capacitance of the fixed volume. Alternatively, if we instead take the incoming heat flow as the input v, then the Hamiltonian is given by K = (p_S/E′(S) + p_E) v, leading to the "trivial" power-conjugate output y_p = 1 and to the rate-of-entropy-conjugate output y_re given by the reciprocal temperature, y_re = 1/T.
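As a quick numerical illustration (not from the paper; the heat capacitance and reference temperature are assumed values), the sketch below integrates Ṡ = u for a compartment with constant heat capacitance, for which T(S) = T₀ e^{S/C} and E(S) = C·T(S), and checks the balance d/dt E = y_p u:

```python
import numpy as np

C, T0 = 100.0, 300.0                    # heat capacitance, reference temperature (assumed)
T = lambda S: T0 * np.exp(S / C)        # T = E'(S); then E''(S) = T/C as in the example
E = lambda S: C * T(S)                  # internal energy E(S) = C*T0*exp(S/C)

dt, S, supplied = 1e-3, 0.0, 0.0
for _ in range(10_000):
    u = 0.05                            # rate of entropy inflow (the input)
    y_p = T(S)                          # power-conjugate output: the temperature
    supplied += dt * y_p * u            # integral of y_p * u
    S += dt * u                         # S-dot = u  (hence E-dot = T*u)
print(E(S) - E(0.0), supplied)          # agree up to integration error
```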
Example 3 (Heat exchanger).
Consider two heat compartments as in Example 2, exchanging heat through an interface according to Fourier's law. The extensive variables are S_1, S_2 (the entropies of the two compartments) and E (the total internal energy). The state properties are described by the homogeneous Lagrangian submanifold corresponding to the generating function −p_E(E_1(S_1) + E_2(S_2)), with E_1, E_2 the internal energies of the two compartments. Denoting the temperatures T_1 = E_1′(S_1) and T_2 = E_2′(S_2), the internal dynamics of this two-component thermodynamic system corresponding to Fourier's law can be taken as the Hamiltonian K_a = λ(T_2 − T_1)(p_{S_1}/T_1 − p_{S_2}/T_2), with λ Fourier's conduction coefficient; one checks that K_a is zero on L. Note that the total entropy on L satisfies d/dt (S_1 + S_2) = λ(T_2 − T_1)(1/T_1 − 1/T_2) ≥ 0, in accordance with (49). We will revisit this example in the context of the interconnection of thermodynamic systems in Examples 8 and 9.
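A minimal simulation of this heat exchanger (an illustration with assumed parameter values, using constant heat capacitances as in the previous sketch) shows the two temperatures equalizing while the total entropy increases:

```python
import numpy as np

C1, C2, lam = 100.0, 150.0, 2.0        # heat capacitances and Fourier coefficient (assumed)
T = lambda S, C, T0=300.0: T0 * np.exp(S / C)   # per-compartment T_i = E_i'(S_i)

dt, S1, S2 = 1e-3, 0.0, 60.0           # start with compartment 2 hotter
Stot0 = S1 + S2
for _ in range(200_000):
    T1, T2 = T(S1, C1), T(S2, C2)
    q = lam * (T2 - T1)                # Fourier heat flow from 2 into 1
    S1 += dt * q / T1                  # S1-dot =  lambda*(T2 - T1)/T1
    S2 += dt * (-q) / T2               # S2-dot = -lambda*(T2 - T1)/T2
print(T(S1, C1), T(S2, C2))            # temperatures (nearly) equalized
print(S1 + S2 - Stot0)                 # total entropy has increased (>= 0)
```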
Example 4 (Mass-spring-damper system). Consider a mass-spring-damper system in one-dimensional motion, composed of a mass m with momentum π, a linear spring with stiffness k and extension z, and a linear damper with viscous friction coefficient d. In order to take into account the thermal energy and the entropy production arising from the heat generated by the damper, the variables of the mechanical system are augmented with an entropy variable S and an internal energy U(S) (for instance, if the system is isothermal, i.e., in thermodynamic equilibrium with a thermostat at temperature T_0, the internal energy is U(S) = T_0 S). This leads to the total set of extensive variables z, π, S, E = ½kz² + π²/(2m) + U(S) (the total energy). The state properties of the system are described by the Lagrangian submanifold L with generating function (in energy representation) −p_E(½kz² + π²/(2m) + U(S)). The dynamics is given by the homogeneous Hamiltonian K = p_z(π/m) + p_π(−kz − d(π/m)) + p_S d(π/m)²/U′(S) + (p_π + p_E(π/m)) u, where u is an external force; one checks that K is zero on L. The power-conjugate output y_p = π/m is the velocity of the mass.
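The following sketch (an illustration with assumed parameter values) integrates these equations in the isothermal case U(S) = T₀S with u = 0 and verifies the first law: the mechanical energy lost to the damper reappears as internal energy, so the total energy E stays constant up to integration error:

```python
import numpy as np

m, k, d, T0 = 1.0, 2.0, 0.5, 300.0      # mass, stiffness, damping, thermostat temperature
dt = 1e-4
z, p, S = 1.0, 0.0, 0.0                 # extension, momentum, entropy
E = lambda z, p, S: 0.5*k*z**2 + p**2/(2*m) + T0*S   # total energy, U(S) = T0*S

E0 = E(z, p, S)
for _ in range(100_000):
    v = p / m
    z += dt * v                          # z-dot = pi/m
    p += dt * (-k*z - d*v)               # pi-dot = -k*z - d*pi/m  (u = 0)
    S += dt * d * v**2 / T0              # S-dot = d*v^2 / U'(S)  >= 0
print(E(z, p, S) - E0)                   # ~ 0: total energy is conserved
```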
Example 5 (Gas-piston-damper system). Consider a gas in an adiabatically isolated cylinder closed by a piston. Assume that the thermodynamic properties of the system are covered by the properties of the gas (for an extended model, see [13], Section 4). The system is then analogous to the previous example, replacing z by the volume V and the partial energy ½kz² + U(S) by an expression U(V, S) for the internal energy of the gas. The dynamics of the force-actuated gas-piston-damper system is defined by a Hamiltonian of the same form, with the power-conjugate output y_p = π/m being the velocity of the piston.
Example 6 (Port-Hamiltonian systems as port-thermodynamic systems). Example 4 can be extended to any input-state-output port-Hamiltonian system [28-30] on a state space manifold x ∈ X, with inputs u ∈ R^m, outputs y ∈ R^m, Hamiltonian H (equal to the stored energy of the system), and dissipation R(e) satisfying eᵀR(e) ≥ 0 for all e. Including the entropy S as an extra variable, along with an internal energy U(S) (for example, in the isothermal case U(S) = T_0 S), the state properties of the port-Hamiltonian system are given by the homogeneous Lagrangian submanifold L ⊂ T*(X × R²) with generating function −p_E(H(x) + U(S)). The Hamiltonian K is given in terms of the shorthand notation e = ∂H/∂x (x), reproducing on L the dynamics (65) with outputs y_p = y. Note that in this thermodynamic formulation of the port-Hamiltonian system, the energy-dissipation term eᵀR(e) in the power balance d/dt H = −eᵀR(e) + yᵀu is compensated by the equal increase of the internal energy U(S), thus leading to conservation of the total energy E(x, S) = H(x) + U(S).
Controllability of Port-Thermodynamic Systems
In this subsection, we briefly indicate how the controllability properties of the port-thermodynamic system (L, K) can be studied directly in terms of the homogeneous Hamiltonians K_a and K_{c_j}, j = 1, · · · , m, and their Poisson brackets. First, we note that, by Proposition A3, the Poisson brackets of these homogeneous Hamiltonians are again homogeneous. Secondly, we recall the well-known correspondence [22,23,33] between Poisson brackets of Hamiltonians h_1, h_2 and Lie brackets of the corresponding Hamiltonian vector fields. In particular, this correspondence implies that if the homogeneous Hamiltonians h_1, h_2 are zero on the homogeneous Lagrangian submanifold L and, thus, by Proposition 4, the homogeneous Hamiltonian vector fields X_{h_1}, X_{h_2} are tangent to L, then [X_{h_1}, X_{h_2}] is also tangent to L, and therefore the Poisson bracket {h_1, h_2} is also zero on L. Furthermore, with respect to the projection to the corresponding Legendre submanifold L, the Poisson bracket of homogeneous Hamiltonians projects to the Jacobi bracket [22,23] of the corresponding functions on the contact manifold P(T*Q_e). This leads to the following analysis of the accessibility algebra [45] of a port-thermodynamic system, characterizing its controllability.
Proposition 5.
Consider a port-thermodynamic system $(L, K)$ on $P(T^*Q_e)$ with homogeneous $K := K_a + \sum_{j=1}^{m} K_{c_j} u_j : T^*Q_e \to \mathbb{R}$, zero on $L$. Consider the algebra $\mathcal{P}$ (with respect to the Poisson bracket) generated by $K_a, K_{c_j}$, $j = 1, \dots, m$, consisting of homogeneous functions that are zero on $L$, and the corresponding algebra $\hat{\mathcal{P}}$ generated by $K_a, K_{c_j}$, $j = 1, \dots, m$, on $\hat{L}$. The accessibility algebra [45] is spanned by all contact vector fields $X_h$ on $\hat{L}$, with $h$ in the algebra $\hat{\mathcal{P}}$. It follows that the port-thermodynamic system $(L, K)$ is locally accessible [45] if the dimension of the co-distribution $d\hat{\mathcal{P}}$ on $\hat{L}$, defined by the differentials of $h$ with $h$ in the Poisson algebra $\hat{\mathcal{P}}$, is equal to the dimension of $\hat{L}$. Conversely, if the system is locally accessible, then the co-distribution $d\hat{\mathcal{P}}$ on $\hat{L}$ has dimension equal to the dimension of $\hat{L}$ almost everywhere on $\hat{L}$.
Similar statements can be made with respect to local strong accessibility of the port-thermodynamic system; see the theory exposed in [45].
Interconnections of Port-Thermodynamic Systems
In this section, we study the geometric formulation of the interconnection of port-thermodynamic systems through their ports, in the spirit of the compositional theory of port-Hamiltonian systems [28][29][30][43]. We will concentrate on the case of power-port interconnections of port-thermodynamic systems, corresponding to power flow exchange (with total power conserved). This is the standard situation in (port-based) physical network modeling of interconnected systems. At the end of this section, we will make some remarks about other types of interconnection, in particular, interconnection by exchange of the rate of entropy.
Consider two port-thermodynamic systems with extensive variables $(q_i, S_i, E_i) \in Q_{e_i}$, co-extensive variables $(p_i, p_{S_i}, p_{E_i})$, and Liouville one-forms $\alpha_i$, $i = 1, 2$. With the homogeneity assumption in mind, impose the following constraint on the co-extensive variables: $p_{E_1} = p_{E_2} =: p_E$. This leads to the sum of the one-forms $\alpha_1$ and $\alpha_2$ on the composed space. Leaving out the zero-section $p_1 = 0, p_2 = 0, p_{S_1} = 0, p_{S_2} = 0, p_E = 0$, this space will be denoted by $T^*Q_{e_1} \bullet T^*Q_{e_2}$ and will serve as the space of extensive and co-extensive variables for the interconnected system. Furthermore, it defines the projectivization $P(T^*Q_{e_1} \bullet T^*Q_{e_2})$, which serves as the composition (through $E_i, p_{E_i}$, $i = 1, 2$) of the two projectivizations $P(T^*Q_{e_i})$, $i = 1, 2$. Let the state properties of the two systems be defined by homogeneous Lagrangian submanifolds $L_i$ with generating functions $-p_{E_i} E_i(q_i, S_i)$, $i = 1, 2$. Then, the state properties of the composed system are defined by the composition $L_1 \bullet L_2$ with generating function $-p_E\, (E_1(q_1, S_1) + E_2(q_2, S_2))$. Furthermore, consider the dynamics on $L_i$ defined by the Hamiltonians $K_i = K_{a_i} + K_{c_i} u_i$, $i = 1, 2$. Assume that $K_i$ does not depend on the energy variable $E_i$, $i = 1, 2$. Then, the sum $K_1 + K_2$ is well-defined on $L_1 \bullet L_2$ for all $u_1, u_2$. This defines a composite port-thermodynamic system, with entropy variables $S_1, S_2$, total energy variable $E$, inputs $u_1, u_2$, and state properties defined by $L_1 \bullet L_2$.
Next, consider the power-conjugate outputs $y_{p_1}, y_{p_2}$, in the sequel simply denoted by $y_1, y_2$. Imposing on the power-port variables $u_1, u_2, y_1, y_2$ interconnection constraints satisfying the power-preservation property $y_1 u_1 + y_2 u_2 = 0$ (76) yields an interconnected dynamics on $L_1 \bullet L_2$, which is energy-conserving (the $p_E$-term in the expression for $K_1 + K_2$ is zero by (76)). This is summarized in the following proposition.
Proposition 6.
Consider two port-thermodynamic systems $(L_i, K_i)$ with spaces of extensive variables $Q_{e_i}$, $i = 1, 2$. Assume that $K_i$ does not depend on $E_i$, $i = 1, 2$. Then, $(L_1 \bullet L_2, K_1 + K_2)$, with $L_1 \bullet L_2$ given in (75), defines a composite port-thermodynamic system with inputs $u_1, u_2$ and outputs $y_1, y_2$. By imposing interconnection constraints on $u_1, u_2, y_1, y_2$ satisfying (76), an autonomous (no inputs) port-thermodynamic system is obtained.
Remark 8.
The interconnection procedure can be extended to the case of an additional open power-port with input vector $u$ and output row vector $y$, by replacing (76) by power-preserving interconnection constraints on $u_1, u_2, u, y_1, y_2, y$ satisfying
$$y_1 u_1 + y_2 u_2 + y u = 0 \quad (77)$$
Proposition 6 is illustrated by the following examples.
Example 7 (Mass-spring-damper system). We show how the thermodynamic formulation of the system detailed in Example 4 also results from the interconnection of the three subsystems: mass, spring, and damper. I. Mass subsystem (leaving out the irrelevant entropy). The state properties are given by the kinetic energy $\kappa(\pi) = \frac{\pi^2}{2m}$, and the dynamics is generated by the Hamiltonian $K_m = \left(p_\pi + p_\kappa\, \frac{\pi}{m}\right) u_m$, corresponding to $\dot{\pi} = u_m$, $y_m = \pi/m$. II. Spring subsystem (again leaving out the irrelevant entropy). The state properties are given by the spring potential energy $P(z) = \frac{1}{2}kz^2$, and the dynamics is generated by the Hamiltonian $K_s = (p_z + p_P\, kz)\, u_s$, corresponding to $\dot{z} = u_s$, $y_s = kz$. III. Damper subsystem. The state properties involve the entropy $S$ and an internal energy $U(S)$. The dynamics of the damper subsystem is generated by the Hamiltonian $K_d = \left(\frac{p_S}{\partial U/\partial S} + p_U\right) d\, u_d^2$, with $d$ the damping constant and power-conjugate output $y_d = d\, u_d$, equal to the damping force. Finally, interconnect, in a power-preserving way, the three subsystems to each other via their power-ports $(u_m, y_m)$, $(u_s, y_s)$, $(u_d, y_d)$ as $u_m = -y_s - y_d + u$, $u_s = y_m$, $u_d = y_m$. This results (after setting $p_\kappa = p_P = p_U =: p$) in the interconnected port-thermodynamic system with total Hamiltonian $K_m + K_s + K_d$, which is equal to the Hamiltonian for $u = 0$ as obtained before in Example 4, Equation (63).
Example 8 (Heat exchanger).
Consider two heat compartments as in Example 2, with state properties determined by the internal energies $E_i(S_i)$ and temperatures $T_i = E_i'(S_i)$, $i = 1, 2$. The dynamics is given by Hamiltonians $K_i$, with $v_1, v_2$ the incoming heat flows and power-conjugate outputs $y_1, y_2$, which are both equal to one. Consider the power-conserving interconnection $v_1 = \lambda\,(T_2 - T_1) = -v_2$, with $\lambda$ the Fourier heat conduction coefficient. Then, the Hamiltonian of the interconnected port-thermodynamic system equals the Hamiltonian (59) obtained in Example 3.
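A hedged reconstruction of the compartment Hamiltonians and the resulting interconnected Hamiltonian, assuming each compartment obeys $\dot S_i = v_i / T_i$ and $\dot E_i = v_i$:

```latex
% Heat compartment i with energy E_i(S_i), temperature T_i = E_i'(S_i), heat inflow v_i:
K_i = v_i\Bigl(\frac{p_{S_i}}{T_i} + p_{E_i}\Bigr),\qquad y_i = 1,\qquad i = 1, 2
% Fourier interconnection v_1 = \lambda(T_2 - T_1) = -v_2 (power-conserving: y_1 v_1 + y_2 v_2 = 0):
K_1 + K_2 = \lambda\,(T_2 - T_1)\Bigl(\frac{p_{S_1}}{T_1} - \frac{p_{S_2}}{T_2}\Bigr)
```

The $p_E$-term drops out because $v_1 + v_2 = 0$, reflecting conservation of total energy.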
Apart from power-port interconnections as above, we may also define other types of interconnection, corresponding not to the exchange of rate of energy (power), but to the exchange of rates of other extensive variables. In particular, an interesting option is interconnection via the rate of entropy exchange. This can be done in a similar way by considering, instead of the variables $E_i, p_{E_i}$, $i = 1, 2$, as above, the variables $S_i, p_{S_i}$, $i = 1, 2$. Imposing alternatively the constraint $p_{S_1} = p_{S_2} =: p_S$ yields a similar composed space of extensive and co-extensive variables, as well as a similar composition $L_1 \bullet L_2$ of the state properties. Assuming in this case that the Hamiltonians $K_i$ do not depend on the entropies $S_i$, $i = 1, 2$, and imposing interconnection constraints on $u_1, u_2$ and the rate-of-entropy-conjugate outputs $y_{re_1}, y_{re_2}$, leads again to an interconnected port-thermodynamic system. Note, however, that while it is natural to assume conservation of total energy for the interconnection of two systems via their power-ports, in the alternative case of interconnecting through rate-of-entropy ports, the total entropy may not be conserved but actually increasing.
Example 9.
As an alternative to the previous Example 8, where the heat exchanger was modeled as the interconnection of two heat compartments via power-ports, consider the same situation, but now with outputs $y_i$ being the rate-of-entropy conjugates to $v_i$, i.e., equal (cf. the end of Example 2) to the reciprocal temperatures $1/T_i$ with $T_i = E_i'(S_i)$, $i = 1, 2$. This results in interconnecting the two heat compartments, equivalently to (89), as $v_1 = \lambda \left( \frac{1}{y_2} - \frac{1}{y_1} \right) = -v_2$. This interconnection is not total-entropy conserving, but instead satisfies $y_1 v_1 + y_2 v_2 = \lambda \left( \frac{1}{y_2} - \frac{1}{y_1} \right) (y_1 - y_2) \geq 0$, corresponding to the increase of total entropy.
Discussion
While the state properties of thermodynamic systems have been geometrically formulated since the 1970s through the use of contact geometry, in particular by means of Legendre submanifolds, the geometric formulation of non-equilibrium thermodynamic processes has remained more elusive. Taking up the symplectization point of view on thermodynamics, as successfully initiated in [21], the present paper develops a geometric framework based on the description of non-equilibrium thermodynamic processes by Hamiltonian dynamics on the symplectized thermodynamic phase space, generated by Hamiltonians that are homogeneous of degree one in the co-extensive variables, culminating in the definition of port-thermodynamic systems in Section 4.1. Furthermore, Section 3 shows how the symplectization point of view provides an intrinsic definition of a metric that overarches the locally defined metrics of Weinhold and Ruppeiner, and provides an alternative to similar results in the contact-geometry setting given in [3,5,7,10]. The correspondence between objects in contact geometry and corresponding homogeneous objects in symplectic geometry turns out to be very effective. An additional benefit of symplectization is the simplicity of the expressions and computations in the standard Hamiltonian context, as compared to those in contact geometry. This feature is also exemplified by the initial controllability study in Section 4.3. As noted in [38], physically non-trivial examples of mesoscopic dynamics are infinite-dimensional. This calls for an infinite-dimensional extension of the presented definition of port-thermodynamic systems, following the well-developed theory of infinite-dimensional Hamiltonian systems (but now adding homogeneity), encompassing systems obtained by the Hamiltonian lift of infinite-dimensional GENERIC [38] and dissipative port-Hamiltonian [46] formulations; see also Remark 4. From a control point of view, one of the open problems concerns the stabilization of thermodynamic processes using the developed framework.
Geometrically, Euler's theorem can be equivalently formulated as follows. Recall that the Hamiltonian vector field $X_h$ on $T^*M$ with symplectic form $\omega = d\alpha$ corresponding to an arbitrary Hamiltonian $h : T^*M \to \mathbb{R}$ is defined by $i_{X_h}\, \omega = -dh$. It is immediately verified that $h : T^*M \to \mathbb{R}$ is homogeneous of degree $r$ iff
$$\alpha(X_h) = r\, h$$
Define the Euler vector field (also called the Liouville vector field) $\mathcal{E}$ on $T^*M$ as the vector field satisfying
$$d\alpha(\mathcal{E}, \cdot) = \alpha \quad (A4)$$
In cotangent bundle coordinates $(q, p)$ for $T^*M$, the vector field $\mathcal{E}$ is given as $\sum_i p_i \frac{\partial}{\partial p_i}$. One verifies that $h : T^*M \to \mathbb{R}$ is homogeneous of degree $r$ iff (with $L$ denoting the Lie derivative)
$$L_{\mathcal{E}}\, h = r\, h$$
In the sequel, we will only use homogeneity and Euler's theorem for $r = 0$ and $r = 1$. First, it is clear that physical variables defined on the contact manifold $P(T^*Q_e)$ correspond to functions on $T^*Q_e$ that are homogeneous of degree zero in $p$. On the other hand, as formulated in Proposition 3, a Hamiltonian vector field on $T^*Q_e$ with respect to a Hamiltonian that is homogeneous of degree one in $p$ projects to a contact vector field on the contact manifold $P(T^*Q_e)$. Such Hamiltonian vector fields are locally characterized as follows.
Proposition A1. If $h : T^*M \to \mathbb{R}$ is homogeneous of degree one in $p$, then $X = X_h$ satisfies
$$L_X\, \alpha = 0 \quad (A6)$$
Conversely, if a vector field $X$ satisfies (A6), then $X = X_h$ for some locally defined Hamiltonian $h$ that is homogeneous of degree one in $p$.
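The forward direction is a one-line computation with Cartan's magic formula, using $i_{X_h}\alpha = r\, h$ from Euler's theorem above:

```latex
% Cartan's formula applied to the Liouville one-form:
L_{X_h}\alpha = i_{X_h}\,d\alpha + d\,(i_{X_h}\alpha) = -dh + r\,dh = (r - 1)\,dh,
% which vanishes exactly for r = 1, yielding (A6).
```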
Summarizing, Hamiltonian vector fields with Hamiltonians that are homogeneous of degree one in $p$ are characterized by (A6), in contrast to general Hamiltonian vector fields $X$ on $T^*M$, which are characterized by the weaker property $L_X\, d\alpha = 0$. Similar statements as above can be made for homogeneous Lagrangian submanifolds (cf. Definition 1). Recall [22,23,33] that a submanifold $L \subset T^*M$ is called a Lagrangian submanifold if the symplectic form $\omega := d\alpha$ is zero on $L$ and $\dim L = \dim M$.
proving that $\{h_1, h_2\}$ is homogeneous of degree one. Hence, by Proposition A1, $L_{X_{\{h_1, h_2\}}}\, \alpha = 0$, and thus the claim follows, where in the fourth equality we use (A12) for $h_1$ and $h_2$.
"Mathematics"
] |
Semantic Focusing Allows Fully Automated Single-Layer Slide Scanning of Cervical Cytology Slides
Liquid-based cytology (LBC) in conjunction with whole-slide imaging (WSI) enables the objective, sensitive, and quantitative evaluation of biomarkers in cytology. However, the complex three-dimensional distribution of cells on LBC slides requires manual focusing, long scanning times, and multi-layer scanning. Here, we present a solution that overcomes these limitations in two steps: first, we make sure that focus points are set only on cells; secondly, we check the total slide focus quality. In a first analysis, we found that superficial dust can be separated from the cell layer (the thin layer of cells on the glass slide) itself. We then analyzed 2,295 individual focus points from 51 LBC slides stained for p16 and Ki-67. Using the number of edges in a focus point image, specific color values, and size-inclusion filters, focus points detecting cells could be distinguished from focus points on artifacts (accuracy 98.6%). Sharpness, as the total focus quality of a virtual LBC slide, is computed from 5 sharpness features. We trained a multi-parameter SVM classifier on 1,600 images. On an independent validation set of 3,232 cell images, we achieved an accuracy of 94.8% for classifying images as focused. Our results show that single-layer scanning of LBC slides is possible and how it can be achieved. We assembled focus point analysis and sharpness classification into a fully automatic, iterative workflow, free of user intervention, which performs repetitive slide scanning as necessary. On 400 LBC slides, we achieved a scanning time of 13.9±10.1 min with 29.1±15.5 focus points. In summary, the integration of semantic focus information into whole-slide imaging allows automatic high-quality imaging of LBC slides and subsequent biomarker analysis.
Introduction
Cervical cancer is the second most frequent cancer among women worldwide [1,2]. Cytology-based cervical cancer screening has led to a substantial reduction of cervical cancer incidence and mortality in many industrialized countries [3]. Despite its success, screening with conventional Pap smears faces several limitations: the single-test sensitivity for detecting pre-cancerous stages is about 50-60% [4], and the test thus has to be repeated frequently to achieve high cumulative sensitivity. Further limitations result from difficulties in standardization, different sample preparation techniques, different cytological classifications, and the high inter- and intra-observer variability of cytology interpretation [5]. Over the last two decades, liquid-based cytology (LBC) has been increasingly used in cervical cytology screening [6,7]. LBC slides contain less debris and provide clearer cell preparations compared to conventional Pap smears. LBC allows preparing multiple slides from the same sample for biomarker studies. Several biomarkers have been evaluated to improve reproducibility and accuracy in cervical cancer screening. One of the most promising biomarkers is cytological staining for p16/Ki-67. Double staining for p16 and Ki-67 in the same cell highlights HPV-transformed cells and is a marker for cervical precancers. p16/Ki-67 staining is performed on liquid-based cytology (LBC) preparations and evaluated manually [8,9].
Recently, whole-slide imaging (WSI) scanners have become available that are capable of generating full digital microscopic images of glass slides [10,11]. Multiple WSI scanners are available on the market and have been compared in detail in the literature [10]. They are frequently used for the digitization of full histological slides [12] or tissue microarrays (TMAs) [13]. Accordingly, their focusing technology, a key feature of WSI scanners, has been developed primarily for histological sections. In principle, WSI should also enable the high-throughput analysis of the enormously large batches of cytological cervix samples occurring during screening, as has been postulated earlier [9][10][11]. In a previous publication [14], we reported the first implementation of a fully automatic image evaluation system for detecting p16+ cells on fully imaged cytological ThinPrep™ slides. But the core problem in applying WSI scanners has so far remained that their focusing technology is not adapted to sparsely populated cytology specimens. For example, in [15], the authors showed that the diagnostic accuracy of virtual slides is comparable to glass slides, despite an inherent difficulty of acquiring microscopically well-focused virtual slides due to the three-dimensional nature of cytological preparations. Another study among cytology technologists [16] also reported the principal feasibility of whole-slide imaging of cytological slides but described the need for extensive focusing in many z-layers. Thus, from our own experience and the other published studies, it became apparent that, while working almost perfectly for histological specimens, in cytology focusing is the key bottleneck hindering the application of WSI.
The problems of focusing an LBC slide can be circumvented by multi-layered (z-stack) scanning. However, multi-layer scanning multiplies scanning times by the number of layers. Furthermore, multi-layering severely complicates manual and automatic image analysis.
We here set out to develop a highly efficient autofocusing approach for LBC slides. Slide scanners like the one used here first determine a set of unblurred (focused) candidate focus images of the slide at chosen focus points [17]. From the determined z-heights at the focus points, a three-dimensional "focus map" is generated, extrapolating the measured height variations to the full slide. Thereby, too few focus points, inadequately sampling the 3D landscape, will lead to a partially unfocused virtual slide. As the movement of the microscope objective during focusing is relatively slow, too many focus points will in turn significantly increase the total time needed for scanning. The main source of erroneous focus maps are focus points targeting undesired objects such as dirt or artifacts within the sample, or dust or streaks located on top of the cover slip. The quality of the focus map also depends on the optimal spatial distribution of the focus points. Cell numbers on cytology slides may range from fewer than 100 to more than 150,000 cells and can show varying spatial distribution patterns. Technical problems encountered with LBC preparations were analyzed by Song et al. [18]. The authors described preparative difficulties such as too few cells on the slide, thick preparations, cellular material accumulated in some regions, and blood/debris on the slides.
Concluding, the underlying problem in focusing LBC slides is that semantic information about the scanned sample is missing in the general-purpose focusing routines of slide scanners. General-purpose focusing algorithms are not able to determine whether a focus point is correctly targeting a cell or incorrectly targeting an artifact, or whether the whole set of focus points is correctly chosen to capture the essence of the slide. This is because such routines lack any conceptual understanding of cytological liquid-based preparation samples. Thus, in this publication we make a first step towards incorporating such cytopathological knowledge into the automatic focusing of LBC slides. Such focusing would be optimal if a "master-focus layer" were found, representing the full 3D focus map of the LBC slide. Slide scanning with this master-focus layer could capture each region of the slide in focus (Figure 1). In the optimal case, one focus layer would be sufficient for scanning, and multi-layering would not be needed, or only as a supplement to cover thick cell clusters.
To achieve this, we first performed a systematic analysis of the height variations within cytological samples in the z-dimension (section 3.1). Then, two image processing algorithms were developed, one cell-based and one slide-based. The first one (section 3.2) decides whether a focus point is valid, i.e., detects a cell instead of an artifact. This cell-based algorithm implements a semantic autofocus function that neglects undesired non-cellular objects. The second algorithm determines the total focus quality of a virtual slide from an LBC glass slide, thereby yielding an objective measurement of a virtual slide's focus quality comparable to a human observer's assessment (section 3.3). A complete automatic workflow (section 3.4) was then created to automatically set valid focus points, measure the virtual slide's focus quality, and automatically re-scan it, in total or partially. In this way, the master-focus layer is iteratively determined. To the best of our knowledge, this is the first reported systematic analysis of whole-slide imaging of LBC slides and the first development of a system capable of fully automated single-layer focusing of LBC slides.
Materials and Methods
Technical setup

LBC slides were digitized using the NanoZoomer HT Scan System (Hamamatsu Photonics, Japan, http://sales.hamamatsu.com/assets/pdf/hpspdf/e_ndp20.pdf), capable of scanning whole slides. The NanoZoomer can scan up to 210 brightfield or multicolor fluorescently stained slides automatically. It is able to digitize whole slides and has a z-stack (or multilayer) capability that allows the focus to move three-dimensionally to any part of the slide. The imaging system consists of three 4096 × 64 pixel TDI-CCD sensors (cell size 8 µm × 8 µm) and a 20× objective lens (NA 0.75). Flat-field correction can be done easily with an empty blank slide. Misaligned lanes can be corrected with a calibration slide provided by Hamamatsu. Standard glass slides were scanned at 20-fold magnification (0.46 µm/pixel). The resulting virtual slides had an average compressed file size of 250 MB (JPEG compression, quality factor = 0.9), while the uncompressed file size was about 9 GB. The spatial dimensions of the scanned areas are about 65,000 × 50,000 pixels. A direct connection to the scanner control routines was provided by an application programming interface (API). This API provided several methods to call certain control routines, e.g., start scan, start focusing, load slide, unload slide, etc. This enables bidirectional communication with the scanner, obtaining live scan information and also sending control commands back to the scanner during the scanning process. The software which controls the main workflow is written mainly in ANSI/ISO C++ and calls MATLAB functions for the image processing tasks during runtime. The scan software and the developed algorithms were run on a personal computer with an Intel Xeon® E5430 dual core, 2.66 GHz, 4 GB RAM, with a Windows 7 Professional 32-bit operating system.
Cytological samples and immunostaining
Liquid-based cytology cervical specimens were acquired from women enrolled in a large cross-sectional study of women attending a colposcopy clinic at the University of Oklahoma [19]. Written informed consent was obtained from all women enrolled into the study, and Institutional Review Board approval was provided by OUHSC (University of Oklahoma Health Sciences Center) and the US National Cancer Institute. All analyses were conducted on anonymized liquid-based cytological specimens generated using the ThinPrep system [20][21][22][23]. Liquid-based cytology is a method of preparing cytological samples for microscopic examination. Instead of conventional smear preparations, it involves making a suspension of cells from the sample that is used to produce a thin layer of cells on a slide [24]. Slides were generated using the T2000 processor, an automated slide preparation unit. Briefly, the cytological sample is obtained from the transition zone of the uterine cervix. The sample is then dispersed in a vial containing a liquid suspension (PreservCyt®). The vial is then placed in the T2000 processor, and the suspension is centrifuged and passed through a filter to remove obscuring material (blood and mucus), leaving relevant cells on the filter surface. Finally, the cells on the filter are transferred onto a ThinPrep glass slide within a circular area measuring 22 mm in diameter. The slides are then immediately deposited into a fixative bath to be held for staining. All slides were stained using the CINtec® PLUS kit (Roche mtm laboratories AG, Heidelberg, Germany) according to the manufacturer's instructions. Briefly, slides were incubated subsequently with two monoclonal antibodies. The first one (E6H4) is directed against p16, followed by a second antibody linked to horseradish peroxidase and detected by adding DAB substrate, generating a brown stain. The second primary antibody is directed against Ki-67, a cellular proliferation marker, which is highlighted by a red stain (Fast Red chromogen). All slides were counterstained with hematoxylin. In this study, 555 LBC slides were analyzed (67 slides for focus point analysis, 88 for slide sharpness analysis, and 400 for analyzing the complete workflow).
Z-dimension Analysis of LBC Slides
In general, cytological samples do not maintain a perfectly planar surface when transferred onto a glass slide in a liquid-based preparation. To capture the whole glass slide in optimal quality, the scanner must be able to detect the height profile of the cells inside the liquid preparation. To obtain information about how many focus points are needed to capture the whole focal variation of an LBC slide, the z-dimension ranges of six LBC slides were systematically analyzed. We evaluated how many focus points are necessary to cover the whole z-range variation of the cytological samples. 800 focus points were set on each slide, distributed over the whole area covering the cells. These focus points were focused by the scanner's autofocus routine. The scanner then returned the distance relative to a normalized origin of the z-axis located inside the microscope objective. For each focus point, the corresponding image and its coordinates (X, Y and Z) were stored (Figure 2a). All focused objects had a minimum spacing of over 1.9 cm to the z-axis origin, so we normalized all z-data by this distance. The images at the focus points were acquired with a linear array sensor (4096 × 96 pixels). The plotted 3D graphs showed two different layers of focus points on every slide (Figure 2b, with a representative graph of an exemplary slide). The first layer (in red) results from focus points set on dust on the cover slip and so was incorrectly accepted by the general-purpose autofocus routine of the scanner. To obtain the true height variations of the cell layer, all z-values belonging to measurements of the dust layer were removed. Subsequently, the focus point images were manually inspected, and images which were blurred or focused on artifacts were removed from the dataset. Focus points which were set beyond the borders of the cell circle area were also manually removed. The average number of remaining focus points was 570 per slide. Based on these data, statistical values were extracted in order to characterize the focal variation of the focus points, such as the minimum, maximum, and average z-values of the particular slides; the differences between the slides were also measured. A mesh was plotted over the surface constructed from the focus point dataset, enabling a visual 3D interpretation of the slides (Figure 2c and 2d). It is apparent that cells inside liquid-based LBC slides do not exhibit a planar surface, and that cells with a whole range of different z-values are located all over the slide. Figure S1 shows a boxplot of the z-values of all 6 analyzed slides. The biggest z-range within a slide was about 29.5 µm (Table 1). A multilayer scan with a spacing of 2 µm would require at least 15 layers to cover the whole focal variation. The standard deviation of the z-values was at least 2.1 µm and at most 5.7 µm. Scanning one of these slides with one planar layer would necessarily result in out-of-focus regions. The data also show that the height of the cell layer differs from slide to slide.

Figure 1. In multilayer scanning, most parts of the layers are out of focus, and thereby an unnecessary amount of data is generated. The lower cross section shows the principle of a single-layer scan. A "master-focus layer" (green line) represents the full 3D focus map of the LBC slide.
In the optimal case, one focus layer would be sufficient and multi-layering would not be needed anymore, or only as a supplement to cover thick cell clusters (transparent green lines). doi:10.1371/journal.pone.0061441.g001

Due to this inter-slide variability, an individual focal plane has to be calculated for each particular slide. The results also show that the liquid-based preparation method does not produce a planar layer of cells, but instead a three-dimensional gel of probably varying thickness in which the cells are embedded.
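The layer separation and the z-statistics of this section can be reproduced with a short script. A minimal sketch, assuming the focus points of one slide are available as rows (x, y, z) in a text file (the file name and the 2-means separability of the two layers are assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans

def split_layers(z_um):
    """Split normalized focus-point heights into two layers (dust on the
    cover slip vs. the cell layer) via a 1-D 2-means clustering; the cell
    layer is taken to be the larger cluster, consistent with the counts
    reported above (on average 570 of 800 points belonged to cells)."""
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
        z_um.reshape(-1, 1))
    cell_label = int(np.bincount(labels).argmax())
    return z_um[labels == cell_label], z_um[labels != cell_label]

# Hypothetical input: one row (x, y, z) per focus point, z in micrometers.
xyz = np.loadtxt("slide_focus_points.txt")
cells, dust = split_layers(xyz[:, 2])
print(f"cell layer: z-range {np.ptp(cells):.1f} um, "
      f"std {cells.std():.1f} um, n = {cells.size}")
```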
Semantic focus point analysis
The built-in autofocus algorithm of the scanner is a contrast-based method which finds the best in-focus image for a given focus point; the built-in focus routine calculates the contrast of several images along the optical axis (z-plane) of each focus point. The image with the highest contrast is then used for calculating the focus map. However, a contrast-based method by itself is not able to decide whether a focused object is a cell or an artifact. The goal here was to determine and include further criteria enabling the decision whether a focus point image is valid or not. We define a focus point image as valid only if a cell is in focus. If an image is considered not to contain cellular material, the corresponding focus point is removed. Figure 3 (a-f) shows six different focus point images which were accepted by the built-in autofocus routine of the scanner, although only the first two reflect valid focus points.
From the results of the previous section (3.1), we hypothesized that three filter criteria should be sufficient to judge the validity of an individual focus image: a minimum object size, the presence of visual object edges, and a certain range of frequent color values. We then tested to what extent these criteria apply.
Size-filter. Binary objects are obtained from the image by simple thresholding, using the average grayscale background intensity from background areas of the slides as the threshold. Subsequently, the number of pixels of each object in the binary focus point image is counted. If none of the objects in the binary image has at least the minimum size of 200 pixels (the size of a typical small nucleus of a superficial cell), the focus point is classified as invalid.
Edge-filter. To detect and remove blurred images, an edge detector such as the Canny detector (threshold 0.07, sigma 1.41) [25] can be applied to grayscale intensity images. The result of the edge detector is a binary image containing the edges of the objects present in the image. If an image is blurred, the number of its edges is much smaller compared to an image containing in-focus objects. A simple binary decision classifies the image as invalid if no edges are present.
Color-filter. Finally, the color values of the pixels provide valuable information on whether they belong to cells. We determined the following image pixel classes, which represent all pixel categories: nuclei, cytoplasm, p16 staining, or Ki-67 staining. A pixel training dataset encompassing 340 images of nuclei, cytoplasm, background, p16-stained, and Ki-67-stained cells (85 images for each class, obtained from 10 different slides) was collected. We then manually cropped the corresponding region for each class from the image. Within these regions, 100 pixels were randomly selected, yielding a training dataset of 8,500 pixels per class. On this dataset, an analysis of the individual objects in their respective HSV (hue, saturation, value) channels was carried out. The determined narrow ranges are depicted in Table S1. The focus point image was classified as invalid if fewer than 10 object pixels belonged to one of these classes. We created a candidate classifier encompassing three criteria: images are required to have objects larger than a typical nucleus, at least one edge has to be present in the image, and 10 pixels have to fall into one of the four valid color categories.
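Taken together, the three filters form a single validity decision per focus-point image. A minimal sketch with scikit-image and SciPy; the HSV ranges of Table S1 are not reproduced in the text, so HSV_RANGES below contains illustrative values only, and the grayscale background threshold is assumed to be given on the same [0, 1] scale as the image:

```python
import numpy as np
from scipy import ndimage
from skimage.color import rgb2gray, rgb2hsv
from skimage.feature import canny

# Placeholder for the narrow HSV ranges of Table S1 (nuclei, cytoplasm, p16, Ki-67);
# each entry is ((h_min, s_min, v_min), (h_max, s_max, v_max)) in [0, 1].
HSV_RANGES = {"nuclei": ((0.55, 0.2, 0.2), (0.75, 1.0, 0.8))}  # illustrative only

def focus_point_is_valid(rgb, background_intensity, min_object_px=200):
    gray = rgb2gray(rgb)
    # 1. Size filter: at least one object as large as a small nucleus
    #    (objects assumed darker than the grayscale background).
    objects, n = ndimage.label(gray < background_intensity)
    sizes = ndimage.sum(np.ones_like(gray), objects, index=range(1, n + 1))
    if n == 0 or sizes.max() < min_object_px:
        return False
    # 2. Edge filter: a blurred image yields (almost) no Canny edges.
    if not canny(gray, sigma=1.41, high_threshold=0.07).any():
        return False
    # 3. Color filter: at least 10 pixels must fall into one cell pixel class.
    hsv = rgb2hsv(rgb)
    for lo, hi in HSV_RANGES.values():
        mask = np.all((hsv >= lo) & (hsv <= hi), axis=-1)
        if mask.sum() >= 10:
            return True
    return False
```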
We tested the overall classification ability of the described criteria with a total dataset of 2,295 focus point images (containing 190,051 objects) obtained from 51 LBC slides. The focus point images are RGB images, had a size of 4096 × 64 pixels, and were acquired at 20× magnification. The results of the automatic focus point analysis were tested against a manual reference inspection of the focus point images. Valid reference images contained in-focus cells or parts of cells, whereas invalid images were out of focus, focused on dust, debris, or other artifacts, or were in general blurred. Table 2 shows the results of the applied algorithm combining edge and color analysis. Both the sensitivity (98.1%) and specificity (99.1%) of the algorithm that classified the focus point images were very high. The positive predictive value, which is the proportion of objects with positive test results that are correctly classified, was 98.9%. The negative predictive value, the proportion of objects with a negative test result that are correctly classified as negative, was 98.2%.
Slide Sharpness Analysis
After having obtained a set of valid focus points, the question is whether they are set in such a way that they accurately sample the three-dimensional height profile of the LBC glass slide. Figure 3 (g-h) shows a well-focused cell image in contrast to a typical out-of-focus cell image from our dataset. Commonly, a slide can exhibit three different quality states, which can be determined by a sophisticated classifier: in-focus, partially out-of-focus, or completely out-of-focus. To assess this sharpness, we developed a dedicated measurement algorithm for ThinPrep slides. The goal of this analysis is to enable an objective assessment of the quality of a slide by a classifier that corresponds to the subjective assessment by a human viewer. Figure 4 shows the concept we propose to perform this measurement.
In the first step, all slides were automatically divided into 16 regions, and sample images were extracted automatically from all those regions as follows to determine the overall slide image quality. The regional division allows a time-saving re-scan of parts of the virtual slide. The scores for the individual regions were later averaged to describe the sharpness of the whole slide. From each region, a low-resolution overview was extracted from the virtual slide. This overview image was converted into grayscale, and objects were separated from the background using Otsu's segmentation method [26]. The HSV values of the objects were analyzed to test whether the objects are cells. The coordinates of the detected cells were then used to randomly extract up to 200 cell images at the original 20× magnification. An analysis showed that a higher number of cell images does not increase the accuracy of the slide focus quality analysis (Figure S2 (f)).
To quantitatively measure the sharpness quality of the extracted sample images, a blind image assessment measurement is required. A blind image assessment technique is independent of any subjective reference standard but requires more sophisticated analysis techniques. We therefore used five different features to quantify the sharpness of a cell image (the features are described in detail in Table S2). To classify each image, a support vector machine (SVM) was used [27]. An SVM maps feature input vectors into a higher-dimensional space and constructs an optimal hyperplane separating a set of training data into two groups [28][29][30]. We used a Gaussian Radial Basis Function (RBF) kernel with a default scaling factor (sigma) of 1. After initial computation of this hyperplane, the SVM can be used as a sharpness classifier. To achieve higher accuracy in the classification, cells which were deemed inappropriate were removed from the training dataset. This includes huge cell clusters (>4 megapixels) and very small cells. A training set was constructed containing one class of in-focus cell images and one class of out-of-focus images. The percentage of in-focus cells for each region is then stored and used for calculating an average sharpness score for each slide. In detail, to calculate these sharpness scores, the percentage of in-focus cells is first determined for each of the 16 regions by examining a maximum of 200 cells per region. To calculate the average sharpness score for a slide, the percentages of in-focus cells of all regions are added and then divided by the total number of regions. The resulting score is then compared with a user-defined threshold. If the final score is lower than the threshold, the slide has to be rescanned. These scores essentially represent the proportion of in-focus to out-of-focus cells on the slide. Based on the outcome of this analysis, a decision can be made whether to re-scan the whole slide or specific regions. Another important aspect is that this analysis also returns location-specific information about the cells. This is especially important for slides which contain a small number of cells, as this information can be used to improve the setting of focus points on these slides.
For the training of the SVM, a training set A of 1,600 cell images was used. We divided the training set into two classes: the first class was composed of 800 in-focus cell images, whereas the other half contained out-of-focus cell images. Cell images were obtained from 63 different slides which were manually scanned. Figure S2 shows the plots of the training dataset for the five different features used for classification. The plots show that it is possible to separate the in-focus cell images from the out-of-focus ones based on these five features (Table S2). For the first four features, a linear classifier would be satisfactory to separate the data; for the fifth feature, a non-linear separation is required. To determine classification performance, the accuracy, sensitivity, and specificity were computed from the test set. Big cell clusters and small cells were not used in the sharpness analysis. Based on these criteria, 1,552 cell images were removed from the test set. The remaining 3,232 cell images were then classified by the trained SVM. The results of the classification task were manually inspected by two reviewers (the percent agreement regarding focus status for the two observers was 98%; n = 50 cell images). Table 3 shows the results of the classification task.
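The classifier training and the per-slide scoring can be condensed into a few lines. A minimal sketch with scikit-learn, using random stand-ins for the real feature matrix (the five Table S2 features are not reproduced here), and mapping MATLAB's RBF scaling sigma = 1 to sklearn's gamma = 1/(2*sigma^2) (an assumed correspondence):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-ins for the real training data: a (1600, 5) matrix of the five
# sharpness features (Table S2) and labels 1 = in-focus, 0 = out-of-focus.
X_train = rng.random((1600, 5))
y_train = (X_train[:, 0] > 0.5).astype(int)

svm = SVC(kernel="rbf", gamma=0.5)  # gamma = 1/(2*sigma^2) for sigma = 1
svm.fit(X_train, y_train)

def slide_sharpness_score(region_features):
    """region_features: 16 arrays (one per slide region), each of shape
    (n_cells, 5) with up to 200 sampled cells. Returns the slide score:
    the region-wise percentages of in-focus cells, averaged."""
    scores = [100.0 * svm.predict(f).mean() for f in region_features if len(f)]
    return sum(scores) / len(scores)

# A slide whose score falls below the user-defined threshold (90% in this
# work) is flagged for re-scanning of its out-of-focus regions.
```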
Complete workflow
Lastly, we evaluated the focusing quality when integrating the previous two algorithms into a single, automated workflow for cytological samples (Figure 5, Pseudocode S1). After the slide is loaded, the circular overall region of interest on the slide containing all cells is automatically detected. We detect this region by converting a macro image of the slide into a binary image using Otsu's method (threshold 0.1). ThinPrep slides have black borders surrounding the cell region (cell circle). These borders are very easy to segment by Otsu's method. Furthermore, the coordinates of these borders remain nearly constant on every slide. The cell circle is usually placed in the middle of these borders. Therefore, a segmentation of these borders provides the middle point of the cell circle, which has a diameter of 22 mm. A set of 12 focus points is then automatically placed on the slide in this area. A smaller number of focus points was in many cases not enough for scanning LBC slides, while a higher number would significantly increase scanning time. After setting the focus points, the built-in autofocus routine of the scanner commences with the focusing operation, and subsequently the focus point analysis begins. If the resulting number of valid focus points is higher than five, the subsequent scanning of the slide is started. If the number of valid focus points is fewer than five, the number of focus points distributed over the slide is increased, and the scanner repeats the focusing operation. This step is repeated a maximum of five times. If the slide is not scanned after the fifth iteration, new focus points are automatically re-set and the whole procedure is restarted. If the slide passes focus point analysis within five iterations, sharpness analysis is applied to the scanned slide. If the slide is completely out of focus, the number of focus points is increased and the slide is re-scanned. If the slide is partially out of focus, the out-of-focus regions are re-focused by increasing the number of focus points in these regions, and the slide is re-scanned. Consequently, if the slide contains out-of-focus regions, the total sharpness is increased by stepwise improving the sharpness of those particular regions. The re-scanning of the slide is repeated a maximum of seven times. Slides which are not sufficiently in focus for further analysis after these iterations are denoted as "not scannable".

Performance analysis of the complete workflow

The complete setup was integrated into the operating software of the scanner to measure performance. This enables fully automated LBC slide scanning without any user interaction. Slides were determined as focused if their sharpness score was higher than 90%. The integrated workflow was tested with a total set of 400 LBC samples. We measured the time until the slides reached the mark of 90% sharpness in a single layer. Table 4 shows the results of the workflow. An average scan time of 13.9 min per slide was achieved in a single master layer. The maximum scan time was 55.2 min for one slide, and the fastest scan time was 5.7 min. The average number of focus points was 29.1; the maximum number of focus points was 85. The scan iterations needed to reach the sharpness criterion of 90% were also measured (Table 5). Nearly 50% of the slides were already successfully scanned after the first slide-scanning iteration.
The remaining slides, which were not sufficiently in focus after the first round, were refocused on their out-of-focus regions within the next scanning rounds to achieve the required sharpness criterion. Slides which were not completed after the seventh round were automatically aborted (3 slides). These slides contained only a very small number of cells. An exemplary cluster analysis illustrates the scanning process (Figure S3).
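Before turning to the discussion, the iteration logic above (and in Pseudocode S1) can be condensed into a single control loop. A sketch under the thresholds stated in this section; `scanner` is a thin wrapper around the vendor API, and all of its method names, as well as the helpers `focus_point_is_valid` and `region_sharpness` (cf. the earlier sketches), are hypothetical stand-ins:

```python
def scan_lbc_slide(scanner, max_focus_iters=5, max_scan_iters=7,
                   min_valid=5, threshold=90.0):
    """Iterative single-layer scanning loop; every scanner method name
    is a hypothetical stand-in, not part of the real NanoZoomer SDK."""
    roi = scanner.detect_cell_circle()       # Otsu (threshold 0.1) on the macro image
    points = scanner.place_focus_points(roi, n=12)
    for _ in range(max_focus_iters):
        images = scanner.autofocus(points)   # one image per focus point
        valid = [p for p, img in zip(points, images)
                 if focus_point_is_valid(img, scanner.background_intensity())]
        if len(valid) >= min_valid:
            break
        points = scanner.place_focus_points(roi, n=len(points) + 12)
    else:
        return None                          # caller re-sets fresh points and restarts

    for _ in range(max_scan_iters):
        slide = scanner.scan(valid)
        scores = [region_sharpness(r) for r in slide.regions(16)]
        if sum(scores) / len(scores) >= threshold:
            return slide                     # master-focus layer found
        for region, score in zip(slide.regions(16), scores):
            if score < threshold:            # densify only out-of-focus regions;
                valid += scanner.place_focus_points(region, n=4)  # n=4 is illustrative
    return None                              # "not scannable"
```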
Discussion
Liquid-based cytology (LBC) preparation techniques open new possibilities for systematic biomarker analysis in cytology. They create clear and rather uniform slides that can facilitate the interpretation of Pap-stained and biomarker-enhanced slides. LBC slides are also amenable to high-throughput automated analysis. Especially for the detection of rare events on LBC slides, whole-slide imaging (WSI) and subsequent image processing are of crucial importance for guaranteeing a standardized, high-quality read-out. Unfortunately, the digitization of cytological samples is a complex process compared to routinely used histological tissue samples. Cytological samples have a pronounced three-dimensional profile due to their liquid-based preparation and are therefore harder to capture in sufficient quality. Up till now, LBC slides have been digitized by scanning the slides with several layers to cover the whole cell distribution along the z-axis. For example, in [31], 15-20 µm is noted as a suitable range for scanning LBC slides; in [15], the slides are scanned at 31.5 µm; in [16], the authors used 20 µm; and in [32], the authors conclude that the optimal number of focal planes remains unknown for cytology. Multi-layered imaging substantially hinders subsequent manual or automatic image processing, and there is no guarantee that even multi-layering acquires all necessary objects on the slide. There is no "optimal" number of layers nor an "optimal" spacing between them [32]. Our results show that multi-layering has so far only been used for LBC slides to circumvent the determination of a "master-focus" layer, which allows the imaging of the vast majority of all cells in all regions of the slide in focus. Therefore, we set out to provide the first systematic analysis of ThinPrep slides to determine whether such a master layer can be determined and which proportion of in-focus cells it comprises. As a result of this analysis, we stepwise developed an automated imaging procedure for these slides. We then evaluated the resulting overall system and showed, for the first time, that it is a highly effective whole-slide imaging system, capable of the highest-quality focusing. Although our approach is based on a specific slide scanner, the Hamamatsu NanoZoomer NDP HT, it can be transferred to all similarly working scanning devices.
To obtain data about the 3D spatial distribution of the cell layer in LBC-prepared cytological samples, the height variations of six different slides were analyzed in a first step. The results showed that cells are indeed arranged in some kind of "monolayer", but within a three-dimensional, complexly folded height profile. The range of cells along the z-axis is up to 29 µm in our examples. Hence, capturing the corresponding height profile requires a substantial number of focus points to cover the whole height variation of the slide. But scanning with an excessive number of focus points would substantially degrade time performance. We therefore developed a novel, semantic focus routine, capable of intelligently checking whether focus points truly reflect individual cells or cell clusters. We then observed that this alone would not be sufficient, as it is necessary to place the focus points in such a way that the overall three-dimensional profile of the liquid-based preparation is captured. In this process, the number of focus points deployed is minimal at the beginning but is automatically increased to the necessary extent; the locations of the placements also have to be adapted to allow slide scanning. During cover-slipping of LBC slides, the spatial distribution of the cells is slightly perturbed. This perturbation occurs around points of similar pressure and shows only little variation over small distances (several millimeters). Therefore, cells that are located in close proximity also have very similar locations along the z-axis. Thus, it is possible to find a surface which allows single-layer scanning of the LBC slide. An automated focus analysis must be able to autonomously distinguish between in-focus and out-of-focus cells, and accordingly provide an objective quality measurement for cytological samples. Therefore, we trained a support vector machine with five different features on a training set consisting of 1,600 single cells. The classifier was able to correctly classify in-focus cells with an accuracy of 94.8%. The assembly of the individual steps into a general workflow results in the first system capable of automatically scanning liquid-based preparation slides.
This was validated with a complete series of 400 LBC slides, of which only 3 were finally not scanned. These slides exhibited only a very small number of cells. In routine diagnostics, such slides would be discarded as inadequate samples. With the implemented approach, we achieved an average total scanning time of 13.9 min. This average time allows for scanning approximately a hundred slides per day, which is acceptable for high-throughput processing. However, a major time-consuming part of this approach is the image processing algorithms. Sharpness analysis takes at least 60 s per slide; if a slide has to be scanned 3 times, then 3 min of the total scan time are used for calculating the sharpness of the slide. The scanning time of the hardware can also be expected to improve in the future. Thus, several options for accelerating the scanning times achieved so far exist. Already with our current system, cytological samples are scanned within an adequate timeframe in a fully automated manner, without generating too much data.

Figure 5. A simplified schematic of the complete workflow for scanning one slide. The slide is loaded and the area to be scanned is detected automatically. Focus points are set and, after autofocusing, the focus point images are analyzed. If the number of valid focus points is higher than five, the slide is scanned and its sharpness is analyzed. From the results of the sharpness analysis, a decision is made whether to re-scan the slide or not. The slide is re-scanned until the quality is sufficient for further analysis. doi:10.1371/journal.pone.0061441.g005

Our results show that indeed a master-focus layer can be determined in LBC slides and that scanning in this layer captures more than 90% of the cells in focus. Thus, in principle, the user does not have to switch manually between multiple z-layers, and the implementation of image processing algorithms is simpler and more reliable. However, on some slides thick cell clusters may appear. These cell clusters cannot be covered along their entire z-axis by a single layer. Using the determined master layer as a base layer, with very few additional layers these cells can now also be efficiently captured if required. An added benefit of the automated sharpness analysis is its objectivity compared to manual inspection. All regions of the slide are processed, which considerably reduces the probability of "unseen" out-of-focus regions. Concluding, high-quality image processing with an effective technique is essential for high-quality screening. Some scanners provide a dynamic focusing option, where the sample surface profile is tracked while scanning and the focus layer is adapted on the fly. This occurs extremely fast, based on physical surface parameters, and does not, and also cannot, comprise any semantic analysis like the one performed in our work. Our detailed analysis shows that focusing is often impaired by dust or preparation artifacts. In such a case, a dynamic focus would continue to focus on the incorrect z-layer (dust) instead of on the cells. Our approach shows for the first time that it is possible to scan LBC slides in a single layer. Current limitations of the approach include the high investment costs of the instrument and possibly long scanning times. To obtain information about the reproducibility of the scan quality of the scanner, we scanned one slide 5 times with exactly the same settings and compared the focus quality (98.07%; 97.51%; 97.41%; 97.39%; 97.15%) of the resulting virtual slides.
The calculated coefficient of variation (CV) was very low (0.35%), showing that the scanner is very robust, reproducing nearly the same slide quality when scanning with equal settings.
The most exciting result of our work is that we achieved routine scanning of LBC slides in only one single layer, generating virtual slides of the highest quality, suitable for further high-throughput analysis. Thus, our system allows fast imaging and expands the possibilities of automated image-based cytological screening.

Supporting Information

Pseudocode S1. Pseudo-code for the whole control flow and the total focus quality analysis. (PDF)
Author Contributions
Calculation of total cross sections for electron capture in collisions of Carbon ions with H(D,T)(1s)
The calculations of total cross sections for electron capture in collisions of C^q+ with H(1s) are reviewed. At low collision energies, new calculations have been performed, using molecular expansions, to analyze isotope effects. The classical trajectory Monte Carlo method has also been applied to assess the accuracy of previous calculations and to extend the energy range of the available cross sections.
Introduction
Carbon ions are one of the main impurities in present tokamak plasmas, where carbon composites are used in first-wall tiles, especially in the divertor. It is known that these materials are not appropriate for D-T plasmas because of the tritium deposition problem, and it is planned that the ITER first wall will be made completely of Be and W tiles. However, small amounts of carbon impurities will be present in ITER, and it is possible that future devices will include materials which release C^q+ ions. The importance of carbon impurities has stimulated many theoretical and experimental works on collisions of these ions with hydrogen. In particular, the electron capture (EC) reactions were reviewed in 2006 by Suno and Kato [1]; they surveyed the bibliography and proposed a set of recommended data for both total and state-selective electron capture cross sections.
The aim of the present work is to discuss the existing calculations on the EC reactions
$$\mathrm{C}^{q+} + \mathrm{H}(1s) \rightarrow \mathrm{C}^{(q-1)+} + \mathrm{H}^{+} \quad (1)$$
The collisions involving the fully stripped projectile C^6+ have been studied in many works because of their relevance in charge-exchange diagnostics [2], and also because, being a one-electron system, it is easier to describe theoretically than the collisions with other carbon ions, which require the use of sophisticated quantum chemistry techniques to obtain accurate results. The collision C^4+ + H can also be treated as a one-electron system, where the interaction of the active electron with the C^4+ core is described by means of an effective potential. In order to discuss the accuracy of previous calculations and to fill some gaps in the existing database, we have carried out new calculations employing classical trajectory Monte Carlo (CTMC) methods for energies above 10 keV/u. We have applied the CTMC method to H collisions with partially stripped projectiles by employing a model potential to represent the interaction of the active electron with the ionic core, as previously employed in the classical calculations of Stancil et al [3].
At low energies, we have employed the molecular-functions-close-coupling (MFCC) method. Our calculations aim to extend the energy range of the computed EC total cross sections and to discuss the isotope effect. These two aspects have been less studied because they are not relevant in core plasma diagnostics, but they can be important in passive diagnostics and in plasma modeling. For collisions of low-charged (q = 1, 2) carbon ions, the EC reaction is endothermic, the EC channels C^{(q-1)+} are closed at low energies, and the total cross section decreases rapidly as the collision energy, E, decreases. However, for q = 3-6, the reaction (1) is exothermic for some EC channels and, consequently, there is no threshold in the EC total cross sections, which increase approximately as $E^{-1/2}$ as E decreases, as predicted by the Langevin model [4,5], leading to large cross sections at energies below E ≈ 10 eV/u. Another relevant aspect of low-energy EC is the presence of shape resonances, found in several theoretical works [6,7,8,9], although the precision of the experimental techniques has hitherto precluded their observation.
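The quoted $E^{-1/2}$ scaling follows from orbiting in the ion-induced-dipole interaction; a short derivation sketch in atomic units, with $\alpha_d$ the dipole polarizability of H(1s) ($\alpha_d = 4.5$ a.u.):

```latex
% Ion--induced-dipole interaction and effective potential at impact parameter b:
V(r) = -\frac{\alpha_d\, q^2}{2 r^4},\qquad
V_{\mathrm{eff}}(r) = \frac{E\, b^2}{r^2} - \frac{\alpha_d\, q^2}{2 r^4}
% Orbiting (capture) occurs when E equals the centrifugal barrier maximum, giving
b_c^4 = \frac{2\,\alpha_d\, q^2}{E}
\quad\Longrightarrow\quad
\sigma_L = \pi\, b_c^2 = \pi\, q\, \sqrt{\frac{2\,\alpha_d}{E}} \;\propto\; E^{-1/2}
```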
The MFCC method
The MFCC calculations have been carried out using the method described in detail in previous publications [8,10]. In this method, the scattering wave function $\Psi^J$ is a solution of the stationary Schrödinger equation
$$H\, \Psi^J = E\, \Psi^J$$
where $E$ is the collision energy and $H$ is the total Hamiltonian of the system, given by
$$H = -\frac{1}{2\mu} \nabla^2_{\boldsymbol R} + H_{\mathrm{el}}$$
with $\mu$ the reduced mass of the nuclei and $H_{\mathrm{el}}$ the Born-Oppenheimer electronic Hamiltonian. $\Psi^J$ is expanded in a molecular basis set $\{\phi_j\}$:
$$\Psi^J = \sum_j \chi^J_j(\boldsymbol R)\, \phi_j(\boldsymbol r; \boldsymbol \xi)$$
In this expression, $J$ is the total angular momentum and $\boldsymbol r$ are the electronic coordinates. $\boldsymbol \xi$ is a common reaction coordinate [11], a linear combination of the internuclear vector $\boldsymbol R$ and the electronic coordinates that ensures that a truncated expansion fulfills the boundary conditions. We have employed a common reaction coordinate based on the switching function of reference [12] for the many-electron systems, while for C^4+ and C^6+ + H collisions the common reaction coordinate is defined by means of the switching function of references [13,14]. The molecular functions $\phi_j$ are (approximate) eigenfunctions of $H_{\mathrm{el}}$:
$$H_{\mathrm{el}}\, \phi_j = E_j(R)\, \phi_j$$
For an $N$-electron system, $H_{\mathrm{el}}$ has the form
$$H_{\mathrm{el}} = -\frac{1}{2} \sum_{i=1}^{N} \nabla^2_{\boldsymbol r_i} + V(\boldsymbol r, \boldsymbol R)$$
where $V(\boldsymbol r, \boldsymbol R)$ is the potential that includes electron-nuclei and electron-electron interactions. Substitution of the expansion (4) into the Schrödinger equation leads to a system of differential equations for the nuclear functions $\chi^J_j$, which is solved numerically. The asymptotic behavior of these functions allows us to evaluate the S-matrix, as explained in detail in [15], and the total cross section for the transition between two states $\phi_i \to \phi_j$, $\sigma_{ij}$, is computed from the S-matrix elements using the equation
$$\sigma_{ij} = \frac{\pi}{k_i^2} \sum_J (2J + 1)\, \bigl| S^J_{ij} - \delta_{ij} \bigr|^2$$
The molecular orbitals of the one-active-electron systems CH^4+ and CH^6+ were obtained using the method of Power [16]. For the many-electron systems, we have computed the molecular wave functions, the potential energy curves, and the non-adiabatic couplings by applying multireference configuration interaction (MRCI) methods. In the present work, we have employed the package MOLPRO [17] for calculating the molecular functions of the four-active-electron system CH^+, and the package MELD [18] for the three- and two-active-electron quasimolecules CH^2+ and CH^3+.
The CTMC method
At E > 10 keV/u, we have employed the CTMC method. In our treatment the nuclei follow rectilinear trajectories (R = b + vt) and the electronic motion is described by means of a classical distribution function ρ(r, p, t) for an ensemble of N (N ≈ 10^5 in our calculations) independent electron trajectories, (r(t), p(t)), which are obtained by integration of the Hamilton equations determined by the electronic Hamiltonian. The standard CTMC method [19] is restricted to one-electron systems, but it can be applied to one-effective-electron systems by employing an effective potential. In our calculations we have used the model potential

$$V(r_{\mathrm{C}}) = -\frac{q + N_c\,e^{-\alpha r_{\mathrm{C}}}}{r_{\mathrm{C}}} \tag{8}$$

to represent the interaction of the active electron with the C q+ ion. In equation (8), r_C is the electron distance to the C nucleus, N_c is the number of core electrons and the parameter α (see table 1) has been obtained by fitting the ionization energy of the C (q−1)+ ion. The total EC cross section is given in this method by

$$\sigma_{\mathrm{EC}} = 2\pi \int_0^{\infty} P_{\mathrm{EC}}(b)\, b\, \mathrm{d}b, \tag{9}$$

where the EC transition probability is calculated as

$$P_{\mathrm{EC}}(b) = \frac{N_{\mathrm{EC}}}{N}, \tag{10}$$

with N_EC the number of trajectories where the electron is bound to the projectile at the end of the collision (vt_fin = 500 a.u. in our calculations); i.e., those with negative energy with respect to the projectile and positive energy with respect to the target. At low collision energies, the accuracy of the CTMC method is determined by the quality of the initial distribution, ρ_i(r, p), which, in our case, is the classical distribution for the ground state of the Hydrogen atom. The standard CTMC treatment employs a microcanonical distribution, where all electron trajectories have the energy −0.5 a.u. of the quantal level. In order to compare with the quantal distributions, one can introduce the radial and momentum distributions

$$\rho(r) = \int \rho(\mathbf{r}, \mathbf{p})\, r^2\, \mathrm{d}\Omega_r\, \mathrm{d}\mathbf{p}, \qquad \rho(p) = \int \rho(\mathbf{r}, \mathbf{p})\, p^2\, \mathrm{d}\Omega_p\, \mathrm{d}\mathbf{r}. \tag{11}$$

It is well known (e.g. [20]) that, although the microcanonical distribution leads to a momentum distribution identical to the quantal one, it is unable to describe the tail of the radial quantal distribution, which includes electron-nucleus distances in the classically forbidden region. At low collision energies, this limitation is especially relevant because the EC probability is high for the initially loosely bound electron trajectories. Some alternatives have been suggested to overcome this limitation of the microcanonical distribution. In particular, we have employed the so-called hydrogenic distribution [21,22], which is a linear combination of several (ten in our calculations) microcanonical distributions with different energies; the coefficients of the combination are obtained by fitting the quantal radial distribution with the restriction that the mean energy of the distribution is approximately equal to −0.5 a.u. In practice, it is found that the hydrogenic spatial and momentum distributions agree with their quantal counterparts. Previous calculations on collisions of Hydrogen atoms with fully stripped ions with q ≥ 6 have shown that the CTMC EC cross sections at high impact energies (E ≳ 500 keV/u) are larger than those obtained with perturbative treatments, which are expected to be accurate in this energy range. Therefore, we present in table 2 our CTMC cross sections for E < 300 keV/u.
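The workflow of equations (8)-(10) can be made concrete with a toy implementation. The sketch below (our illustration, not the production code used for table 2) treats the fully stripped C 6+ case with a plain microcanonical initial ensemble, so no model potential or hydrogenic distribution is needed; the collision energy, ensemble size and impact-parameter grid are placeholder values:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal CTMC sketch for C6+ + H(1s); atomic units throughout.
Q = 6.0                    # bare projectile charge (fully stripped case)
V = 1.41                   # projectile speed, roughly 50 keV/u (v ~ sqrt(E/25))
T0 = 500.0 / V             # so that v*t_fin = 500 a.u., as in the text

def unit_vector(rng):
    u = rng.normal(size=3)
    return u / np.linalg.norm(u)

def sample_microcanonical(rng, e0=-0.5):
    """One (r, p) sample from the H(1s) microcanonical ensemble (all
    trajectories at the quantal energy e0 = -0.5 a.u.)."""
    while True:  # rejection sampling of rho(r) ~ r^2 sqrt(2(e0 + 1/r)), r < 1/|e0|
        r = rng.uniform(1e-3, 1.0 / abs(e0))
        if rng.uniform(0.0, 1.4) < r**2 * np.sqrt(2.0 * (e0 + 1.0 / r)):
            break
    p = np.sqrt(2.0 * (e0 + 1.0 / r))
    return r * unit_vector(rng), p * unit_vector(rng)

def rhs(t, y, b):
    r, p = y[:3], y[3:]
    rp = r - np.array([b, 0.0, V * t])     # electron position w.r.t. projectile
    dpdt = -r / np.linalg.norm(r)**3 - Q * rp / np.linalg.norm(rp)**3
    return np.concatenate([p, dpdt])

def p_ec(b, n_traj=200, seed=1):
    """Equation (10): fraction of trajectories bound to the projectile at t_fin.
    n_traj is kept small here; production runs use N ~ 1e5."""
    rng = np.random.default_rng(seed)
    n_cap = 0
    for _ in range(n_traj):
        r0, p0 = sample_microcanonical(rng)
        sol = solve_ivp(rhs, (-T0, T0), np.concatenate([r0, p0]), args=(b,),
                        rtol=1e-6, atol=1e-9)
        r, p = sol.y[:3, -1], sol.y[3:, -1]
        rp = r - np.array([b, 0.0, V * sol.t[-1]])
        e_proj = 0.5 * np.sum((p - np.array([0.0, 0.0, V]))**2) - Q / np.linalg.norm(rp)
        e_targ = 0.5 * np.sum(p**2) - 1.0 / np.linalg.norm(r)
        n_cap += int(e_proj < 0.0 and e_targ > 0.0)
    return n_cap / n_traj

# Equation (9): sigma_EC = 2*pi * integral P_EC(b) b db, by the trapezoidal rule.
bs = np.linspace(0.5, 12.0, 8)
pb = np.array([p_ec(b) * b for b in bs])
sigma = 2.0 * np.pi * float(np.sum(0.5 * (pb[1:] + pb[:-1]) * np.diff(bs)))
print(f"sigma_EC ~ {sigma:.1f} a0^2")
```

Replacing the microcanonical sampler with the ten-component hydrogenic mixture, and the bare Coulomb projectile term with the model potential of equation (8), gives the partially-stripped-ion variant described in the text.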
C + + H collisions
Janev et al [23] deduced a recommended total cross section for the reaction

$$\mathrm{C}^{+} + \mathrm{H}(1s) \rightarrow \mathrm{C} + \mathrm{H}^{+} \tag{12}$$

based on the experimental data of references [24,25] and [26]. Stancil et al [3] performed a combined experimental and theoretical study of this process. In order to cover a wide energy range, they reported merged-beam experiments and calculations with several methods: MFCC, CTMC, multielectron hidden crossing and the decay model. Although the new results agree with the high-energy measurements of references [24] and [26], they disagree with those of [25] at collision energies between 0.3 and 1 keV/u. However, the MFCC results of Stancil et al [3] agree with the merged-beam experimental cross sections reported in the same paper, which led the authors to suggest a new recommended cross section for reaction (12) that clearly differs from that of reference [23] for energies below 1 keV/u. As explained in reference [3], the potential energy curves of the CH + system do not show any avoided crossing between the potential energy curves of the entrance channels and those of the molecular states leading to EC. The main mechanism of reaction (12) involves transitions from the state a 3 Π to 2 3 Π, with an energy difference of about 0.2 a.u., which yield a cross section that decays rapidly as E decreases.
We have performed an MRCI calculation with an aug-cc-pVQZ basis set [27]. The molecular orbitals were obtained by a 5-state CASSCF procedure, and the ensuing MRCI calculation included up to 5 × 10^5 configuration state functions. We have employed these molecular wave functions to calculate the EC total cross section, which is practically identical to that of reference [3]. Moreover, the merged-beams experiment of Stancil et al [3] was carried out with Deuterium, while that of Nutt et al [25] was performed with Hydrogen. We have checked that the isotopic dependence of the calculated cross section for reaction (12) is negligible above E = 10 eV/u and does not explain the difference between the two measurements.
On the other hand, the CTMC results of Stancil et al [3] underestimate the experimental data for energies between 1 and 25 keV/u. This can be due to the inadequacy of the microcanonical classical distribution for H(1s) [19,20], which leads to a decrease of the EC total cross section at low energies (see e.g. reference [28]). The use of a model potential, whose functional form is not given in reference [3], might also limit the validity of that calculation. For E < 1 keV/u the CTMC calculation overestimates the experiment, which the authors explain as due to trajectories leading to capture with energies below those of the phase-space bin associated with the 2p level; these would correspond to electron capture into quantum levels that are already occupied. The above-mentioned difficulties are probably not important at 25 < E < 200 keV/u, where the CTMC calculation agrees with the available experiments [24,26]. The recommended data are not supported by either calculations or experiments at E > 200 keV/u. To further study the workings of the CTMC method we have calculated the total cross section with the eikonal CTMC method and with the model potential of equation (8). A hydrogenic distribution [22] has been used to represent the initial distribution of H(1s). The comparison of the present calculation with the recommended data of Stancil et al [3] and the CTMC data from the same reference (figure 1) indicates that the underestimation of the total cross section for energies below 25 keV/u is reduced by using the hydrogenic distribution; our results with the microcanonical distribution, not shown in figure 1, are indistinguishable from those of reference [3]. In the energy range of figure 1 we have not found a sizable effect of trajectories leading to nonphysical EC.
C 2+ + H collisions
The system has been studied theoretically by Gu et al [29] and Errea et al [30]; both calculations used the MFCC method within a semiclassical treatment. The system was studied experimentally by Nutt et al [31], Phaneuf et al [24], Goffe et al [26], Gardner et al [32] and Voulot et al [33]. The calculation of reference [30] tabulated total and state-selective cross sections for the EC reaction

$$\mathrm{C}^{2+} + \mathrm{H}(1s) \rightarrow \mathrm{C}^{+} + \mathrm{H}^{+}. \tag{13}$$

From the experimental point of view, the main difficulty in studying this system is the presence of unknown quantities of metastable ions [C 2+ (1s 2 2s2p 3 P)] in the beams. However, references [33] and [30] pointed out that the experimental total cross sections are not modified by a contamination of the beam as high as 20% for energies above 0.1 keV/u. Nonetheless, the calculated cross sections overestimate the experimental ones at E < 1 keV/u (see figure 2). Errea et al [30] suggested that the discrepancies could be due to a normalization problem of the experiment, but neither new experiments nor new calculations have been performed. At E > 1 keV/u, target ionization is not negligible, and the molecular close-coupling treatment, which does not include continuum wave functions, is not applicable. To our knowledge, no other theoretical methods have been applied to this particular reaction.

Figure 1. Total cross section for electron capture in C + + H(1s) collisions: present CTMC results compared with the CTMC calculation of reference [3], the recommended data of Stancil et al [3] and the experimental data [24,26].
At low energies (E < 100 eV/u), the eikonal approximation employed in references [29] and [30] is not accurate, and we have carried out a fully quantal calculation. Since energy differences are critical in low-energy calculations, we have recalculated the potential energy curves and dynamical couplings of the doublet molecular states of the CH 2+ quasimolecule. In our calculation, the asymptotic energy difference between the states dissociating into C 2+ (1s 2 2s 2 1 S) + H(1s) (the entrance channel) and C + (1s 2 2s2p 2 2 D) + H + (the main exit channel) is 0.0577 a.u., which has to be compared with the value 0.0384 a.u. reported in [30] and the experimental value 0.0547 a.u. [34]. The cross sections for reaction (13) are shown in figure 2. It is clear that the new MFCC calculation with rectilinear trajectories reproduces the results of Errea et al [30], and the difference with the experiment of Nutt et al [31] is essentially due to the population of the state C + (2s2p 2 2 S). In fact, the 3-state calculation, including the entrance channel and the most populated 2 Σ + states [those dissociating into C + (2s 2 2p 2 P) + H + and C + (2s2p 2 2 D) + H + ], yields a cross section practically identical to the experimental one and to that reported by Gu et al [29], who only included the populations of the 2 P and 2 D exit channels.
At energies below 0.1 keV/u, the trajectory effects are sizable, but a relatively simple 2-state calculation can be carried out, as can be deduced from the comparison of the 2- and 3-state cross sections. The two-state quantal calculation leads to the cross sections for the different isotopes shown in figure 2. As for C + + H collisions, the EC cross section of figure 2 shows a threshold at low E, as expected for an endothermic reaction. The isotope effect is small but noticeable, with a behavior (σ T > σ D > σ H) already observed in other systems (see [35,36]). This isotopic dependence is typical of systems where the electron capture reaction takes place at short internuclear distances, where the ion-atom interaction potential that defines the trajectory is positive. We do not present CTMC results for reaction (13) because this collision is not accurately described by means of a single-electron treatment, given that two-electron processes are known to play a significant role. In particular, the formation of the main exit channel C + (1s 2 2s2p 2 2 D) of the EC reaction involves a two-electron process from the initial state, C 2+ (1s 2 2s 2 1 S) + H(1s), in which one electron is captured and another one excited.

Figure 2. Cross sections for reaction (13): present results compared with the calculation of reference [30] and the experimental results of Nutt et al [31].
C 3+ + H collisions
Total cross sections for electron capture in C 3+ + H collisions were measured by Phaneuf et al [24,37], Goffe et al [26], Crandall et al [38], Gardner et al [32], Ćirić et al [39] and Havener et al [40]. Previous calculations have been carried out using MFCC treatments within semiclassical [41] and quantal formalisms [42,43,44], atomic-orbital close-coupling (AOCC) treatments [45] and the so-called Electron-Nuclear Dynamics method [46]. The results of previous works are plotted in figure 3. In the present work we have performed an MFCC calculation with MRCI wave functions. As a confirmation of the quality of the molecular wave functions, the asymptotic energy differences between the most important channels differ by about 0.01 a.u. from the spectroscopic values, which is a significant improvement with respect to those employed by Errea et al [41] and Herrero et al [44].
In the present work, we have evaluated the EC cross section by using an improved set of 33 molecular functions (15 singlets and 18 triplets) in the eikonal calculation and 10 functions (5 singlets and 5 triplets) in the quantal one. Our quantal total cross section is lower than that of Herrero et al [44], and the eikonal one is somewhat lower than that of Errea et al [41]; the differences are due to the changes in the potential energy curves and couplings with respect to those of the valence-bond calculation [47] used in previous MFCC calculations. The new results show very good agreement with the merged-beams experiment [40]. At energies above E ≈ 2 keV the molecular expansion converges slowly, and the AOCC results of Tseng and Lin [45] are probably more accurate. Moreover, the AOCC calculation agrees with the experimental data and with the cross section computed by Guevara et al [46].

Figure 3. Total cross section for electron capture in C 3+ + H collisions; previous calculations [42,43,41,44,45,46] and experiments [40] are also included, as indicated in the figure.

At low velocities, the EC cross section (see figure 4) increases rapidly as E decreases, following approximately the Langevin model [4], where the cross section is proportional to E^{-1/2}. As already pointed out by Herrero et al [44], the isotopic dependence is small in the triplet subsystem, which leads to a small but noticeable isotopic dependence of the total cross section for v < 2 × 10^{-3} a.u. (E < 0.1 eV/u). One can also note the spikes in the cross sections due to the presence of quasi-stationary vibrational states in the effective potential of the initial molecular state. This effective potential is obtained by adding the centrifugal term to the electronic energy, whose asymptotic behavior,

$$\epsilon(R) \simeq -\frac{\alpha}{2R^4},$$

corresponds to the ion-induced-dipole interaction, with α the atom polarizability. One can also note the oscillations of the EC cross section for 0.1 < E < 0.3 eV/u, caused by the interference between the nuclear wave functions in the two avoided crossings between the potential energy curves of the entrance and the main exit channel in the triplet subsystem.

Figure 4. Low-energy EC cross section for C 3+ + H collisions; the merged-beams experimental data [40] are included.

At high energy (see figure 5), there are no theoretical data available, and we have performed a calculation similar to that for C + + H (figure 1). The calculation employs the eikonal-CTMC method with the hydrogenic initial distribution and the model potential of equation (8). At E > 30 keV/u, our cross section shows good agreement with the experiments and the recommended data. For E < 30 keV/u the calculation agrees with the experimental results of Goffe et al [26], but these energies are probably too low to employ the CTMC method to assess the accuracy of the experimental values.
C 4+ + H collisions
Several calculations of EC total cross sections in C 4+ + H collisions have been carried out; in particular, MFCC calculations [48,49], expansions in terms of atomic orbitals [50], wavepacket treatments [51] and the hyperspherical close-coupling method [52]. The low-energy behavior of the cross section has been studied in reference [9]. The EC cross section is almost constant for collision energies between 100 eV/u and 10 keV/u. For E > 10 keV/u, target ionization is the dominant process and the EC cross section decreases rapidly as E increases. The cross section at high energies was measured by Phaneuf et al [24] and Goffe et al [26], but there is no theoretical counterpart to these measurements.
At E < 10 keV/u, there is remarkably good agreement between different calculations, but the agreement with the experiments [53,54] is less satisfactory. In this respect, Liu et al [52] concluded that the low-energy measurements are not supported by the theories, and that further experimental work is needed. In our opinion, the calculated cross sections are more accurate than the experimental ones for this particular system, in contrast with the recommended values of reference [1], which follow the experimental data at E < 100 eV/u. One should also note the excellent agreement between different calculations (see reference [52]) for the partial cross sections for populating C 3+ (3l) states.

Figure 5. High-energy EC cross section in C 3+ + H collisions: present results compared with the experiments [24,26], the AOCC calculation of Tseng and Lin [45] and the recommended data of Suno and Kato [1].
In this work we have recalculated the total EC cross section at very low energies using the molecular basis set of Barragán et al [9] to study the isotope effect. Our results are displayed in figure 6. In contrast with the dependence illustrated in figure 2, characteristic of reactions that take place at internuclear distances where the interaction potential is repulsive, the isotopic dependence (σ T > σ D > σ H ) shown in figure 6 is similar to that found for C 3+ + H and comes from trajectory effects for the relative motion in the attractive ion-induced dipole interaction potential [36]. Our CTMC results are compared to the experimental results in figure 7, where one can note the excellent agreement between experiments and calculations.
C 5+ + H collisions

C 5+ + H collisions have been considered in MFCC calculations within both quantal [55,56] and semiclassical [57,56] formalisms. CTMC calculations have been reported in reference [57]. Total cross sections have been measured in references [38,58,26]. Recently, Draganić et al [59] have performed merged-beams experiments. The MFCC calculations lead to a Langevin-type cross section in satisfactory agreement with the experimental data, although the energy grid of the calculations does not allow one to resolve the spikes due to shape resonances.
We have applied the CTMC method with the corresponding model potential to evaluate the EC total cross section (figure 8), which shows generally good agreement with the experiments [26] and with the calculation of reference [57], although our calculation extends over a larger energy range than the previous one. It can be noted that the energy dependence of our cross section is identical to that of the recommended data of Suno and Kato.
C 6+ + H collisions
As already mentioned, EC in C 6+ + H(1s) collisions has been studied in several publications. Experimental data have been reported in references [26], [58] and [60]. High-energy calculations include the first Born approximation [61] and the eikonal impulse approximation [62]. The AOCC method has been employed in references [63,64,65,66] and [67]. CTMC calculations have been carried out in references [22,67,68] and [69]. MFCC calculations were performed in references [70,71] and [72]. The hyperspherical close-coupling method was applied in reference [73]. These calculations provide accurate values of the EC total cross section for energies above 10 eV/u, together with state-selective cross sections. At E < 10 eV/u, the work of Liu et al [73] has pointed out the relevance of EC into C 5+ (n = 5) and the Langevin-type behavior of the total cross section, but values of this cross section have not been reported.
The EC reaction at low energies takes place via transitions between the molecular orbitals 6hσ and 5gσ (in the united-atom notation), at the avoided crossing at large internuclear distance (R ≈ 21.3 bohr). In the present work, we have calculated the EC total cross section by using a molecular basis set that includes the entrance channel and the exit channels dissociating into C 5+ (n = 5) + H + ; the cross section is shown in figure 9. We have also checked that EC into C 5+ (n = 4) + H + is negligible for the collision energies of figure 9. We have also plotted in figure 9 the total cross section for collisions with H, D and T to show that the isotope effect is almost unnoticeable for this collision system, in accordance with the fact that, at the large internuclear distances where the transitions take place, the trajectory effects that give rise to the isotopic dependence are very small.

Figure 7. Total cross section for electron capture in C 4+ + H collisions: present CTMC results compared with the experimental data of Goffe et al [26] and Phaneuf et al [24].
Recently, we have carried out CTMC calculations for this collision [68], and the results are displayed in figure 10, which also contains previous calculations and experimental data. Our results agree satisfactorily with the available experimental data. We also find good agreement with the AOCC calculation of Igenbergs et al [67] for E < 100 keV/u. At higher energies, the CTMC calculation agrees with the first Born approximation [61], but it overestimates the results of reference [62] (eikonal impulse approximation), which may be due to a limitation of the CTMC method in describing EC into low n shells in the high-velocity limit.
Concluding remarks
In this work we have reviewed the calculations of electron capture total cross sections in collisions of Carbon ions with H(D,T)(1s). Although there exist several calculations for C 6+ + H(1s) with different methods, few calculations have considered collisions of partially stripped ions, and the recommended data are based on experimental data, with the exception of C 4+ + H(1s); this system can be accurately described by employing a one-active-electron treatment, and the agreement between different calculations is remarkable. For the other collision systems, molecular calculations have been carried out systematically, employing multireference-configuration-interaction wave functions. The atomic expansion is difficult to apply to many-electron systems, and the CTMC method has not been employed routinely until now to evaluate the EC total cross sections.
The new calculations presented in this work are focused on two points. The first is the low-energy behavior of the EC cross section. It has been shown that the cross sections for Hydrogen collisions with ions with q ≥ 3 show the Langevin-type energy dependence, and the isotopic effect can be explained as due to trajectory effects. In the particular case of C 6+ + H(1s), the isotopic effect is negligible. Although the EC cross section at E < 20 eV/u for the collision C 2+ + H(1s) is small, its isotopic dependence is noticeable. The second aspect covered by our work is the calculation of EC cross sections at energies E ≳ 25 keV/u. We present CTMC calculations for q = 1, 3, 4, 5 and 6, with excellent agreement with experiments. Our results support the usefulness of the CTMC method and show that the one-electron approximation, with the model potential of equation (8), provides a remarkably good representation of EC in these collisions at high energies. In particular, the comparison with the experimental data indicates that capture into occupied shells is not significant. The method is not applicable to C 2+ + H(1s) collisions, and new calculations are required for this system, for instance using a many-electron AOCC expansion, to compute the EC cross section for energies above 1 keV/u.

Figure 8. Total cross section for electron capture in C 5+ + H(1s) collisions: present CTMC results compared with the experimental data [26], the CTMC calculation of Shipsey et al [57] and the recommended data of Suno and Kato [1].

Figure 10. Total cross section for electron capture in C 6+ + H(1s) collisions as a function of the collision energy. Our CTMC results are compared to the experimental results [26], the AOCC calculations of Toshima [65] and Igenbergs et al [67], the eikonal impulse approximation (EIA) [62] and the first Born approximation (FBA) [61]. | 6,690.6 | 2015-01-15T00:00:00.000 | [
"Physics"
] |
Plastics Recycling – Technology and Business in Japan
The Japanese government promotes 3R policies in the country and to developing countries [1]. However, there are many obstacles and discrepancies between the idea and the reality. After thirty years of struggles in technologies and consumer movements, a legislative approach to waste-plastic recycling started in Japan in the year 2000. The first target was plastic containers and packaging from household wastes. Approximately ten years have passed, and we still face many problems within the country: high recycling costs, low quality of recycled resin with respect to its market value, and so on. Some of our challenges and achievements, or the factual data themselves, would be good material for people who have an interest in waste plastics recycling. Many discussions and data in this field are not published in academic journals, and the reports of the national government, municipalities and companies are mostly written in Japanese. The factual data, commercial technologies, and businesses for recycling waste plastics in Japan are reviewed in this chapter.
Generation of waste plastics, and the legislation for their management

2.1. Generation of waste plastics
The Ministry of the Environment of Japan announces the current status of generation and treatment of general wastes [1,2] and industrial wastes every year. Municipal wastes, with a total collection amount of 46.3 million tons, include those produced from households (25.6 million tons) and those from small businesses (13.3 million tons). Waste plastic contents in municipal wastes are regularly monitored by each municipality. Depending on the separation categories decided by municipalities, waste plastics are incinerated as mixed burnable wastes, or recycled like plastic containers and packaging. Some plastic materials, such as those in toys and in electronic devices, go to landfill. A composition survey of mixed wastes brought into the Chuo incineration plant in Tokyo gave 16.0 wt% as the mean waste-plastics content in burnable wastes from households (fiscal year 2010) [3]. This mean value is an average over four surveys a year. The detailed composition and properties of burnable wastes are shown in Table 1. For industrial wastes, waste plastic is one of the 20 collection criteria, and a recent survey shows a generation of 5.67 million tons in 2009 [4].
The world plastics production is 265 million tons (2010) [5]. The production of typical synthetic resins in Japan (2010) is 12.2 million tons (Table 2) [6], which accounts for 4.6% of the world production. The generation and the macro flow of waste plastics in Japan are published annually by the Plastic Waste Management Institute. The latest data, for the year 2010, are shown in Figures 1 through 3 [7]. The total generation of waste plastics increased until 2004 (10.13 million tons), and then gradually decreased to 9.12 million tons (2009) due to the shrinking economy in Japan. In 2010, the total generation increased to 9.45 million tons because of the recovery from the economic crisis of 2008 [7]. The Containers and Packaging Recycling Law allows a municipal government to decide on the separation rules for the collection of municipal wastes.
In Entry 1 of Table 2, the annual PET generation is estimated as 594,689 tons (fiscal year 2010), given by the total sales of PET bottles, including the sales of domestic products (579,782 tons) and imported products (14,907 tons) [8]. The total sales of PET bottles are the weights of PET resin, not including screw caps and labels of non-PET plastics. The breakdown of PET in general wastes is based on the current status of implementation of PET-bottle collection by municipalities under the Containers and Packaging Recycling Law in FY 2010 (Table 3) [9]. The recycled amount of PET bottles, 286,009 tons, corresponds to PET-resin production from PET bottles in general wastes. If all municipalities implemented PET-bottle collection, the expected recovery from general wastes would be 290,364 tons (= 286,009 tons collected / 0.985 population coverage). The difference between the total sales of PET bottles and the expected total recovery from general wastes is 305,000 tons, which would correspond to PET in industrial wastes.
The generation amount of mixed plastics is 1,155,000 tons (= 705,707 tons collected / 0.611 population coverage); these are plastic containers and packaging other than PET bottles and plastic trays. In a municipal waste sorting facility, mixed plastics are compacted into cubic bales for cost-effective transportation. The bales often contain serious amounts of contaminants such as metals and moisture. Table 4 shows an example of the composition of a bale used for the life cycle assessment of waste-plastics recycling [10]. If the plastic content of the mixed plastics collected under the containers and packaging law is 90.1%, the total plastic amount is 1,040,658 tons (= 1,155,000 × 0.901).
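The scale-up logic behind these figures (divide the collected amount by the population coverage, then apply the plastic-content fraction) can be written out explicitly. The following few lines reproduce the numbers quoted above; the inputs are the collection statistics cited in the text, and the small mismatches come from rounding:

```python
def scale_up(collected_tons, coverage, content=1.0):
    """Nationwide estimate: collected amount / population coverage * plastic content."""
    return collected_tons / coverage * content

# PET bottles (FY2010): 286,009 t collected at 98.5% population coverage
pet_recovery = scale_up(286_009, 0.985)           # ~290,364 t expected recovery
pet_industrial = 594_689 - round(pet_recovery)    # ~304,000 t; quoted as ~305,000 t

# Mixed plastics other than PET bottles: 705,707 t at 61.1% coverage, 90.1% plastics
mixed_total = scale_up(705_707, 0.611)            # ~1,155,000 t
mixed_plastics = scale_up(705_707, 0.611, 0.901)  # ~1,040,658 t

print(round(pet_recovery), pet_industrial, round(mixed_total), round(mixed_plastics))
```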
There are 1,742 municipalities in Japan (April 1, 2012). As shown in Table 4, 61.6% of all municipalities have adopted separate collection and recycling rules for mixed plastics under the Containers and Packaging Recycling Law. The other municipalities adopt incineration of mixed wastes or landfill, avoiding the high cost of collection and handling, because the recycling law in Japan still allows a municipality to choose between mixed collection of waste plastics with the other general wastes for incineration and separate collection of waste plastics for recycling. Due to the economic benefits of PET recycling for municipalities, the implementation rate of separate collection and recycling of PET bottles reaches 99.1% of all municipalities, with a population coverage of 99.5%, under the same scheme of the recycling law. About 4.16 million end-of-life vehicles (ELVs) were generated in 2011 [11]. Assuming a mean plastic-parts content of 8 wt% [12,13] and an average ELV weight of 1,200 kg, the waste plastics generation from automobiles is estimated as 399 thousand tons per year (= 4,160,000 × 1.2 tons × 0.08), which includes plastics with commercial value as replacement parts and automobile shredder residue without any commercial value. Disassembly of ELVs and the crushing process gave an automobile shredder residue (ASR) of 584,305 tons in FY 2007 [14]. A typical composition of ASR is as follows: non-polyurethane plastics 33, polyurethane 16, textile 15, rubber 7, wood 3, paper 2, iron 8, nonferrous metals 4, wire harness 5, glass 7 (wt%) [15]. Thus, the generation amounts of non-polyurethane plastics and polyurethane are estimated as 193 and 93 thousand tons, respectively.
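The ELV and ASR estimates follow the same pattern of multiplying counts by assumed contents; a few lines reproducing this arithmetic (all factors are the assumptions stated above):

```python
elvs = 4_160_000        # end-of-life vehicles generated in 2011 [11]
avg_weight_t = 1.2      # assumed average vehicle weight, tons
plastic_frac = 0.08     # mean plastic-parts content [12,13]
elv_plastics = elvs * avg_weight_t * plastic_frac  # ~399,000 t/year

asr_tons = 584_305         # automobile shredder residue, FY2007 [14]
non_pu = asr_tons * 0.33   # non-polyurethane plastics: ~193,000 t
pu = asr_tons * 0.16       # polyurethane: ~93,000 t

print(round(elv_plastics), round(non_pu), round(pu))
```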
The Home Appliance Recycling Law provides for safe treatment and effective recovery of valuable materials from five home appliances: air conditioners, television sets, refrigerators/freezers, washing machines and clothes dryers [16]. Collection of personal computers for recycling is also encouraged, but the total weight of plastics involved is not clear.
In fiscal year 2006, a total of 102,257 tons of plastics was recovered and recycled with commercial value. The recovered plastics were further processed by mechanical recycling (60,020 tons, 59%) and energy recovery (as crushed plastics, 1,400 tons, 1%; as RPF, 4,800 tons, 5%), and the rest (36,037 tons, 35%) was disposed of [17]. In fiscal year 2010, the total amount of recycled plastics was 181,884 tons [18]. The details of the recycling are not announced. Polyurethane is widely used as heat insulation, and it is a major material that is not suitable for any type of recycling application other than incineration, which requires special attention to the complete combustion of the fluorocarbons in the polyurethane.
Aizawa et al estimated the annual generation of wastes of small electrical and electronic equipment, and estimated the total plastics as 20,000 tons [19]. The estimate is based on multiplying a typical content by the shipments of video cassette recorders, DVD players, video cameras, digital cameras, flash memory players, HDD-equipped audio equipment and gaming equipment in 2007. The paper gave estimated amounts of various metals and plastics as potential resources.
There are other sources of waste plastics. Recycling laws targeting various types of waste plastics would be considered based on the lifestyles of each country. Following the recycling of waste plastics of containers and packaging, further efforts to expand recycling activities will be considered. Substantial amounts of plastics are also used in daily commodities such as kitchen utensils and clothes cabinets. Plastics are also one of the main components of toys and E-wastes (wastes of electric and electronic equipment), which have been treated by landfill. Considerable amounts of plastic products are imported to the Japanese market; the amount of those imported plastic products is not clear. As an agricultural material, plastic film is a widely used product; mulching, tunnels and greenhouses are the typical uses. As shown in Table 5, the current methods for the treatment of these plastic films are incineration, landfill and recycling, including mechanical recycling and the production of solid fuel [20]. The details of the recycling applications are not clear in the report. Waste generation data from industry have been collected through questionnaire surveys by the Plastic Waste Management Institute [21,22]. Table 6 shows typical gate fees of waste plastics for various treatment methods. A gate fee is a payment from a waste generator to a waste management company for waste treatment, often including the transportation cost. When the waste has a commercial value, a waste management company buys it and sells it after suitable processing. When a waste has a commercial value higher than the transportation cost, it is usually considered to be a commercial article rather than a waste; business sectors can handle it without any license or permission for transportation, treatment or other commercial dealing, or for the installation of facilities.
Waste plastics from general wastes such as containers and packaging include various types of plastics, in the form of sheets, films, bags and bottles of polyethylene, polypropylene, polystyrene, polyamide and PET, and many items are laminates of two or more plastics, paper or aluminum. The recycling costs increase due to these complex compositions of mixed plastics as a feedstock for recycled resins. There are many business sectors buying thermoplastics of good quality for recycled resin production. Waste plastics, especially polyethylene, polypropylene, polystyrene, PET and PVC, are exported to China at a rate of some 1.5 million tons per year. In 2011, 1.6 million tons of waste plastics of commercial value were exported, mainly to China, for mechanical recycling (Table 7) [25]. China's share as the exporting destination is 1.48 million tons (90.5% of the total amount exported from Japan), comprising the mainland (890 thousand tons) and Hong Kong (586 thousand tons). Polypropylene is considered the major component of the waste plastics classified as "the other plastics." The mean price of the waste plastics is 46 yen/kg; it varies depending on the condition of the waste plastics. Generally, shredded, clean and colorless plastics have the higher commercial value.

Examples from Table 6 include: entry 4, cokes oven treatment — mixed plastics under the C&P recycling law (household) 45 yen/kg [23], waste plastics from industry 35 yen/kg [22]; entry 9, incineration without energy recovery — mixed wastes from household 19 yen/kg [24], mixed wastes from industry 30 yen/kg [22]; entry 10, landfill — industrial plastic wastes 8 yen/m³ [22]. *C&P designates containers and packaging. **The symbol "▲" designates payment from a waste business sector to a waste generator.

The apparent treatment cost of incineration is very low (entries 8 and 9 of Table 6). Substantial subsidies from the national government make the construction cost of an incineration plant lower than that of the other recycling methods, such as mechanical recycling and liquid fuel production. Unlike a recycler operating as a private company, municipalities use a different accounting system, in which fixed costs are not included in the treatment cost. The actual cost would be 35-50 yen/kg if the construction costs of incineration plants were accounted for. The lower apparent cost of incineration leads some municipalities to incinerate mixed wastes rather than recycle them.
Legislation system for recycling waste plastics
To establish a sustainable society throughout Japan, the Basic Law for Establishing the Recycling-based Society was enacted as the basic framework; the target society is also called the Sound Material-Cycle Society. Table 8 lists the laws for waste management and recycling [26]. Under the individual recycling laws, each targeted waste has been recycled for several years.
For waste plastics recycling, there are some differences in the preferred recycling methods and business systems among the several recycling laws. For example, mechanical recycling is preferred in the Containers & Packaging Recycling Law, but heat recovery through incineration is allowed in the ELV recycling law. The contract system also differs between the mixed plastic wastes under the Containers & Packaging Law and the other plastic wastes such as ELVs and home appliances. Recyclers receive the mixed plastic wastes based on competitive bidding among mechanical recyclers. When additional amounts of wastes are left, a second bidding is held among feedstock recyclers. The contractor is fixed for each stockyard, and the contract is for one year. In the treatment of ELVs and wastes of home appliances, a limited number of waste management companies constantly receive the wastes in connection with automobile manufacturers or home appliance manufacturers. There are strong arguments about C&P recycling from many stakeholders concerning the recycling cost, the bidding system with its preference for mechanical recycling over the other methods, and the participation of recyclers producing solid fuel. A discussion has also started on widening the coverage of waste plastics to plastic articles such as daily necessities, toys and electronic equipment.
Recycled resin production
Recycled resin or recycled plastic goods are produced from 2.17 million tons of waste plastics (Figure 3). Some 1.6 million tons of waste plastics are exported (Table 7); these plastics are considered to go to mechanical recycling. The difference between 2.17 and 1.6 million is about 600,000 tons, which is the feedstock for recycled resin in the domestic market. The domestic demand for recycled resin is much weaker than that in China.
Many Japanese manufacturers tend to avoid the use of recycled resin, especially resin from mixed waste plastics of containers and packaging, because of its low quality in terms of strength, color and smell.
Basic law for establishing the recycling-based society
Basic framework determining the roles of stakeholders for establishing the sound material-cycle society.
Waste management and public cleansing law
Defines municipal wastes and industrial wastes. The roles and duties of a municipality, waste generator, waste management company and other stakeholders are strictly provided. The related regulations and rules define both technical and social conditions and guidelines for keeping the business sound, in addition to governing the construction of facilities and the installation and operation of equipment.
Law for promotion of effective utilization of resources
Promotion of waste reduction through recycling. The roles and duties of the stakeholders are mentioned. Promotes the reduction of wastes through recycling and suitable disposal in several fields of industries and products, such as steel production, paper production, construction, automobiles, electric and electronic equipment, batteries, metal cans and PET bottles.
Containers and packaging recycling law
Promotion of the recycling of containers and packaging through separate collection of those wastes made from paper, metal, glass, PET and the other plastics by municipalities, with the cooperation of citizens. Producers of the materials, manufacturers of the commercial products with containers and packaging, and retail stores cover the recycling costs. Recycling methods are provided in the related regulations.
Electric household appliance recycling law (Home appliance recycling law)
Forcing consumers to hand waste home appliances to retailers and pay recycling fees. Air conditioners, refrigerators/freezers, television sets, washing machines and clothes dryers are recycled with suitable treatment of fluorocarbons and other potentially hazardous substances.
End-of-life vehicle recycling law
Forcing car owners to cover the cost of the suitable disposal of hazardous wastes and wastes of no commercial value, while valuable resources are recovered from end-of-life vehicles.
Construction material recycling act
Reducing the amounts of construction and demolition wastes through recycling.
Food recycling law
Reducing the amounts of food residues from restaurants, food processing industry and supermarkets through recycling waste foods.
Law on promoting green purchasing
Promoting the national and local governments to buy products that made from recycled materials.
Table 9. Major laws for waste management and recycling

As a result, the selling price of recycled resin pellets (typically a mixture of polyethylene and polypropylene) from the mixed waste plastics is generally very low, 20,000 to 40,000 yen/ton, whereas the recycling cost of mixed waste plastics is 72,000 yen/ton on average, due to the contamination by various components that are not suitable for the production of recycled pellets. The recyclers convert about 45 wt% of the separated portion of mixed waste plastics into recycled pellets or recycled products such as transportation pallets and imitation wood, and the rest goes to incineration with heat recovery or to solid fuel production, with gate fees paid.
Some recyclers of plastic containers and packaging from general wastes make efforts to raise the market value of recycled resin. There are three countermeasures. The first is to change the separation categories of waste collection by municipalities, for example a separate collection of hard plastics like HDPE bottles and of laminated soft plastics. The second is to introduce a sophisticated material sorting facility with optical sorting equipment, in cooperation with municipalities; a stable supply and constant production of recycled resin become possible through more precise selection of suitable plastics for mechanical recycling at a larger scale. The third is to develop new applications in cooperation with many companies in wider business fields across many countries.
To improve the quality of recycled resin, collection of hard plastic wastes and recycled resin production from hard plastic wastes were conducted as a research project by Akita Eco Plash Co., Ltd. in cooperation with the Akita prefecture and Noshiro city authorities, with funding from the New Energy and Industrial Technology Development Organization (NEDO) [27].
Recycled resin has a low melt-flow rate (MFR) because the original form of the polyethylene and polypropylene recovered from C&P wastes is film and sheet, whereas the major products made from recycled resin are not films or bags but hard plastic products. When hard plastic wastes other than C&P wastes were added to C&P recycled resin at 10%, the MFR improved from 3.2 to 3.8. This result suggests that the quality of recycled resin and products can be improved while reducing additives. The Minato-ward authority in Tokyo operates a combined collection system for plastic C&P wastes together with wastes of hard plastic products under the category of "Resource Plastics." The collection of the wastes of plastic products increased the collected amount of polypropylene in hard plastic wastes [28], and it will help raise the MFR, which leads to the reduction of additives and to cost reduction.
Home appliance manufacturers and automobile manufacturers are actively seeking ideas and technologies to raise the recycling rate of the waste plastics recovered from their wastes. Electric appliance manufacturers have made efforts to reduce the costs of their products, and they have taken action to recycle the waste plastics in their products in order to reduce waste amounts. Additionally, some companies have started commercial operation of precision separation systems for some plastics by exploiting differences in their electrostatic properties [29]. In 2010, Green Cycle Systems Corporation launched Japan's first large-scale, high-purity plastic recycling center with technical and business support from Mitsubishi Electric Corporation. The announcement states that Green Cycle Systems takes the shredded mixed plastic chips recovered by Hyper Cycle Systems and separates them into reusable plastic on a scale of unprecedented magnitude. It also states that the combined output of these two enterprises has increased Mitsubishi Electric's rate of recycled, industrial-grade plastic from 6% to a paradigm-shifting 70%.
Refuse-derived solid fuel
A variety of solid fuels have been prepared from wastes, including wood, straw, rice husk, garbage from households, plastics and so on. To control the moisture content, processes such as drying and carbonization are often performed. Any non-hazardous combustible waste can be used as the raw material for solid fuel, with or without the preparation of pellets and briquettes. Preparation of pellets and briquettes contributes to a constant quality of heating values, easy transportation and smooth feeding to a combustor such as a boiler.
Table 9 shows the heating values of various combustible wastes and fuels [30][31][32][33]. Waste plastics have high heating values, and coal substitutes can be prepared by mixing them with wastes of low heating values. Thermoplastics act as a suitable binder, so paper, textiles and thermosetting plastics, which do not solidify together by themselves, can be formed into pellets and briquettes. Refuse-derived fuel (RDF) produced from municipal wastes, including kitchen wastes, is used in various countries across the US [34], Europe [34,35] and Japan [36]. It is also called solid recovered fuel (SRF). In 2005, RDF was produced from general wastes in 58 facilities [37].
Twenty-seven of these facilities cooperate with power generation plants, and their feedstocks are collected from 92 municipalities. Table 10 summarizes the compositions of municipal wastes in Japan and the properties of RDF [36]. A specification guideline for RDF is given in a technical specification document of the Japanese Industrial Standards. Technical countermeasures are needed against the formation of hydrogen chloride and dioxins upon combustion of RDF, caused by chlorine-containing plastics and salt. The specification in Table 9 does not mention anything about chlorine content because of the difficulty of removing chlorine from municipal wastes. Due to the low heating value and the high contents of ash and moisture, RDF is considered a low-quality fuel. There are not many users of RDF apart from the power generation plants operating in cooperation with municipalities.
Unlike RDF, the densified solid fuel called RPF (refuse-derived paper and plastics densified fuel) is popular as a coal substitute. Recently, its specification was defined in the Japanese Industrial Standards (Table 12) [32]. It is produced from paper, wood, plastics, textiles and other wastes. The raw material is dry, non-hazardous combustibles, and it does not include any putrefactive wastes. About 1.6 million tons of RPF were shipped, mainly to paper and steel manufacturers, as a coal substitute for coal-combustion boilers in 2011 [39]. It is possible to vary the calorific value of the solid fuel in the range of 5,000-10,000 kcal/kg by controlling the ratio of input paper and plastic. A 50:50 mix, for example, provides a calorific value of 6,190 kcal/kg (measured LCV), which is about the same as that of coal. Use of RPF as an alternative to fossil fuel helps to reduce CO2 emissions and is considered environmentally friendly.
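The quoted 50:50 figure is consistent with a simple linear mixing rule. The sketch below assumes representative lower calorific values of roughly 4,000 kcal/kg for paper and 8,400 kcal/kg for mixed plastics; these are our placeholder values, not numbers taken from the JIS specification:

```python
def rpf_lcv(paper_frac, lcv_paper=4_000.0, lcv_plastic=8_400.0):
    """Linear-mixing estimate of RPF lower calorific value (kcal/kg)."""
    return paper_frac * lcv_paper + (1.0 - paper_frac) * lcv_plastic

for frac in (1.0, 0.5, 0.0):
    print(f"paper fraction {frac:.1f}: ~{rpf_lcv(frac):,.0f} kcal/kg")
# The 50:50 mix gives ~6,200 kcal/kg, close to the measured 6,190 kcal/kg cited above.
```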
The most serious trouble in the use of RPF is the formation of hydrogen chloride upon combustion and the resulting corrosion of equipment at the RPF users' sites. Some users have stopped purchasing RPF because of high chlorine content.
Liquid fuel
Some thermoplastics, for example polyethylene, polypropylene and polystyrene, are thermally decomposed under an inert gas to yield liquid hydrocarbons at about 450 ˚C or above [40]. The resulting liquid hydrocarbons have heating values similar to those of fuels from petroleum.
The decomposition products and the fuel quality depend on the types of plastics and the decomposition conditions. Polyamides and polyurethane give oily products of high nitrogen content at low yields. Poly(ethylene terephthalate), known as PET, does not give liquid hydrocarbons upon pyrolysis but solid products including terephthalic acid.
Many types of reactors have been developed: tank, screw, externally heated rotary kiln and fluidized bed. Some plants were used in demonstrations, and some are commercially operated in Japan [41]. Under the containers and packaging law, mixed plastics were converted into fuel oil through pyrolysis using a 20-ton/day tank reactor in Niigata and four 10-ton/day rotary kilns in Sapporo until recently. Those commercial operations were shut down due to the higher cost (about 80 yen/kg) compared with other treatment costs, such as that of cokes oven treatment (about 40 yen/kg).
However, some recyclers with pyrolysis plants commercially produce liquid fuel from plastics in industrial wastes. Most recyclers have a tank reactor with a simple distillation system (Figure 4). In 2011, a recycler in Fukuoka prefecture started fuel oil production using a pyrolysis reactor with a paddle mixer (Figure 5). This reactor is commercially operated for mixed plastics from the separate collection of municipal wastes. The product fuel, mixed with commercial fuel at 1:1, is used for the boilers in public facilities. Currently, there is no specification standard for liquid fuel from waste plastics because it is not widely produced. Table 13 summarizes the technical specification that was announced by the Japanese Industrial Standards Committee and expired in 2010 [42]. Table 14 shows typical specifications of plastics-pyrolysis oil, commercial diesel fuel and heavy oil. There are two major methods of thermal decomposition of plastics: one is pyrolysis and the other is catalytic decomposition. Acidic catalysts such as silica-alumina and zeolites are used in the latter case. The decomposition temperature range is lowered to around 400 ˚C, compared with pyrolysis temperatures of 450 to 550 ˚C. The advantage of catalytic decomposition is its lower energy consumption for decomposing the polymers; the disadvantages are the expense of purchasing catalysts and of regenerating or disposing of the waste catalyst. Catalytic decomposition often yields a large gasoline fraction under certain reaction conditions; in this case, the product oil is not suitable for use in diesel engines or heavy oil burners unless it is distilled. It is noteworthy that plastics-pyrolysis oil does not contain a lubricant fraction. There is little knowledge of the mechanical durability of an engine cylinder or of the fuel injection pump of a burner when plastics-derived oil is used; mixing with a lubricant or with commercial fuel is highly recommended by practitioners.
Under the containers and packaging law, some recyclers constructed pyrolysis plants to convert mixed plastics into fuel oil. About thirty companies introduced such plants after 2000, when the collection and recycling of mixed plastics started under the recycling law. There were three large-sized plants (20 to 40 ton/day), and the others were small-sized plants (1 to 1.5 ton/day). After ten years of practice, all of these plants were shut down: small-scale fuel oil production is costly, and the large-scale plants had difficulty collecting a sufficient amount of waste plastics under gate-fee competition [43]. On the other hand, middle-sized plants of about 3 to 6 ton/day capacity are commercially operated for the pyrolysis of industrial wastes. Recently, a waste management company in Fukuoka started a new middle-sized plant for fuel oil production from separated waste plastics from nearby municipalities, in order to supply the fuel to public facilities.
Gaseous fuel
Gaseous products from the pyrolysis of waste plastics fall into two major types. One is syngas, a mixture of hydrogen and carbon monoxide [44]; the other is gaseous hydrocarbons such as methane and ethylene [45]. Depending on the gasification conditions, a mixture of hydrogen and methane can also be obtained [46].
The gas composition depends on the temperature of the reactor, the residence time of the decomposing species during gasification, and other reaction conditions, which are often governed by the reactor structure. Syngas is originally intended for the production of methanol from hydrogen and carbon monoxide, or of ammonia from hydrogen. However, syngas is also used as a gaseous fuel in some facilities for generating electricity.
Table 15 summarizes the compositions of pyrolysis gas from waste plastics and of the gases generated in steel production facilities. The gasification plant in Chiba city produces syngas at the scale of two series of 150 ton/day, and the gas is used for power generation in the adjacent steel works through combustion. The two-stage pressurized process in Ube city for mixed plastic wastes under the container and packaging law was shut down due to its higher operation cost compared with that of other methods such as cokes oven treatment.
Gasification technology covers not only waste plastics and other combustibles in municipal wastes but also biomass and automobile shredder residue (ASR).
There is another type of gasification, for the production of gaseous hydrocarbons. Thermal treatment of polyethylene and polypropylene at about 600 ˚C or above, with a residence time of around 20 min, gives mainly gaseous hydrocarbons. Applying the effective heat transfer of a heating medium in an externally heated rotary kiln, waste polyethylene was gasified in coprocessing with asbestos-containing waste building material, and the resulting flammable gas was used as fuel for a heating gas to melt the asbestos (Figure 6) [46]. In asbestos removal works, waste polyethylene is generated in the form of protective clothing and shielding curtains. Addition of a flux to the asbestos-containing demolition wastes brings the melting range of asbestos to 800 to 900 ˚C. In a similar temperature range, polyethylene is readily converted into a mixture of hydrogen, methane, ethylene and other hydrocarbons. The gaseous products were supplied to a furnace to generate heating gas for the pyrolyzer of the asbestos melting system. Technology providers should develop technologies with suitable specifications and easy operation, being aware of the local conditions. For example, the plant size should match the average collection amounts of the target waste components. Table 16 shows the average generation amounts of waste plastics in each factory.
The average amounts of 2.8 and 6.8 ton/day suggest a general idea of the suitable capacity of equipment for waste treatment. In Japan, under strict laws and regulations, a waste management company must obtain special permission when a treatment facility or piece of equipment has a capacity larger than 5 ton/day. The permission is given by a prefectural government after strict checks of the planning and inspection of the entire facility, including equipment, buildings and yard. In addition, consensus building with local residents is required as one of the conditions of the permission by the local government. A long history of pollution problems and conflicts between polluters and local residents has resulted in severe conditions for both private companies and municipalities, including in waste management and recycling works. For waste plastics recycling, there is only a limited pattern of successful business; it is a narrow pathway to connect waste plastics with a product of commercial value. The composition, quantity and quality of the plastics determine the recycling method, the system configuration and the business scale, which in turn lead to the type of product with a certain commercial value. The product price, the number of users and the consumption are the important factors for establishing a sound flow of waste plastics recycling.
Technology assessment methodology has been discussed by UNEP [50,51], and its application to recycling technologies for waste plastics will be developed through discussion among stakeholders and experts. Technology transfer is promoted through cooperative efforts among experts in policy, economics, and technology, together with people in local communities. The Global Partnership on Waste Management, an open-ended partnership for everyone, was launched in November 2010 [52]. More discussion and experience are required to find effective solutions for managing and recycling wastes and to make our society sustainable.
Figure 3. Material flow of waste plastics in Japan (2010)
Figure 4. Typical layout of a liquid fuel production plant equipped with a tank reactor.
Figure 5. Fuel production system of a pyrolyzer with a paddle mixer. Copyright ECR Co., Ltd.
Figure 6. Gaseous fuel production from waste polyethylene in the coprocessing with asbestos-containing demolition wastes
Table 1. Composition and property of burnable wastes in Chuo incineration plant
Table 2. Production of typical synthetic resins in Japan (2010)
Figure 1. Waste plastics generation by user's application (Total generation 9.45 million tons, 2010)
Figure 2. Waste plastics generation by the types of plastics (Total generation 9.45 million tons, 2010)
Table 3. Waste plastics generation from various sources in Japan
Entry | Type of waste plastics / waste source | Generation amount (ton/year)
1 | Total PET / total domestic sales of PET resin as bottles | 594,689
  | of which, estimated PET resin from bottles in general wastes | 290,000
  | of which, estimated PET resin from bottles in industrial wastes | 305,000
2 | Mixed plastics / containers and packaging in general wastes | 1,040,658
3 | Plastic parts in automobiles / plastics of non-polyurethane in ASR | -
*1 Pyrolysis includes coke-oven treatment, blast furnace treatment and gasification. *2 Incineration with power generation 1.84, incineration with heat recovery 0.36 and incineration without use of combustion heat 0.67 million tons. *3 Incineration with power generation 1.18, incineration with heat recovery 0.66 and incineration without use of combustion heat 0.30 million tons. *4 Incineration with power generation 3.03 (32%), incineration with heat recovery 1.03 (11%) and incineration without use of combustion heat 0.97 million tons (10%).
Table 2 shows waste-plastic generation from various sources. Some figures are based on actual collection amounts, and some are estimates based on the references cited therein. The target wastes of the containers and packaging recycling law are containers and packaging made of paper, glass, metal and plastics in general wastes. Among waste plastics of containers and packaging in general wastes, PET bottles, food trays made of expanded polystyrene sheet, and mixed plastics of the other types are collected separately.
Table 4. Current status of the implementation of waste plastics recycling under the containers and packaging law in the fiscal year 2010
Table 5. Typical composition of mixed plastics
Table 6. Treatment of used agriculture films in the fiscal year 2008 (unit: ton)
Table 7. Examples of the gate fees of waste plastics by various treatment methods. *Free On Board: the seller of goods pays for transportation of the goods to an exporting port and the loading cost; the buyer covers the transportation cost after loading on a ship.
Table 8. Export amounts and prices of exported waste plastics from Japan (2011)
Table 10. Heating values of combustible wastes and fuels
Typical dimensions of RDF briquettes are 10-50 mm diameter and 10-100 mm length; a simply densified brick or a crushed product is not considered RDF in this guideline [38].
Table 11. Typical compositions of municipal wastes in Japan and the properties of RDF. Four classes are defined depending on the values in each category (JIS Z7311:2010).
Table 13. Specification guideline of RPF in JIS and typical analytical values of RPF and coal
Table 14. Specification of pyrolysis oil from waste plastics in TS Z0025:2004
Table 16. Compositions of pyrolysis gas from waste plastics and the gases in steel production facilities
Table 17. Waste plastics generation in factories of the manufacturing industry and waste management facilities | 8,135.4 | 2012-10-26T00:00:00.000 | [
"Economics"
] |
Association of cerebrospinal fluid NPY with peripheral ApoA: a moderation effect of BMI
Background: Apolipoprotein A-I (ApoA-I) and apolipoprotein B (ApoB) have emerged as novel cardiovascular risk biomarkers influenced by feeding behavior. Hypothalamic appetite peptides regulate feeding behavior and affect lipoprotein levels, and these effects vary across weight states. This study explores the relationship between body mass index (BMI), hypothalamic appetite peptides, and apolipoproteins, with emphasis on the moderating role of body weight in the association between neuropeptide Y (NPY), ghrelin, orexin A (OXA), and oxytocin in cerebrospinal fluid (CSF) and peripheral ApoA-I and ApoB. Methods: In this cross-sectional study, we included participants with a mean age of 31.77 ± 10.25 years, categorized into a normal weight (NW) (n = 73) and an overweight/obese (OW/OB) (n = 117) group based on BMI. NPY, ghrelin, OXA, and oxytocin levels in CSF were measured. Results: In the NW group, peripheral ApoA-I levels were higher, while ApoB levels were lower than in the OW/OB group (all p < 0.05). CSF NPY exhibited a positive correlation with peripheral ApoA-I in the NW group (r = 0.39, p = 0.001). Notably, participants with higher CSF NPY levels had higher peripheral ApoA-I levels in the NW group and lower peripheral ApoA-I levels in the OW/OB group, demonstrating a significant moderating effect of BMI on this association (R^2 = 0.144, β = -0.54, p < 0.001). The correlations between ghrelin, OXA, and oxytocin in CSF and peripheral ApoB exhibited opposing trends in the two groups (ghrelin: r = -0.03 and r = 0.04; OXA: r = 0.23 and r = -0.01; oxytocin: r = -0.09 and r = 0.04). Conclusion: This study provides hitherto undocumented evidence that BMI moderates the relationship between CSF NPY and peripheral ApoA-I levels. It also reveals a protective role of NPY in the NW population, contrasting with its role as a risk factor in the OW/OB population, with implications for cardiovascular disease risk.
Introduction
Abnormal lipoprotein metabolism represents a significant risk factor for cardiovascular disease (CVD), especially within the obese population, where CVD incidence and mortality rates are significantly increased [1]. CVD, a leading cause of global morbidity and mortality [2], affects approximately 330 million individuals, as reported in the 2019 China Cardiovascular Disease Health and Disease study [3]. As the Chinese population continues to grow and age, the prevalence of CVD continues to rise [4]. A cohort study tracking individuals for 6-19 years identified abnormal lipid metabolism as a relative risk factor for cardiovascular and all-cause mortality, particularly when high-density lipoprotein (HDL) levels fall below 1.3 mmol/L [5]. Therefore, early attention to abnormal lipoprotein profiles can help reduce CVD risk and maintain cardiovascular health.
Lipoproteins mainly encompass apolipoprotein A-I (ApoA-I), apolipoprotein B (ApoB), HDL, low-density lipoprotein (LDL), and very low-density lipoprotein (VLDL) [6]. LDL and HDL are the conventional cardiovascular risk biomarkers used in clinical practice [7]. Numerous clinical studies have suggested that ApoA-I and ApoB are superior predictors of cardiovascular risk compared with conventional lipid parameters such as LDL and HDL [8]. ApoA-I, a primary apolipoprotein of HDL, facilitates cholesterol efflux from macrophage foam cells, initiating the reverse cholesterol transport pathway. This process involves hepatic uptake and subsequent excretion via the gut to prevent excessive cholesterol accumulation in macrophages [9][10][11]. ApoA-I also modulates the inflammatory response [12] and mitigates endothelial cell toxicity [13], making it a widely recognized cardiovascular protective factor [14]. ApoB, as the primary cholesterol transporter to the arterial wall, is linked to an increased risk of CVD [6,15]. A reduction in ApoA-I or an elevation in ApoB can contribute to atherosclerosis and raise the risk of cardiovascular events [8,[16][17][18]. Recent research indicates that transgenic mice expressing high levels of human ApoA-I exhibit increased HDL levels, potentially offering important therapeutic insights for safeguarding against endothelial damage and reducing the incidence of CVD events [19]. While the association of ApoA-I and ApoB with CVD has been extensively documented, their interplay with hormones related to feeding behavior remains less explored [20].
Hypothalamic appetite peptides, acting as appetite-stimulating neurotransmitters or neuromodulators, not only regulate feeding behavior [21] but also exert an influence on lipid metabolism [22,23]. These hypothalamic appetite peptides encompass neuropeptide Y (NPY), ghrelin, orexin A (OXA), and oxytocin. Central administration of NPY has been shown to elevate adiposity and plasma triglyceride (TG) levels [24][25][26]. Animal studies have demonstrated that NPY can enhance VLDL production in rat livers by activating the sympathetic nerves in the liver, potentially accompanied by increased levels of ApoB [27]. Additionally, another study indicated that NPY receptor activation could stimulate the expression and secretion of ApoA-I in the liver [28]. There is an increasing consensus suggesting an interaction of ghrelin with TG-rich lipoproteins, HDL, VHDL, and certain LDL [29,30]. One study revealed that higher serum ghrelin concentrations were associated with an elevated ApoA-I/ApoB ratio in an adult Chinese population [31]. Serum levels of OXA have shown a negative correlation with unfavorable plasma lipoproteins, specifically LDL and VLDL, in a population study related to coronary heart disease [32]. In a study involving male subjects aged 50-85, higher oxytocin levels were associated with lower levels of the protective plasma lipoprotein HDL [33]. While it is evident that hypothalamic appetite peptides can affect lipoprotein metabolism, it remains essential to understand how body weight influences this effect.
It has been observed that appetite peptides have varying effects on lipoproteins in different weight states. Neuropeptide Y, primarily synthesized in the hypothalamus, plays a crucial role in lipid metabolism and CVD [34][35][36]. While NPY exhibits a negative correlation with energy expenditure in the general population [37], in the obese population NPY levels tend to increase [37,38] and stimulate the proliferation of peripheral white adipose tissue [39], possibly through neuronal transmission or the blood-brain barrier [37]. Ghrelin, a growth hormone-releasing peptide secreted by the stomach during fasting, serves diverse biological functions [40,41]. Serum ghrelin levels rise during fasting or in underweight patients [42] and are positively correlated with LDL and HDL levels in healthy males [43]. Interestingly, fasting CSF ghrelin levels in obese individuals are reportedly approximately 16% lower than in individuals with a normal weight [44], which may represent a protective mechanism to maintain energy balance. Ghrelin exerts direct peripheral effects on lipid metabolism, including increased white adipose tissue mass and stimulation of hepatic lipogenesis [45]. However, further research is warranted to ascertain whether different body weights influence the effect of ghrelin on peripheral lipid metabolism. OXA is produced by orexin neurons primarily located in the lateral and posterior hypothalamus, and its role in promoting appetite and lipid accumulation has been well documented [46]. Serum OXA levels are significantly reduced in obese individuals [47] and increase after weight loss [48]. Animal studies have revealed that OXA neurons in obese rats are sensitive to fat and are closely associated with elevated TG levels [49]. Oxytocin and its receptors have been implicated in regulating energy metabolism and food intake. Several studies have shown that oxytocin levels are positively correlated with BMI [33,50]. Conversely, one study revealed that in obese patients oxytocin levels were decreased, accompanied by an increase in TG and LDL levels, and oxytocin exhibited a negative correlation with LDL and TG [51]. Different BMI statuses result in varying oxytocin expression and lipid metabolism, highlighting the influence of body weight on the relationship between appetite peptides and lipid metabolism.
Although ApoA-I and ApoB are currently considered better risk markers of CVD than lipoproteins, current research predominantly focuses on the impact of peripheral appetite peptides on lipoproteins, despite appetite regulation primarily originating in the central nervous system [52]. Therefore, this study aims to investigate the relationship between four appetite peptides in CSF and ApoA-I and ApoB in obese individuals and to explore how body weight influences appetite peptides and lipid metabolism. These findings may provide valuable insights for preventing and treating dyslipidemia and CVD.
Participants
In this study, we recruited 190 Chinese men between September 2014 and January 2016. Participants were patients scheduled to undergo anterior cruciate ligament reconstruction surgery. According to the obesity diagnostic criteria outlined in the "Guidelines for the Prevention and Control of Overweight and Obesity in Chinese Adults", the participants were divided into two groups: the normal weight group (NW), consisting of 117 individuals (with a BMI ranging from 18.5 kg/m^2 to less than 24 kg/m^2), and the overweight/obese group (OW/OB), comprising 73 individuals (with a BMI of 24 kg/m^2 or higher). Besides sociodemographic data such as age, years of education, and smoking status, we also collected information on substance abuse and dependence through self-reporting, which was subsequently verified by the subjects' next of kin and family members.
The exclusion criteria for this study encompassed the following: (1) a family history of mental or nervous system diseases or central nervous system disorders, as assessed through the international neuropsychiatric interview; (2) systemic diseases identified through medical history and admission diagnosis; and (3) individuals taking lipid-lowering medications.None of the participants had a history of drug dependence or abuse, including cigarettes, which was confirmed by their next of kin.
This study received approval from the Institutional Review Committee of Inner Mongolia Medical University (approval number: YKD2015003) and adhered to the principles of the Helsinki Declaration.To ensure that both the subjects and their guardians comprehended the study's content, an anesthesiologist explained the research plan.We obtained written informed consent from adult participants and the guardians of minors.No financial compensation was provided to the subjects in this study.
Assessments, biological sample collection, and laboratory tests
Upon admission, the height and weight of the subjects were measured to an accuracy of 0.5 cm and 0.1 kg, respectively, and the body mass index was calculated as BMI (kg/m^2) = weight (kg) / height (m)^2. All subjects were required to fast for at least 8 h overnight, after which peripheral venous blood was drawn from the elbow in the morning on an empty stomach for various analyses, including routine blood tests and liver function, renal function, and blood lipid levels. These analyses included alanine aminotransferase (ALT), aspartate aminotransferase (AST), γ-glutamyltransferase (GGT), total cholesterol (CHO), TG, HDL, LDL, ApoA-I, and ApoB. The measurements were conducted using an automatic biochemical analyzer (HITACHI 7600, Hitachi, Tokyo, Japan).
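For concreteness, a small illustrative sketch of the BMI computation and group assignment; the cutoffs (18.5 and 24 kg/m^2) are those quoted from the guidelines above, while the function names are ours:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """BMI = weight (kg) / height (m)^2, as in the text."""
    return weight_kg / height_m ** 2

def bmi_group(b: float) -> str:
    """Group assignment per the cutoffs quoted above:
    18.5 <= BMI < 24 -> NW; BMI >= 24 -> OW/OB."""
    if 18.5 <= b < 24:
        return "NW"
    if b >= 24:
        return "OW/OB"
    return "underweight (excluded; see Limitations)"

print(bmi_group(bmi(70.0, 1.75)))  # BMI ~ 22.9 -> "NW"
```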
For patients undergoing anterior cruciate ligament reconstruction in China, lumbar puncture is a routine part of the preoperative procedure. In the morning before surgery, an anesthesiologist performed the lumbar puncture and collected a 5 ml CSF sample and 3 ml of peripheral blood. Each CSF and plasma sample was transferred to 0.5 ml test tubes and frozen at -80 °C.
The quantification of OXA levels in CSF was carried out using an enzyme-linked immunosorbent assay kit (Cloud-Clone Corp., Houston, TX, USA). NPY and oxytocin levels were assessed using radioimmunoassay kits (Phoenix Pharmaceuticals, Inc., Burlingame, CA, USA), and ghrelin levels were measured using a radioimmunoassay kit (DIAsource ImmunoAssays S.A., Louvain-la-Neuve, Belgium). The laboratory technicians conducting these analyses were blinded to the clinical data.
Statistical analysis
The Shapiro-Wilk and Levene tests were used to assess distributional normality and variance homogeneity for continuous variables, respectively. Continuous variables with homogeneous variance were compared using the unpaired t-test, while variables without homogeneity of variance, such as TG, GGT, and NPY, were compared between groups using the Mann-Whitney test. Categorical variables were assessed using the chi-square test. Descriptive statistics, such as means and standard deviations (SD), were used for continuous variables, whereas frequencies and percentages were employed for categorical variables. Correlation analysis was conducted using partial correlation, adjusting for smoking status. Hierarchical regression was used to analyze the moderating effect, with smoking status entered as a covariate, and Bonferroni correction for multiple comparisons was applied. A simple slope test was performed where the moderating effect was statistically significant. All statistical analyses were conducted using IBM SPSS, version 27.0 (IBM, Armonk, NY, USA), and the chart was generated using the corrplot function in the R programming language, version 4.3.1. All tests were two-tailed, with a significance threshold of p < 0.05.
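A minimal sketch of the hierarchical (moderated) regression described above, using ordinary least squares with an interaction term; the data frame `df` and its column names are hypothetical, and the standardization follows the description in the text:

```python
import statsmodels.formula.api as smf

# df is a hypothetical data frame with columns: apoa1, npy (CSF NPY),
# bmi_group (0 = NW, 1 = OW/OB), smoke (0/1). Continuous variables are
# z-standardized beforehand, as stated in the text.
for col in ["apoa1", "npy"]:
    df[col] = (df[col] - df[col].mean()) / df[col].std()

# Step 1: covariate and main effects only.
m1 = smf.ols("apoa1 ~ smoke + bmi_group + npy", data=df).fit()

# Step 2: add the BMI group x NPY interaction; a significant interaction
# coefficient indicates a moderating effect of BMI.
m2 = smf.ols("apoa1 ~ smoke + bmi_group * npy", data=df).fit()

print(m2.params["bmi_group:npy"], m2.pvalues["bmi_group:npy"])
print("R2 change:", m2.rsquared - m1.rsquared)
```

The interaction coefficient and the R^2 change from Model 1 to Model 2 correspond, in spirit, to what Table 3 reports.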
Demographic and clinical characteristics
In comparison to the OW/OB group, the NW group had a lower rate of smoking and higher levels of HDL (1.34 ...; Table 1). There were no significant differences in age, years of education, or NPY, OXA, and oxytocin levels in CSF between the two groups (all p > 0.05, Table 1).
Correlation analysis
To investigate the relationship between CSF appetite peptides and peripheral ApoA-I and ApoB in the two groups, we conducted a partial correlation analysis, adjusting for smoking status [53] (Table 2; Fig. 1). In the NW group, a positive correlation was observed between CSF NPY and peripheral ApoA-I (r = 0.39, p = 0.001), while no correlation was found in the OW/OB group (p > 0.05). Notably, the correlations between CSF NPY and peripheral ApoA-I in the NW and OW/OB groups had opposite directions (NPY: r = 0.39 and r = -0.13). Neither group showed a significant correlation between CSF NPY and peripheral ApoB (p > 0.05). Furthermore, there was no correlation between CSF ghrelin, OXA, or oxytocin and peripheral ApoA-I and ApoB (all p > 0.05). However, the correlations between ghrelin, OXA, and oxytocin levels in CSF and peripheral ApoB in the NW and OW/OB groups exhibited opposite trends (ghrelin: r = -0.03 and r = 0.04; OXA: r = 0.23 and r = -0.01; oxytocin: r = -0.09 and r = 0.04; Table 2; Fig. 1). These findings suggest that weight group may influence the relationship between NPY, ghrelin, OXA, and oxytocin in CSF and peripheral ApoA-I and ApoB. To test this hypothesis, we constructed a moderation model to assess the role of BMI in the relationship between each of the four appetite peptides in CSF and peripheral ApoA-I and ApoB.
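The smoking-adjusted partial correlations can be reproduced by residualizing both variables on the covariate and correlating the residuals, which is the textbook definition of a partial correlation; a hedged sketch with hypothetical column names:

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

def partial_corr(x, y, covar):
    """Pearson correlation of x and y after removing the linear effect
    of the covariate(s) from both variables."""
    Z = sm.add_constant(np.asarray(covar, dtype=float))
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]  # residuals of x
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]  # residuals of y
    return stats.pearsonr(rx, ry)

# e.g., within the NW subgroup (hypothetical frame `nw`):
# r, p = partial_corr(nw["npy"], nw["apoa1"], nw[["smoke"]])
```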
Moderation analysis for CSF appetite peptides and peripheral ApoA-I and ApoB
We next performed hierarchical multiple regression analysis on peripheral ApoA-I, standardizing all variables. In Model 1, where peripheral ApoA-I served as the dependent variable, we included smoking status, BMI groups, and CSF NPY; Model 2 introduced the interaction term BMI groups × NPY. The results from Model 2 indicated that, after controlling for smoking status, BMI groups exhibited a negative correlation with peripheral ApoA-I (β = -0.16, p < 0.001), while CSF NPY displayed a positive correlation with peripheral ApoA-I (β = 0.61, p < 0.001). Importantly, the BMI groups × NPY interaction was negatively correlated with peripheral ApoA-I (β = -0.54, p < 0.001) (Table 3).
To further explore the moderating effect of BMI, we conducted a simple slope analysis. In the NW group, individuals with higher levels of CSF NPY exhibited a significantly higher level of peripheral ApoA-I. In contrast, in the OW/OB group, individuals with higher levels of CSF NPY displayed a significantly lower level of peripheral ApoA-I (Fig. 2).
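A simple-slope test can be read off by refitting the model within each BMI group; an illustrative sketch under the same hypothetical names as above:

```python
import statsmodels.formula.api as smf

# Slope of ApoA-I on CSF NPY within each BMI group (0 = NW, 1 = OW/OB),
# controlling for smoking; opposite signs of the two slopes reproduce
# the crossover pattern shown in Fig. 2.
for g, sub in df.groupby("bmi_group"):
    fit = smf.ols("apoa1 ~ smoke + npy", data=sub).fit()
    print(g, fit.params["npy"], fit.pvalues["npy"])
```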
We employed the same hierarchical multiple regression method to examine the influence of BMI on the relationships of ghrelin, OXA, and oxytocin in CSF with peripheral ApoA-I. The results revealed that BMI did not moderate these relationships (all p > 0.05). Additionally, analysis of the impact of BMI on the relationships between the four appetite peptides in CSF and peripheral ApoB showed that BMI did not moderate these relationships either (all p > 0.05).
Discussion
This study marks the first attempt to explore the moderating effect of body weight on the relationships between the appetite peptides NPY, ghrelin, OXA, and oxytocin in CSF and peripheral ApoA-I and ApoB in males. While many studies have employed ApoB/ApoA-I as a predictor of cardiovascular risk [17], it remains unclear whether the correlation between appetite peptides and ApoB/ApoA-I hinges on ApoA-I or ApoB alone or on their combination through the ApoB/ApoA-I ratio. Therefore, we analyzed the relationships between appetite peptides and ApoA-I and ApoB, as well as the moderating role of weight in these relationships. Our primary findings revealed that BMI group moderated the relationship between CSF NPY and peripheral ApoA-I. In individuals with normal weight, high CSF NPY levels were associated with high peripheral ApoA-I levels, indicating that NPY is a protective factor for peripheral ApoA-I. In contrast, among OW/OB individuals, high CSF NPY levels were linked to low peripheral ApoA-I levels, making NPY a risk factor. It is widely acknowledged that NPY is synthesized within the arcuate nucleus of the hypothalamus and transported to the paraventricular nucleus via axons for secretion, participating in the regulation of food intake [54]. Research has shown that intracerebroventricular administration of NPY in normal-weight rats induces hyperphagia, weight gain, and increased fat mass accumulation [55]. NPY acts as a communication bridge between the hypothalamus and adipose tissue, with circulating NPY levels closely related to the central brain source of NPY; it can be integrated into the metabolic response of both the central and peripheral nervous systems [56]. Central nervous system (CNS) NPY potentially regulates peripheral lipoproteins through the hypothalamus-sympathetic nervous system-adipose tissue innervation pathway and within adipose tissue [39]. Animal studies have shown that activation of NPY receptors in normal-weight rats can promote the synthesis and secretion of ApoA-I in hepatocytes through the extracellular signal-regulated kinase 1/2 and protein kinase A signal transduction pathways [28]. Although there is no direct evidence of the impact of intracerebroventricular NPY injection on ApoA-I, Y5 receptors, primarily expressed in the CNS [57], act as a peripheral signal of energy availability [58]. CSF NPY can directly access circulating hormones owing to a semi-permeable blood-brain barrier [59]. Therefore, it is highly conceivable that CNS NPY also promotes the secretion of ApoA-I in the liver through these channels. ApoA-I is the principal apolipoprotein of HDL and plays a crucial role in cholesterol metabolism: it removes cholesterol from tissues, transports cholesterol back to the liver from arterial walls, and facilitates its excretion through bile [60,61]. In the NW population, increased CSF NPY levels promoted food intake and fat accumulation and enhanced cholesterol efflux, modulating free cholesterol and cholesterol esters in cells to maintain liver lipid metabolism balance. This, in turn, prevented lipid accumulation and reduced the formation of foam cells [62], potentially preventing atherosclerosis [63]. Hence, in this study, an increase in NPY levels was associated with higher ApoA-I concentrations, a potentially protective mechanism to maintain lipid balance in the liver [64,65]. Animal experiments revealed chronically elevated hypothalamic NPY activity in genetically obese ob/ob mice [66]. The expression of NPY mRNA is increased in the hypothalamus of rats made obese by an early diet [67], particularly in cases of chronic obesity, where NPY is highly expressed in the dorsal hypothalamic nucleus [68,69]. CSF NPY levels have been reported to increase significantly with body weight [37,38]. While NPY is predominantly secreted by CNS neurons, it is present at lower concentrations in the peripheral system [39]. Thus, the effect of CNS NPY on peripheral adipose tissue may be achieved indirectly through bidirectional neuronal and hormonal communication, and possibly through direct circulation across the blood-brain barrier [39]. NPY is secreted by sympathetic neurons to regulate inflammation [70]. In obese tissues, NPY binds to macrophage Y1 receptors, increasing cholesterol uptake and intracellular cholesterol content, and significantly inhibits the efflux of cholesterol to the extracellular acceptors ApoA-I and HDL in macrophages [71], reducing peripheral ApoA-I levels. This process is accomplished by decreasing the expression of ATP-binding cassette transporter A1, ATP-binding cassette G1, and scavenger receptor B1 in obese tissues [72][73][74]. These findings suggest that ApoA-I levels decrease with increasing CSF NPY levels among obese individuals.
Another noteworthy finding of this study is the absence of an association between CSF NPY and peripheral ApoB, and between CSF ghrelin, OXA, and oxytocin and peripheral ApoA-I and ApoB. In lean rats, intracerebroventricular administration of NPY into the third ventricle was found to stimulate sympathetic innervation of the liver and enhance VLDL-TG secretion through the CNS NPY Y1 receptor [24,75]. However, in mice, acute central administration of NPY did not affect hepatic VLDL production [27]. This difference may be attributed to variations in basal hepatic VLDL-TG production rates between rats and mice [24,76], indicating that species-specific factors influence hepatic VLDL metabolism. These interspecies differences may influence ApoB, a critical apolipoprotein in VLDL [77]. Genetic association studies in humans have reported conflicting results regarding the role of NPY gene and receptor polymorphisms in serum TG metabolism [78,79], further underscoring the impact of species differences. This variability might explain why no correlation between CSF NPY and ApoB was observed in this study.
Ghrelin is the only appetite peptide hormone produced by the stomach [80]; it promotes the uptake and synthesis of lipids in adipose tissue and the liver and inhibits lipolysis [81]. Some studies have reported that peripheral ghrelin affects TC, LDL, HDL, and other lipid parameters [29,30,82]. In the human bloodstream, ghrelin circulates in two forms, octanoylated ghrelin and deacylated ghrelin, which interact with ApoB-containing and ApoA-I-containing lipoproteins, respectively [30]. However, our study found no correlation between central ghrelin and peripheral ApoA-I and ApoB. On the one hand, circulating ghrelin crosses the blood-brain barrier and indirectly promotes appetite by binding to growth hormone secretagogue receptor 1a (GHS-R1a), which has no direct effect on peripheral lipoproteins. On the other hand, although studies have confirmed that central ghrelin can also directly regulate adipocyte metabolism [41], its role is reflected in increasing the expression of fat-storage-promoting enzymes in white adipocytes and decreasing the expression of the thermogenesis-related mitochondrial uncoupling proteins 1 and 3 in brown adipocytes, and no direct effect on lipoproteins has been reported [41]. Thus, further investigation is required to better understand the impact of CSF ghrelin on peripheral ApoA-I and ApoB.
Central OXA is able to cross the blood-brain barrier, promoting lipogenesis and inhibiting lipolysis in peripheral tissues [83,84]. A clinical study revealed that individuals in an OXA-increased group experienced a greater decrease in serum ApoB after bariatric surgery than those in an OXA-decreased group; these results indicate a positive association between increasing orexin levels and ApoB improvement [85]. However, these conflicting findings may be attributable to variations in the action of OXA receptors, emphasizing the need for further research to elucidate the specific mechanisms. Our study found no association between OXA and apolipoproteins, which may also be influenced by gender. Notably, endogenous androgens have been found to reduce the activation of OXA neurons in males [86]. This factor might contribute to our inability to obtain positive results. Furthermore, the influence of CNS OXA on energy metabolism primarily centers on the thermogenesis and energy consumption of brown adipose tissue (BAT). Therefore, further research is needed to determine whether these effects align with lipid accumulation in white adipose tissue (WAT).
It has been reported that the oxytocin receptor increases energy expenditure by stimulating BAT thermogenesis and promoting the browning of WAT [87]. Nevertheless, this study did not reveal a relationship between oxytocin and peripheral apolipoproteins, possibly because oxytocin promotes the browning of WAT rather than its formation; it may therefore not affect triglyceride accumulation or apolipoprotein levels.
Several limitations should be noted. Firstly, the study participants were individuals with anterior cruciate ligament injuries, and the preoperative stress response might have influenced CSF appetite peptide levels [88]. Nonetheless, central CSF samples reflect human neuroendocrine metabolism more accurately than blood samples. Secondly, the study included only male participants; to generalize the results, further studies with female participants should be conducted. Lastly, the study did not include a low-weight group (BMI < 18.5 kg/m^2). It is known that the secretion of appetite peptides varies in individuals with low body weight [44,89,90], which may have different effects on peripheral ApoA-I and ApoB. Further research is warranted to explore this specific scenario.
Conclusion
This study provides preliminary evidence of the moderating role of BMI in the relationship between CSF NPY and peripheral ApoA-I levels. Our findings reveal that NPY plays a protective role in individuals within the normal weight range while acting as a risk factor for those who are overweight or obese. This distinction is crucial, given its association with cardiovascular disease risk.
Fig. 1 Correlation of CSF appetite peptides with peripheral ApoA-I and ApoB in the NW group and the OW/OB group. Note: adjusted for smoking status. *p < 0.05, **p < 0.01. (A) The correlation between CSF appetite peptides and peripheral ApoA-I and ApoB in the NW group. (B) The correlation between general demographic variables, CSF appetite peptides, and peripheral ApoA-I and ApoB in the OW/OB group
Table 1
Demographic and clinical characteristics of the groups. Abbreviations: HDL, high-density lipoprotein; LDL, low-density lipoprotein; ALT, alanine aminotransferase; CHO, cholesterol; TG, triglyceride; GGT, gamma-glutamyl transferase; AST, aspartate aminotransferase; ApoA-I, apolipoprotein A-I; ApoB, apolipoprotein B; CSF, cerebrospinal fluid; NPY, neuropeptide Y; OXA, orexin A; NW, normal weight group; OW/OB, overweight/obese group. Note: All data are reported as mean ± SD and compared using the unpaired t-test, except smoking status (chi-square test) and TG, GGT, and CSF NPY (Mann-Whitney test). *p < 0.05
Table 2
Partial correlation analysis between CSF appetite hormones and peripheral ApoA-I and ApoB. Abbreviations: ApoA-I, apolipoprotein A-I; ApoB, apolipoprotein B; CSF, cerebrospinal fluid; NPY, neuropeptide Y; OXA, orexin A; NW, normal weight group; OW/OB, overweight/obese group. Note: Partial correlation was used for the calculation of associations between variables; p < 0.05 was considered significant. *p < 0.05. Adjusted for smoking status
Table 3
Hierarchical multiple regression of peripheral ApoA-I. Note: Hierarchical multiple regression was used to analyze ApoA-I. All data were standardized. Smoking status, BMI groups, and CSF NPY were entered in Step 1; BMI groups × CSF NPY was entered in Step 2. *Significant after Bonferroni correction for multiple comparisons. Abbreviations: B, non-standardized coefficient; SE, standard error; β, standardized coefficient beta; ApoA-I, apolipoprotein A-I; CSF, cerebrospinal fluid; NPY, neuropeptide Y | 5,754 | 2024-07-25T00:00:00.000 | [
"Medicine",
"Biology"
] |
Quenched normal approximation for random sequences of transformations
We study random compositions of transformations having certain uniform fiberwise properties and prove bounds which in combination with other results yield a quenched central limit theorem equipped with a convergence rate, also in the multivariate case, assuming fiberwise centering. For the most part we work with non-stationary randomness and non-invariant, non-product measures. Independently, we believe our work sheds light on the mechanisms that make quenched central limit theorems work, by dissecting the problem into three separate parts.
Remark 1.1
There is no fundamental reason for working with one-sided time other than that the randomness in our paper is mostly non-stationary, a context in which the concept of an infinite past is perhaps unnatural. For stationary randomness there is no obstacle to two-sided time. The other reason is plain philosophy: our concern will be the future, and whether the observed system has been running before time 0 we choose to ignore, without damage as long as our assumptions (specified later) hold from time 0 onward.
Consider an observable f : X → R. Given an initial probability measure μ, we write f̃_i and W̃_n for the corresponding fiberwise-centered random variables; note that all of these depend on ω. Next, we define σ_n^2, which also depends on ω. It is said that a quenched CLT equipped with a rate of convergence holds if there exists σ > 0 such that d(W̃_n, σZ) tends to zero with some (in our case, uniform) rate for almost every ω.
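A plausible rendering of the definitions just introduced, under the standard CLT normalization; the precise form of the displays is an assumption on our part, chosen to be consistent with the identities in Remark 1.2 below:

```latex
% Hedged reconstruction of the definitions (normalization assumed):
\[
  f_i = f \circ \varphi(i,\omega), \qquad
  W_n = \frac{1}{\sqrt{n}} \sum_{i=0}^{n-1} f_i,
\]
\[
  \tilde f_i = f_i - \mu(f_i), \qquad
  \tilde W_n = \frac{1}{\sqrt{n}} \sum_{i=0}^{n-1} \tilde f_i
             = W_n - \mu(W_n),
\]
\[
  \sigma_n^2(\omega) = \operatorname{Var}_\mu \tilde W_n
                     = \mu(\tilde W_n^2).
\]
```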
Here Z ~ N(0, 1) and the limit variance σ^2 is independent of ω. Moreover, d is a distance between probability distributions which we assume to satisfy d(W̃_n, σZ) ≤ d(W̃_n, σ_n Z) + d(σ_n Z, σZ) and d(σ_n Z, σZ) ≤ C|σ_n − σ|, at least when σ > 0 and σ_n is close to σ; and we assume that d(W̃_n, σZ) → 0 implies weak convergence of W̃_n to N(0, σ^2). One can find results in the recent literature that allow one to bound d(W̃_n, σ_n Z); see Nicol-Török-Vaienti [19] and Hella [13]. In this paper we supplement those by providing conditions which allow one to identify a non-random σ and to obtain a bound on |σ_n(ω) − σ| which tends to zero at a certain rate for almost every ω, a key feature of quenched CLTs.
Our strategy is to find conditions such that σ_n^2(ω) converges almost surely to σ^2 = lim_{n→∞} Eσ_n^2. This is motivated by two observations: (i) if lim_{n→∞} σ_n^2 = σ^2 almost surely, dominated convergence should yield the equation above, and (ii) Eσ_n^2 is the variance of W̃_n with respect to the product measure P ⊗ μ, since μ(W̃_n) = 0. Remark 1.2 One has to be careful and note that W̃_n has been centered fiberwise, with respect to μ instead of the product measure. Therefore, Var_{P⊗μ} W̃_n and Var_{P⊗μ} W_n differ by Var_P μ(W_n): Eσ_n^2 = Var_{P⊗μ} W̃_n = Eμ(W̃_n^2) = E Var_μ W̃_n = E Var_μ W_n = Var_{P⊗μ} W_n − Var_P μ(W_n). In special cases it may happen that Var_P μ(W_n) → 0, or even Var_P μ(W_n) = 0 if all the maps T_{ω_i} preserve the measure μ, whereby the distinction vanishes and the use of a non-random centering becomes feasible. We will briefly return to this point in Remark C.2, motivated by a result in [1]. A related observation is made in Remark A.3, which answers a question raised in [2] concerning the trick of "doubling the dimension".
To implement the strategy, we handle the terms on the right side of |σ_n^2(ω) − σ^2| ≤ |σ_n^2(ω) − Eσ_n^2| + |Eσ_n^2 − σ^2| separately, obtaining convergence rates for both. Note that these are of fundamentally different type: the first concerns almost sure deviations of σ_n^2 about the mean, while the second concerns convergence of said mean together with the identification of the limit. Remark 1.3 That the required bounds can be obtained illuminates the following pathway to a quenched central limit theorem: (1) d(W̃_n, σ_n Z) → 0 almost surely, (2) σ_n^2 − Eσ_n^2 → 0 almost surely, (3) Eσ_n^2 → σ^2 for some σ^2 > 0, where the last step involves the identification of σ^2. Remark 1.4 Let us emphasize that in general we do not assume P to be stationary or of product form; μ to be invariant for any of the maps T_{ω_i}; or P ⊗ μ (or any other measure of similar product form) to be invariant for the random dynamical system (RDS) associated to the cocycle ϕ.
Quenched limit theorems for RDSs are abundant in the literature, going back at least to Kifer [14]. Nevertheless, they remain a lively topic of research to date: recent central limit theorems and invariance principles in such a setting include Ayyer-Liverani-Stenlund [4], Nandori-Szasz-Varju [18], Aimino-Nicol-Vaienti [2], Abdelkader-Aimino [1], Nicol-Török-Vaienti [19], Dragičević et al. [9,10], and Chen-Yang-Zhang [8]. Moreover, Bahsoun et al. [5][6][7] establish important optimal quenched correlation bounds with applications to limit results, and Freitas-Freitas-Vaienti [11] establish interesting extreme value laws which have attracted plenty of attention during the past years. Structure of the paper: The main result of our paper is Theorem 4.1 in Sect. 4. It is an immediate corollary of Theorem 2.14 of Sect. 2, which concerns |σ_n^2(ω) − Eσ_n^2|, and of Theorem 3.9 of Sect. 3, which concerns |Eσ_n^2 − σ^2|. In Sect. 4 we also explain how the results of this paper extend to the vector-valued case f : X → R^d. As the conditions of our results may appear a bit abstract, Remark 4.5 in Sect. 4 contains examples of systems where these conditions have been verified.
At the end of the paper the reader will find several appendices, which are integral parts of the paper: in Appendix A we interpret the limit variance σ 2 in the language of RDSs and skew products. In Appendix B we present conditions for σ 2 > 0. In Appendix C, we discuss how the fiberwise centering in the definition ofW n affects the limit variance. For completeness, in Appendix D we elaborate on the structure of an invariant measure intimately related to the problem.
The Term |σ_n^2 − Eσ_n^2|
In this section we identify conditions which guarantee that, almost surely, |σ_n^2(ω) − Eσ_n^2| tends to zero at a specific rate. Standing Assumption (SA1): Throughout this paper we will assume that f is a bounded measurable function, that μ is a probability measure, and that a uniform decay of correlations holds. We introduce the correlation quantities v_{ij} (with δ_{ij} = 1 if i = j and δ_{ij} = 0 otherwise) and their centered counterparts; note that these are uniformly bounded. We also denote σ̃_n^2 = σ_n^2 − Eσ_n^2; thus, our objective is to show σ̃_n^2 → 0 at some rate. The following lemma is readily obtained by a well-known computation: Lemma 2.1 Assuming (2), there exists a constant C > 0 such that the stated bound holds for all ω.
Proof. First, we compute the relevant covariance sums; the last sums tend to zero by assumption.
We skip the elementary proof based on Lemma 2.1.
Remark 2.3
Of course, the upper bounds in the preceding results apply equally well to σ̃_n^2. The following result, which has been used in dynamical systems papers including Melbourne-Nicol [17], will be used to obtain an almost sure convergence rate of (1/n) Σ_{i=0}^{n-1} ṽ_i to zero: Theorem 2.4 Let (X_n) be a sequence of centered, square-integrable random variables. Suppose there exist C > 0 and q > 0 such that a uniform second-moment bound holds for all m ≥ 0 and n ≥ 1. Let δ > 0 be arbitrary. Then, almost surely, the averages (1/n) Σ_{i=0}^{n-1} X_i decay at a corresponding rate; a sketch of the presumed bounds follows below.
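The hypothesis and conclusion of Theorem 2.4 match the classical Gal-Koksma-type estimate; a hedged reconstruction of the missing displays (the exact constants and exponents are an assumption, chosen to be consistent with Remark 2.5 and with the rates in Remark 4.5):

```latex
% Hedged reconstruction of the stripped displays in Theorem 2.4:
\[
  \mathbb{E}\Big(\sum_{i=m}^{m+n-1} X_i\Big)^{2}
  \le C\,\big((m+n)^{q} - m^{q}\big)
  \quad \text{for all } m \ge 0,\ n \ge 1,
\]
\[
  \frac{1}{n}\sum_{i=0}^{n-1} X_i
  = O\!\big(n^{\,q/2-1}\,(\log n)^{3/2+\delta}\big)
  \quad \text{almost surely.}
\]
```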
Remark 2.5
In this paper the theorem is applied in the range 1 ≤ q < 2. In particular, n^q + m^q ≤ (n + m)^q then holds, so it suffices to establish an upper bound of the form Cn^q.
Our application of Theorem 2.4 will be based on the following standard lemma: Bounding the last sum in each case yields the result.
Dependent Random Selection Process
It is most interesting to study the case where the sequence ω = (ω_i)_{i≥1} is generated by a nontrivial stochastic process such that the measure P is not the product of its one-dimensional marginals. Essentially without loss of generality, we pass directly to the so-called canonical version of the process, which corresponds to the point of view that the sequence ω is the seed of the random process. In the following we briefly review some standard details. Let π_i : Ω → Ω_0 be the projection π_i(ω) = ω_i. The product sigma-algebra F is the smallest sigma-algebra with respect to which all the latter projections are measurable. For any I = (i_1, ..., i_p) ⊂ Z_+, p ∈ Z_+ ∪ {∞}, we may define the sub-sigma-algebra F_I = σ(π_i : i ∈ I) of F. (In particular, F = F_{Z_+}.) We also recall that a function u : Ω → R is F_I-measurable if and only if there exists an E^p-measurable function ũ : Ω_0^p → R such that u = ũ ∘ (π_{i_1}, ..., π_{i_p}), i.e., u(ω) = ũ(ω_{i_1}, ..., ω_{i_p}). With slight abuse of language, we will say below that the sigma-algebra F_I is generated by the random variables ω_i, i ∈ I, instead of the projections π_i. In the following, (α(n))_{n≥1} will denote a sequence bounding the strong-mixing coefficients for each n ≥ 1.
Standing Assumption (SA2): Throughout the rest of the paper we assume that the random selection process is strong mixing: α(n) can be chosen so that lim_{n→∞} α(n) = 0 and α is non-increasing.
as is well known. Ultimately, we will impose a rate of decay on α(n).
We denote by T_* the pushforward of a map T, acting on a probability measure m, i.e., (T_* m)(A) = m(T^{-1} A) for measurable sets A. We also introduce the pushforward measures μ_{k,l} and the functions f_{k,l} for l ≥ k; note that all of these objects depend on ω through the maps T_{ω_i}. We use the conventions μ_0 = μ, μ_{r,r+1} = μ and f_{k,k+1} = f here. Standing Assumption (SA3): Throughout the rest of the paper we assume the following uniform memory-loss condition: there exists a constant C ≥ 0 such that the bound (4) holds whenever k ≥ r, uniformly for (almost) all ω.
In the cocycle notation, (4) reads as follows. Note that, setting c̃ as below, Lemma 2.7 There exists a constant C ≥ 0 such that Proof The first bound holds, because while the choices g = f and g = f_{l,k+1} together yield Hence Note that here the expression in the curly braces only depends on the random variables by (6). On the other hand, the strong-mixing bound (3) implies Moreover, Collecting the bounds leads to the estimate Note that (6) immediately yields the estimate which by the boundedness of α results in Taking the minimum with respect to r proves the lemma.
The upper bound |E[c̃_{ij} c̃_{kl}]| ≤ Cη(j − i)η(l − k) of Lemma 2.7 yields the following intermediate result: In the third line we used the upper bound Next we investigate the remaining term by choosing r = j. Suppose furthermore that k − j ≥ l − k and recall that η is non-increasing. Then the right side of the above display is bounded above by Cη(l − k). In other words, if i ≤ j ≤ k ≤ l ≤ 2k − j, then Cη(j − i) min_{r : j ≤ r ≤ k} {η(k − r) + α(r − j)η(l − k)} is the tightest bound on |E[c̃_{ij} c̃_{kl}]| that Lemma 2.7 can provide. This observation motivates the following lemma.
Lemma 2.9 Define
(ii) There exist constants C_1 ≥ 0 and C_2 ≥ 0 such that Proof Part (i) is an immediate corollary of Lemma 2.7. As for part (ii), let us first prove the lower bound. Since all the terms in S(i, k) are nonnegative and α is non-increasing, we have for i < k that It remains to prove the upper bound in part (ii). We choose r = ⌊(k + j)/2⌋. Since η is summable, we have Next we split the last sum above into two parts, keeping in mind that α and η are non-increasing and η is also summable: This completes the proof.
The next two lemmas concern the case when η and α are polynomial.
Lemma 2.10
Let η(n) = Cn^{-ψ}, ψ > 1, and α(n) = Cn^{-γ}, γ > 0. Then Proof The lower bound follows immediately from Lemma 2.9(ii). Let first m ≥ 8. Then ⌊m/4⌋ ≥ m/8. Thus Lemma 2.9(ii) yields By counting terms, we can choose a large enough C_2 such that the claimed upper bound holds also for 1 ≤ m < 8. Secondly, regarding the last sum appearing above, observe that In other words, also Now, by Lemmas 2.9(i) and 2.10 we have Thus, Lemma 2.8 and bounds (7) and (8) yield The proof is complete.
Notice that for any ε > 0 we have n log n = O(n^{1+ε}). Applying Theorem 2.4 with the corresponding value of q yields the claim.
almost surely.
We are now in a position to prove the main result of this section. Then, for arbitrary δ > 0, the stated bound holds almost surely.
Proof By Corollary 2.2, combined with Proposition 2.13, we obtain the following upper bounds on |σ_n^2 − Eσ_n^2|. In each case the first term is the largest, so the proof is complete.
The Term |Eσ_n^2 − σ^2|
In this section we formulate general conditions that allow one to identify the limit σ^2 = lim_{n→∞} Eσ_n^2 and to obtain a rate of convergence.
Asymptotics of Related Double Sums of Real Numbers
In this subsection we consider double sequences of uniformly bounded numbers a_{ik}, (i, k) ∈ N^2, with the objective of controlling the sequence B_n for large values of n. In this subsection we make the following assumption, tailored to our later needs: |a_{ik}| ≤ η(k). We also denote the tail sums of η by R(K). We begin with a handy observation: Proof For all choices of 0 < K ≤ n we have the stated estimate; the error is uniform because of the uniform condition |a_{ik}| ≤ η(k).
uniformly, which concludes the proof.
The following lemma helps identify the limit of B_n and the rate of convergence under certain circumstances: The series on the right side converges absolutely. Furthermore, denoting b_k as below, also |b_k| ≤ η(k), so the series Σ_{k=0}^∞ b_k converges absolutely. Lemma 3.1 with L = K yields the estimate uniformly for all 0 < K ≤ n. Thus, the definition of r_k(n) gives (11). To prove the convergence of B_n, consider (11) and fix an arbitrary ε > 0. Fix K so large that R(K) < ε/2C. Since Σ_{k=0}^K r_k(n) + K n^{-1} tends to zero with increasing n, it is bounded by ε/2C for all large n. Then |B_n − Σ_{k=0}^∞ b_k| < ε.
Convergence of Eσ_n^2: A General Result
In this subsection we apply the results of the preceding subsection to the sequence a_{ik} = Ev_{ik}. Recall from (9) and (2) of (SA1) that the standing assumption in (10) is satisfied: |Ev_{ik}| ≤ 2η(k) and Σ_{k=0}^∞ η(k) < ∞. The next theorem is nothing but a rephrasing of Lemma 3.2 in the case a_{ik} = Ev_{ik} at hand.
Theorem 3.3 Suppose the limit lim_{n→∞} n^{-1} Σ_{i=0}^{n-1} Ev_{ik} exists for all k ≥ 0. Then the series converges; in particular, σ^2 ≥ 0. Furthermore, there exists a constant C > 0 such that the stated bound holds.
Convergence of Eσ_n^2: Asymptotically Mean Stationary P
For the rest of the section we assume P is asymptotically mean stationary, with mean P̄. In other words, there exists a measure P̄ such that, given a bounded measurable g : Ω → R, the Cesàro averages converge as in (12). The measure P̄ is then τ-invariant. We denote Ēg = ∫ g dP̄. We will shortly impose additional rate conditions; see (15).
Recall the cocycle property of the random compositions. In what follows, it will be convenient to use the notations g^a_{ik}, a ∈ {1, 2}, for the fiberwise quantities μ(f_i f_{i+k}) and μ(f_i)μ(f_{i+k}). For the results of this section we need the following preliminary lemma, which crucially relies on the memory-loss property (SA3), assumed to hold throughout this text.
Proof Note that we may rewrite the memory-loss property in (5) as On the other hand, which completes the proof.
The following lemma guarantees that both limits lim_{n→∞} n^{-1} Σ_{i=0}^{n-1} Eμ(f_i f_{i+k}) and lim_{n→∞} n^{-1} Σ_{i=0}^{n-1} Eμ(f_i)μ(f_{i+k}) exist and can be expressed in terms of P̄: lim_{n→∞} n^{-1} Σ_{i=0}^{n-1} Eg^a_{ik} = lim_{j→∞} Ē g^a_{jk}.
In particular, the limits exist.
Proof First we make the observation that, since P̄ is stationary, (13) implies the corresponding bound whenever i ≥ r. From assumption (2) it follows that lim_{r→∞} η(r) = 0. The sequence (Ē g^a_{ik})_{i=0}^∞ is therefore Cauchy, so lim_{i→∞} Ē g^a_{ik} exists and respects the same bound, i.e., (14) holds. We are now ready to show that lim_{n→∞} n^{-1} Σ_{i=0}^{n-1} Eg^a_{ik} exists, and in the process we see that it is equal to lim_{j→∞} Ē g^a_{jk}. Let ε > 0. Choose r ∈ N such that Cη(r) < ε/5, where C is the same constant as above. Then choose n_0 ∈ N satisfying the two following conditions. First, ‖f‖_∞^2 r/n_0 < ε/5. Second, by (12), |n^{-1} Σ_{i=0}^{n-1} E(g^a_{rk} ∘ τ^i) − Ē g^a_{rk}| < ε/5 for all n ≥ n_0. Next we show that |n^{-1} Σ_{i=0}^{n-1} Eg^a_{ik} − lim_{j→∞} Ē g^a_{jk}| < ε for all n ≥ n_0. The following five estimates yield the desired result. In the first estimate, note that ‖g^a_{ik}‖_∞ ≤ ‖f‖_∞^2 for all i, k ∈ N and a ∈ {1, 2}. In the second estimate, we apply (13). The third estimate follows the same reasoning as the first. The fourth estimate follows by the definition of n_0. The last estimate holds by (14). These estimates combined yield |n^{-1} Σ_{i=0}^{n-1} Eg^a_{ik} − lim_{j→∞} Ē g^a_{jk}| < ε for all n ≥ n_0. Since lim_{j→∞} Ē g^a_{jk} exists, the limit lim_{n→∞} n^{-1} Σ_{i=0}^{n-1} Eg^a_{ik} also exists and is equal to it. Theorem 3.3 yields the next result as a corollary.
Theorem 3.6 The series is absolutely convergent, and σ^2 equals its sum. Now the rest of the claim follows from Theorem 3.3.
Standing Assumption (SA4): For the rest of the paper we assume that P is asymptotically mean stationary and that there exist C_0 > 0 and ζ > 0 such that the rate bound (15) holds for all n ≥ 1. Here the sup is taken over all r, k ≥ 0 and a ∈ {1, 2}.
Lemma 3.7
For all integers 0 < n_1 < n_2, the stated bound holds, where C is uniform.
Next we use Lemma 3.7 to provide an upper bound on |n^{-1} Σ_{i=0}^{n-1} Eg^a_{ik} − lim_{r→∞} Ē g^a_{rk}|. Note that simply making the substitutions n_1 = 0 and n_2 = n in Lemma 3.7 does not yield a good result. Instead we divide the sum Σ_{i=0}^{n-1} Eg^a_{ik} into an increasing number of partial sums and then apply Lemma 3.7 separately to those parts.
Before proceeding to the next lemma, we define a function h_ζ : N → R, depending on the parameter ζ, as below. Lemma 3.7 yields the first estimate, and also gives the second. In the last line we used the fact that n/2 ≤ 2^{n_*} ≤ n, implying n − 2^{n_*} ≤ n/2. Collecting the estimates (20), (21) and (22), we deduce the bound. We are finally ready to state and prove the main result of this section: Theorem 3.9 Assume (SA1) and (SA3) with η(n) = Cn^{-ψ}, ψ > 1. Assume (SA4) with ζ > 0. Then the stated bound holds. Here σ^2 is the quantity appearing in Theorem 3.6.
Proof Let k ≥ 0. The previous lemma applied to the case a = 1 yields (23), and similarly the case a = 2 yields (24). Equations (23), (24) and Theorem 3.6 imply (25). We apply Theorem 3.3, which yields the bound for all 0 < K ≤ n. The estimate on the right side of (25) is minimized when h_ζ(n) = K^{-ψ}; therefore we choose K accordingly.
Main Result and Consequences
Theorems 2.14 and 3.9 immediately yield the main result of the paper, given next. The bounds shown are elementary combinations of these theorems, so we leave the details to the reader. Let us remind the reader of the Standing Assumptions (SA1)-(SA4) in Sects. 2 and 3. At the end of the section we also comment on the case of vector-valued observables. Here δ_{k0} = 1 if k = 0, and δ_{k0} = 0 if k ≠ 0.
The quantity σ^2 is well defined and nonnegative, the series is absolutely convergent, and lim_{n→∞} σ_n^2(ω) = σ^2 for every ω ∈ Ω^*. Moreover, the absolute difference |σ_n^2(ω) − σ^2| has the following upper bounds, for any ω ∈ Ω^*. Let us reiterate that Theorem 4.1 facilitates proving quenched central limit theorems with convergence rates for the fiberwise-centered W̃_n. Recalling the discussion from the beginning of the paper, we namely have the following trivial lemma (thus presented without proof): In other words, once a bound on the first term on the right side has been established (e.g., using methods cited earlier), one can use Theorem 4.1 to bound the second term almost surely. Typical metrics satisfying (26) are the 1-Lipschitz (Wasserstein) and Kolmogorov distances. The results presented above allow us to formulate some sufficient conditions for σ^2 > 0. For simplicity, we proceed in the ideal parameter regime (27). Generalizations of the next result involving any of the other parameter regimes of Theorem 4.1 are straightforward, and left to the reader. Then σ^2 > 0.
Proof Suppose σ^2 = 0. We will derive a contradiction in each case.
We will return to the question of whether σ^2 = 0 or σ^2 > 0 in Lemma B.1.
Vector-Valued Observables
Let us conclude by explaining, as promised, how the results extend with ease to the case of a vector-valued observable f : X → R^d. This time σ_n^2 is a d × d covariance matrix and, if the limit exists, so is σ^2 = lim_{n→∞} σ_n^2. The relevant functions on R^d are defined below, and the conclusions hold for all matrix norms.
Dropping the subindex n yields the limit matrix elements σ^2_{αβ}. Since α and β can take only finitely many values, simultaneous almost sure convergence of the matrix elements with the claimed rate follows.
According to the lemma, the rate of convergence of the covariance matrix σ_n^2 to σ^2 can be established by applying the earlier results to the finite family of scalar-valued observables (e_α + e_β)^T f. Further, one may apply Corollary 4.3 (or Lemma B.1) to the observables v^T f for all unit vectors v to obtain conditions for σ^2 being positive definite. Assuming now that it is, for certain metrics (e.g. 1-Lipschitz) one has the corresponding bound, where Z ~ N(0, I_{d×d}) and C = C(σ), which again yields an estimate of the type above. We refer the reader to Hella [13] for details, including the hard part of establishing an almost sure, asymptotically decaying bound on d(W̃_n, σ_n Z) in the vector-valued case.
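The reduction to the scalar observables (e_α + e_β)^T f presumably rests on polarization; a sketch of the identity being used (its precise form in the paper is an assumption):

```latex
% Hedged reconstruction: covariance entries from variances of scalar
% observables via polarization (the exact display is an assumption).
\[
  (\sigma_n^2)_{\alpha\beta}
  = \tfrac{1}{2}\Big(
      \operatorname{Var}_\mu\!\big[(e_\alpha+e_\beta)^{T}\tilde W_n\big]
      - \operatorname{Var}_\mu\!\big[e_\alpha^{T}\tilde W_n\big]
      - \operatorname{Var}_\mu\!\big[e_\beta^{T}\tilde W_n\big]
    \Big),
\]
```

so a convergence rate for each scalar variance yields the same rate for every matrix entry, and hence for σ_n^2 → σ^2 in any matrix norm.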
Remark 4.5
As an application, Hella [13] establishes the convergence rate n^{-1/2} log^{3/2+δ} n for random compositions of uniformly expanding circle maps in the regime (27). Furthermore, Leppänen and Stenlund [16] establish the same result for random compositions of non-uniformly expanding Pomeau-Manneville maps.
Acknowledgements Open access funding provided by University of Helsinki including Helsinki University Central Hospital.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Appendix A. Random Dynamical Systems
In this section we interpret the limit variance of Theorems 3.6 and 3.9 from the point of view of RDSs. Like elsewhere in the paper, we will assume the system possesses the good, uniform, fiberwise properties of the Standing Assumptions.
Recall that τ preserves the probability measure P̄ in (12), i.e., τ^{-1}F ∈ F and P̄(τ^{-1}F) = P̄(F) for all F ∈ F. One says that ϕ(·, ·, ·) in (1) is a measurable RDS on the measurable space (X, B) over the measure-preserving transformation (Ω, F, P̄, τ). The map defined below is called the skew product of the measure-preserving transformation (Ω, F, P̄, τ) and the cocycle ϕ(n, ω) on X. It is a measurable self-map on (Ω × X, F ⊗ B). In general, RDSs and skew products are in one-to-one correspondence; in particular, the measurability of one implies the measurability of the other.
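By analogy with the two-point skew product written out below for ϕ^{(2)}, the stripped display presumably reads as follows (the symbol Θ is our assumption):

```latex
% Hedged reconstruction of the skew-product definition:
\[
  \Theta(\omega, x) = \big(\tau\omega,\ \varphi(1,\omega)\,x\big),
  \qquad
  \Theta^{n}(\omega, x) = \big(\tau^{n}\omega,\ \varphi(n,\omega)\,x\big).
\]
```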
Thus, our task is to study the statistics of the projection of Θ^n(ω, x) to X. It now becomes interesting to study the invariant measures of Θ. However, the class of all invariant measures of Θ is unnatural, for we must incorporate the fact that τ preserves the measure P̄. For this reason, it is said that a probability measure P on F ⊗ B is an invariant measure for the RDS ϕ if it is invariant for Θ and the marginal of P on Ω coincides with P̄. In other words, Θ_* P = P and (π_1)_* P = P̄. We will also need to consider the cocycle ϕ^{(2)}(n, ω)(x, y) = (ϕ(n, ω)x, ϕ(n, ω)y) on the product space X × X. The corresponding skew product is Θ^{(2)}(ω, x, y) = (τω, ϕ^{(2)}(1, ω)(x, y)).
Of particular interest will be the sequence of functions Z_n : Ω × X × X → R defined by Z_n(ω, x, y) = S_n(ω, x) − S_n(ω, y). For then Z_n compares two copies of the process driven by the same random sequence ω but started at different initial points. Notice already that writing F(ω, x, y) = f(x) − f(y) yields the identity Z_n = Σ_{i=0}^{n−1} F ∘ (Φ^{(2)})^i. Standing Assumption (SA5) Assume there exists an invariant measure P^{(2)} for the RDS ϕ^{(2)} that is symmetric in the sense that
∫ h(ω, x, y) dP^{(2)}(ω, x, y) = ∫ h(ω, y, x) dP^{(2)}(ω, x, y)   (30)
for all bounded measurable h : Ω × X × X → R. The common marginal is then trivially an invariant measure for the RDS ϕ. Moreover, assume (33) and (34) are satisfied. While Standing Assumption (SA5) may, from the point of view of the initial setup of our problem, seem mysterious at first glance, it is quite natural. We will later provide an example of a more concrete condition which implies (SA5), and stick to the abstract setting for now.
Lemma A.1 The function F is centered with respect to P^{(2)} and satisfies the identity (35); the latter has the upper bound (36). Proof That F is centered is due to the symmetry property (30) of P^{(2)} in (SA5). Expanding Z_n² in terms of the differences f(ϕ(i, ω)x) − f(ϕ(i, ω)y), the same symmetry property also yields (35). The upper bound in (36) then follows from (33) and (34) in (SA5) together with (SA1).
Recall the expression of the limit variance σ² appearing in Theorems 3.6, 3.9 and 4.1. The next lemma connects this expression to the RDS notions when also (SA5) is assumed.
Lemma A.2
The limit variance σ² in Theorems 3.6, 3.9 and 4.1 satisfies the identity (38). Proof The first line of (38) is just the expression of σ² rewritten using (33) and (34). The second line then follows by (35). The last line holds by (29) together with (36) and (SA1).
Remark A.3
Note that the expression of σ² in (38) is exactly one half of the Green-Kubo formula in terms of the skew product Φ^{(2)}, its invariant measure P^{(2)}, and the observable F. This trick of "doubling the dimension" is not new. To our knowledge, however, (38) is a new observation at this level of generality. It answers a question raised in [2, Sect. 7] by Aimino, Nicol and Vaienti, who studied the special case where P̄, P and P^{(2)} are product measures, allowing for a non-random centering of S_n. The key that makes (38) an algebraic fact is the symmetry property (30) of the measure P^{(2)}. It deserves a separate remark that even though σ² does not in general (see Remark C.2) admit a classical Green-Kubo formula in terms of Φ, P, and f, "doubling the dimension" still yields (38).
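For concreteness, the classical Green-Kubo series alluded to here has the following shape; this display is our sketch of the standard formula in the doubled setting (with summability of the correlation terms supplied by (36)), not a quotation of (38):
\[
\sigma^2 = \frac{1}{2}\left[\int F^2 \, dP^{(2)}
+ 2 \sum_{n=1}^{\infty} \int F \cdot F \circ (\Phi^{(2)})^n \, dP^{(2)}\right].
\]
Since F is centered by Lemma A.1, no further centering of the correlation terms is needed.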
Appendix B. Positivity of σ²
In this section we return to the question of positivity of the limit variance σ². We shall assume (SA1) and (SA3)-(SA5), the strong-mixing assumption (SA2) being unnecessary here. Again we assume nice parameters (e.g. ψ > 2) for simplicity of the statements.
Lemma B.1 (i) σ² = 0 is equivalent to sup_{n≥0} ∫ Z_n² dP^{(2)} < ∞. (ii) σ² > 0 is equivalent to each of the following conditions: there exist c > 0 and N > 0 such that ∫ Z_n² dP^{(2)} ≥ cn for all n ≥ N. (iii) If ζ > 1, then σ² > 0 is equivalent to each of the following conditions: (iv) If P̄ is stationary, then σ² = 0 is equivalent to each of the following conditions: (v) If P̄ is stationary, then σ² > 0 is equivalent to each of the following conditions: From the point of view of applications, parts (iii)(b and d), (iv)(b) and (v)(b and d) may be the most relevant ones, as they involve the measures P̄ and μ and the process (S_n)_{n≥1}, which are immediately apparent from the definition of the system. Note that (iii)(b) is the same condition as in Corollary 4.3(ii).
Proof of Lemma B.1 By (36) we can appeal to a well-known result due to Leonov [15], which guarantees that the limit b = lim_{n→∞} ∫ Z_n² dP^{(2)} exists in [0, ∞], and b < ∞ if and only if sup_{n≥0} ∫ Z_n² dP^{(2)} < ∞. Moreover, the last condition is equivalent to the existence of G ∈ L²(P^{(2)}) such that F = G − G ∘ Φ^{(2)}. On the other hand, standard computations and the formula for σ² in (38) yield the corresponding dichotomy; here ψ > 2 was used. Thus, σ² > 0 is equivalent to linear growth of ∫ Z_n² dP^{(2)} to infinity, while σ² = 0 is equivalent to sup_{n≥0} ∫ Z_n² dP^{(2)} < ∞. Parts (i) and (ii) are proved. As for part (iii), (28) and Theorem 3.9 with ζ > 1 yield the corresponding expansion. If σ² > 0, the right side grows asymptotically linearly in n, and (a)-(d) are all satisfied. Finally, parts (iv) and (v) follow from (i) and (ii), respectively, because in the stationary case it holds that ∫ Z_n² d(P̄ ⊗ μ ⊗ μ) = ∫ Z_n² dP^{(2)} + O(1); see Lemma B.2 below.
We close the section with the lemma below, which was needed in the last part of the preceding proof.
Appendix C. Effect of the Fiberwise Centering of W n
In this section we discuss Remark 1.2 concerning the variance of W_n, as opposed to the fiberwise-centered W̄_n = W_n − μ(W_n). Note the expressions (43) and (44) for Var_{P̄⊗μ} W_n and Var_{P̄} μ(W_n), respectively. The difference of (43) and (44) equals Var_{P̄⊗μ} W̄_n, which under the assumptions of our paper is O(1). Therefore Var_{P̄⊗μ} W_n and Var_{P̄} μ(W_n) either converge or diverge simultaneously. We now derive their asymptotic expressions in terms of series, restricting to the case where the law P̄ of the selection process is stationary.
Remark C.2 Note that in the latter case the limit lim_{n→∞} Var_{P̄⊗μ} W_n is given by the classical Green-Kubo formula in terms of the skew product Φ, its invariant measure P, and the observable f. Let us stress that it is not the expression of σ², save for exactly the special case lim_{n→∞} Var_{P̄} μ(W_n) = 0. The latter special case is the very same in which Abdelkader and Aimino [1] establish a quenched central limit theorem with nonrandom centering, assuming i.i.d. randomness (P̄ = P₀^ℕ) in particular; see also Remark A.3.
Proof of Lemma C.1 We prove the statements concerning Var_{P̄} μ(W_n) first. We have a series representation of Var_{P̄} μ(W_n) in terms of coefficients a_{ik}. We will apply Lemma 3.2 to show convergence as n → ∞. To that end, we need control of a_{ik} in the limits i → ∞ and k → ∞. We begin with the first limit. By (5) below (SA3), we have a uniform bound whenever r ≤ i. Since P̄ is stationary, this yields that the sequence (E μ(f_i))_{i=0}^∞ is Cauchy, so its limit exists. Since P̄ = P by stationarity, (14) then gives that the limit of a_{ik} exists as i → ∞. Since η is summable, the corresponding error term vanishes as n → ∞. Both of the preceding bounds are uniform in k.
In order to bound a_{ik} as k → ∞, first note that (45) allows us to estimate a_{ik}, the last estimate being true by strong mixing. Picking r ∼ k/2 yields a bound that is uniform in i. Since γ > 1 and ψ > 1, this bound is summable, so Lemma 3.2 can now be applied; recall (10). The bound in (11) becomes explicit, and choosing K ∼ n^{1/min{γ,ψ}} yields the upper bound Cn^{1/min{γ,ψ}−1} claimed.
The expressions of the limits b_k in terms of the RDS notation are obtained with the help of (32)-(34), recalling again P = P̄ due to stationarity.
Finally, the claims regarding Var_{P̄⊗μ} W_n = Var_{P̄⊗μ} W̄_n + Var_{P̄} μ(W_n) follow since we already have control of both terms on the right side: in the stationary case at hand, Theorem 3.9 applies with any ζ > 1, yielding Var_{P̄⊗μ} W̄_n = σ² + O(n^{1/ψ−1}).
Appendix D. (SA5′): A Less Abstract Substitute for (SA5)
Standing Assumption (SA5) is abstract in that it involves the invariant measure P^{(2)} of the RDS ϕ^{(2)}, and a number of properties of the measure, which are not obvious from the setup of the system at the beginning of the paper. For that reason we give in this section, as an example, another assumption which (i) is more concrete in that it involves only the initial measure μ and the basic cocycle ϕ, and (ii) is stronger than (SA5). Standing Assumption (SA5′) Throughout this section we assume the following: the measures ϕ(n, ω)_*μ have uniformly square-integrable densities with respect to μ, i.e., there exists K > 0 such that
‖dϕ(n, ω)_*μ / dμ‖_{L²(μ)} ≤ K   (46)
for all n and ω. Moreover, for every bounded measurable g : X → R and ε > 0 there exists N ≥ 0 such that the memory-loss property
|∫ g(ϕ(n, τ^m ω)x) d(ϕ(m, ω)_*μ)(x) − ∫ g(ϕ(n, τ^m ω)x) dμ(x)| ≤ ε   (47)
holds for n ≥ N, m ≥ 0 and all ω. The rest of the section is devoted to investigating some consequences of (SA5′). Note that (47) asks that the integrals of x → g(ϕ(n, τ^m ω)x) with respect to the two measures ϕ(m, ω)_*μ and μ are essentially the same for large n, uniformly in m and ω. The role of (46) is to allow for uniform approximations of the compositions h ∘ (Φ^{(2)})^n, n ≥ 0, by compositions ĥ ∘ (Φ^{(2)})^n, where h is measurable and ĥ is "simple": observe that (h − ĥ) ∘ (Φ^{(2)})^n is not guaranteed to be uniformly (in n) small in L¹(P̄ ⊗ μ ⊗ μ), even if h − ĥ is small, without some assumption. To that end, let us already prove a little lemma: Lemma D.1 Let h : Ω × X × X → R belong to L²(P̄ ⊗ μ ⊗ μ). Then ‖h ∘ (Φ^{(2)})^n‖_{L¹(P̄⊗μ⊗μ)} ≤ K² ‖h‖_{L²(P̄⊗μ⊗μ)} holds for all n ≥ 0, with K as in (46). Proof Write λ = P̄ ⊗ μ ⊗ μ for brevity. Observe that ∫ |h| ∘ (Φ^{(2)})^n dλ can be bounded by Hölder's inequality: the integral over the two X factors is controlled by the densities in (46), and the remaining integral over Ω is handled since P̄ is stationary. Combining the estimates and taking square roots yields the result.
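To make the source of the constant K² visible, here is a sketch of the computation behind Lemma D.1 (our notation; ρ_{n,ω} = dϕ(n, ω)_*μ/dμ denotes the density from (46)):
\[
\int |h| \circ (\Phi^{(2)})^n \, d\lambda
= \int_\Omega \int\!\!\int |h|(\tau^n\omega, x, y)\, \rho_{n,\omega}(x)\, \rho_{n,\omega}(y)\, d\mu(x)\, d\mu(y)\, d\bar{P}(\omega)
\le K^2 \int_\Omega \bigl\| h(\tau^n\omega, \cdot, \cdot) \bigr\|_{L^2(\mu\otimes\mu)} \, d\bar{P}(\omega),
\]
by the Cauchy-Schwarz inequality in the (x, y) variables, since ‖ρ_{n,ω} ⊗ ρ_{n,ω}‖_{L²(μ⊗μ)} = ‖ρ_{n,ω}‖²_{L²(μ)} ≤ K². Another application of Cauchy-Schwarz on Ω, together with the τ-invariance of P̄, bounds the last integral by ‖h‖_{L²(λ)}.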
Let H denote the set of all measurable functions h : Ω × X × X → R such that lim_{n→∞} ∫ h ∘ (Φ^{(2)})^n d(P̄ ⊗ μ ⊗ μ) exists. Let A denote the set of all measurable cubes in Ω × X × X. Clearly A is nonempty and closed under finite intersections, and it contains the product space Ω × X × X. Clearly H is closed under linear combinations. Furthermore, the argument above shows 1_A ∈ H for all A ∈ A. Suppose now that h_k ∈ H are nonnegative functions increasing to a bounded function h. Showing h ∈ H proves that H contains all bounded functions that are measurable with respect to the sigma-algebra σ(A) = F ⊗ B ⊗ B. We will show h ∈ H next.
Let ε > 0 be fixed. Since 0 ≤ h_k ↑ h where h is bounded, by the bounded convergence theorem there exists k₀ = k₀(ε) such that ‖h − h_{k₀}‖_{L²(P̄⊗μ⊗μ)} < ε. Thus, by Lemma D.1, the integrals of h ∘ (Φ^{(2)})^n and h_{k₀} ∘ (Φ^{(2)})^n against P̄ ⊗ μ ⊗ μ differ by at most K²ε uniformly in n ≥ 0; since the latter converge, the limit superior and limit inferior for h differ by at most 2K²ε, and as ε was arbitrary the limit exists. Hence h ∈ H. Therefore, by the monotone class theorem H contains all bounded measurable functions.
This yields the claims concerning P.
We are now in a position to prove the promised fact:
The proof is complete.
D.2 Disintegration of the Invariant Measure P (2)
In this subsection we shed some light on the invariant measure P^{(2)} of the RDS ϕ^{(2)} with the aid of disintegrations. The mathematical constructions here are well known, and we include this part for completeness. The results call for nice structure of the measurable spaces: we assume that both (X, B) and (Ω₀, E) are standard measurable spaces.
We are ready to state another basic fact: Lemma D.5 (1) There exists a unique probability measure Q̄ on (Ω̄, F̄) which is invariant for τ̄ and satisfies (π₊)_* Q̄ = P̄.
(2) There exists an essentially unique family of set functions q_ω : F₋ → [0, 1], ω ∈ Ω, such that (i) the map ω → q_ω(E) is measurable for all E ∈ F₋; (ii) q_ω is a probability measure for P̄-a.e. ω ∈ Ω; (iii) the corresponding disintegration identity holds for all h ∈ L¹(Q̄). Proof (1) Since (Ω₀, E) is a standard measurable space, the shift-invariant measure Q̄ having P̄ as its marginal is uniquely constructed with the aid of Kolmogorov's extension theorem by requiring that the finite-dimensional distributions are translation invariant and coincide with those of P̄. See, e.g., Arnold [3, Appendix A.3] for details. | 9,479.4 | 2018-10-25T00:00:00.000 | [
"Mathematics"
] |
Behavioral and Electrophysiological Characterization of Dyt1 Heterozygous Knockout Mice
DYT1 dystonia is an inherited movement disorder caused by mutations in DYT1 (TOR1A), which codes for torsinA. Most patients have a trinucleotide deletion (ΔGAG) corresponding to a glutamic acid in the C-terminal region (torsinAΔE). Dyt1 ΔGAG heterozygous knock-in (KI) mice, which mimic the ΔGAG mutation in the endogenous gene, exhibit motor deficits, a decreased frequency of spontaneous excitatory post-synaptic currents (sEPSCs), and normal theta-burst-induced long-term potentiation (LTP) in the hippocampal CA1 region. Although Dyt1 KI mice show decreased hippocampal torsinA levels, it is not clear whether the decreased torsinA level itself or torsinAΔE affects synaptic plasticity. To analyze the effect of partial torsinA loss on motor behaviors and synaptic transmission, Dyt1 heterozygous knock-out (KO) mice were examined as a model of a frame-shift DYT1 mutation in patients. Consistent with Dyt1 KI mice, Dyt1 heterozygous KO mice showed motor deficits in the beam-walking test. Dyt1 heterozygous KO mice showed hippocampal torsinA levels lower than those in Dyt1 KI mice. Reduced sEPSCs and normal miniature excitatory post-synaptic currents (mEPSCs) were also observed in acute hippocampal brain slices from Dyt1 heterozygous KO mice, suggesting that the partial loss of torsinA function in Dyt1 KI mice causes action potential-dependent neurotransmitter release deficits. On the other hand, Dyt1 heterozygous KO mice showed enhanced hippocampal LTP, and normal input-output relations and paired-pulse ratios in extracellular field recordings. The results suggest that maintaining an appropriate torsinA level is important to sustain normal motor performance, synaptic transmission and plasticity. Developing therapeutics to restore a normal torsinA level may help to prevent and treat the symptoms of DYT1 dystonia.
Introduction
Dystonia is clinically defined as sustained muscle contractions that often involve both agonist and antagonist muscles, causing twisting and repetitive movements or abnormal postures [1,2]. There are more than 20 different forms of monogenic dystonia, but only half have been linked to a specific gene [3]. The most common generalized, early-onset form of dystonia is DYT1 dystonia [Oppenheim's dystonia; Online Mendelian Inheritance in Man (OMIM) identifier #128100, Dystonia 1]. DYT1 dystonia is inherited in an autosomal dominant fashion and has a reduced penetrance of 30% to 40% [4]. DYT1 dystonia is caused by mutations in DYT1/TOR1A, which codes for torsinA, a protein of 332 amino acids. Most patients have a trinucleotide deletion (ΔGAG) corresponding to a glutamic acid (torsinAΔE) at amino acid position 302 or 303 in the carboxyl-terminal region [5]. The ΔGAG mutation is related not only to generalized dystonia, but also to some forms of focal or multifocal dystonia [6,7]. Homozygous DYT1 ΔGAG mutation carriers have not been reported in humans. Although the ΔGAG mutation is the most common, an 18 bp in-frame deletion corresponding to amino acid positions 323-328 [8][9][10], a 4 bp deletion resulting in a frame-shift at amino acid position 311 and a premature stop codon at position 325 [11], and an Arg288Gln missense mutation [12] have also been reported. Whether torsinAΔE leads to loss-of-function, toxic gain-of-function, or both remains unknown.
While many genetic animal models of DYT1 dystonia have been reported [13], the Dyt1 ΔGAG heterozygous knock-in (KI) mouse is an ideal genetic mouse model for DYT1 dystonia with the ΔGAG mutation because it expresses the mutant allele from the endogenous promoter [14,15]. TorsinA levels are reduced in Dyt1 KI mouse brains, similar to those in human fibroblasts derived from a DYT1 dystonia patient, suggesting that the ΔGAG mutation in the endogenous gene causes a partial loss of torsinA function in both humans and mice [15][16][17]. It appears that WT and mutant torsinA have different degradation pathways, i.e., WT torsinA is degraded through the proteasome pathway, while torsinAΔE is rapidly degraded via both the proteasome and lysosomal-autophagy pathways [18,19]. Support for a toxic gain-of-function, including a dominant-negative function, of torsinAΔE largely comes from over-expression models of the mutant protein. For example, overexpressing torsinAΔE resulted in a redistribution of torsinAΔE to the nuclear envelope in Drosophila melanogaster and various mammalian cell lines [20][21][22][23][24][25][26]. Redistributed torsinAΔE recruits wild-type (WT) torsinA from the endoplasmic reticulum [27,28] and may lead to a loss-of-function of WT torsinA. TorsinAΔE, but not WT torsinA, interacts with proteins involved in dopamine synthesis and storage in cultured cells [29,30]. These studies suggest that the ΔGAG mutation may introduce toxic gain-of-function effects.
Recombinant human torsinAΔE produced by a baculovirus expression system has ATPase activity in vitro with kcat and Km values similar to those of recombinant WT torsinA [21]. Another study showed an approximately 35% decrease in the ATPase activity of recombinant human torsinAΔE produced in Escherichia coli [31]. On the other hand, recombinant mutant torsinA carrying the −18 bp mutation produced in Escherichia coli shows an approximately 75% decrease in ATPase activity [31]. These studies suggest that ATPase activity differs between the mutation types. DYT1 dystonia is a neuronal circuit disorder rather than a neurodegenerative disorder [32]. Electrophysiological recording in the hippocampal slice is one of the well-established experimental models to examine synaptic transmission and plasticity in rodent brains, and high-quality recording data can be obtained with relative ease. The hippocampal CA3 pyramidal cells project to the CA1 pyramidal cells through Schaffer collaterals. Theta-burst stimulation in the CA3 region induces long-term potentiation (LTP) in the CA1 region. Dyt1 KI mice exhibit normal theta-burst-induced LTP in the hippocampal CA1 region and no significant difference in input-output curves [33]. On the other hand, Dyt1 KI mice exhibit enhanced paired-pulse ratios (PPRs) in the CA1 region. Consistent with the PPR deficits, whole-cell recording from CA1 neurons showed a decreased frequency of sEPSCs and a normal frequency of mEPSCs, suggesting action potential-dependent presynaptic neurotransmitter release deficits [34]. Although the ΔGAG mutation may introduce loss-of-function, toxic gain-of-function including dominant-negative effects, or both, it is still not clear how each affects synaptic transmission and plasticity. The Dyt1 heterozygous KO mouse, carrying a frame-shift mutation produced by the deletion of exons 3-4, is an ideal model to analyze the effect of partial torsinA loss due to the loss of one allele on motor and electrophysiological deficits. It is also clinically relevant because the Dyt1 heterozygous KO mouse mimics a frame-shift type DYT1 mutation in DYT1 dystonia patients [11]. Although it was suggested that it would be critical to compare the behaviors of Dyt1 heterozygous KI mice with Dyt1 heterozygous KO mice [35], motor performance in Dyt1 heterozygous KO mice has not been reported. Here, motor behavior, LTP, input-output curves, PPRs, sEPSCs and mEPSCs were analyzed in Dyt1 heterozygous KO mice as a model of partial loss of torsinA function.
Motor deficits in Dyt1 heterozygous KO mice
Dyt1 heterozygous KO mice were produced as previously described [36]. In contrast to Dyt1 homozygous KO mice, Dyt1 heterozygous KO mice can mature to adulthood. Behavioral semi-quantitative assessments of motor disorders did not show any significant alterations in hindpaw clasping, hindpaw dystonia, truncal dystonia or balance adjustments to a postural challenge. The results suggest that Dyt1 heterozygous KO mice did not exhibit overt dystonic symptoms. Motor coordination and balance were further analyzed by the beam-walking test. Mice were trained to traverse a medium square beam for two days, and the trained mice were tested twice on four different beams. The numbers of hindpaw slips were counted and compared between Dyt1 heterozygous KO mice and control mice. Dyt1 heterozygous KO mice showed 205% more slips in the beam-walking test (Fig. 1; p = 0.037), suggesting motor deficits. The results suggest that the decreased torsinA function resulting from a single null allele is sufficient to produce motor deficits. In an accelerated rotarod test, each mouse was put on an accelerating rotarod and the latency to fall was measured. Dyt1 heterozygous KO mice did not show a significant difference in latency to fall (p = 0.3, data not shown). Since mice can hold onto the rotarod with four paws, and the latency to fall is an indicator of total motor performance, the results suggest no significant deficits in overall four-paw motor performance.
Since acute hippocampal slices were used in the following electrophysiological experiments to examine synaptic transmission and plasticity, the hippocampal torsinA levels were also quantified in Dyt1 heterozygous KO mice. The hippocampi were dissected from the same Dyt1 heterozygous KO mice and WT littermates, and the torsinA levels were compared by Western blot analysis. Dyt1 heterozygous KO mice showed significant reductions of the hippocampal torsinA levels (WT, 100.0 ± 14.7%, n = 4; heterozygous KO, 33.1 ± 2.4%, n = 4; p = 0.004; Fig. 2B). The hippocampal torsinA levels in Dyt1 heterozygous KO mice were reduced more than those in KI mice (KI, 66.4 ± 8.6%, n = 3; WT, 100 ± 7.5%, n = 3; p = 0.04), as reported in a previous paper [34].
Fig. 1. Beam-walking test in Dyt1 heterozygous KO mice. Dyt1 heterozygous KO mice (+/Δ) showed significantly increased numbers of slips compared to control mice (CT). The data were analyzed after log transformation to obtain a normal distribution. Control mice were normalized to zero. The vertical bars represent means ± standard errors. *P < 0.05.
Fig. 2. Reduced striatal and hippocampal torsinA levels in Dyt1 heterozygous KO mice. The striatal (A) and hippocampal (B) torsinA levels were compared between wild-type (WT) and Dyt1 heterozygous KO (+/Δ) mice by western blot. Representative images of western blot (left) and the quantified torsinA levels (right) are shown for each case. The torsinA levels were normalized to β-tubulin and the vertical bars represent means ± standard errors. **p < 0.01, ***p < 0.001.
Normal mEPSCs in Dyt1 heterozygous KO mice
Since sEPSCs are a mixture of signals derived from action potential-dependent and -independent transmitter release, action potential-independent spontaneous EPSCs were further analyzed by blocking voltage-dependent sodium channels in addition to blocking GABA_A receptors in Dyt1 heterozygous KO mice and WT littermates (Fig. 4A). There were no significant changes in the frequency of mEPSCs in Dyt1 heterozygous KO mice compared to WT mice. The mEPSC data suggest that the action potential-independent spontaneous pre-synaptic release was normal. On the other hand, there was a significant decrease in the frequency of sEPSCs in Dyt1 heterozygous KO mice (Fig. 3B); these results suggest that the action potential-dependent pre-synaptic release was impaired in Dyt1 heterozygous KO mice. These results were similar to those in Dyt1 KI mice as previously reported [34].
Enhanced hippocampal CA1 LTP in Dyt1 heterozygous KO mice
Extracellular field-recording of the Schaffer collaterals pathway in the acute hippocampal slice is one of the established methods to examine synaptic transmission and plasticity. Theta-burst stimulations in the hippocampal CA3 region induce LTP in the CA1 region. In the previous studies, Dyt1 KI mice show no significant change of CA1 LTP in comparison to WT littermates [33,39]. Here, CA1 LTP was measured in Dyt1 heterozygous KO mice and their WT littermates. Unlike Dyt1 KI mice, Dyt1 heterozygous KO mice had significantly enhanced CA1 LTP compared to their WT littermates (WT, 132 ± 6%, 29 slices from 5 mice; +/Δ, 153 ± 5.7%, 28 slices from 6 mice, p < 0.05; Fig. 5). These results suggest that the decreased torsinA level, lower than those in Dyt1 KI mice, produces enhanced LTP.
No significant difference in input-output curves in Dyt1 heterozygous KO mice
To determine whether the stimulus intensity-dependent basal synaptic transmission is altered in Dyt1 heterozygous KO mice, input-output curves were obtained by measuring the post-synaptic potential slope at varying stimulus intensities. Dyt1 heterozygous KO mice showed no significant change in the input-output curve compared to WT littermates (Fig. 6A; p = 1.00). The present results suggest that there is no significant change in the overall baseline of synaptic transmission in Dyt1 heterozygous KO mice, which is similar to the normal input-output curve reported in Dyt1 KI mice [34,39].
No significant difference in PPRs
Extracellular field recordings at various inter-stimulus intervals are commonly used to analyze the probability of synaptic vesicle release. When two consecutive action potentials are elicited in the presynaptic neurons, the amplitude of the second excitatory post-synaptic current (EPSC) is inversely related to the amplitude of the first. Paired-pulse facilitation is observed when the first EPSC is smaller; on the other hand, paired-pulse depression is observed when the first EPSC is larger. The PPRs are inversely proportional to the probability of synaptic vesicle release [40]. A previous study showed enhanced PPRs in Dyt1 KI mice, suggesting pre-synaptic release deficits. To examine whether the enhanced PPRs are caused by loss of torsinA function, PPRs were examined in Dyt1 heterozygous KO mice. Overall, Dyt1 heterozygous KO mice showed normal PPRs in comparison to WT littermates (Fig. 6B).
Discussion
Consistent with Dyt1 KI [14] and Dyt1 homozygous knock-down (KD) mice [41], Dyt1 heterozygous KO mice showed motor deficits in the beam-walking test. This result suggests that partial loss of torsinA function derived from a single null allele is sufficient to produce motor deficits similar to those in Dyt1 KI and Dyt1 homozygous KD mice. Dyt1 heterozygous KO mice expressed reduced hippocampal torsinA levels, which were lower than those in Dyt1 KI mice [34]. Whole-cell recordings of sEPSCs and mEPSCs from CA1 neurons revealed that Dyt1 heterozygous KO mice had action potential-dependent neurotransmitter release deficits similar to those in Dyt1 KI mice, suggesting that partial loss of torsinA function itself causes the synaptic transmission deficits. On the other hand, Dyt1 heterozygous KO mice showed enhanced CA1 LTP, in contrast with the normal CA1 LTP in Dyt1 KI mice [33]. The enhanced LTP in Dyt1 heterozygous KO mice is likely caused by the lower torsinA levels. The results suggest that a significant reduction of torsinA levels causes a profound deficit in synaptic plasticity. Overall, the current results suggest that maintaining an appropriate torsinA level is important to sustain normal motor performance, synaptic transmission and plasticity. Developing therapeutics to restore a normal torsinA level may help to prevent and treat the symptoms of DYT1 dystonia.
Dyt1 heterozygous KO mice showed reduced torsinA levels and motor deficits in beam-walking. Since Dyt1 KI mice also show reduced torsinA levels [16,34] and motor deficits in beam-walking [14], the motor deficits found in Dyt1 KI mice may be caused by a partial loss of torsinA function derived from the single ΔGAG allele. Beam-walking deficits are commonly observed in genetic dystonia rodent models, such as knock-in [14], knock-down [41], conditional KO [17,36,37], and transgenic [42][43][44] models of DYT1 dystonia; KO [45] and conditional KO [46,47] models of DYT11 dystonia; and a DYT12 dystonia model [48], suggesting that beam-walking is an excellent behavioral test to evaluate motor symptoms in genetic dystonia mouse models.
Since Dyt1 KI mice show action potential-dependent pre-synaptic release deficits in acute hippocampal slices, synaptic transmission was measured here in Dyt1 heterozygous KO mice to examine whether the release deficits in Dyt1 KI mice are caused by a partial loss of torsinA function. Dyt1 heterozygous KO mice showed a decreased frequency of sEPSCs and normal mEPSCs, similar to those in Dyt1 KI mice [34]. On the other hand, Dyt1 heterozygous KO mice exhibited normal PPRs, whereas Dyt1 KI mice show enhanced PPRs. The reason for this difference is not known. Although Dyt1 KI mice show enhanced PPRs, the enhancements appeared only in two out of the eight inter-event intervals examined. Since Dyt1 KI mice show normal PPRs in the other six inter-event intervals, similar to Dyt1 heterozygous KO mice, the discrepancy in PPRs between Dyt1 KI mice and Dyt1 heterozygous KO mice is relatively small. Overall, the results suggest that the action potential-dependent pre-synaptic release deficits in Dyt1 KI mice may be caused by a partial loss of torsinA function.
The molecular mechanism of the action potential-dependent presynaptic release deficits in Dyt1 heterozygous KO and KI mice is not known. One possible scenario could involve an interaction between torsinA and snapin [49,50]. Snapin is one of the SNAP-25 binding proteins and is implicated in synaptic transmission [51], such as synchronization of synaptic vesicle fusion [52]. TorsinA binds to CSN4 and snapin and regulates synaptic release in neuroblastoma cells [53], and it modulates synaptic vesicle recycling in other cultured cell lines [54]. Moreover, snapin is expressed in hippocampal pyramidal neurons and regulates type VI adenylyl cyclase [55]. Therefore, partial loss of torsinA function may affect snapin functions not only in the neurotransmitter release pathway, but also in the regulation of cAMP synthesis in neurons. Recently, altered cAMP levels and responses have been reported in a DYT1 animal model and patient cell lines [56]. Further investigation of the snapin pathways in Dyt1 mutant mice may be one of the important future studies.
Dyt1 heterozygous KO mice showed normal mEPSCs, similar to those in Dyt1 KI mice [34]. These mEPSCs were recorded in CA1 pyramidal neurons of acute brain slices derived from juvenile mice by blocking voltage-dependent sodium channels (TTX) in addition to blocking GABA_A receptors. On the other hand, in the presence of TTX and the absence of GABA_A receptor antagonists, increased miniature release events were observed in cultured hippocampal neurons derived from a different line of Dyt1 knock-in mice [57]. Although the reason why different results were obtained is not clear, it should be noted that each recording was performed under quite different conditions. CA1 pyramidal neurons in acute brain slices still maintain the intrinsic synaptic circuits from CA3 pyramidal neurons through Schaffer collaterals. On the other hand, the cultured neurons were derived from the CA3-CA1 regions dissected from the hippocampus of neonatal mice at postnatal days 0-1. The cells were seeded on rat glial feeder layers and cultured for 11-14 days. The differences in developmental and recording conditions may account for the discrepancy between the results.
Dyt1 heterozygous KO mice had significantly enhanced CA1 LTP, while Dyt1 KI mice show no significant change of CA1 LTP [33,39]. These results suggest that the decreased torsinA level, lower than that in Dyt1 KI mice, produces enhanced LTP. There seems to be a threshold between 33.1% and 66.4% of normal torsinA levels for exhibiting enhanced CA1 LTP. The reduction of torsinA in Dyt1 KI mice may not be sufficient to enhance CA1 LTP. Since the hippocampus contributes to spatial memory rather than motor performance, the pre-synaptic release deficits and enhanced LTP should not contribute directly to dystonia symptoms. However, the present results suggest that the partial loss of torsinA function mimics the effect of the ΔGAG mutation on both behavioral and electrophysiological outputs. Recent findings showed an impaired corticostriatal LTD and its recovery by anticholinergics in Dyt1 KI mice [58,59]. Partial loss of torsinA function derived from a ΔGAG mutation allele may contribute to the electrophysiological alterations in other brain regions and the pathophysiology of DYT1 dystonia.
Materials and Methods
Mice
All experiments were carried out in compliance with the USPHS Guide for Care and Use of Laboratory Animals and approved by the IACUC at the University of Illinois at Urbana-Champaign, University of Alabama at Birmingham, and University of Florida. Dyt1 heterozygous KO mice were produced as previously described [36]. For the recording and Western blot experiments, only male mice were used to avoid the effect of estrus cycle in female mice. All mice were housed under a 12-hour light 12-hour dark cycle with ad libitum access to food and water.
Behavioral semi-quantitative assessments of motor disorders
Motor behaviors were assessed by semi-quantitative assessments, the beam-walking test and the accelerated rotarod test, in this order, with intervals of over one week between tests. Behavioral semi-quantitative assessments of motor disorders were performed as described earlier [14,60]. Dyt1 heterozygous KO (female, n = 9; male, n = 8) and control (female, n = 12; male, n = 13) mice 3-7 months old were placed on a table, and assessments of hindpaw clasping, hindpaw dystonia, truncal dystonia and balance adjustments to a postural challenge were performed. Hindpaw clasping was assessed as hindpaw movements for postural adjustment and attempts to straighten up while the mouse was suspended by the mid-tail. Hindpaw dystonia was assessed as increased spacing between the limbs, poor limb coordination, crouching posture and impairment of gait. Truncal dystonia was assessed as a flexed posture. The postural challenge was performed by flipping the mouse onto its back, and the ease of righting was noted.
Beam-walking test
Dyt1 heterozygous KO and control mice were used for the beam-walking test as described earlier [14]. Briefly, the beam-walking test was performed within the last 8 hours of the light period after acclimation to a sound-attenuated testing room for 1 hour. Mice were trained to traverse a medium square beam (14 mm wide) in three consecutive trials each day for 2 days, and tested twice each on the medium square beam and a medium round beam (17 mm diameter) on the third day. The mice were then tested twice each on a small round beam (10 mm diameter) and a small square beam (7 mm wide) on the fourth day. The number of hindpaw slips on each side during traversal of the 80 cm-long beams was counted by investigators blind to the genotypes.
Accelerated rotarod test
The motor performance was further assessed with Economex accelerating rotarod (Columbus Instruments) as described earlier [14]. The apparatus started at an initial speed of 4 rpm. Rod speed was gradually accelerated at a rate of 0.2 rpm/s. The latency to fall was measured with a cutoff time of 2 min. Dyt1 heterozygous KO and control mice were tested for three trials on each day for 2 days. The trials within the same day were performed at approximately 1 hour intervals.
Western blot for torsinA
The striata and hippocampi were dissected from 4 heterozygous KO mice and 4 WT littermates at 2 months old and quickly frozen in liquid nitrogen. The striata and hippocampi were homogenized in 200 μl and 400 μl of ice-cold lysis buffer [50 mM Tris·Cl (pH 7.4), 175 mM NaCl, 5 mM EDTA·2Na, complete Mini (Roche)], respectively, and sonicated for 10 sec. One ninth volume of 10% Triton X-100 in lysis buffer was added to the homogenates. The homogenates were incubated on ice for 30 min, and then centrifuged at 10,000×g for 15 min at 4°C. The supernatants were then collected and the protein concentration was measured by Bradford assay with bovine serum albumin as the standard [61]. The homogenates were mixed with sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) loading buffer, boiled for 5 min, incubated on ice for 1 min, and then centrifuged for 5 min to obtain the supernatant. Forty μg of total protein was separated by SDS-PAGE and transferred to a PROTRAN nitrocellulose transfer membrane (Whatman). The membrane was blocked in 5% milk in TBS-T buffer [20 mM Tris·Cl (pH 7.6), 137 mM NaCl, 0.1% (v/v) Tween 20] for 1 hour at room temperature. The membrane was cut into two and probed for torsinA and β-tubulin separately. Each membrane was incubated in either rabbit anti-torsinA (Abcam; ab34540; 1:1,000 dilution) or HRP-conjugated anti-β-tubulin (Santa Cruz; sc-5274 HRP; 1:250 dilution) in blocking buffer [TBS-T containing 5% milk (Bio-Rad)] overnight at 4°C. The high specificity of the anti-torsinA antibody has already been confirmed by Western blot using Dyt1 KO mouse striatum protein extract as previously described [16]. After overnight incubation with the primary antibody, the membranes for torsinA were washed in TBS-T buffer for 10 min three times, incubated with bovine anti-rabbit IgG-HRP (Santa Cruz; sc-2370; 1:2,000 dilution) in the blocking buffer at room temperature for 1 hour, and washed for 10 min each nine times. On the other hand, the membranes incubated with HRP-conjugated anti-β-tubulin were washed in TBS-T buffer for 10 min each nine times. The bands were detected by SuperSignal West Pico Chemiluminescent Substrate (Thermo Scientific) and the signals were captured with an Alpha Innotech FluorChem FC2. The density of each band was quantified with UN-SCAN-IT gel software (Silk Scientific). Molecular masses were estimated from the migration of Precision Plus Protein Standards All Blue marker (Bio-Rad). The density of the torsinA band was normalized to that of β-tubulin. Western blot analysis was performed in triplicate. The average torsinA level in WT mice was set to 100%, and the levels in Dyt1 heterozygous KO mice were expressed relative to it.
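The quantification step described above amounts to a few lines of arithmetic. The sketch below illustrates it with made-up band densities (the numbers and variable names are ours, not data from the study):

```python
import numpy as np

# Hypothetical densitometry values (arbitrary units) for 4 WT and 4 KO samples.
torsinA = {"WT": np.array([8.1, 7.6, 8.4, 7.9]), "KO": np.array([2.8, 2.5, 2.9, 2.6])}
tubulin = {"WT": np.array([9.0, 8.7, 9.2, 8.9]), "KO": np.array([9.1, 8.8, 9.0, 8.9])}

# Normalize each torsinA band to its beta-tubulin loading control.
ratio = {g: torsinA[g] / tubulin[g] for g in ("WT", "KO")}

# Express everything relative to the WT mean (set to 100%).
wt_mean = ratio["WT"].mean()
percent = {g: 100.0 * ratio[g] / wt_mean for g in ("WT", "KO")}
for g in ("WT", "KO"):
    sem = percent[g].std(ddof=1) / np.sqrt(len(percent[g]))
    print(g, f"{percent[g].mean():.1f} +/- {sem:.1f} %")
```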
Whole-cell recordings
Recordings were conducted from 5 juvenile Dyt1 heterozygous KO and 5 WT littermate mice (11-21 days old). Animals were anesthetized by inhalation of isoflurane and decapitated, and the brains were rapidly removed. 350 μm-thick coronal brain slices were cut with a Vibratome (Technical Products International, St. Louis, MO). Slices were first incubated in artificial cerebrospinal fluid (ACSF) containing (in mM) 124 NaCl, 26 NaHCO3, 1.25 NaH2PO4, 2.5 KCl, 1 CaCl2, 6 MgCl2, 10 D-glucose, gassed with 95% O2 and 5% CO2 at 35°C for 30 min, and then continued to incubate at room temperature (22°C). After at least 1 hr of incubation, a slice was transferred to a submerged recording chamber with continuous flow (2 ml/min) of ACSF as described above, except with 2 mM CaCl2 and 2 mM MgCl2, gassed with 95% O2-5% CO2 giving pH 7.4. All experiments were carried out at 32°C to 33°C. Whole-cell recordings were made from pyramidal cells in the hippocampal CA1 region using infrared differential interference contrast microscopy and an Axopatch 1D amplifier (Axon Instruments, Foster City, CA). Patch electrodes had a resistance of 3-5 MΩ when filled with intracellular solution containing (in mM): 125 K-gluconate, 8 NaCl, 10 HEPES, 2 MgATP, 0.3 Na3GTP, 0.2 EGTA, and 0.1% biocytin (pH 7.3 with KOH, osmolarity 290-300 mOsM). sEPSCs and mEPSCs were recorded at a holding potential of −70 mV in the presence of 50 μM picrotoxin, which blocked GABAergic synaptic activity. For mEPSC recordings, 1 μM TTX was also applied to the bath solution to block transmitter release driven by action potentials. Series resistance was 15-20 MΩ, and cells were rejected if it changed more than 20% during the recording. All drugs were purchased from Sigma-Aldrich. Data were acquired using pClamp 10 software. The recordings were started 5 min after accessing the cells to allow for stabilization of spontaneous synaptic activity. Analysis of sEPSCs and mEPSCs was based on 5 min of continuous recording from each cell. Synaptic events were detected using the Mini Analysis Program (Synaptosoft) with parameters optimized for each cell and then visually confirmed prior to analysis. The peak amplitude, 10-90% rise time and decay time constant were measured based on the average of all events aligned by their rising phase. The electrophysiological recordings were performed by investigators blind to the genotypes.
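The event-shape measurements mentioned above (peak amplitude, 10-90% rise time, decay time constant) are standard. A minimal sketch of how they might be computed from an averaged event trace follows; the trace and sampling rate are synthetic, and this is not the Mini Analysis implementation:

```python
import numpy as np

fs = 20_000.0  # Hz, assumed sampling rate
t = np.arange(0, 0.05, 1.0 / fs)
# Synthetic averaged EPSC: fast rise, exponential decay (inward current, pA).
event = -30.0 * (1 - np.exp(-t / 0.0005)) * np.exp(-t / 0.005)

peak_idx = np.argmin(event)  # most negative point = peak of inward current
amp = event[peak_idx]

# 10-90% rise time: first crossings of 10% and 90% of the peak amplitude.
i10 = np.flatnonzero(event[: peak_idx + 1] <= 0.1 * amp)[0]
i90 = np.flatnonzero(event[: peak_idx + 1] <= 0.9 * amp)[0]
rise_time_ms = (i90 - i10) / fs * 1e3

# Decay time constant from a log-linear fit over the decay phase (down to 10% of peak).
decay = event[peak_idx:]
mask = decay <= 0.1 * amp  # contiguous prefix of the monotone decay
tau_ms = -1.0 / np.polyfit(t[: mask.sum()], np.log(-decay[mask]), 1)[0] * 1e3

print(f"amplitude {amp:.1f} pA, 10-90% rise {rise_time_ms:.2f} ms, tau {tau_ms:.2f} ms")
```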
Set-up and electrode placement
For extracellular field recordings, glass recording electrodes were pulled from capillary glass tubes using a horizontal electrode puller (Narishige) and filled with ACSF. The input resistance of each electrode was tested by applying a current pulse and breaking the tip until a resistance of 1-3 MΩ was obtained. Recording electrodes were placed in the stratum radiatum of hippocampal area CA1. Test stimuli were delivered to the Schaffer collateral/commissural pathway with a bipolar Teflon-coated platinum stimulating electrode positioned in the stratum radiatum of area CA3. Responses were recorded through a personal computer running pClamp 8 data acquisition software. Excitatory post-synaptic potential (EPSP) slope measurements were taken after the fiber volley to eliminate contamination by population spikes.
Long-term potentiation
Following at least 20 min of stable baseline recording, LTP was induced with two 100 Hz tetani (1 sec each), with an interval of 20 sec between tetani. Synaptic efficacy was monitored by recording field EPSPs (fEPSPs) every 20 sec, beginning 0.5 hour prior to and ending 3 hours after the induction of LTP (traces were averaged over every 2 min interval). For statistical comparison, all traces were averaged over 2 min intervals from 36 min to 176 min, when the majority of recordings ceased. The first 35 min were excluded to prevent averaging in the post-tetanic potentiation.
Input-output curves
Test stimuli were delivered and responses were recorded at 0.05 Hz; every six consecutive responses over a 2 min period were pooled and averaged. fEPSPs were recorded in response to increasing intensities of stimulation (1 mV-30 mV).
Paired pulse ratio
PPRs were measured at various inter-stimulus intervals (10, 20, 50, 100, 150, 200, 250, and 300 msec). All experimental stimuli were set to an intensity that evoked 50% of the maximum fEPSP slope.
Statistics
The beam-walking test data were analyzed by logistic regression (GENMOD) with a negative binomial distribution using a GEE model in SAS/STAT Analyst software (SAS Institute Inc., NC), using sex, age and body weight as variables [36]. The data were analyzed after log transformation to obtain a normal distribution. Control mice were normalized to zero. The hippocampal torsinA levels were compared between Dyt1 heterozygous KO and their WT littermate mice using Student's t-test. The sEPSC and mEPSC data were analyzed using Student's t-test. The fEPSP slope data were analyzed by the Kolmogorov-Smirnov test in SAS/STAT Analyst software (SAS Institute Inc., NC). LTP data were analyzed by an ANOVA mixed model with repeated measurements in the same software. The data for PPRs were analyzed either at each inter-event interval or across all data regardless of the inter-event intervals. Significance was assigned at p < 0.05.
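For readers who want to reproduce a comparable count-data analysis outside SAS, a roughly equivalent model can be fit in Python with statsmodels. The sketch below is illustrative only: the file name and column layout (one row per trial, grouped by mouse) are hypothetical, and this is not the original analysis code.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data layout: one row per beam-walking trial,
# with columns mouse, genotype, sex, age, weight, slips.
df = pd.read_csv("beam_walking.csv")

# GEE with a negative binomial family, clustering repeated trials within mice,
# mirroring the GENMOD/GEE analysis with sex, age and body weight as covariates.
model = smf.gee(
    "slips ~ genotype + sex + age + weight",
    groups="mouse",
    data=df,
    family=sm.families.NegativeBinomial(),
)
print(model.fit().summary())
```

| 6,738.2 | 2015-03-23T00:00:00.000 | [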
"Biology",
"Medicine"
] |
Dielectric functions and collective excitations in MgB_2
The frequency- and momentum-dependent dielectric function $\epsilon({\bf q},\omega)$ as well as the energy loss function Im[$-\epsilon^{-1}({\bf q},\omega)$] are calculated for the intermetallic superconductor $MgB_2$ by using two {\it ab initio} methods: the plane-wave pseudopotential method and the tight-binding version of the LMTO method. We find two plasmon modes dispersing at energies $\sim 2$-8 eV and $\sim 18$-22 eV. The high energy plasmon results from a free electron like plasmon mode while the low energy collective excitation has its origin in a peculiar character of the band structure. Both plasmon modes demonstrate clearly anisotropic behaviour of both the peak position and the peak width. In particular, the low energy collective excitation has practically zero width in the direction perpendicular to the boron layers and broadens in other directions.
After the discovery of superconductivity in the MgB2 compound with the transition temperature Tc ∼ 39 K [1], much effort has been devoted to understanding the mechanism of the superconductivity [2][3][4][5][6][7][8][9][10][11] as well as to studying different electronic and atomic characteristics of this compound. Among these characteristics are the superconducting gap [3,12,13], the crystal structure and its influence on Tc [1,2,[14][15][16], the band structure [4,5,[17][18][19] and the Fermi surface [4], lattice vibrations [6,9,20,21], as well as thermodynamic and transport properties [22]. Recently Voelker et al [8] explored collective excitations very near the Fermi level by using a simple band structure model and found a plasmon acoustic mode at very small momenta (q ≃ 0.01 a.u.^−1) and low energies (ω ≃ 0.01 eV). Here we study collective excitations in MgB2 for different momenta and energies, namely, for q ≥ 0.1 a.u.^−1 and ω ≥ 2 eV. We report first-principles calculations of the real, ǫ1(q, ω), and imaginary, ǫ2(q, ω), parts of the dielectric function as well as the energy loss function Im[−ǫ^−1(q, ω)]. As a result of the calculation, two plasmon modes and a few features arising from interband transitions are obtained.
MgB2 crystallizes in the so-called AlB2 structure, in which B atoms form graphite-like honeycomb layers that alternate with hexagonal layers of Mg atoms. The magnesium atoms are located at the centers of the hexagons formed by the borons and donate their electrons to the boron planes. Similar to graphite, MgB2 exhibits a strong anisotropy in the B-B lengths: the distance between the boron planes is significantly longer than the in-plane B-B distance. We use this resemblance between graphite and MgB2 in order to clarify the origin of the plasmon peaks in MgB2 by comparing the calculated energy loss function features with those obtained from EELS measurements for graphite and single-wall carbon nanotubes (SWCN) [26,27]. To shed more light on the problem we also evaluate the dielectric functions and the energy loss spectra for the MgB2 crystal structure with the Mg atoms removed (this hypothetical crystal structure is designated as B2).
Information on the energy lost by electrons in their interactions with metals is carried by the dynamical structure factor S(q, ω), which is related by the fluctuation-dissipation theorem to the energy loss function Im[−ǫ^−1_00(q, ω)]. To calculate the inverse dielectric function we invoke the random phase approximation (RPA), in which the interacting density response function χ satisfies χ = χ0 + χ0 υc χ, where υc is the bare Coulomb potential and χ0 is the density response function of the noninteracting electron system. The dielectric function is related to χ0 as ǫ = 1 − υc χ0. The energy loss function may be obtained by inverting the first matrix element of ǫ, which amounts to neglecting short-range exchange and correlation effects, or directly from ǫ^−1 when these effects are included. We have computed the energy loss function using both of these approaches and found that the inclusion of the local field effects leads to negligible changes of both the width and energy of the plasmon peaks. In the calculation of the density response matrix χ0_GG′(q, ω) we have used two different first-principles methods: the plane wave pseudopotential method [23] and the tight-binding version of the LMTO method [24,25].
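As a toy illustration of how Im[−ǫ^−1] develops a sharp plasmon peak where ǫ1 crosses zero while ǫ2 is small, one can evaluate the loss function for a simple Drude model. This is a pedagogical sketch with made-up damping, not the ab initio χ0 of the paper:

```python
import numpy as np

omega = np.linspace(0.1, 30.0, 2000)  # photon energy (eV)
omega_p, gamma = 19.1, 0.5            # plasma energy and damping (eV), toy values

eps = 1.0 - omega_p**2 / (omega * (omega + 1j * gamma))  # Drude dielectric function
loss = np.imag(-1.0 / eps)            # energy loss function Im[-1/eps]

peak = omega[np.argmax(loss)]
print(f"loss-function peak at {peak:.1f} eV")  # ~ omega_p: eps1 = 0 with small eps2
```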
In Figs. 1a and 1b we show the evaluated band structures of MgB2 and B2 along the symmetry directions. In general, these band structures are quite similar. The distinctions between them in the vicinity of the Fermi level (E_F) are due to the lower position of E_F relative to the σ band in the ΓA direction for B2. The states of this band, which are of p_{x,y} symmetry, are degenerate along ΓA and their charge density is located in the B layer, showing a clear 2D character. This character leads to weaker interactions between the B layers and to a smaller dispersion of the σ band along ΓA in B2. The p_z band, which is occupied at Γ in MgB2, becomes unoccupied at Γ in B2. In Figs. 2a and 2b we present the momentum dependence of the energy loss function in the ΓA, ΓK and ΓM directions. In MgB2 we have found two plasmon modes. The higher collective excitation mode originates from the free electron like excitation mode with energy ω_p1 = 19.1 eV that corresponds to the electron density parameter r_s = 1.82 a.u. of MgB2. This free electron like mode is transformed into two separate submodes, ω^xy_p1 and ω^z_p1, in a real crystal. One of them is very isotropic in the hexagonal plane (the ΓK and ΓM directions) and disperses linearly upward for momenta q ≥ 0.2 a.u., while the other, in the ΓA direction, has smaller energy and is nearly constant. Because of limitations of the calculation methods and the large width of the energy loss peaks at small momenta, we could not determine accurately the plasmon peak position in this region. Therefore we define the plasmon energy at the Γ point by extrapolation of the computed plasmon dispersions at q ≥ 0.2 a.u. This extrapolation results in ω^xy_p1 = 19.4 eV and ω^z_p1 = 18.8 eV, in good agreement with the free electron gas value ω_p1. The width Δ_p1 of both of these energy loss peaks decreases with increasing momentum, and the Δ^z_p1 width decreases faster than Δ^xy_p1. A different behavior is shown by the low-energy loss function peak, which disperses linearly upward both in the ΓA direction and in the hexagonal plane. In the ΓA direction, at q⊥ ∼ 0.1 a.u.^−1, the peak is very narrow, Δ^z_p2 ∼ 0.01 eV, as can be seen from the very small value of ǫ2 in the energy interval around the peak position where ǫ1 = 0 (Figs. 3a and 3b). In particular, for q⊥ = 0.12 a.u.^−1 this interval is between 1 and 5 eV (Fig. 3a) and the energy loss peak is located at 2.9 eV. On changing the momentum toward the A point this interval becomes narrower and moves to higher energies (Fig. 3c). At small momenta the first peak of Im[ǫ_00(q, ω)], located in the energy interval 0-1 eV, is determined by intraband transitions within the two σ bands in the x, y plane around the ΓA direction, while the second peak, located at 5.4 eV (Fig. 3a), is determined by the interband σ-π transitions in the KM, AH, and AL directions. So, the low energy plasmon excitation corresponds to electron excitations in the σ bands and one can define it as the σ plasmon. In the hexagonal plane, in the ΓK direction, the low energy EELS peak broadens (Fig. 4a) and disperses linearly upward on going from the Γ point to K. In the ΓM direction the plasmon peak disperses similarly to that in the ΓK direction, showing a nearly ideal isotropy in the hexagonal plane. However, it becomes smaller and wider on going from Γ to M and finally vanishes at q ≃ 0.8|ΓM|. Comparing the plasmon energies obtained from the LMTO and pseudopotential calculations, one finds only a small difference of ∼ 0.1 eV between them. For instance, at q = 0.2 a.u.^−1
the LMTO ω^z_p2 is slightly smaller than the pseudopotential one, and vice versa for larger momenta. This slight difference results in different energy loss peak positions at Γ: extrapolation of both plasmon energies ω^xy_p2 and ω^z_p2 calculated for q ≥ 0.1 a.u.^−1 to the Γ point gives ω^z_p2 = 1.8 eV and ω^xy_p2 = 2.0 eV (LMTO), and ω^z_p2 = 2.2 eV and ω^xy_p2 = 2.4 eV (pseudopotential). We estimate the accuracy of these values to be better than 0.2 eV.
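A quick numerical check of the free-electron interpretation: the quoted ω_p1 follows from r_s = 1.82 a.u. via the textbook homogeneous-electron-gas formula, as the short sketch below verifies (our code, Hartree atomic units):

```python
import math

HARTREE_EV = 27.211  # eV per Hartree

def plasmon_energy_ev(r_s: float) -> float:
    # n = 3 / (4*pi*r_s**3), so omega_p = sqrt(4*pi*n) = sqrt(3 / r_s**3) in a.u.
    return math.sqrt(3.0 / r_s**3) * HARTREE_EV

print(f"{plasmon_energy_ev(1.82):.1f} eV")  # ~19.2 eV, consistent with the quoted 19.1 eV
```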
Besides the two plasmon modes, we have obtained four small features in Im[−ǫ^−1_00(q, ω)] that correspond to interband excitations. It is difficult to find out which transitions are responsible for these features; nevertheless we show them in Fig. 2a. One of them arises at q ≃ 0.1 a.u.^−1 at an energy of ≈ 2 eV in the hexagonal plane, another occurs at q ≥ 0.4 a.u.^−1 in the ΓM direction at an energy of ≃ 4 eV, and the other two small features arise in the ΓA direction for q = 0.1-0.25 a.u.^−1 at energies of 10 eV and 13 eV, respectively.
In Fig. 2b we show the momentum dependence of the energy loss function calculated for the hypothetical crystal structure B2. In general, the energy loss function in B2 shows features similar to those in MgB2, though there are some important distinctions. In particular, all collective excitations, including the two plasmon modes, manifest a smaller dispersion in the ΓA direction. This effect is a direct consequence of the weaker interaction between adjacent boron layers in B2 compared to MgB2. Another distinction is that all features in the energy loss function in B2 are much clearer than those in MgB2 (Figs. 3a-3c and 4a-4c). One exception is the high energy plasmon mode. The third distinction is that B2 has more features in Im[−ǫ^−1_00(q, ω)] than does MgB2. Extrapolation of the low energy plasmon mode to the Γ point gives ω^z_p2 = 4.1 eV, which is ∼ 2 eV higher than that in MgB2. This shift in energy is due to the higher energy position of the second maximum of Im[ǫ_00] (Fig. 3a). While the position of the first peak of Im ǫ in B2 nearly coincides with that in MgB2, the second peak is moved by 2 eV to higher energies. Via the Hilbert transform (Kramers-Kronig relation) this also moves the node of ǫ1 to higher energy. Despite some quantitative distinctions between the energy loss functions in MgB2 and B2, one can conclude that most of the features of the excitation spectrum of MgB2 can be derived, with the relevant corrections, from those of the hypothetical crystal B2.
Two plasmon modes similar to those obtained in MgB2 were also observed in EELS experiments for graphite and SWCN [26,27], which are even more anisotropic than MgB2. The upper plasmon mode, which has a larger energy in graphite and SWCN than in MgB2 (near the Γ point ω^xy_p1 ≃ 21 eV (SWCN) and ω^xy_p1 ≃ 26 eV (graphite)) [27], also results from excitations of all valence electrons. The lower plasmon mode ω^xy_p2 shows a linear dependence on momentum, like that in MgB2, with energies ω^xy_p2 ≃ 5 eV (SWCN) and ω^xy_p2 ≃ 6.5 eV (graphite) [27] at Γ. But in contrast to SWCN and graphite, where ω^xy_p2 represents the collective excitation of the π-electron system [26][27][28], in MgB2 this mode is a result of the collective excitation of the σ-electron system. The different origin of the low-energy plasmon mode in MgB2/B2 and graphite/SWCN can be qualitatively understood from the position of the Fermi energy (E_F). In graphite the Fermi level is pinned by π-electrons in the KH direction. In MgB2 and B2, with a smaller number of electrons per atom, the Fermi level is pinned by σ-electrons. So, low-energy excitations in MgB2 and in B2 are expected to be derived from the σ-band electrons.
In conclusion, we have performed first-principles calculations of the dielectric functions ǫ1(q, ω) and ǫ2(q, ω) as well as the energy loss function Im[−ǫ^−1(q, ω)]. The calculations reveal two plasmon modes in MgB2 and B2 and a few interband collective excitations. The low energy plasmon mode, corresponding to excitations of electrons in the σ bands, shows a very anisotropic behavior of the peak width. The energy loss spectrum of MgB2 can be derived, with the relevant corrections, from that of the hypothetical crystal structure B2.
We thank R. H. Ritchie and A. Bergara for helpful discussions. This work was partially supported by the Basque Country University, Basque Hezkuntza Saila, and Iberdrola S.A.
After the submission of this paper first-principles calculations of collective excitations in M gB 2 have also been presented by Wei Ku et al. [29]. | 3,325 | 2001-05-23T00:00:00.000 | [
"Physics"
] |
NLP-based personal learning assistant for school education
ABSTRACT
INTRODUCTION
Computers play an important part in our lives. Computers are now widely used for educational purposes such as lesson tutoring, language testing, diagnosing errors, and document archival. This has led many researchers to think about designing intelligent tutoring systems (ITS). These intelligent tutoring systems imitate one-to-one tutoring to deliver the content in the learning material better than a classroom environment. Artificial intelligence (AI) plays a major role in understanding and interpreting human language in order to develop learning applications.
AI has been adopted by many educational institutions in various forms, from web-based online education systems to web-based chatbots [1]. It determines the student's level of competence and adapts the computer responses to student input. The intelligent tutoring system (ITS) uses NLP techniques to process and assess students' text input [2]. The intelligent tutoring system provides the students with an interactive learning environment. An ITS helps to identify the gaps in student knowledge and also helps to provide personalized instruction based on the student's knowledge level [3]. It is very important for the system to provide timely feedback, which enhances the learning experience [4]. NLP is about building an intelligent machine with computational technologies and linguistics. The availability of a large amount of educationally relevant text has increased the use of NLP to resolve the challenges faced by students and teachers [5]. A question-answering (QA) system finds a precise answer, or the precise portion of text, from a given collection of documents when the user poses a query. A QA system answers the questions asked by the user in human language. The answer is retrieved from either a database or a collection of documents using NLP techniques [6].
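As a concrete illustration of the retrieval step in such a QA system, a minimal TF-IDF sketch is shown below. It is illustrative only: the tiny document list stands in for a real lesson-material index, and a deployed system would use better ranking and answer extraction.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Tiny stand-in document collection (real systems index lesson material).
docs = [
    "Photosynthesis converts light energy into chemical energy in plants.",
    "Newton's second law states that force equals mass times acceleration.",
    "The water cycle includes evaporation, condensation and precipitation.",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(docs)

def answer(question: str) -> str:
    """Return the document most similar to the question under TF-IDF cosine similarity."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_matrix)[0]
    return docs[scores.argmax()]

print(answer("What does Newton's second law say?"))
```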
Chatbots are artificial intelligence (AI) conversational user interfaces that allow users to communicate with services through a messaging application. Eliza, Alice, and Jabberwacky [7] are a few early chatbots that could simulate human interaction and hold conversations with users. Chatbots can be used in formal education, where the most frequent questions of students can be answered by assistant chatbots, making them useful in e-learning classrooms [8]. Bots bring a product or service into messaging products such as Slack and Facebook via conversation, and can simulate an interaction with a user in natural language through the messaging application. Chatbots represent the evolution of question answering (QA) systems using NLP, responding to user queries in natural language. When a chatbot is trained with a lot of data, more satisfaction and precision are achieved [9].
With this increased adoption of technology in education, we see a need to ensure that the quality of learning is not compromised. Technology is best used as a supporting tool that improves the overall productivity and effectiveness of teachers and provides a way for education coverage to scale up and grow. In this paper, an attempt is made to explore the feasibility of using AI and machine learning (ML) in a teaching-assistant context to create a near-human interaction experience and to provide a means for teachers to get insights into each student's learning progress. An understanding of past work in this space was developed through a review of existing literature. A summary of the literature review is provided in section 2. Section 3 explains the learning-assistant chatbot design and the associated prototype implementation work. Section 4 covers the results from the chatbot trial-run experiments. Section 5 presents the conclusions of the paper.
Intelligent tutoring system in education
The intelligent tutoring system provides students with an interactive learning environment and uses NLP techniques to process and assess students' text input [2]. ITSs play a very significant role in one-to-one tutoring, and incorporating NLP makes them more capable of understanding the learner's input text while also providing pedagogical feedback.
Emotional intelligence has been incorporated into intelligent tutoring systems [10] that use different styles to teach the learning material to students. An emotionally intelligent tutoring system (EITS) determines the emotional disposition of the learner. The learner's facial expressions, mouse movements, clicks, and speed are some of the inputs the tutoring system uses to analyse the student's emotional state. A virtual assistant built with the Dialogflow tool is used to motivate students to solve algebra problems [11]; it alerts students to mistakes made when solving algebra-related questions.
Student modelling for personalization
A teaching-learning process helps to identify gaps in the student's knowledge [12]. An ITS is an effective system that fills these gaps by providing personalized instruction to students based on their knowledge and learning capacity. In one such system, the student's confidence in applying grammar rules is measured on a very good-good-average-weak-poor scale. The student model captures the student's knowledge level after every exercise and generates new exercises if any weak areas are identified.
DeepTutor, a dialogue-based ITS, was developed to help students with science topics [13]. Students interact with the system to solve physics problems. They are given hints to help them find the solutions themselves, and the solutions are evaluated using natural-language assessment methods. The tutoring system gives positive or negative feedback, and a significant learning gain is observed compared with reading worked-out solutions.
A conversational intelligent tutoring system (CITS) is used to enhance learning in autistic children by adopting a visual-audio-kinaesthetic learning style [5]. The conversational intelligent system is adapted to the learning style of these children. A short text similarity (STS) algorithm and text-based pattern matching are used to retrieve the response from a domain for a user utterance. The CITS is built on three main components: component 1 consists of the knowledge base and the log files, component 2 is the conversational agent, and component 3 is the ITS.
Multimodal interaction on educational systems
The framework of the multiple modal interaction system for education (MMISE) implements a client-server architecture [14]. The client deals with various input signals from the student, such as keyboard typing, mouse clicks, pad touches, speech recognition, and face recognition. These signals are stored in a database for analysis, depending on the pedagogical requirement. The output generated by the system is text, voice, or avatar animation to engage students in the learning process. A survey of teachers' experiences in conducting multimodal teaching and learning shows the use of technology as a medium for imparting information, for collaborative learning, and for designing learning activities that improve the learning experience [15]. The communication and the relationship between teachers and students have changed, and learners cope more efficiently with computers and other new modes of information presentation [16].
Application of NLP in pedagogy
Implementing natural language in the learning process not only brings an effective improvement to the learning process but also helps develop language skills [17]. NLP is an effective tool for improving the education sector: many educational contexts such as e-learning, research, and evaluation integrate NLP techniques to bring positive outcomes to the education system. When NLP is implemented in education, there is an increase in students' learning output. Sometimes a student fails to understand the context because of language obstacles, and NLP can be a suitable approach for improving students' learning ability. NLP and multimedia technology have been combined to develop a computer-assisted teaching system [18]. Language assessment, an educational application area, uses NLP to assess students' typed answers [19]: syntactic analysis detects writing errors, semantic analysis assesses the meaning of students' responses, and discourse processing finds the connections between sentences. NLP tools are also leveraged for measuring or assessing students' content in project-based learning activities [20]. Linguistic inquiry and word count 2015 (LIWC2015) is used to analyse the content of students' blogs for the assessment work given by the teacher. LIWC captures analytical thinking, emotional tone, clout, and authenticity; these categories provide details about a student's engagement while learning in an ITS. The knowledge evaluation of students' assessments is done through natural language processing. NLP is used for framing questions, checking the answers given by students, and giving motivational feedback [21].
Information retrieval via a conversation interface
A question answering (QA) model extracts the exact and precise answer from the dataset [22]. The QA model has four modules: a question processing module, a document processing module, a paragraph extraction module, and an answer extraction module. The answer extraction algorithm extracts keywords from the question, matches the relevant paragraphs containing those keywords in the dataset, and passes them through the paragraph extractor and sentence extractor according to the type of question.
To implement a question answering system, preprocessing operations such as stemming and stop-word removal are performed on the text files of the corpus [23]. The extracted keywords are stored in an index term dictionary. POS tagging and stemming are applied to the user query, and the extracted keywords are matched with the terms in the index dictionary. POS tagging is also applied to the extracted documents to match the same sense as the query. A simple question parsing algorithm and a question semantic similarity algorithm are used to parse simple questions [24]. The experimental results show high precision and recall, indicating that question answering using natural language is an effective way for students and teachers to exchange questions and answers in network teaching. New-generation bots use NLP to provide human-like interaction [25]. Bots are classified as conversational chatbots and service-oriented chatbots. The paper proposes the concept of 'botplications' for providing services and data through chatbots for an efficient experience.
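To make the indexing and matching steps above concrete, here is a minimal Python sketch of the generic technique (our illustration only; the cited systems' actual implementations are not described in enough detail to reproduce, and the stop-word list and sample paragraphs are placeholders):

```python
# Sketch of the pipeline described above: tokenize, drop stop words,
# stem, build an index term dictionary, then match a query against it.
import re
from nltk.stem import PorterStemmer  # pure-Python stemmer, no corpus download needed

stemmer = PorterStemmer()
STOP_WORDS = {"a", "an", "the", "is", "are", "what", "how", "do", "i", "on", "in"}

def keywords(text):
    tokens = re.findall(r"[a-z]+", text.lower())
    return {stemmer.stem(t) for t in tokens if t not in STOP_WORDS}

# Index term dictionary: stemmed keyword -> ids of paragraphs containing it.
paragraphs = [
    "A sprite is a character in Scratch.",
    "The pen block draws lines on the stage.",
]
index = {}
for pid, para in enumerate(paragraphs):
    for kw in keywords(para):
        index.setdefault(kw, set()).add(pid)

# Match the stemmed query keywords against the index (keyword overlap).
query = "What is a sprite?"
hits = set().union(*(index.get(kw, set()) for kw in keywords(query)))
print([paragraphs[i] for i in hits])  # -> ['A sprite is a character in Scratch.']
```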
Knowledge evaluation through conversation
A chatbot was developed to introduce the concepts of variables and conditions as well as the advanced concept of finite state automata [26]. The chatbot uses pattern matching, lemmatization, and finite state automata to provide these assessments to students. The authors conducted two observational studies to analyse the effect of the chatbot on students, measuring indicators such as task completion, participation, self-reported interest, and willingness to learn. The task completion rate was found to be higher for students who used the chatbot, suggesting that chatbots can be used in an educational setting.
A quiz chatbot responds to students' frequently asked questions to reduce the educators' workload [9]. The quiz chatbot asks students to respond to multiple-choice questions and to justify the answers they selected. The chatbot also provides personalized feedback, allowing students to identify misconceptions or errors, which in turn helps educators identify the common areas where students struggle. These two applications show that chatbots can be used within education and can improve the students' learning experience. An ITS was developed to assist high-school students in learning general knowledge [27]. The tutoring system, a web-based application, is accessed by a large number of learners worldwide. It uses NLP techniques to find the answers to queries posed by students and is trained on a knowledge base of general-knowledge questions and answers. The paper discusses different cloud-based chatbot platforms such as Dialogflow.com (API.ai), Wit.ai, Luis.ai, and Pandorabots.com. A survey shows that the majority of students use chatbots to clear doubts related to their education and prefer them as one of the quickest modes of communication [28].
A chatbot was developed to measure students' memory retention and learning outcomes [29]. A chatbot for an object-oriented programming language was developed and deployed to evaluate the students' learning methodology. The results of obtaining answers from Google were compared with results from the chatbot system: students who used the chatbot showed a higher level of memory retention and better learning outcomes than students using the conventional search engine.
METHOD
The approach chosen for the research was to pick a suitable subject with which a learning-assistant prototype could be built and analysed. The subject must also be efficiently captured within an electronic knowledge base that can be used in an NLP context for information extraction. To evaluate the learning assistant, a sufficient quantity of real user queries about the key concepts, across various subject topics, needed to be collected from students. These user queries mimic or simulate the interaction of the chatbot with individual students.
Choosing the subject for the knowledge base
One of the critical underlying imperatives for school education is to shape the way students think. Over the past decade, schools across the world have been turning to Scratch, by far the most popular tool for computer programming education in primary schools, used by millions of school children. It is a visual, block-based, drag-and-drop system that enables a highly creative learning experience for all children, irrespective of their abilities and levels of proficiency. Using Scratch, children enjoy making presentations, simple games, cartoon animations, conversation simulations, and puzzles, giving them the freedom to explore their boundless creativity. Since the learning is very experiential, however, most of it happens while students try out various projects, learning to explore and apply computational concepts such as sequencing, iteration, and variables, and computational practices such as debugging and abstraction. The traditional pedagogical models followed in most schools are not well suited to such "hands-on" subjects. Based on a survey of students, teachers, and parents across a sample of popular schools, we see that during the first year of introduction to Scratch, around 25-30% of students seek help from their parents to clarify their understanding while doing their Scratch assignments and projects. What if technology could don this "learning assistant" role in a fun and engaging way? Hence this research focuses on exploring the application of NLP to implement a learning-assistant chatbot that can aid students in better and faster learning of Scratch basics.
Chatbot prototype implementation: Rasa natural language understanding (NLU) integration
The Rasa stack was used for the development of this conversational chatbot. Rasa NLU and Rasa Core are two popular open-source Python libraries for developing chatbots; together they provide the required natural language understanding, machine learning, and dialogue management capabilities. The developed bot, called "SCRATCHAI", is integrated with Slack as the user interface, selected based on popular choice. Figure 1 illustrates how a chatbot built using Rasa responds to a message from the user:
- A question is received from the student and passed to an interpreter (Rasa NLU) to extract the intent and entities.
- The tracker, which maintains the conversation state, gets a notification that a new message has come in.
- The policy receives the current state of the tracker and decides what action to take next.
- The chosen action is logged in the tracker.
- The action is executed and the answer is sent to the student.
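The message-handling loop above can be sketched in a few lines of Python. This is a schematic illustration only: the class and function names below are simplified stand-ins for the interpreter/tracker/policy roles, not Rasa's actual internal API.

```python
# Schematic sketch of the interpreter -> tracker -> policy -> action loop.
# All names here are hypothetical stand-ins, not Rasa's API.

class Interpreter:
    def parse(self, text):
        # A real interpreter (e.g., Rasa NLU) returns intent + entities.
        if "sprite" in text.lower():
            return {"intent": "ask", "entities": [{"value": "Sprite"}]}
        return {"intent": "greet", "entities": []}

class Tracker:
    def __init__(self):
        self.events = []
    def log(self, event):
        self.events.append(event)

def choose_action(parsed, tracker):
    # A real policy uses ML over the tracker state; here, a simple rule.
    return "answer_question" if parsed["intent"] == "ask" else "greet_back"

def handle_message(text, interpreter, tracker):
    parsed = interpreter.parse(text)         # extract intent/entities
    tracker.log({"user": text, **parsed})    # notify tracker of new message
    action = choose_action(parsed, tracker)  # policy picks the next action
    tracker.log({"action": action})          # chosen action is logged
    return action                            # execute and reply

tracker = Tracker()
print(handle_message("What is a Sprite?", Interpreter(), tracker))  # -> answer_question
```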
Training the chatbot
Extracting the intent and the entities is an important part of conversation management. Rasa NLU is used to derive intents and entities from the user's natural language. An intent is defined as what the user wants to perform, as expressed in the user's message: if the user's message is "Hi", the intent is extracted as "greetings"; if it is "What is a Sprite", the intent is "ask". An entity is a useful piece of information extracted from the user input: for the message "What is a Sprite", the entity is "Sprite".
Any NLP bot must first be taught to understand messages. To teach the model, training sequences are run through the NLU model, which takes the user input in simple text form and extracts the intents from it. This intent extraction helps the bot understand and classify messages from the student. The Visual Studio project "SCRATCH AI" is the base project directory; a "data" directory under "SCRATCH AI" holds the training file, in which the user messages (intents) that the bot should understand are defined. The NLU and core components the model uses are defined in a configuration file, to which the "pretrained embeddings spacy" pipeline is added. The NLU model is trained and passed to the NLU interpreter, which parses sample intents to check whether the NLU can classify the intents and extract the entities correctly.
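For concreteness, here is a minimal sketch of what such a training file could contain, written out from Python in Rasa 1.x's Markdown-style NLU format. The intent names, example utterances, and entity label are illustrative stand-ins, not the actual SCRATCH AI training data:

```python
# Hypothetical Rasa 1.x Markdown-style NLU training examples.
# Square brackets mark entity values; parentheses give the entity type.
NLU_TRAINING_DATA = """\
## intent:greet
- Hi
- Hello there

## intent:ask
- What is a [Sprite](scratch_concept)?
- How do I use the [pen block](scratch_concept)?
- Explain the [stage](scratch_concept)
"""

# Write the examples where a Rasa project conventionally keeps them.
with open("data/nlu.md", "w", encoding="utf-8") as f:
    f.write(NLU_TRAINING_DATA)
```

With Rasa 1.x, training would then typically be launched with the `rasa train nlu` command.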
Deploying the bot on slack
The messaging interface chosen is Slack 3.9. Slack is a widely used messaging platform available on mobile and desktop. The Slack API provides a variety of actions that a bot can perform on the platform: the bot can send messages into Slack and receive messages from the user, and the content can include rich text, images, and more. The user can also use Slack as an identity provider by signing up with a Slack account. The SCRATCHAI bot is integrated with Slack using ngrok, a lightweight utility that establishes a secure tunnel between Slack and the SCRATCHAI program. A bot user OAuth access token is generated from Slack, and the details of this token are entered in the 'credentials.yml' file of the SCRATCHAI project to authorize the connection from the SCRATCHAI bot to Slack through the tunnel created using ngrok, on port 5004. The generated ngrok tunnel URL is also mapped as an Event Subscription Request URL within Slack. Figure 2 shows the creation of the tunnel URL using ngrok.
Chatbot evaluation and analysis
To evaluate the performance of the chatbot and improve it, we used a 2-pass basic test-sample approach, as recommended for any AI/ML model. As part of this, the test stimuli were first divided randomly into two separate halves, to ensure no sampling bias was introduced at this stage. The first set of conversation samples was run on the chatbot, and observations were made on the quality and accuracy of the responses. Based on the analysis of the results of the first round of sample testing, the model configurations, knowledge base, and ontology definitions were tweaked or corrected as needed.
In the second run, the second sample was run on the chatbot and the same measurements were noted. This process can be continued in the same manner until the required level of accuracy and quality is reached, subject to having enough test stimuli for the repeated testing. The test stimuli for this exercise were collected from 60 Grade-3 students at a nearby school of high repute. The students were asked to provide a list of information-seeking questions they had about the Scratch module in their 3rd-grade computer science syllabus. A total of ~310 questions were received, which were first combined and then divided into two random sets of 150-160 each.
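The random split described above can be reproduced with a few lines of Python (a sketch under the assumption that the questions are held as plain strings; the placeholder list and seed are ours, not the actual collected data):

```python
import random

# Placeholder for the ~310 student questions (illustrative entries only).
questions = [f"question {i}" for i in range(310)]

random.seed(42)            # arbitrary seed, for reproducibility
random.shuffle(questions)  # remove any ordering bias before splitting

mid = len(questions) // 2
first_set, second_set = questions[:mid], questions[mid:]
print(len(first_set), len(second_set))  # -> 155 155
```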
Analysis results-first iteration
The first iteration of testing the Scratch chatbot for its information-retrieval performance was conducted using the first set of 159 query text samples. It was observed that the chatbot successfully captured and matched the intent and entity for only ~35% of the test stimuli. The failure to capture and match the intent and entity for ~65% of the stimuli indicates that the knowledge database built up for the chatbot evaluation needs to be enhanced to cover the missing topics. The confidence levels for intent and entity extraction and matching are shown in Figures 3 and 4. Of the stimuli for which intent and entity extraction was successful, 70% had intent confidence levels of at least 80%, and 96% had entity confidence levels of at least 80%.
The distribution of the various extracted entity values is shown in Figure 5. In a real-life situation, teachers and administrators can make use of such entity insights inferred from the students' chatbot usage. For example, as seen in Figure 5, the top two topics observed concerned the "pen block" and "stage" concepts within Scratch. Teachers can then allocate more time and activities to such most-queried topics. This is a clear illustration of the potential direct and indirect benefits that such a system can create for school education.
Analysis results-second iteration
The second iteration of testing the chatbot's performance was conducted using the second set of 166 query text samples. It was observed that the chatbot successfully captured and matched the intent and entity for ~72% of the test stimuli. The confidence levels observed for intent and entity extraction and matching are shown in Figures 6 and 7. In this second iteration, of the stimuli for which intent and entity extraction was successful, a significantly better 97.4% had intent confidence levels of at least 90%, though a lower 87% had entity confidence levels of at least 80%.
CONCLUSION
The work presented in this paper explored the expansion of chatbots into the realm of school education. We demonstrated the application of NLU/NER/NLP techniques in a practical chatbot implementation, using a computer science school subject as the knowledge-base topic. A detailed review of related literature touching on various design methodologies and NLP use cases was also presented, which helped in understanding some of the current approaches and their potential drawbacks.
The field of artificial intelligence has grown manifold over recent years. The development of chatbots, or conversational agents, and their applications have seen stupendous success in many different business domains. This gives us confidence that, with further research and development, such conversational agents can be put to work in a completely unsupervised manner in a school education context. Through a collaborative, ecosystem-based approach, governments and start-ups can drive the required process and technology standardization to enable quicker adoption and maximum return on investment for the respective educational institutions and content providers.
Such a learning chatbot has the potential to provide coverage across the whole subject if the knowledge-base components and language models are enhanced further, through detailed model training and knowledge-base expansion. A learning-assistant chatbot could go a step further if it performed learning assessments through periodic and on-demand evaluation of the students and gave educators and teachers access to view and analyse the summary of evaluation results for all students. Such a system would enable students to get feedback on their learning progress faster and more frequently.
"Computer Science",
"Education"
] |
Dynamic colour in textiles: combination of thermo, photo and hydrochromic pigments
Colour Change Materials are a class of smart materials that present distinct characteristics relative to conventional pigments and dyes. When exposed to a given stimulus, these materials exhibit reversible chromatic changes, allowing the introduction of dynamic and interactive qualities to textiles. Currently, the combination of various chromic materials presents an under-developed area. The objective of this research is to study the integration and combination processes of different types of dynamic colourants in textiles, namely thermo-, photo- and hydrochromic pigments. The experimental work conducted demonstrates the effect of screen printing variables on textile chromatic qualities, the variables being screen mesh count, magnetic field of the screen printing table, printed layers (number and process) and the pigments' combination method: overprinting and combination in the screen printing paste. The results attained also highlight design possibilities for further research on colour change effects and outline a methodology to explore and design screen printed textiles with multi-sensitive qualities.
Introduction
The emergence of Colour Change Materials (CCMs) has enabled the introduction of dynamic chromatic qualities to textiles by sensing and reacting reversibly in response to an external stimulus. This chromic behaviour is based on the variation of the substances microstructure or electronic state, affecting their optical characteristics; absorptance, reflectance, scattering or transmittance [1][2].
CCMs are named according to the stimulus that triggers the changes, and present different properties and behaviours [3].
Thermochromic (TC) leuco dyes change colour upon heating: they fade above their activation temperature and return to the predefined colour below it. TC pigments can be combined with conventional pigments, according to the defined binder type; in this case, thermal variation produces transitions between colours [4,5].
Photochromic (PC) pigments react to Ultraviolet (UV) radiation, being colourless in the stimulus absence and acquiring their predefined colour when exposed to it. Transition between colours can be also attained through combination of PC and conventional pigments [6].
The presence of moisture or water changes the optical characteristics of hydrochromic (HC) pigments. These materials are usually white and opaque in dry conditions, changing to transparent when exposed to water [3][4][5][6][7].
Currently, limited research has focused on systematic methods to develop colour changing textiles, and the combination of various chromic materials remains an under-developed area, which this research addresses. The main objectives are to study the integration and combination processes of these colourants in textiles and to develop a methodology to create chromic textiles with multi-sensitive qualities.
Materials and methods
The chromic pigments handled in this research were supplied by SFXC: water based TC dispersion black, blue, magenta and yellow, with 31ºC activation temperature; water based PC ready formulated ink blue, magenta and yellow; water based HC ready formulated ink white. Conventional pigments were ATUSMIC Magnaprint yellow HG, pink H5B, blue HG and black H3B. Screen printing pastes were formulated with Gilaba vinyl acrylic binder.
Samples were screen printed on a Zimmer Mini MDF R541 table on a cotton substrate. The selection of the table's magnetic field level defines the pressure that the metallic rod-squeegee applies during paste application on the substrate. The levels tested were 1, 3 and 6 from the available range of 1 to 6 (low to high pressure). The diameters of the rod-squeegees used were 12 and 6 mm, and the mesh counts of the screens were 46, 89 and 107 TPI.
After being screen printed, each sample completed a process of drying and thermosetting in a Werner Mathis AG laboratory oven: at 160 ºC for 2 minutes for TC, and at 130 ºC for 3 minutes for PC and HC pigments.
Based on experimental practices in textile engineering and design, the experimental work consisted of three phases, focused on a) individual use of each chromic material; b) combination of each stimulus-sensitive colourant with conventional pigments; c) combination of chromic materials.
The first phase studied the effect of printing parameters in the chromatic qualities of TC, PC and HC textiles. The variables set were screen mesh count, magnetic field level and printed layers (number and process). Considering that the HC colour was commercially available in white, HC samples were developed with the substrate previously screen printed with 3% conventional pigment black.
The second phase studied the combination of each CCM with conventional pigments, applied to textile substrate by different processes: overprinting of conventional and chromic pigments in different pastes and screen printing with the pigments combined in the same paste.
The third phase explored the possibility of combining different types of chromic materials, mixed in the screen printing paste or screen printed with overlapped layers. Sample analysis encompassed qualitative and quantitative methods through laboratory studies, namely colour measurement using a Datacolor International SF600 Plus-CT spectrophotometer with Datacolor TOOLS software, direct observation, and photographic and video records.
Individual use of each chromic material
The analysis of the printing parameters' influence on the chromatic qualities of each pigment type was conducted with a set of sample frameworks. Screen printed samples were developed with the 12 mm rod and encompassed a relationship between screen mesh counts (46, 89 and 107 TPI) and magnetic field levels (Mf 1, 3 and 6). A framework of samples screen printed with each pigment type is presented in figure 1, with samples in the colourized state: TC blue below 31 ºC, PC blue under UV radiation and HC in the dried state. For the three chromic pigments handled, the screen mesh count variable showed a greater effect on the printed textile results than the magnetic field level. Screens with lower mesh count present larger open areas, allowing more paste to flow onto the substrate; with TC and PC pigments, samples developed with the 46 TPI screen are darker than the other samples, but they present uneven printing. This effect suggests an excessive thickness of the printed paste layer. In samples developed with higher mesh counts, the printing was more uniform, although a subtle moiré pattern was perceived. An additional experiment conducted with a lower diameter rod (6 mm) reduced this effect.
HC samples screen printed with the 46 TPI screen present a thicker layer than samples developed with higher mesh counts, which was clearly perceived through the coverage level of the white layer above the black background and through touch. Mean sample thickness was 0.45, 0.35 and 0.33 mm for the 46, 89 and 107 TPI screens, respectively. Besides colour qualities, the printing can change textile characteristics such as thickness, weight, stiffness and handle, which becomes an important design consideration as these variables also build up the textile expression(s).
The change of magnetic field levels produced slight variations between the samples' qualities. TC and PC textiles screen printed with the lowest pressure (Mf 1) have a subtly darker nuance than samples with higher levels, whereas for HC a more uniform printing was attained with Mf 6.
To study the printed layers variable (number and process), a framework for each chromic material and colour was developed, with samples screen printed with one layer (1L), one layer applied on top of a previously printed and dried layer (1+1L), and two consecutive rod-squeegee passages of printed layers (2L). Figure 2 presents an example of the results for each pigment type. The number and process of the printed layers showed a similar effect across the chromic materials tested. In relation to the one-layer application (1L), the overprinting of a second layer (1+1L) increases colour saturation and thickness, also contributing to the evenness of the printed surface. Samples with two layers consecutively applied (2L) attained a blurred effect with areas of different colourization.
When chromic pigments are in the colourless state, printing parameters were not expected to influence textile colours. Nevertheless, an incomplete decolourization was observed in samples with all the pigments used; thus, the subtle colours or opacity can also present slight differences, particularly when comparing samples with the smaller screen mesh count and overprinted layers, as well as samples of different pigment concentrations. At a temperature above 31 ºC, the sample screen printed with 2% TC blue presented 10.48 ΔE* in relation to 10% TC blue (samples developed with the 107 TPI screen and Mf 3). The residual colouration or opacity of chromic pigments can affect how colours or colour change ratios behave when pigments are combined, making it an important variable to assess.
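For reference, a colour difference ΔE* of this kind is, in the common CIE76 convention, the Euclidean distance between two measurements in CIELAB space. The sketch below illustrates that convention only; the spectrophotometer software may use a different ΔE formula, which this excerpt does not specify, and the values shown are illustrative, not the measured data:

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 colour difference between two (L*, a*, b*) tuples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Illustrative values only (not the paper's measurements):
print(delta_e_cie76((70.0, -5.0, -30.0), (75.0, -2.0, -21.0)))  # ~10.7
```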
Chromic materials and conventional pigments
Combinations of chromic materials with conventional pigments were studied through two processes: overprinting and pigments mixed in the paste. Samples developed by overprinting present the static pigment at the background, and different colours at various concentrations were tested. A relationship between these parameters was applied in the paste elaboration with mixed pigments: for example, if a sample was developed with 1% conventional pigment magenta at the background and overprinted with 100% HC, the corresponding sample's paste has 1% conventional pigment magenta with 99% HC. Figure 3 presents an example of each chromic set, developed by the different processes. Considering the relationship between dynamic and static colourants, colour change behaviour can encompass a transition between colours and tones. Besides the colours and concentrations of the pigments combined, the application processes can also influence the textile's chromatic behaviour. The TC and PC examples presented in figure 3 attained a slightly darker colour in samples where the pigments were combined in the paste than by overprinting. However, by decreasing the conventional pigment concentration, the colour of the chromic pigment became more apparent in samples developed by overprinting. The HC pigment is white and also displays a lighter surface through the overprinting process.
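The paste-composition relationship in the example above (x% conventional pigment with the remainder as chromic ink) is simple enough to state as a one-line helper, shown here only to make the rule explicit:

```python
def mixed_paste_composition(conventional_pct: float) -> tuple[float, float]:
    """Paste composition mirroring an overprinted sample: x% conventional
    pigment with (100 - x)% chromic ink, per the example in the text."""
    return conventional_pct, 100.0 - conventional_pct

print(mixed_paste_composition(1.0))  # -> (1.0, 99.0)
```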
Furthermore, when the conventional pigment selected presents a dark colour or is applied in high concentration, the TC and PC colour change effect is subtle or not perceived. The same occurs for the HC pigment when combined with a lighter colour or a low concentration of the static colourant.
Definition of screen printing paste colours, concentrations and application processes involves a set of relationships. The results attained also highlight the importance of a systematic practice-based approach to explore these materials' potential for designing colour change effects.
Combination of chromic materials
The results discussed in the previous sections set a basis to study the combination of different chromic materials types, which initially focused on two pigment types at a time, followed by an experiment with the three chromic materials.
When combining different CCMs, the colour change effect of each chromic pigment type is influenced not just by the stimulus it reacts to, but also by the stimuli the other pigments react to, since they also influence its colour. In this sense, the relationship between the several stimuli is an important parameter in textile colour change behaviour. For example, if a TC blue with a 31 ºC activation temperature is combined with a PC yellow, below 31 ºC the textile is green when exposed to UV radiation (also depending on the concentrations applied) and blue without UV; above 31 ºC, the textile colour is yellow with UV radiation and colourless without.
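The stimulus logic of this TC-blue/PC-yellow example can be written out as a small truth table. The sketch below encodes only the combination described in the text, ignoring gradual transitions, incomplete decolourization and concentration effects:

```python
def perceived_colour(temp_c: float, uv_on: bool) -> str:
    """Colour of a textile printed with TC blue (31 °C activation)
    combined with PC yellow, per the example in the text."""
    tc_coloured = temp_c < 31.0   # TC blue visible below activation temperature
    pc_coloured = uv_on           # PC yellow visible under UV only
    if tc_coloured and pc_coloured:
        return "green"            # blue + yellow
    if tc_coloured:
        return "blue"
    if pc_coloured:
        return "yellow"
    return "colourless"

for t, uv in [(25, True), (25, False), (35, True), (35, False)]:
    print(t, uv, "->", perceived_colour(t, uv))
```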
In addition, when mixing chromic materials in the paste, printing parameters such as screen mesh count and magnetic field, are defined for an individual paste with different pigment types mixed. In overprinting processes, each pigment type can be screen printed with selected printing variables for each layer, using for example a different screen mesh count for each chromic paste.
The combination of three pigment types was researched with pigments mixed in the paste and with the overprinting process. The pigments selected for the study were 10% TC yellow, PC blue and HC. Sample 1 was screen printed with the pigments combined in the paste. Through overprinting, the sequence order of the printed layers encompasses six options; for the following analysis, just four samples were considered (samples 2 to 5), excluding the two samples with the HC background. Figure 4 presents an image of each sample at a selection of different stimuli conditions. Considering the intrinsic dynamic behaviour of the pigments researched, screen printing decisions play a crucial role in colour change effects. The samples presented in figure 4 applied the same pigment types and colours and similar concentrations between mixture in the paste and overprinting, yet their chromatic behaviour at each stimulus condition tested varied significantly, as can be analysed through the samples' colours in each framework column.
When the HC pigment is printed as the top layer (samples 2 & 3), its opacity in the colourized state does not allow an evident perception of changes between the hues of the pigments below it; the effect observed relies on a variation between colour nuances. By mixing the HC in the paste, colour changes can be observed in the HC dry state, varying between yellow, green and blue.
The layer order of the HC pigment also affects how the textile dries and becomes wet, particularly when the PC layer is above the HC: it was perceived that sample 5 did not become wet as fast as the other samples. The TC yellow concentration applied displays a light colour; in this sense, when PC is in the decolourized state, the change between TC and HC states does not create an evident change between yellow and white hues. The pigments' incomplete decolourization discussed above also contributes to this effect.
The PC pigment selected has a dark colour, and the transition between UV absence and exposure displayed a clear influence on the textile's chromatics. During experimentation, PC sensitivity to heat was observed: when heating up the textile, the PC colour saturation decreased but returned to the colourized state while the TC pigment was still colourless. This behaviour will be further researched.
In addition, the chromic materials studied present gradual colour transitions at different rates in the colourizing and decolourizing processes, according to each stimulus-sensitive type. According to the work developed, chromic textiles may exhibit different chromatic and dynamic qualities in response to the combination of external stimuli and the printing parameters.
Conclusion
This research studied processes to combine and integrate chromic pigments in textiles, exploring colour as a dynamic variable of textile design.
The results demonstrate the interdependency of textile colours and colour change behaviour in relation to pigment types, colours, concentrations, fabrication processes and stimuli parameters.
When working with chromic materials, decisions on pigments' combination and printing processes through paste mixture or overprinting are critical, as they significantly affect textile expressions and performance. Printing processes present designers with enhanced opportunities to explore colour change effects, namely to work with textile patterns, where each pigment type printed can add to more than a colour change effect, interacting with other pigments in the composition.
The systematic approach conducted to study the colour change effects through independent variables creates an understanding of TC, PC and HC behaviour and proposes a design methodology to explore and create interactive textile surfaces capable of displaying a wide range of chromatic changes, under diverse external stimuli.
The samples framework developed demonstrates colour change effects and can be used to assist design decisions when creating chromic textiles for expressive and functional purposes.
"Materials Science"
] |
From symmetry breaking to symmetry swapping: is Kasha's rule violated in multibranched phenyleneethynylenes?
The phenomenon of excited-state symmetry breaking is often observed in multipolar molecular systems, significantly affecting their photophysical and charge separation behavior. As a result of this phenomenon, the electronic excitation is partially localized in one of the molecular branches. However, the intrinsic structural and electronic factors that regulate excited-state symmetry breaking in multibranched systems have hardly been investigated. Herein, we explore these aspects by adopting a joint experimental and theoretical investigation for a class of phenyleneethynylenes, one of the most widely used molecular building blocks for optoelectronic applications. The large Stokes shifts observed for highly symmetric phenyleneethynylenes are explained by the presence of low-lying dark states, as also established by two-photon absorption measurements and TDDFT calculations. In spite of the presence of low-lying dark states, these systems show an intense fluorescence in striking contrast to Kasha's rule. This intriguing behavior is explained in terms of a novel phenomenon, dubbed “symmetry swapping” that describes the inversion of the energy order of excited states, i.e., the swapping of excited states occurring as a consequence of symmetry breaking. Thus, symmetry swapping explains quite naturally the observation of an intense fluorescence emission in molecular systems whose lowest vertical excited state is a dark state. In short, symmetry swapping is observed in highly symmetric molecules having multiple degenerate or quasi-degenerate excited states that are prone to symmetry breaking.
Introduction
Kasha's rule, an empirical founding principle in molecular spectroscopy, states that fluorescence can only appreciably stem in molecules from the lowest-energy excited state having the same spin multiplicity as the ground state. As a consequence, to observe fluorescence, the lowest excited state should be a bright state, i.e., it should have a sizable transition dipole moment from the ground state [1-3]. The rationale behind this phenomenological rule is simple: after excitation, a molecule quickly relaxes to the lowest excited singlet (for simplicity, we consider closed-shell molecules with a singlet ground state) so that fluorescence can be observed from this state, often called Kasha's state. However, to observe spontaneous emission, the fluorescence probability must be large enough to overcome other relaxation processes such as nonradiative decay and intersystem crossing. Kasha's rule explains why J-aggregates are strongly fluorescent while H-aggregates are dark [4-6], and it allows fluorescent polymers to be distinguished from non-fluorescent ones based on the symmetry of the excited states, as dictated by electron correlation and vibronic coupling [7]. Nonetheless, Kasha's rule is not an exact theorem: even formally non-fluorescent systems may show very weak fluorescence due to vibronic coupling, as in the well-known case of H-aggregates where a weak and red-shifted fluorescence is observed [8-12]. More impressive deviations from Kasha's rule are observed in the rare systems where dual fluorescence occurs, originating from an anomalously long-lived high-energy singlet state as well as from Kasha's state [13-17]. In this work, we report a joint experimental and theoretical investigation of highly symmetric multibranched phenyleneethynylenes characterized by a dark Kasha's state, but showing an intense fluorescence. This seeming violation of Kasha's rule is the result of a symmetry-breaking phenomenon that lowers the energy of a bright excited singlet below the dark Kasha's state. Herein we dub this swapping of excited states induced by symmetry breaking as symmetry swapping. Symmetry breaking is a powerful concept that emerged in the field of condensed matter physics [18] to describe systems represented by a perfectly symmetric Hamiltonian that collapse into a state with reduced symmetry. Of course, the global symmetry is regained by the presence of several equivalent broken-symmetry states that the system may choose to collapse in. Indeed, multistability is a signature of symmetry breaking. In chemical systems, one of the first examples of symmetry breaking can be traced back to mixed valence complexes, wherein two equivalent metallic centers share a few electrons [19,20]. In spite of the equivalence of the two metals, if the electronic charge is not equally distributed, a broken-symmetry state is observed. The concept of symmetry breaking was explicitly introduced in the chemical community in a work addressing the anomalous fluorescence solvatochromism in nominally symmetric two-branched charge-transfer dyes [21]. Spontaneous symmetry breaking in the ground state of long cyanine dyes has also been discussed [22-24] and, more recently, symmetry lowering induced by external perturbations (counterions) was reported [25]. In multibranched molecules having formally equivalent branches, symmetry breaking implies a geometrical distortion leading to inequivalent branches in the excited states, as recognized and thoroughly investigated from both experimental and theoretical perspectives [26-35].
Symmetry breaking also plays a crucial role in the generation of charge separation in multichromophoric assemblies [36-42]. In multibranched multipolar chromophores, whose low-lying excitations are dominated by charge transfer degrees of freedom, symmetry breaking is driven by polar solvation, as theoretically predicted [21,26] and experimentally verified [29,43,44]. In these multipolar systems, having formally equivalent branches, symmetry breaking implies charge localization in one branch, with a concomitant geometrical distortion, so that in the relaxed excited state the molecular branches are no longer equivalent.
Herein, we present the dramatic influence of excited-state symmetry breaking on the photophysics of a family of multibranched molecules (Scheme 1) based on phenyleneethynylenes (PEs). In these molecules charge transfer degrees of freedom are not relevant, due to the lack of electron-donor or acceptor groups. Accordingly, polar solvation plays a marginal role, as demonstrated experimentally in this manuscript. These rigid, linear, π-conjugated molecular systems show, however, intriguing physical properties [45-53]: (i) the rod-like molecular structure of PE is resistant to isomerization and (ii) the cylindrical nature of the carbon-carbon triple bonds maintains the π-electron conjugation at any degree of rotation of the phenyl rings. By exploiting these properties, PE-based motifs with branched phenylacetylene have been extensively exploited for the design of molecular materials. For instance, PE-based molecular systems have been explored for surface engineering, as elements in optoelectronic systems such as molecular electronics, and for the design of donor-acceptor systems [54-59]. However, a systematic understanding of the effect of structural branching and molecular symmetry on the photophysics of this class of molecules is still lacking. In this regard, multibranched systems having two, three, four and six equivalent phenylacetylene branches, arranged to share the same central benzene unit, have been synthesized, as shown in Scheme 1. The molecules are labeled as 2L, 2B, 3, 4, and 6, where the numbers refer to the number of arms, while L and B denote linear and bent geometries, respectively. The presence of an increasing number of branches arranged in different geometries, with different symmetries and degrees of conjugation, offers a perfect playground to understand excited-state symmetry breaking in these systems, wherein the role of solvent polarity is marginal due to the absence of electron donating/accepting groups. The high density of excited states in these systems leads to intriguing photophysics and to a variety of distinctively different properties. Specifically, the investigated family comprises molecules (Scheme 1) displaying a classical symmetry-preserving behavior as well as excited-state symmetry breaking, with a special case exhibiting the novel symmetry swapping phenomenon.
Results and discussion
PEs 2L, 2B, 3 and 4 are synthesized via the Heck-Cassar-Sonogashira-Hagihara cross-coupling reaction, while a tandem Negishi-Sonogashira cross-coupling protocol [60] is adopted for the synthesis of the hexaethynylbenzene derivative 6 (Scheme 2). Details of the synthesis and characterization are provided in the ESI.†
Establishing low-energy dark states in PEs
The structurally similar PEs in Scheme 1 show contrasting photophysical properties. The UV-vis absorption and fluorescence emission and excitation spectra of 2L, 2B, 3 and 6 dissolved in liquid CHCl3 and in glassy matrices are displayed in Fig. 1 (DPA and 4 in Fig. S1†) and the relevant data are summarized in Tables 1 and S1.† Of particular interest are the strikingly different Stokes shifts, measured as the energy difference between the 0-0 vibronic transitions in absorption and emission, observed for the different compounds. Specifically, the data obtained in glassy solvent, reported in Table S2,† are not affected by solvent relaxation and therefore give direct information on the molecular relaxation upon excitation. The Stokes shift is negligible for both DPA and 2L, and very small for 4, while being sizable for 2B, 3 and 6 (Fig. 1 and S1; Table S2†). These results are in line with the Stokes shifts observed in CHCl3 under ambient conditions (Fig. 1 and Table 1), and confirm the marginal role of solvent polarity in these systems, as further supported by the spectra collected in a non-polar solvent, cyclohexane (Fig. S2†). Finally, we notice that 2B, 3 and 6 are moderately fluorescent (φ_f = 0.15-0.32) whereas 2L is highly fluorescent (φ_f = 0.94). An obvious question now arises: why do the Stokes shifts of these PEs display such a large difference? Quite interestingly, the absorption spectra of 3 and 6 show a tiny band located on the red side of the most intense peak, suggesting the presence of low-energy dark states acquiring weak intensity via vibronic coupling (Fig. 1 and S3†). Two-photon absorption is a convenient and unambiguous technique to locate dark states. We therefore measured two-photon absorption spectra with the two-photon excited fluorescence (TPEF) technique. In our experimental setup (vide infra), excitation can be tuned in the 700-1300 nm region, so that TPEF data could only be collected for 4 and 6. Fig. 2 shows the two-photon excitation spectra as a function of the transition wavelength (i.e., half the excitation wavelength). Interestingly, two distinct bands are observed in the two-photon excitation spectrum of 6 in the 400-450 nm spectral region, confirming the presence of states that are dark in linear absorption. In contrast, the two-photon excitation spectrum of 4 does not show any signature ascribable to low-energy dark states. These experimental results confirm that the lowest singlet excited states of 6 are optically dark, in striking contrast with the observed sizable fluorescence quantum yield. The presence of multiple excited states very close in energy in 2B, 3 and 6 is further confirmed by fluorescence anisotropy experiments (Fig. 3 and S4†) that were carried out in glassy solvent (to avoid rotational diffusion). The PE molecules are classified into three groups based on the anisotropy spectra presented in Fig. 3: (i) DPA and 2L, (ii) 3 and 6 and (iii) 2B and 4. The excitation anisotropy of DPA and 2L is close to the limiting value of 0.4, and is approximately flat within the absorption band.
[Scheme 1 caption: Molecular structures of the phenyleneethynylene derivatives along with their 'ball and stick' representation, which is used in the subsequent sections. In brown, the symmetry group of the symmetric structure is given (except for DPA, the reference compound). PEs are labeled as 2L, 2B, 3, 4, and 6, with the numbers denoting the number of phenylacetylene arms; L and B denote linear and bent geometries, respectively.]
A marginal wavelength dependence is observed for DPA, which is ascribed to the presence of other excited states at high energy. Molecules 3 and 6 have a flat excitation anisotropy spectrum, but with a distinctively low value of ∼0.05-0.1, suggesting the presence of multiple degenerate excited states (see the ESI† for further discussion). The red-edge effect [61] is responsible for the abrupt increase of anisotropy at the red edge of the absorption of 3 and, marginally, of 6. Interestingly, moving from higher to lower wavelength, the excitation anisotropy of 4 and 2B varies from 0.2 to 0 and from 0.3 to 0.2, respectively. The variation of excitation anisotropy in the spectral region of interest is a clear indication of the presence of multiple excited states.
[Scheme 2 caption: Synthesis of PEs (2L, 2B, 3, 4 and 6).]
[Fig. 1 caption: Absorption/excitation (blue lines; absorption spectra measured at room temperature, excitation spectra at low temperature) and emission spectra (brown lines) of PEs. For each compound, the left column shows spectra measured in CHCl3 at room temperature and the right column shows results in glassy matrices (propylene glycol at 190 K for 2B, 2L and 3; 2-methyl-THF at 77 K for 6).]
In an effort to shed light on the intriguing photophysics of these systems, we performed time-resolved transient absorption measurements. Unfortunately, with our experimental setup (see the Experimental section) the pump region does not extend below 350 nm, hindering measurements on 2B and 3. We therefore investigated molecules 2L and 6 (results in Fig. S5†) as representatives of molecules that show negligible and large Stokes shifts, respectively. The stimulated emission of 6 is observed at 450 nm, and an excited state absorption appears at ∼600 nm. The stimulated emission of 2L falls outside the accessible spectral window, and we only detect the excited state absorption at 620 nm. The decay of the excited state absorption signal of 2L is very fast, being almost complete within the first 1.5 ns. In contrast, the signal at 600 nm observed for 6 survives much longer. These results are in line with the different fluorescence decays measured for the two dyes, amounting to 0.6 ns and 20 ns, respectively (Table 1).
Vertical transitions from the optimized ground state
Having experimentally established the presence of low-lying dark excited states in 3 and 6, we performed TDDFT calculations on the PE structures to obtain deeper insight into their ground- and excited-state properties. Ground-state geometries are obtained at the DFT level of theory (CAM-B3LYP functional and 6-31G* basis set; more details in the Experimental section). All calculations are run in the gas phase since, as confirmed by the experimental data (Fig. 1 and S2†), solvent polarity has marginal effects in these molecules. On the other hand, continuum solvent approaches, commonly adopted to account for solvation, may lead to uncontrolled results, particularly in systems with several excited states that are close in energy and of different character [62-64]. Tables 2 and S3† list the absorption energies, oscillator strengths and transition dipoles calculated at the ground-state optimized geometry. All multibranched systems in Scheme 1 show symmetric structures in the ground state, with equivalent branches (Fig. S6†). The calculated vertical transition energies compare well with experiment. Interestingly, the linear 2L molecule has a single bright transition at low energy, polarized along the main molecular axis, while the corresponding bent molecule 2B shows two almost degenerate bright transitions, polarized along mutually perpendicular directions. The computational results confirm the experimental observation that the lowest-energy excited states are dark for both 3 and 6. Specifically, the first excited state of 3, and the first and second excited states of 6, have negligible oscillator strength (Table 2). The dark states of either 3 or 6 involve excitations between two pairs of degenerate orbitals, as discussed in detail in the ESI (Section S3.1, Table S5 and Scheme S6†), where we compare the frontier orbitals of 3 and 6 to those of the DPA molecule. Results in Table S4† point to degenerate HOMO/HOMO−1 and LUMO/LUMO+1 for both 3 and 6, in line with the high symmetry of the two molecules, since both the D3h and D6h groups support doubly degenerate representations.
[Fig. 3 caption fragment: ... (3); 450 nm (6). Fluorescence anisotropy spectra of DPA and 4 are reported in Fig. S4.]
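For readers who want to set up a calculation of this kind, the level of theory described above translates into a few lines of PySCF. This is our choice of package for illustration only: the excerpt does not state which code the authors used, and the acetylene geometry below is a stand-in, not one of the PE structures.

```python
# Hypothetical TDDFT setup mirroring the level of theory in the text:
# CAM-B3LYP / 6-31G*, gas phase, lowest few singlet vertical excitations.
from pyscf import gto, dft, tdscf

mol = gto.M(
    atom="C 0 0 0.0; C 0 0 1.2; H 0 0 -1.06; H 0 0 2.26",  # stand-in geometry
    basis="6-31g*",
)
mf = dft.RKS(mol)
mf.xc = "camb3lyp"   # range-separated functional used in the paper
mf.kernel()

td = tdscf.TDDFT(mf)
td.nstates = 6       # number of vertical excitations to solve for
td.kernel()
td.analyze()         # prints excitation energies and oscillator strengths
```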
Symmetry breaking and symmetry swapping
To investigate the geometrical rearrangement in the excited state and obtain information about symmetry breaking, we have optimized the excited states of the PE structures at the TDDFT level of theory (CAM-B3LYP functional, 6-31G* basis set). Results for the PEs showing a large Stokes shift (2B, 3 and 6) are listed in Table 3, and those for the molecules showing a small Stokes shift (DPA, 2L and 4) are presented in Table S6.† The optimization of the lowest excited state of 2B leads to a broken-symmetry geometry, with nonequivalent bonds in the two molecular arms (Fig. 4 and S7†), so that the molecular symmetry lowers from C2v to Cs. Fig. 4a and b show how the energy and oscillator strength of the three lowest excited states of 2B vary when the system is driven from the ground-state geometry to the excited-state geometry via a continuous variation of the coordinate (see the Experimental section for technical details about the calculation). In the ground-state geometry (C2v group) the first and third excited states (polarized along the long molecular axis, y) transform as B2, and the second excited state (polarized along z, the short molecular axis) transforms as A1. As soon as we move along the symmetry-breaking coordinate, the molecular symmetry lowers to Cs and the three states all transform as A′. The two lowest-energy states are very close in energy and their mixing is responsible for the lowering of the energy of the first excited state, driving the symmetry breaking. Both states are optically bright in the ground-state geometry, even if they are polarized along orthogonal directions: their mixing of course changes the direction of polarization. Quite interestingly, an avoided crossing between the second and third excited states is responsible for the redistribution of oscillator strength between the two states. We notice that a specularly reversed broken-symmetry geometry is obtained from the optimization of the second excited state (Table 3 and Fig. S7†).
The situation is much more interesting for molecule 3. In the ground-state equilibrium geometry, the three molecular branches are equivalent and the molecule belongs to the D3h group. As discussed in the ESI,† the first and fourth excited states (A2′ and A1′ symmetry, respectively) are dark states, while the second and third excited states are degenerate (E′ symmetry) and optically bright, having a large transition dipole moment from the ground state. The E′ states are indeed responsible for the intense absorption of 3 at 306 nm, while the dark A2′ state is responsible for the weak (vibronically allowed, top right panel in Fig. 1 and S3†) transition at ≈330 nm, as also supported by fluorescence excitation anisotropy data (Fig. 3). The optimized first excited state stays dark, as the molecule maintains the D3h symmetry (Table 3 and Fig. S7†). This result does not explain the large fluorescence quantum yield of 3. Nonetheless, upon optimization, the second and third excited states of 3 undergo symmetry breaking: both optimizations converge towards two equivalent geometries in which one molecular arm differs from the other two (Table 3 and Fig. S7†), reducing the molecular symmetry to C2v. These relaxed states are bright and, most importantly, their energy is lower than the energy of the optimized first excited state. Fig. 4c and d show the evolution of the energies and oscillator strengths of the four lowest excited states of 3 with the symmetry-breaking coordinate. When moving away from the ground-state geometry, the lowest excited state evolves into a B2 state in the C2v group and the fourth excited state into A1, while the two degenerate E′ states in D3h become non-degenerate and evolve into A1 and B2 states in C2v. As long as two states have the same symmetry, they can mix. Specifically, the mixing between the two states with A1 symmetry lowers the energy of one state below the energy of the B2 state, so as to produce a symmetry swapping, i.e., a swapping of excited states induced by symmetry breaking. The B2 state, while optically allowed by symmetry, maintains a negligible oscillator strength that cannot explain the intense fluorescence of 3. Indeed, the fluorescent state is one of the E′ states that evolves into a low-energy A1 state in the broken-symmetry geometry. For the sake of completeness, Fig. S8† shows the analogous geometry scan, performed along the effective coordinate that connects the optimized ground-state geometry to the optimized geometry of the first (dark) excited state, confirming again that upon optimization of the A2′ state, symmetry is conserved and the state has higher energy than the A1 state in the broken-symmetry geometry. Fig. S9 and S10† show the scans for molecules 4 and 6, respectively. We stress that the potential energy surfaces in Fig. 4 and S8-S10† give information about the energy of the different states at the different geometries, but they cannot provide any information about the actual relaxation path followed by the molecule. In summary, in the ground-state geometry, as relevant to absorption processes, the lowest excited state is a dark state, so that 3 should not be a fluorescent molecule according to Kasha's rule. However, when the molecule is excited to the bright doubly degenerate E′ states, it relaxes to a broken-symmetry geometry. In this relaxed geometry, the bright state has lower energy than the dark state (Fig. 4c), conclusively explaining the large fluorescence quantum yield of 3 without violating Kasha's rule.
We designate this new phenomenon, a swapping of excited states induced by symmetry breaking, as symmetry swapping.
The situation is more complex for 6, due to the presence of several excited states with similar energies and the possibility of different stable conformers in either the ground or the excited states, which makes the geometry optimization difficult. However, as for 3, the first excited state of 6 stays dark and symmetric upon geometry optimization. Instead, the second excited state of 6, a dark state at the ground-state geometry and almost degenerate with the first excited state (Table 2), breaks the symmetry upon relaxation (two arms become nonequivalent with respect to the other four arms; Fig. S7 and S10†) and becomes bright. The two lowest relaxed excited states of 6 are very close in energy, so that the bright and dark states are thermally populated, justifying the observed emission (Fig. S10†). Overall, all molecules showing a large Stokes shift, 2B, 3 and 6, undergo symmetry breaking after excitation. Upon relaxation, the energy of the broken-symmetry excited states significantly lowers, explaining the large observed Stokes shift. Moreover, in molecule 3, the novel phenomenon of symmetry swapping is observed. A detailed theoretical investigation shows that the three molecular systems undergoing excited-state symmetry breaking are characterized by the presence of degenerate (3 and 6) or quasi-degenerate (2B) excited states, a key feature for multistability. This point is illustrated in Fig. 5, showing a sketch of the potential energy surfaces relevant to 2L and 2B. The potential energy surfaces are plotted as a function of an effective symmetry-breaking coordinate, d, that measures the asymmetry between the two arms. For 2L, all the potential energy surfaces are well behaved, showing a single minimum at d = 0: in all states the molecular symmetry is preserved. Instead, for 2B, the potential energy surface relevant to the first excited state shows a double minimum, so that fluorescence occurs from a distorted state with a finite d value. Both systems have two equivalent arms, and the excited states can be associated with two diabatic states with the excitation localized on either arm. The potential energy surfaces relevant to the two diabatic states (dashed red lines in Fig. 5a and b) are two equivalent parabolas centered at equal and opposite d values. In 2L, the strong interaction between the two arms, mediated by para-conjugation in the central benzene ring, is large enough to obliterate in the excited-state potential energy surface any memory of the two minima of the diabatic potential energy surfaces. In 2B, instead, the much weaker meta-conjugation leads to a potential energy surface with two equivalent minima for the first excited state. Therefore, upon light absorption, 2B is driven into a multistable excited state: after excitation to a symmetric vertical state, the system rapidly relaxes towards one of the two equivalent minima, and the emission originating from the broken-symmetry state is largely red-shifted compared to absorption, explaining the large Stokes shift.
A similar but more complex picture applies to 3, where, as for 2B, the interaction between the three arms is weak due to meta-conjugation. The potential energy surfaces relevant to the three lowest electronic excited states of 3 are sketched in Fig. 6. For this three-branched system, two symmetry-breaking coordinates must be introduced: 26,27 indeed, the three coordinates describing the molecular relaxation along each of the three equivalent arms, d1, d2 and d3, can be combined into a symmetric coordinate d+ = d1 + d2 + d3 that does not break the molecular symmetry, and two degenerate and mutually orthogonal symmetry-breaking coordinates dy = 2d1 − d2 − d3 and dx = d2 − d3.

Fig. 5 (caption): Black and grey traces in panels (a) and (b) represent the ground state and dark excited states, respectively. The blue and green traces represent bright excited states, of which the green trace is the emissive state. Panel (a) presents 2L, in which symmetry is preserved: due to strong coupling, the adiabatic potential energy surfaces of 2L have a single minimum and hence no symmetry breaking. Panel (b) presents 2B, wherein symmetry breaking is seen: due to weak coupling, the potential energy surface of the first excited state of 2B (green) has a double minimum and shows symmetry breaking. Bond lengths are reported in Å.

Fig. 6 (caption): Qualitative sketch of the potential energy surfaces of the first three excited states of molecule 3, plotted as a function of the two effective symmetry-breaking coordinates dx and dy. The left panel displays three-dimensional potential energy surfaces; the front view is given in the right panel. The blue and green surfaces represent bright excited states, of which the green surface is the emissive state. The grey surface is the dark state. The arrows in the right panel show the excited states involved in the absorption and emission processes.
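The linear combinations defined above are easy to verify numerically. The short sketch below (plain Python/NumPy, with normalization factors added purely for illustration) shows that distorting each of the three arms in turn produces three points 120° apart in the (dx, dy) plane, matching the three equivalent broken-symmetry minima of Fig. 6.

```python
import numpy as np

def symmetry_coordinates(d1, d2, d3):
    """Combine per-arm distortions into one symmetric and two
    symmetry-breaking coordinates (normalized for illustration)."""
    d_plus = (d1 + d2 + d3) / np.sqrt(3)   # totally symmetric, preserves D3h
    d_y = (2 * d1 - d2 - d3) / np.sqrt(6)  # symmetry-breaking
    d_x = (d2 - d3) / np.sqrt(2)           # symmetry-breaking, orthogonal to d_y
    return d_plus, d_x, d_y

# Distorting one arm at a time gives three equivalent points separated
# by 120 degrees in the (d_x, d_y) plane.
for arms in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]:
    _, dx, dy = symmetry_coordinates(*arms)
    print(arms, "-> angle %.0f deg" % np.degrees(np.arctan2(dy, dx)))
# (1,0,0) -> 90 deg, (0,1,0) -> -30 deg, (0,0,1) -> -150 deg
```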
At dx = dy = 0, corresponding to the ground-state geometry as relevant for absorption processes, the lowest excited state is a dark state (grey in Fig. 6), while the two higher excited states (green and blue) are degenerate, as calculated by TDDFT. The green surface in Fig. 6 clearly shows multistability, with three equivalent minima corresponding to broken-symmetry geometries. The symmetry swapping between a dark (grey) and a bright (green) state, promoted by symmetry breaking, determines both the strong emissivity of the system and the large Stokes shift.
Conclusions
In conclusion, by combining experimental and computational methods, we have investigated the origin of the unusual photophysics exhibited by a series of multibranched PEs with different symmetries. The novel phenomenon of excited-state symmetry swapping is unveiled, explaining the observation of large fluorescence intensity in systems whose lowest-energy excitation is dark, without violating Kasha's rule. Among the molecules under investigation, para-conjugation promotes sizeable interactions among the molecular branches in 2L and 4: large interactions lead to a large splitting of the energy levels, i.e., to stable excited states well separated in energy. Multistability and symmetry-breaking phenomena are observed in multibranched systems with degenerate or quasi-degenerate excited states. Degeneracy or quasi-degeneracy is observed in multibranched systems where the interaction among the different molecular arms is weak. Accordingly, symmetry breaking and/or symmetry swapping is observed in molecules 2B, 3 and 6, where meta-conjugation is responsible for weak interactions.
The large Stokes shifts observed for molecules 2B, 3 and 6 are a clear signature of excited-state symmetry breaking, with emission originating from a relaxed excited state with non-equivalent arms. In addition to symmetry breaking, 3 exhibits the novel symmetry-swapping phenomenon, which shows up as an inversion of the energy order between a low-energy dark state and a higher-energy bright state, thus providing an explanation for the unusual photophysics of the system without violating Kasha's rule. The lowest excited state has in fact a different nature at the ground-state geometry (as relevant to absorption processes) and at the geometry of the first excited state (as relevant to fluorescence). Specifically, the lowest excited state is a dark state at the ground-state geometry, suggesting non-emissive behavior, but after excitation the molecule relaxes to a broken-symmetry geometry where the lowest excited state is bright, in a novel phenomenon dubbed here symmetry swapping. Molecule 6 is a more complex system, with multiple degenerate or quasi-degenerate excited states and multiple low-energy dark states. However, the concomitant presence of conjugation promotes the interaction between different arms: in this system, the lowering of symmetry upon excited-state relaxation is responsible for turning one of the dark states bright.
The new phenomenon of excited-state swapping described here for a molecule belonging to the family of multibranched PEs can virtually occur in many systems with degenerate or quasi-degenerate excited states that are prone to excited-state symmetry breaking. In multipolar dyes, symmetry breaking is driven by polar solvation, which largely stabilizes polar broken-symmetry states. The relevant dynamics, typically on the picosecond timescale, is easily addressed in pump-probe experiments. In contrast, in PEs polar solvation plays a marginal role, and indeed symmetry swapping is also observed in a glassy solvent, where polar solvation is ineffective. Since it is not related to polar solvation, symmetry breaking and/or symmetry swapping in PE systems occurs on an ultrafast timescale (a few hundred femtoseconds), as related to vibrational relaxation, making time-resolved experiments more challenging.
The phenomenon of excited-state swapping provides insight into some of the less understood photophysical properties of highly emissive multibranched conjugated systems with low-energy dark excited states. The novel phenomenon of symmetry swapping presented herein can provide new directions in the design of multibranched molecular systems for optoelectronic applications.
UV-vis and fluorescence measurements at room temperature
Electronic absorption spectra were recorded using quartz cuvettes of 1 cm path length on a Shimadzu UV-3600 UV-vis-NIR spectrophotometer. Steady-state PL spectra were recorded on a Horiba Jobin Yvon fluorimeter, in a quartz cuvette of 1 cm path length.
Low-temperature fluorescence
Low-temperature measurements were performed with an FLS1000 Edinburgh fluorometer, equipped with automatic polarizers, using a liquid-nitrogen-cooled optical cryostat (OptistatDN, Oxford Instruments) equipped with a temperature controller (ITC601, Oxford Instruments). 2-MeTHF (stored over molecular sieves overnight and filtered) and propylene glycol solutions were rapidly cooled down to 77 and 190 K, respectively, obtaining transparent glasses. Solutions for spectroscopic measurements were prepared using spectroscopic-grade or HPLC solvents. Fluorescence spectra were corrected for the excitation intensity and the detector sensitivity.
Two-photon absorption spectra
Two-photon absorption cross sections of 4 and 6 in chloroform were obtained by comparing their two-photon excited fluorescence (TPEF) intensity to that of a reference, a 6 × 10−7 M solution of fluorescein in water at pH > 10 (0.1 M NaOH), following a procedure described in the literature. 65 The experimental setup consists of a Nikon A1R MP+ multiphoton upright microscope equipped with a Coherent Chameleon Discovery femtosecond pulsed laser (∼100 fs pulse duration with 80 MHz repetition rate, tunable excitation range 700-1300 nm). A 25× water dipping objective with a numerical aperture of 1.1 and 2 mm working distance was employed for focusing the excitation beam and for collecting the TPEF. The TPEF signal was directed by a dichroic mirror to a high-sensitivity GaAsP photomultiplier detector, connected to the microscope through an optical fiber and preceded by a dispersive element. This detector allowed the spectral profile of the TPEF signal (wavelength range 400 to 620 nm for 4 and 6, or 430 to 650 nm for fluorescein, with a bandwidth of 10 nm) to be recorded. Correction for the wavelength-dependent sensitivity of the detector was applied.
The measurements were carried out using 1 cm quartz cells placed horizontally under the microscope objective. Distilled water was employed to dip the objective, and the focal point was moved as close as possible to the upper cuvette wall, at the same exact height for the reference and the samples. The concentrations of the sample solutions were 2 × 10−5 M and 8 × 10−6 M for 4 and 6, respectively. The corrected fluorescence spectra obtained by one- and two-photon excitation were well superimposed for all the investigated solutions, confirming that the emitting state is the same for both processes. Therefore, for each sample and for the reference, we assumed the same fluorescence quantum yield for one- and two-photon excited fluorescence.
Following the procedure reported in the literature, 65 the two-photon absorption (TPA) cross section of the sample, σ2,new, as a function of the excitation wavelength, λ, can be obtained as

$$\sigma_{2,\mathrm{new}}(\lambda)=\sigma_{2,\mathrm{ref}}(\lambda)\,\frac{F_{\mathrm{new}}(\lambda)}{F_{\mathrm{ref}}(\lambda)}\,\frac{\phi_{\mathrm{ref}}}{\phi_{\mathrm{new}}}\,\frac{C_{\mathrm{ref}}}{C_{\mathrm{new}}}\,\frac{n_{\mathrm{ref}}}{n_{\mathrm{new}}}\,\frac{P_{\mathrm{ref}}^{2}(\lambda)}{P_{\mathrm{new}}^{2}(\lambda)}$$

where σ2,ref is the TPA cross section of the reference, φ is the fluorophore quantum yield, C is the solution concentration, n is the refractive index, P(λ) is the laser power at wavelength λ, and F(λ) is the integral of the TPEF spectrum, evaluated after correcting the emission spectrum for the detector sensitivity. The subscripts "new" and "ref" refer to the sample and to the reference, respectively. For each excitation wavelength, the quadraticity of the signal with respect to the excitation power was verified for all the solutions, and the maximum deviation does not exceed 20%. The absolute values of σ2,ref(λ) of fluorescein were taken from the literature. 66 TPA cross sections are expressed in Goeppert-Mayer units: 1 GM = 10−50 cm4 s photon−1.
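The relation above follows the standard TPEF-referencing scheme; in particular, the placement of the refractive-index ratio is the common convention and should be checked against ref. 65. Under that assumption, the evaluation reduces to a product of measured ratios, as in this minimal sketch (function and argument names are illustrative):

```python
def tpa_cross_section(sigma2_ref, F_new, F_ref, phi_new, phi_ref,
                      C_new, C_ref, n_new, n_ref, P_new, P_ref):
    """Two-photon absorption cross section of the sample, in the same
    units as sigma2_ref (e.g. GM), from TPEF signals measured under
    identical conditions for sample ("new") and reference ("ref")."""
    return (sigma2_ref
            * (F_new / F_ref)        # integrated, corrected TPEF spectra
            * (phi_ref / phi_new)    # fluorescence quantum yields
            * (C_ref / C_new)        # molar concentrations
            * (n_ref / n_new)        # solvent refractive indices
            * (P_ref / P_new) ** 2)  # laser powers (quadratic signal)
```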
Transient absorption spectroscopy
The femtosecond transient absorption measurements of 2L and 6 in toluene were carried out by exciting the samples at 350 nm. The experimental setup comprises a Ti:sapphire amplified laser system: the output of a main oscillator (Mai Tai SP, Spectra Physics, 800 nm, 80 MHz) was used as the seed for a regenerative amplifier, which generates amplified output pulses centered at 800 nm, with 5 mJ energy per pulse at a 1 kHz repetition rate. From the amplified output, a 350 nm pump pulse was generated using an optical parametric amplifier, and the residual 800 nm pulse was passed through a motorized delay stage inside the pump-probe spectrometer. The white-light continuum (WLC) was then generated by focusing this residual pulse into a rotating CaF2 plate of 2 mm thickness. A 1:1 beam splitter divides the WLC into probe and reference pulses. The pump power was kept sufficiently low (2.5 mW) using neutral-density filters, and the samples were continuously stirred in a rotating quartz cell with a 1.2 mm path length to avoid laser-induced photobleaching. A dual diode-array detector with a 200 nm detection window was used to record the transient absorption spectra. The solvent response was measured under the same experimental conditions (10% benzene in methanol) to determine the instrument response function (IRF), which was around 120 fs. The optical densities of the samples were kept at 0.6 at the excitation wavelength (λ = 350 nm) to improve the signal-to-noise ratio in the transient absorption spectra.
DFT/TDDFT calculations
All TD-DFT calculations were performed using the Gaussian 16 computational package, 67 with the CAM-B3LYP functional and the 6-31G(d) basis set, in the gas phase. Initial molecular geometries were generated using GaussView 6.0 software. 55 The geometry of all the PE derivatives was optimized before performing the vertical transition calculations. Optimized molecular structures and HOMO/LUMO electron density distributions were analyzed using GaussView 6.0 and Avogadro software.
Initial and final geometries in Fig. 3 and S7† correspond to the ground- and excited-state equilibrium geometries, respectively. Intermediate geometries are obtained by moving the atoms along the displacement vector that connects the initial and final geometries.
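A minimal sketch of this linear interpolation, assuming the two structures are given as Cartesian coordinate arrays with identical atom ordering and orientation (in practice the geometries should first be superimposed, for example with a Kabsch fit):

```python
import numpy as np

def scan_geometries(initial, final, n_points=11):
    """Intermediate geometries along the displacement vector connecting
    the ground-state (initial) and excited-state (final) structures.

    initial, final: (n_atoms, 3) arrays of Cartesian coordinates.
    Returns a list of geometries from t = 0 (initial) to t = 1 (final).
    """
    initial = np.asarray(initial, float)
    final = np.asarray(final, float)
    step = final - initial
    return [initial + t * step for t in np.linspace(0.0, 1.0, n_points)]
```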
Conflicts of interest
There are no conflicts to declare. | 8,243.2 | 2023-01-13T00:00:00.000 | [
"Physics"
] |
Measuring the moderating influence of gender on the acceptance of e-book amongst mathematics and statistics students at universities in Libya
The success of using any type of technology in education depends to a large extent on the acceptance of information technology (IT) by students. Therefore, understanding the factors influencing the acceptance of the electronic book (e-book) is essential for decision-makers and those interested in the e-book industry. Based on an extended technology acceptance model (TAM), this paper examines the impact of several factors on students' behavioural intention (BI) toward adoption of the e-book in mathematics and statistics. This paper also investigates the effect of gender differences on the relationships between the factors affecting the acceptance of the e-book. A self-administered survey was used to collect data from 392 mathematics and statistics undergraduate students. The research model showed that factors related to the social factor and users' characteristics are the critical factors affecting the acceptance of the e-book. The results also indicated that perceived usefulness (PU), perceived ease of use (PEOU) and students' attitude (AU) strongly affect students' BI. Self-efficacy (SE) has a significant impact on PEOU, while social influence (SI) has a significant influence on students' AU. Moreover, the results confirmed that most of the TAM constructs were significant in both models (males and females), with no differences between males and females; only PEOU was affected by the gender moderator. The results showed that the impact of SI was greater on females than on males. On the other hand, female students were more confident in the use of the e-book than males. In general, the female students' model was more powerful in explaining the variance than the males' model.
Biographical notes: Mrs Asma Mohmead Smeda is currently a PhD candidate in the School of Engineering and Information Technology at Murdoch University in Western Australia, where she is undertaking research entitled "Investigation of the Perception and Adoption of e-book amongst Mathematics and Statistics Students at Universities in Libya". She holds a Master's degree in Mathematics and Statistics Sciences from the Academy of Graduate Studies, Tripoli, Libya, and a Bachelor's degree in Data Analysis and Computer Science from Al-Jabal Al-Gharbi University, Libya. Prior to her PhD studies, she was a full-time lecturer at Al-Jabal Al-Gharbi University.
Introduction
The e-book is defined as a digital representation of printed material presented via electronic devices or mediums such as e-book readers, personal computers, smartphones, netbooks, PDAs and tablets (Poon, 2014; Letchumanan & Tarmizi, 2011). The content of an e-book can comprise an electronic copy of printed materials such as paper books (i.e., textbooks), journals, research, reports and magazines (Embong, Noor, Hashim, Ali, & Shaari, 2012b). Most e-books have features such as note-taking, highlighting, bookmarking, searching, and annotating (Khanh & Gim, 2014; Park & Kim, 2014). The e-book is becoming more widespread in developed countries due to its dynamic features and mobility (Smeda, Shiratuddin, & Wong, 2015a; Kelley, 2011; Rosenstiel & Mitchell, 2011). As of this writing, electronic publications have overtaken printed versions as a source of information and news for the majority of readers in the United States (Kelley, 2011; Rosenstiel & Mitchell, 2011). Some research involving the adoption of the e-book in education also claims that e-books are widely used in developed countries (Embong, Noor, Ali, Bakar, & Amin, 2012a; Kropman, Schoch, & Teoh, 2004). However, most developing countries, such as Brazil, Libya, the Sultanate of Oman, South Africa and Turkey, are still struggling to use the e-book as part of enhancing their education systems (Roesnita & Zainab, 2005; Embong et al., 2012a; Noorhidawati & Gibb, 2008). Numerous studies have looked into the factors that can affect the adoption of the e-book in developing countries (Ngafeeson & Sun, 2015b; Letchumanan & Tarmizi, 2010). Studies in the field of the acceptance and effectiveness of the e-book among higher-education students and teachers in developing countries are still scarce (Ebied & Rahman, 2015; Smeda, Shiratuddin, & Wong, 2015a, 2015b; Al-Suqri, 2014; Smeda, Shiratuddin, & Wong, 2014; Roesnita & Zainab, 2005; Embong et al., 2012b; Mohammed Aly & Gabal, 2010; Alzaq, 2008; Noorhidawati & Gibb, 2008).
Many factors could encourage students to use e-books, and there are also factors that hinder their use (Williams, 2011; Spring, 2010). Examples include factors related to user characteristics, such as self-efficacy and resistance to change; factors related to the social factor, i.e., social influence; factors related to the characteristics of the technology, i.e., cost of technology, technology acceptance and technical support; and other factors related to the infrastructure provided by the educational institutions, i.e., library services and technical services. According to Pituch and Lee (2006), factors related to user characteristics appear to have a significant impact on students' acceptance of electronic education, and according to Heinich (1996), factors related to users' characteristics can be used to improve students' adoption of instructional technology. Moreover, the social factor is one of the most important factors with a substantial impact on technology adoption. According to Ahmad (2015), the intention or tendency to use technology can be influenced by the SI factor, such as the influence of peers, colleagues or teachers. Lin, Tzeng, Chin, and Chang (2010) also confirmed the impact of the recommendations of peers, colleagues and experts on students' BI to use the e-book for academic purposes. This paper addresses some of the factors related to the social factor and user characteristics by studying the impact of SI and SE on the acceptance of the e-book.
According to Adam, Howcroft, and Richardson (2004), if the objective of a study is to develop the use of information technology (IT), the effect of gender must be taken into consideration. Gender has become a source of concern for many researchers in technology acceptance (Teo & Lee, 2010), and some research argues that gender in IT applications is still under-theorized (Adam et al., 2004). For example, Sun and Zhang (2006) emphasise that male students are more influenced by perceived usefulness (PU) than females, whereas Ong and Lai (2006) confirm that the impact of perceived ease of use (PEOU) is greater on female students than on males. Other studies have argued that there is no considerable association between the total use or non-use of IT applications and gender (Ong & Lai, 2006; Venkatesh & Morris, 2000; Gefen & Straub, 1997; Igbaria & Baroudi, 1995). On the other hand, some researchers have agreed that the results on the impact of gender on some external variables, i.e., SI and computer SE, are conflicting (Kesici, Sahin, & Akturk, 2009; Wang, Wu, & Wang, 2009). Therefore, gender is one of the important issues affecting the understanding of user acceptance of IT (Padilla-Meléndez, del Aguila-Obra, & Garrido-Moreno, 2013; Terzis & Economides, 2011). Venkatesh and Bala (2008) have also shown that gender has an impact on users' acceptance of IT.
Numerous models have been developed to assist in explaining the acceptance of technology. The most widely used models are the Theory of Reasoned Action (TRA), the Theory of Planned Behaviour (TPB), and the Technology Acceptance Model (TAM) (Al-Aulamie, 2013). The study of technology acceptance is constantly evolving due to the rapid advances in IT. Usage and acceptance are two of the most important elements contributing to the improvement of these theories and models dealing with the acceptance of technology (Al-Adwan & Smedley, 2013).
In 1986, Fred Davis and Richard Bagozzi devised a model of technology acceptance based on the TRA (Davis, 1989; Davis, Bagozzi, & Warshaw, 1992). TAM is a very useful model that can be used to explain and understand the BI of users in different IT applications (Al-Adwan & Smedley, 2013; Al-Aulamie, 2013). TAM also allows for the evaluation of the possibility and compatibility of the use of any information system (IS) (Masrom, 2007; Fishbein & Ajzen, 1975). The model assesses the behaviour of individuals that is likely to be affected by the use of an IS (Park, 2009). It allows system designers to make changes in IT applications to improve their suitability for users and enhance usability. Therefore, TAM underpins a significant body of research and is widely accepted in the field of IS. It has also proven to be an accurate indicator of the user's intention and the actual use of a system (Al-Aulamie, 2013).
To understand the aforementioned issues, two factors, SI and SE, have been added to TAM. This paper also investigates the moderating impact of gender on the relationships among the factors affecting the acceptance of the e-book among mathematics and statistics students. This paper thus provides a better understanding of e-book acceptance between male and female students taking mathematics and statistics at universities in Libya.
Technology acceptance theory
Fred Davis was the first to shed light on the technology acceptance model, developed to empirically test new end-user information systems (Ngafeeson, 2011). In his doctoral thesis, he provided a parsimonious model and introduced two beliefs, PEOU and PU. PEOU is defined as "the degree to which a person believes that using a particular system would be free of effort" (Davis, 1989, p. 320), whereas PU is defined as "the degree to which a person believes that using a particular system would enhance his or her job performance" (Davis, 1989, p. 320). Based on previous research that has embraced PEOU and PU in various environments, PEOU and PU are the two fundamental constructs that affect an individual's decision to adopt any IT application, such as the e-book (Davis, Bagozzi, & Warshaw, 1989).
Social influence
Social influence (SI) corresponds to the subjective norm (Yau & Ho, 2015). The term SI was introduced in social psychology research in the mid-20th century (Yau & Ho, 2015). According to Eckhardt (2009), the term refers to the influence of communication that takes place between individuals, which leads to a change in the emotion, mood or view of a person associated with a particular behaviour. Hashim (2011) views that SI can significantly influence the BI of individuals to comply with the views presented to them. Furthermore, it has been suggested that individuals act out or exhibit a particular behaviour despite their non-acceptance of the positive outcome of the behaviour, enforced through the influence of another person: the individual's behaviour is motivated by the views presented by one or more referents, and he or she behaves simply to comply with those views. According to Lu, Yu, Liu, and Yao (2003), SI is defined as an individual's belief that it is significant for other individuals that he or she engages in an activity. SI is studied in both TRA and TPB as an important determinant explaining the adoption of a system (Rao & Troshani, 2007). Numerous studies have supported the influence of SI on students' AU and BI (Elkaseh, Wong, & Fung, 2015; Tarhini, Hone, & Liu, 2013; Jong & Wang, 2009; Park, 2009; Yang, 2007; Yang & Chen, 2006). Consequently, the social influence factor has been tested in this study.
Self-efficacy
Self-efficacy (SE) is a significant concept in social learning theory (Bandura, 1977). SE is the belief of individuals in their ability to carry out particular behaviours, or their beliefs regarding their capability to carry out particular tasks successfully (Abbad et al., 2009b; Al-Ammari & Hamad, 2008; Compeau, Higgins, & Huff, 1999; Compeau & Higgins, 1995). It has also been described as a personal judgment concerned not with the skills one possesses, but with judgments of what one can do with whatever skills one possesses (Bandura, 1986). In this context, the SE factor is the perception of individuals regarding their capability to make use of e-book devices, such as computers, in the completion of a task (Compeau & Higgins, 1995). Similarly, the SE of e-book readers is interpreted as a student's self-confidence in his or her capability to make use of e-reader software and devices, such as personal computers, tablets and smartphones (Waheed, Kaur, Ain, & Sanni, 2015; Letchumanan & Muniandy, 2013). A student who possesses a strong sense of ability in dealing with e-reader devices might have more optimistic PU and PEOU, and is likely to be more willing to use and accept the e-book.
The literature shows that the relationship between SE and the adoption of technology in education is statistically significant (Waheed et al., 2015; Hayashi, Chen, Ryan, & Wu, 2004; Burkhardt & Brass, 1990). In the case of the e-book, SE has also emerged as a significant factor (Waheed et al., 2015). Hsiao and Chen (2015) reported that SE has the most important influence on the intention to study using e-book readers. Therefore, SE has been examined here to determine its influence on the acceptance of the e-book among mathematics and statistics students at universities in Libya.
Gender difference and acceptance of e-book
In the technology acceptance field, little research has explored gender differences in the area of e-learning, and especially the e-book (Yoo, Huang, & Kwon, 2015; Marston et al., 2014; Letchumanan & Tarmizi, 2011; Ngafeeson, 2011). In several countries, some studies have focused on the effect of gender on user acceptance of the e-book in higher education. They have examined the decisions made by students and teachers regarding the use of the e-book; these decisions have been influenced by numerous factors, including demographic factors (Roesnita & Zainab, 2005; Letchumanan & Tarmizi, 2010; Woody, Daniel, & Baker, 2010; Shepperd, Grace, & Koch, 2008). Letchumanan and Tarmizi (2011) explored the motivation for using the e-book as a learning medium among undergraduates in an engineering division by employing TAM with gender as its external determinant. The findings of their investigation demonstrated that PEOU relates positively to PU, that PU has a substantial impact on attitude and intention to use the e-book, and that attitude (AU) has a substantial impact on the intention to use it. Nevertheless, PEOU does not have a substantial impact on AU towards using e-books. Letchumanan and Tarmizi (2011) suggested that gender did not have a significant impact on the intention to use the e-book.
Using gender as a moderator, Ngafeeson (2011) researched undergraduate students' acceptance of the e-book through the application of TAM. Gender difference was tested by investigating the moderating impact of gender on the acceptance of the e-book. The research was based on information collected from undergraduate students (70 males, 88 females). The results confirmed the reliability and applicability of TAM for measuring the acceptance of the e-book. Although the gender-difference moderator is significant in general, there was insufficient evidence of significant gender differences in the mutual relations between the constructs. The results of this research also indicated that, although gender differences have been theorised and tested with different levels of empirical support, one must be cautious about generalisations when studying their effect on the use of technology.
In contrast, some research has found that male perception of the e-book is significantly higher than female perception (Roesnita & Zainab, 2005; Shepperd et al., 2008). These results were also supported by Marston et al. (2014), who studied the impact of gender differences on the level of satisfaction with, and student adoption of, an electronic version of the textbook (e-textbook). The use of e-books as textbooks in education is a new paradigm, particularly in developing countries (Embong et al., 2012b). Their study presented survey results collected from 250 male and female undergraduate students who used the e-book in their studies, and examined the potential differences between male and female students with respect to satisfaction with an e-book. Their results confirmed that there is a difference between the genders in the likelihood of reusing the e-book. Although the results revealed that female students used e-books less than males overall, females made more use of the interactive features of the e-book than males. However, there was insufficient evidence of gender differences with respect to satisfaction, ease of use and usefulness.
Theory development and hypotheses
Recently, much research has focused on the role that technology plays in the development of the educational process, and specifically on the factors determining technology adoption and usage (Al-Aulamie, 2013). Several models have been developed to aid in predicting technology acceptance (Marston et al., 2014; Poon, 2014; Al-Aulamie, 2013; Alkharang & Ghinea, 2013; Lee, 2013; Letchumanan & Muniandy, 2013; Othman et al., 2013; Sharma & Chandel, 2013; Letchumanan & Tarmizi, 2011; Ngafeeson, 2011; Phan & Daim, 2011; Abbad et al., 2009a; Abbad et al., 2009b; Shelburne, 2009; Martínez-Torres et al., 2008; Tao, 2008; Ngai et al., 2007; Kurnia et al., 2006). This paper presents a study that added several external factors to examine mathematics and statistics students' acceptance of the e-book at universities in Libya. The research model focuses on two main constructs, (1) self-efficacy (SE) and (2) social influence (SI) (Fig. 1), with gender used as a moderator. The research model explores the effect of gender differences on the acceptance of the e-book by measuring the relationships between the external factors and the TAM constructs for male and female students. These relationships in Fig. 1 represent the research hypotheses (H1, H2, H3, H4, H5, H6, and H7). According to research in technology acceptance, the relationships between dependent and independent variables represent the hypotheses governing the relationships between the variables of the model (Venkatesh & Davis, 2000; Lee, Cheung, & Chen, 2005; Cho, Cheng, & Hung, 2009; Park, 2009; Liu, Liao, & Pratt, 2009; Sánchez & Hueros, 2010; Al-Harbi, 2011; Lee, Hsieh, & Chen, 2013; Udo, Bagchi, & Kirs, 2012; Padilla-Meléndez et al., 2013). Testing the hypotheses allows the exploration of each relationship between the technology adoption variables in terms of the probability value (level of significance) and the standardised coefficient (estimate). The hypotheses are specified as follows (a sketch of the corresponding SEM specification is given after the list).
H1: Social Influence has an influence on MAS students' Attitude towards using the e-book at universities in Libya.
H2: Computer Self-Efficacy influences MAS students' Attitudes towards using the e-book at universities in Libya.
H3: Computer Self-Efficacy influences the Perceived Ease of Use of the e-book among MAS students at universities in Libya.
H4: Perceived Ease of Use influences the Perceived Usefulness of the e-book among MAS students at universities in Libya.
H5: Perceived Ease of Use influences MAS students' Attitudes towards using the e-book at universities in Libya.
H6: Perceived Usefulness influences MAS students' Attitudes towards using the e-book at universities in Libya.
H7: Attitude influences MAS students' Behavioural Intention to adopt the e-book at universities in Libya.
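For illustration, the measurement and structural parts of this model can be written directly as a SEM specification. The sketch below uses the open-source semopy package rather than the software employed in the study; the indicator names (SI1 ... BI3) and the data file are hypothetical:

```python
import pandas as pd
from semopy import Model, calc_stats

# Measurement model (hypothetical indicators) plus structural paths
# mirroring hypotheses H1-H7.
DESC = """
SI   =~ SI1 + SI2 + SI3
SE   =~ SE1 + SE2 + SE3
PEOU =~ PEOU1 + PEOU2 + PEOU3
PU   =~ PU1 + PU2 + PU3
AU   =~ AU1 + AU2 + AU3
BI   =~ BI1 + BI2 + BI3

PEOU ~ SE                    # H3
PU   ~ PEOU                  # H4
AU   ~ SI + SE + PEOU + PU   # H1, H2, H5, H6
BI   ~ AU                    # H7
"""

data = pd.read_csv("survey_responses.csv")  # hypothetical survey data
model = Model(DESC)
model.fit(data)
print(model.inspect())    # path estimates, standard errors, p-values
print(calc_stats(model))  # fit indices (chi-square, GFI, CFI, RMSEA, ...)
```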
Sample and population
Data was collected using a self-administered survey. Mathematics and statistics students from three different universities in Libya participated in this survey: Tripoli University (TU), Al-Zawia University (ZU) and Al-Jabal Al-Gharbi University (GU). These universities differ in terms of student density and geographical location. The required sample size was calculated based on Yamane's (1967) table with α = 0.05, giving 391 respondents: 199 males and 192 females (Table 1). The survey took between 30 and 35 minutes to complete, and participation was voluntary. Undergraduate students were selected because they are the majority population at universities in Libya.
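Yamane's (1967) simplified formula is n = N / (1 + N e^2), where N is the population size and e the margin of error. The quick check below uses a hypothetical population size, chosen only to show that e = 0.05 yields a sample close to the 391 respondents used here:

```python
def yamane_sample_size(population, error=0.05):
    """Yamane (1967) simplified sample-size formula: n = N / (1 + N*e^2)."""
    return population / (1 + population * error ** 2)

# Hypothetical population of 16,000 undergraduates -> n of about 390.
print(round(yamane_sample_size(16_000)))
```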
Data collection process
In this study, the first step in the data collection process was to advertise in the selected universities using printed colour flyers. The flyer contained information such as the research topic and contact details, so that students wishing to participate could contact the researcher via phone or email. The questionnaire and all of the required documents were then posted to the participants via regular mail. The documents included the questionnaire, the e-book file on CD and guidelines on how to use the e-book. A student could still withdraw at any time, even after initially agreeing to participate. Students who completed the questionnaire then mailed it back to the researcher. Fig. 2 summarises the data collection process.
Data analysis and results
Structural equation modelling (SEM) was used as the main technique to analyse the data and examine the hypotheses in this study. In the field of technology acceptance, SEM has been widely used in the majority of published studies for its ability to estimate the full model as well as to integrate both the measurement and structural components (Creswell, 2013; Abbad et al., 2009b; Selim, 2007; Venkatesh, Morris, Davis, & Davis, 2003; Venkatesh & Morris, 2000; Davis, 1989, 1993). SEM is beneficial for testing theories that involve dependency relationships, i.e., (a → b → c) (Khodabandelou, Jalil, Wan Ali, & Mohd Daud, 2014; Hair, 2010). Furthermore, Chin (1998) indicated that SEM can be used to evaluate the reliability and validity of a model, as it is capable of analysing all the variables in the model simultaneously rather than separately.
Measurement of the developed model
To measure the model and test the relationships between the constructs, exploratory factor analysis (EFA) was used to test the validity of the proposed variables and to compare the initial reliability of the scales. Confirmatory factor analysis (CFA) was then used to measure the goodness of fit and the constructs' validity. In this study, Cronbach's alpha (α) was applied to evaluate the reliability of each factor. According to Hair, Tatham, Anderson, and Black (2006) and Stafford, Stafford, and Schkade (2004), Cronbach's alpha (α) should be more than 0.7 to be considered an acceptable value of internal consistency. Certain indicators should be taken into account to evaluate the model's goodness of fit. Six measures were chosen to evaluate the validity of the developed model: the chi-square test, the goodness-of-fit index (GFI), the adjusted goodness-of-fit index (AGFI), the root mean square error of approximation (RMSEA), the standardized root mean residual (SRMR), the comparative fit index (CFI), and the Tucker-Lewis index (TLI). These measures are commonly used in most of the literature. However, in this study, some measures that can be sensitive to large samples, such as the normed chi-square (NC), were not selected (Al-Aulamie, 2013; Schumacker & Lomax, 2012; Sharma, Mukherjee, Kumar, & Dillon, 2005).
The results of the goodness of fit of the measurement model are shown in Table 2. While carrying out CFA, it is very important to establish convergent and discriminant validity, and the same is true of reliability; composite reliability (CR), average variance extracted (AVE) and maximum shared variance (MSV) are among the important measures for testing validity and reliability (Cramer & Howitt, 2004). As indicated earlier, discriminant validity helps to check the degree to which a variable is distinct from other variables (Hair, 2010). Dividing the sum of all squared standardised factor loadings by the number of measured variables gives the AVE value. In examining the measured variables' discriminant validity, the AVE values are compared to the MSV. The AVE value must be at least 0.5 to confirm convergent validity, and the AVE value has to be higher than the MSV to ensure discriminant validity (Al-Hadad, 2015; Kannan & Narayanan, 2015; Awang, 2012). The results obtained in this study exceeded the recommended values.
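These reliability and validity measures are simple functions of the item responses and the standardised loadings. A minimal sketch (construct and variable names are generic):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for one construct; items is a
    (n_respondents, k_items) array. Accept if alpha > 0.7."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def ave(loadings):
    """Average variance extracted from standardised loadings (>= 0.5
    for convergent validity; must also exceed MSV for discriminant
    validity)."""
    lam = np.asarray(loadings, float)
    return (lam ** 2).mean()

def composite_reliability(loadings):
    """CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    lam = np.asarray(loadings, float)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())
```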
Structural model and testing of hypotheses
The criteria used for the measurement model were applied again to measure the goodness of fit (GOF) of the structural model. The GOF outcomes were satisfactory and supported the acceptance of the proposed model. The findings were within the range of the recommended values, except the GFI, which was close to the recommended value (0.90) (Lee et al., 2014; Arteaga Sánchez, Duarte Hueros, & García Ordaz, 2013; Ong & Lai, 2006). Table 2 shows the model fit results of this study. The last step was to check the proposed model hypotheses through the use of path analysis.
The results are shown in Table 3. The study hypotheses were tested using path analysis via the standardised path coefficients, the significance of the estimated coefficients (critical ratio) and the probability value (p-value). A hypothesis was accepted when the p-value was at most 0.05 and the critical ratio was greater than +1.96 or less than −1.96 (two-tailed). All hypotheses of the developed model in this study satisfied these conditions.
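Expressed as code, this acceptance rule reduces to a one-line check per path (a sketch with generic names):

```python
def hypothesis_supported(p_value, critical_ratio, alpha=0.05, cr_cut=1.96):
    """A path is supported when p <= alpha and the critical ratio
    exceeds +1.96 or falls below -1.96 (two-tailed, 5% level)."""
    return p_value <= alpha and abs(critical_ratio) > cr_cut
```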
Moderating effect of gender
This study utilised multi-group analysis to explore the moderating impact of gender on the relationships between the constructs in the proposed model. Multi-group analysis has been used to investigate the influence of moderators. According to Lowry and Gaskin (2014), multi-group moderation is "a special form of moderation in which a dataset is split along values of a categorical variable i.e. gender, and then a given model is tested with each set of data". Using gender as the grouping variable, the model was tested for males and females separately.
The measurement and structural model tests were used to examine the research model. First, the measurement model was tested for differences between genders in terms of the measured variables. Then, the structural model was tested for differences between genders in terms of the hypotheses. In AMOS, multi-group analysis classifies the data on the basis of the grouping value, i.e., gender, and the group analyses are performed simultaneously for males and females (Byrne, 2013).
Chi-square differences and critical ratios are two ways to measure differences in multi-group moderation, such as between genders. This study used the difference in chi-square, Δχ², to test whether there are significant gender differences at the measurement model and structural model levels. According to Hair (2010), chi-square is a statistical measure of difference commonly utilised to compare and estimate covariance matrices. In the measurement model, the χ² was computed through CFA, whereas in the structural model it was calculated through structural equation modelling. Byrne (2013) has suggested that the difference in chi-square, Δχ², can be calculated by computing the χ² for the proposed model twice: first without weight constraints, then with weight constraints. If the difference in chi-square, Δχ², is significant, the model is not equivalent across genders.
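This Δχ² comparison reduces to a chi-square significance test on the difference. The sketch below applies it to the measurement-model values reported in Table 4 below (Δχ² = 85.22 with Δdf = 76):

```python
from scipy.stats import chi2

def delta_chi2_test(chi2_constrained, df_constrained,
                    chi2_unconstrained, df_unconstrained):
    """p-value for the chi-square difference between the constrained
    (equal weights across groups) and unconstrained models."""
    d_chi2 = chi2_constrained - chi2_unconstrained
    d_df = df_constrained - df_unconstrained
    return chi2.sf(d_chi2, d_df)

# Measurement model (Table 4): delta chi2 = 85.22, delta df = 76.
# p is roughly 0.22 > 0.05, so the groups are equivalent at the model
# level, consistent with the conclusion reported here.
print(chi2.sf(85.22, 76))
```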
The measurement model test
Chi-square was computed for the measurement model before and after applying weight constraints to the measured variables. Based on the results shown in Table 4, there is no significant difference between the two groups at the model level, which means that gender perceptions of the measured variables are similar.
Table 4
The chi-square difference (Δχ²) for the measurement model

                                   χ²        df
Constrained model                  2692.98   1873
Difference in chi-square (Δχ²)     85.22     76
The structural model test
Chi-square was calculated for the research hypotheses before and after applying weight constraints. As shown in Table 5, there is no significant difference between the two groups at the model level (Δχ² = 65.26, Δdf = 68). However, the groups may differ at the path level. Therefore, each hypothesis was examined by repeating the weight-constraint procedure on each hypothesis separately and computing the difference in chi-square, Δχ², again. The model hypotheses were then tested by comparing the path coefficients between the two groups using the critical ratio (t-value > 1.96) and the p-value (p < 0.05) (Table 5). The SE factor was the main variable hypothesised to impact students' AU; the hypothesis of SE toward students' AU was accepted in the case of females and rejected in the case of males. Based on the results of the structural model test, the R² (explained variance) of PU, PEOU, AU and BI differed considerably between males and females (Table 6).
Table 6
The explained variance for the dependent variables for each group
Discussion and conclusion
The main purpose of this study was to investigate the impact of the personal-characteristics factor, namely SE, and the social factor, namely SI, on the acceptance of the e-book among mathematics and statistics students at universities in Libya by extending TAM. This study also determined the effect of the gender moderator on the acceptance of the e-book among the students.
Determining the acceptance of e-book
The findings of this study show that PU strongly and directly affects AU, and that PU has an indirect influence on BI via AU. Numerous studies have confirmed that PEOU has a strong influence on AU and BI (Elkaseh, Wong, & Fung, 2016; Al-Adwan & Smedley, 2013; Arteaga Sánchez et al., 2013). Students who benefit from the e-book will have a positive AU toward using it. The importance of PEOU lies in its direct effect, and its indirect effect via PU, on students' AU toward using the e-book. This is a logical consequence of the participants not being from the IT field and having poor knowledge about the use of e-books; it also explains the choice of the majority of participants to use the e-book because of its easy handling. PEOU also has an indirect influence on students' BI. The results of this study agree with Elkaseh et al. (2016). Moreover, students' AU appears to have a powerful effect on students' BI: the positive feelings of students towards the use of the e-book are positively reflected in their behaviour.
In this study, based on the results related to the social factor, the importance of SI can be emphasised. Numerous studies have supported the influence of SI on AU and BI (Elkaseh et al., 2015; Tarhini et al., 2013). This could be due to the social culture that is often active in Libya. According to the results related to the users' characteristics, SE was the strongest factor impacting PEOU. The findings of this study also confirmed that SE has a positive impact on students' AU regarding the use of the e-book. SE also has a strong positive indirect effect on BI through PEOU and AU. This can be explained by users' confidence in their ability to use the e-book being associated with their judgment of the ease of use of the devices used to download and read it, as supported by Abbad et al. (2009b), Al-Ammari and Hamad (2008) and Venkatesh and Davis (1996). Thus, developing students' skills in the use of computers or other devices that can be used to read e-books, as well as encouragement to use the e-book from officials, faculty members and librarians at universities, will have a positive impact in attracting more students towards the e-book.
Determining the effect of the moderator
Seven hypotheses were tested in this study. Five hypotheses were similar in terms of impact, whereas just one hypothesis, concerning SE, showed a significant difference between men and women (Table 7). The findings in Table 7 indicate that gender did not moderate the relationships between PU, PEOU, AU and BI in most of the hypotheses. Perhaps this is due to the similar rates of e-book use among males and females, which reduces the expected differences between them (Wong, Teo, & Russo, 2012). The TAM constructs were found to be positive and significant for the most part. Although the results confirmed that PEOU has a strong influence on PU for female students, its impact on their AU toward the acceptance of the e-book was insignificant. However, it has a strong indirect influence on female students' AU through the PU factor. Tarhini, Hone, and Liu (2014) have explained that "females tend to place more emphasis on ease of use of the system when deciding to whether or not adopt a system". Therefore, they may select the e-book because they think it will reduce the effort required to study, to carry out research and to find solutions to their questions. It can also help them to understand their subjects, since most of them had not used the e-book before and had no experience in dealing with it. Similarly, Ong and Lai (2006) confirmed that the impact of PEOU is greater on female students than on males.
With regard to PU, the construct was slightly stronger for male students than for females. The results of this study are supported by numerous studies, such as Al-Aulamie (2013), Ong and Lai (2006) and Morris and Venkatesh (2000). Venkatesh et al. (2003) and Hoffman (1972) explained that male students tend to concentrate more on the benefits that accrue from the use of technology and are driven by achievement needs more than females. Sun and Zhang (2006) also emphasise that males are more influenced than females by PU.
Students' BI was used to predict the extent of acceptance of technology such as the e-book (Davis, 1989). According to the results shown in Table 7, AU is a strong predictor of students' BI for both male and female participants. Similarly, Fishbein and Ajzen (1975) indicated that BI is predicted by users' AU. It is logical to expect that positive AUs will produce positive behaviour in the case of both male and female students. However, the female students' AU appears bolder, especially when it comes to technology, which in turn could contribute to their excellence.
Unexpectedly, SE was found to be stronger for female students than for males in this study. These results are consistent with the results obtained in some other research (Ngafeeson & Sun, 2015a; Madigan, Goodfellow, & Stone, 2007; Ong & Lai, 2006; Morris & Venkatesh, 2000). The SE factor has a strong influence on female students' AU, while it has an insignificant impact on the AU of male students towards the use of the e-book. However, SE has a significant impact on PEOU in the case of males. Female students seem more confident than males when using the e-book. Tarhini et al. (2014) and Morris and Venkatesh (2000) have interpreted that an increased level of SE leads to a decline in the importance of ease of use. In fact, the number of female students in higher education in Libya exceeds the number of male students, which could explain the superiority of females in the use of technology (Al-Hadad, 2015; Abdulatif, 2011). These results are in contrast with the results recorded in other research, which confirmed that men are more confident than women when it comes to using technology (Ngafeeson & Sun, 2015a; Madigan et al., 2007; Ong & Lai, 2006; Morris & Venkatesh, 2000). Moreover, although both hypotheses representing the relationship between SI and students' AU were accepted, the results confirmed that female students are more influenced by social factors than males. These results are consistent with the results obtained in other research (Tarhini et al., 2014; He & Freeman, 2009; Wang et al., 2009; Venkatesh & Morris, 2000). Therefore, the only other difference observed is the ability to explain the variance in students' BI. The results showed that the explained variance of female students' BI was larger than that of males (Table 6); these results were expected due to the high level of higher education enjoyed by women in Libya compared with other developing countries (Rhema, 2013; Tamtam, Gallagher, Olabi, & Naher, 2011). The moderating impact of gender on the acceptance of the e-book has received great attention, but some of the results have been inconsistent (Marston et al., 2014; Al-Aulamie, 2013; Ngafeeson, 2011).
Implications
The results of this study have a number of important implications, which can be summed up in the following points:

1. The results have important implications for academics, decision-makers, supervisors and stakeholders seeking to adopt the e-book in the field of education in Libya, as this study represents a good source of information about the factors that affect the acceptance of the e-book among mathematics and statistics students at universities in Libya. Understanding the impact of these factors is often a crucial prerequisite for developing effective strategies aimed at increasing the level of use of e-books in higher-education institutions in Libya. For example, the SI factor has a significant influence on students' BI towards the use of the e-book; therefore, decision-makers can take advantage of social influence to enhance the acceptance of e-books by granting incentives to existing users to convince their colleagues to adopt the e-book.

2. The importance of this study also lies in the possibility of obtaining a better understanding of e-book acceptance among university students. Through knowledge of the participants' intentions, officials can decide how to encourage non-users to use the e-book in the future.

3. The results of this study could also help the Faculties of Mathematics and Statistics in Libya to understand the current state of the e-book, which can improve their attitudes towards the use of the e-book as a new method of teaching and learning. Faculty members could also use the results of this research to understand student tendencies through knowledge of the barriers and incentives students face in using the e-book.

4. Studying the effect of gender on the acceptance of the e-book can help researchers interested in studying the impact of demographic factors on the use of technology, where the impact of gender is still a subject of controversy among many researchers, especially in Arab countries.

5. The results of this study can also serve as a critical nucleus to assist future researchers on the subject in Libya, because no previous research of this kind has been conducted in Libya.
Limitations
Some of the results obtained were inconsistent with other studies; this could be due to the war that has raged in Libya since 2011. The war has had negative effects, especially on men, keeping many males out of education for a long time (Rhema & Miliszewska, 2012), and this may have affected the responses of the male participants in particular.
Conclusion
In summary, AU was the only factor with a strong direct effect on students' BI. PU had the strongest direct impact on students' AU, followed by PEOU. Regarding the external factors, SE has a significant impact on PEOU, whereas SI has a significant influence on students' AU. In addition, the PEOU, PU, SE and SI factors have a significant indirect impact on students' BI toward the adoption of the e-book. The results also confirmed that most of the TAM constructs were significant in both the male and female models, with no differences between males and females; only PEOU was affected by the gender moderator. Important differences in male and female students' perceptions appeared in just one hypothesis: the effect of SE on AU was supported for females but rejected for male students. The gender analysis confirmed that females were more confident in using the e-book than males, and the female model explained more of the variation than the male model.
Table 1
Descriptive statistics of the participants' demographic information
Table 2
Goodness-of-fit results of the measurement and structural models
Table 3
Results of path tests
Table 5
The chi-square difference (Δχ²) for the structural model
Table 7
The summary of the moderating effect on research hypotheses
"Mathematics",
"Computer Science",
"Education"
] |
Experimental identification of unique angular dependent scattering behavior of nanoparticles
Nanoparticles exhibit unique light scattering properties and are applied in many research fields. In this work, we perform angular resolved scattering measurements to study the scattering behaviour of random and periodic silver (Ag) nanoparticles and of periodic polystyrene (PS) nanoparticles. The random Ag nanoparticles, with a wide particle size distribution, scatter light broadband into large angles. In contrast, both types of periodic nanoparticles are characterized by a strong scattering zone in which the scattering angles increase with wavelength. Angular resolved scattering measurements thus make it possible to reveal experimentally the particular scattering properties of different nanostructures.
Background
Light scattering is a fundamental property of particles and can be tuned with great flexibility by finely adjusting the particle geometries and surrounding media. This unique feature opens up numerous applications for nanoparticles in spectroscopy, sensors and photovoltaic devices [1][2][3]. For instance, large-angle scattering by nanoparticles can extend the light propagation path beyond the physical thickness of devices to improve absorption, making it possible to reduce material usage and the resulting manufacturing cost [3][4][5][6][7]. As the scattering behaviour is highly wavelength-dependent, it needs to be identified for specific applications. Mostly, the scattering behaviour is either predicted by theoretical simulations or characterized experimentally by haze measurements [5][6][7]. However, the haze only gives the overall scattered fraction at each wavelength and is not sufficient to resolve angular scattering details. In this contribution, we apply angular resolved scattering (ARS) measurements [6][7][8] to characterize the wavelength-dependent scattering behaviour of particles, for which only few examples exist so far [9].
Methods
The measurement is done with a UV/VIS setup (PerkinElmer Lambda 950 UV/VIS) and an additional ARS extension, the Automated Reflectance/Transmittance Analyser (ARTA) [10]. An illustrative sketch of ARTA is shown in Fig. 1a. It consists of a fixed sample holder placed at the centre of a moving detector with a detector-sample distance of 92.1 mm (R_ARS). Unpolarised light is incident from 180° and the scattered light is measured with an angular resolution of 2°. Detection is done by an integrating sphere with a slit opening of 6 mm width (w) and 17 mm height (l). Since the detector covers only one plane of the whole scattering space, the value measured at a given angle must be expanded to estimate the volume scattering. The weighting factor F is acquired as in [7], where A_D = w·l is the area of the detector slit and ϕ is the measuring angle. Δϑ is related to the slit width and is obtained by Δϑ = arctan(w/(2R_ARS)). We selected random and periodic Ag nanoparticles as well as closely packed polystyrene (PS) nanospheres, covering both metallic and dielectric materials. To avoid refraction at the substrate/air interface influencing the scattering angles when light leaves the substrate, a half-cylinder glass is applied as substrate; a sketch is shown in Fig. 1b. The investigated wavelength range lies between 400 nm and 800 nm and the scanning angle is confined to 30° to 90° (see Fig. 1a), since transmission dominates over reflection in intensity for our samples, and direct transmission (scattering angles below 30°) is omitted.
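As a quick consistency check on the stated geometry, the following Python sketch evaluates the detector slit area A_D = w·l and the half-angle Δϑ = arctan(w/(2R_ARS)) given in the text; the exact form of the weighting factor F is not reproduced in this extraction, so only its stated ingredients are computed.

```python
import math

# ARTA detection geometry as stated in the text
R_ARS = 92.1      # detector-sample distance (mm)
w, l = 6.0, 17.0  # slit width and height (mm)

A_D = w * l                               # detector slit area (mm^2)
delta_theta = math.atan(w / (2 * R_ARS))  # half-angle subtended by the slit

print(f"A_D = {A_D:.0f} mm^2")
print(f"delta_theta = {math.degrees(delta_theta):.2f} deg")
# ~1.87 deg, consistent with the 2 deg angular resolution of the scan
```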
Results and discussion
Figure 2 presents the scanning electron microscopy (SEM) morphologies (left column) and the wavelength-dependent angular scattering behaviour (right column) of the random and periodic Ag nanoparticles and the PS spheres. The random Ag nanoparticles in Fig. 2a were fabricated by annealing a 50 nm thick Ag film for 20 min at 500 °C in air. The nanoparticle radii range predominantly from 80 nm to 160 nm (see the size distribution in Fig. 3a), with an average spacing of 200 nm. The periodic Ag nanoparticles (Fig. 2b) were prepared using nanosphere lithography [11]: a 30 nm thick Ag film was evaporated onto a hexagonally close-packed monolayer of PS spheres with a radius of 450 nm; the PS spheres were then removed in an ultrasonic bath, leaving triangular Ag nanoparticles; finally, spherical Ag nanoparticles formed after annealing at 200 °C for 2 h. Owing to the template structure of the PS nanospheres, the Ag nanoparticles exhibit hexagonal order at a uniform radius of 50 nm. Fig. 2c shows the closely packed PS spheres used to form the periodic Ag particles in Fig. 2b, which themselves constitute the dielectric nanoparticle sample.
As observed in Fig. 2d, the random Ag nanoparticles exhibit a strong scattering ability with a pronounced angular scattering range from 50° to 60°. The scattering is quite broadband and covers almost the whole investigated spectrum, which can be correlated with the broad size and shape distribution of the Ag nanoparticles. Further, as indicated in Fig. 2d, there exists a trend of moderately increasing scattering angles as the wavelength goes up. To explain the scattering behaviour of the random Ag nanoparticles in simple terms, Fig. 3b simulates the angular power distribution of a Ag nanoparticle at the air/glass interface using the finite element method as implemented in the software COMSOL [12]. To adapt the simulation geometry to the experimental case, a spherical Ag nanoparticle of radius R = 140 nm was cropped by 20 nm (C) at the substrate interface. Firstly, as shown in Fig. 3b, the large-angle scattering ability (angles beyond 30°) is demonstrated; additionally, the angle of the large-angle scattering peak increases with wavelength. This simulated trend agrees with the experimental observation of Fig. 2d. In contrast, the periodic Ag nanoparticles (Fig. 2e) exhibit a distinctive scattering feature, characterized by a strong scattering zone whose angles increase from 40° to 70° as the wavelength goes up from 400 nm to 700 nm. We observe a similar scattering feature in Fig. 2f for the closely packed PS nanospheres. Treating the periodic nanoparticle arrays as line diffraction gratings for the PS spheres with line distance d, and considering the refractive index n = 1.5 of the glass substrate, the diffraction angle α can be obtained from the grating equation [13]

n d sin α = λ,

where λ is the wavelength of the incident light. The line distance d, taken from the shortest spacing, is set to 1.15·R, with R being the radius of a PS sphere, accounting for a finite spacing between the spheres of the order of 15%. The diffraction angle curve (dashed line) is plotted as a function of wavelength in both Fig. 2e and f. The diffraction angle curve fits the scattering feature of the closely packed PS spheres very well, which suggests that diffraction determines the strong scattering behaviour of the closely packed PS spheres. Remarkably, the periodic Ag nanoparticles follow the same trend of increasing scattering angles with wavelength, but shifted to even larger scattering angles. This behaviour can be correlated with the large-angle scattering ability of plasmonic nanoparticles shown in Fig. 3b, which is well known for individual metal nanoparticles and less pronounced for dielectric ones [14].
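The wavelength dependence of the diffraction feature can be checked numerically. The sketch below uses the first-order grating relation n·d·sin(α) = λ, which is a reconstruction (the printed equation is garbled in this extraction), with the parameters stated in the text; it yields angles rising from roughly 31° at 400 nm to about 64° at 700 nm, within the range of the observed scattering zone.

```python
import math

n = 1.5       # refractive index of the glass substrate
R = 450.0     # PS sphere radius (nm)
d = 1.15 * R  # line distance from the shortest sphere spacing (nm)

for lam in (400, 500, 600, 700):  # wavelength (nm)
    s = lam / (n * d)             # grating relation: n * d * sin(alpha) = lambda
    if s <= 1.0:
        alpha = math.degrees(math.asin(s))
        print(f"{lam} nm -> alpha = {alpha:.1f} deg")
```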
Conclusions
In this work, we prepared random and periodic Ag nanoparticles as well as closely packed PS spheres and studied their scattering behaviour using angular resolved scattering measurements. The different scattering properties of the three nanoparticle types are revealed: random Ag nanoparticles have a broadband scattering ability with large scattering angles owing to their wide particle size distribution, whereas both periodic nanoparticle types are characterized by a strong scattering zone in which the scattering angles increase with wavelength, explained by diffraction for the closely packed PS spheres. Overall, angular resolved scattering measurements prove to be a promising experimental characterization method for identifying the scattering properties of nanoparticles and can support their selection for specific applications.
Fig. 1 a
Fig. 1 a Illustrative sketch of the Automated Reflectance/Transmittance Analyser (ARTA) and (b) measured samples
Fig. 2
Fig. 2 Scanning electron microscopy (SEM) morphologies (left column) and wavelength dependent angular resolved scattering (ARS) behaviour (right column) of random (a,d) and periodic (b,e) Ag nanostructures and PS spheres (c,f)
Fig. 3 a
Fig. 3 a Size distribution (radius) of random Ag nanoparticles shown in Fig. 2 (a), and (b) calculated angular power distribution of light scattered by a Ag nanoparticle at air/glass interface as wavelength varies
"Materials Science",
"Physics"
] |
The Effect of Authentic Problem-Based Vocabulary Tasks on Vocabulary Learning of EFL Learners
Language learners' cognitive engagement with the content of language classes has been advocated over the last few decades (Laufer & Hulstjin, 2001). To this end, the researcher designed authentic problem-based tasks that draw on learners' cognitive and metacognitive skills to solve real-life vocabulary tasks. The Nelson vocabulary test was administered to 64 Iranian EFL learners studying at a language institute in Tehran. By considering 1 standard deviation above and below the mean score, two cohorts of participants were selected: an experimental group (n=24) and a control group (n=23). Conventional vocabulary learning tasks were implemented in the control group classes for 10 sessions, while authentic problem-based vocabulary learning tasks were implemented in the experimental group classes. The results of the data analysis revealed that the experimental group outperformed the control group in both tests of vocabulary recall and vocabulary retention (administered after a two-week interval). Pedagogical implications are discussed.
INTRODUCTION
Vocabulary is without doubt one of the most significant components of communication in both the first language (L1) and a second language (L2). As a result, it has been the subject of intense study in the field of applied linguistics. Wilkins (1972) acknowledged vocabulary as a sine qua non of communication and asserted that without vocabulary nothing can be communicated, whereas without grammar communication still takes place partially. Mar-Molinero and Stevensons (2016) also acknowledged the prime role of vocabulary as a linguistic feature and noted that learning vocabulary in L2 is more cumbersome than acquiring it in L1: speakers learn their mother-tongue vocabulary effortlessly, whereas L2 learners must face the challenges of vocabulary learning.
The difficulties of vocabulary learning in English as a foreign language (EFL) and English as a second language (ESL) have resulted in a series of studies aimed at making this process less troublesome. Laufer and Hulstjin's (2001) Involvement Load Hypothesis and Nation's (2001) clarification of vocabulary learning needs and processes are among the most significant efforts. However, it would be hard to claim that all approaches to vocabulary learning have been studied so far. One recent approach to language learning that might affect vocabulary learning is problem-based learning (PBL). PBL is a collaborative, self-directed approach that makes use of language learners' cognitive and metacognitive thinking skills (Ansarian, Adlipour, Saber, & Shafiei, 2016). PBL tasks aim to mimic real-life problems in learning (Savery, 2006), on the premise that what language learners learn should resemble their real-life problems; in this way, the problems designed for language learners are authentic. In addition, PBL, as opposed to traditional approaches to learning, views learning as a process of analyzing and decoding the content, a process that results in longer-term retention of knowledge (Boud & Feletti, 2008).
The possible effect of PBL on EFL learners' vocabulary learning motivated the researcher to conduct a study and investigate the impact of authentic problem-based tasks on vocabulary learning of Iranian EFL learners.
Although language teachers make many efforts to facilitate the learning of new vocabulary items, many new words are soon forgotten by learners. Boud (1995) asserts that the main reason for low retention of knowledge is memorization instead of comprehension. Comprehension, according to Bloom's hierarchy of thinking (as cited in Richards, 1985), requires evaluating the learning content in a way that leads learners to form conjectures about how to find a possible answer to the learning question. However, most language classes in Iran do not provide learners with such an opportunity; teachers in these classes act as "sages on the stage" and decode the learning materials for the language learners (Akbari, 2015). Many classroom techniques used for vocabulary learning in the EFL context of Iran, such as repetition and substitution, result in rote learning, which puts the content at risk of being forgotten. Consequently, language learners have to try hard to learn vocabulary. This is certainly not a desirable situation for either language learners or teachers.
On the other hand, PBL claims to make use of language learners' higher order thinking skills which target both effective learning and long term retention of content.As a result, PBL may have effect on learning vocabulary and provide a solution to this problem.
PBL is an innovative approach to education in the field of language learning. Larsson (2001) called for more studies on PBL and language learning, believing that a robust answer to the question of whether PBL can affect language learning requires more empirical studies. Moreover, PBL may affect both recall and retention of vocabulary items. If such effects exist, they can pave the way for this approach to be implemented in language classes; in this respect, the present study can contribute to the field of vocabulary learning.
Objectives
This study had two main objectives: 1. To investigate the effect of authentic problem-based tasks on the recall of vocabulary items by Iranian EFL learners at the intermediate level. 2. To investigate the effect of authentic problem-based tasks on the retention of vocabulary items by Iranian EFL learners at the intermediate level.
REVIEW OF THE LITERATURE
Theoretical Background
As a multidisciplinary approach to education, PBL draws on a number of theories. The main theory used in this study is constructivism. In contrast to positivists, who regarded reality as an observable entity fixed in nature, PBL seeks the answers to problems as perceived by the language learners. As a result, many sources in the literature consider it a constructivist approach to education (see, for example, Savery, 2006; Hmelo-Silver, 2004). The researcher acknowledges that there are no fixed correct answers in PBL tutorship: as long as the language learners can solve the communicative problem posed to them, the answer is correct.
The second theory is Bloom's higher-order thinking model, as presented by Ansarian et al. (2016). In this model, learning takes place as a result of critically evaluating problems rather than attempting to memorize content. The six levels of Bloom's model, from highest to lowest, are 1) evaluation, 2) synthesis, 3) analysis, 4) application, 5) comprehension, and 6) knowledge.
Hmelo-Silver's (2004) model was used for the practical implementation of PBL. Based on this model, the learning process begins by exposing students to a real-life problem, followed by generating hypotheses about the problem, applying the hypotheses to check them, and reapplying them after amendment (through peer and tutor feedback).
It should be mentioned that PBL is a real-life approach to problem solving; therefore, the approaches one can adopt to solve problems are not fixed. Likewise, the way PBL can be implemented is not fixed, although this has led to misconceptions about PBL and, eventually, misapplication of this new approach. We attempted to address this issue by deriving the teaching method from the aforementioned theories. The process used to conduct the study is explained in the procedure section.
Historical Background
PBL emerged in education as a reform in the 1960s. It was first used in higher education to teach medical and nursing students (Lee & Kwan, 2014). Later, owing to its success with medical students, the approach found its way into other disciplines such as engineering, chemistry and geography (Savery, 2006), and into high schools, middle schools and even elementary schools (Boud & Feletti, 1991). The presence of PBL in language classes has been observed only in the last two decades (Ansarian, Adlipour, Saber, & Shafiei, 2016). Having entered the humanities and language education, PBL has been implemented and evaluated in various contexts. For example, Azman and Shin (2012) investigated language learners' perceptions of the use of PBL in language classes at a local university in Malaysia. Coffin (2013) investigated the impact of implementing PBL in the Thai context. Ansarian et al. (2016) gauged the effect of PBL on the speaking proficiency of Iranian EFL learners.
Although attention has been paid to the roles of PBL in language classes, its effect on many language skills, sub-skills and affective factors is not fully understood, owing to the sparse literature. Therefore, this study attempted to provide documented evidence of the effect of PBL on vocabulary learning in the EFL context of Iran.
METHOD
Research Design
This study adopts a quasi-experimental design owing to its use of a quota sampling method. The main independent variable is the problem-based vocabulary tasks, and the main dependent variable is vocabulary learning, examined in terms of both recall and retention of words by the participants.
Participants
The participants were selected from a language institute in the city of Tehran, Iran. The Nelson vocabulary proficiency test was administered to 64 female language learners studying at the intermediate level at the institute. Based on the test results and by considering 1 standard deviation above and below the mean score, 47 learners were selected. These participants were divided into a control group (n=23) and an experimental group (n=24).
Instruments
The instruments used in the study are as follows:
Nelson vocabulary proficiency test
The Nelson Vocabulary Proficiency Test is a thirty-item vocabulary test designed for intermediate language learners. It was used as a homogeneity test in this study; based on its results, the participants were divided into a control group and an experimental group. Wesche and Paribakht's (1996) vocabulary knowledge scale was then utilized to select the vocabulary items that the participants did not know prior to the study. Ninety-five vocabulary items (Appendix A) were tested through this scale, and 30 items were selected to be taught in the course. The same items were used to design the 30-item post-test for the study.
Researcher-made post-test
The researcher designed a post-test based on the vocabulary taught during the treatment. The test consisted of 30 multiple-choice items. Item analysis was run to measure item discrimination (ID) and item facility (IF) for the test, and its reliability was measured using Cronbach's alpha (α = .85).
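For reference, Cronbach's alpha and item facility can be computed from the raw 0/1 response matrix as sketched below; the data here are illustrative only, since the paper does not publish its item-level responses.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (n_examinees, n_items) matrix of 0/1 responses."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()    # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

def item_facility(scores):
    """IF: proportion of examinees answering each item correctly."""
    return np.asarray(scores, dtype=float).mean(axis=0)

# Toy response matrix: 6 examinees x 4 items (illustrative only)
X = [[1, 1, 0, 1],
     [1, 0, 0, 1],
     [0, 1, 1, 1],
     [1, 1, 1, 1],
     [0, 0, 0, 0],
     [1, 1, 0, 1]]
print(round(cronbach_alpha(X), 3), item_facility(X))
```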
Procedure
The intervention phase of the study lasted for 12 sessions over a period of 1 month; each session lasted 45 minutes, with 3 sessions per week. The control group practiced the vocabulary through an input-based approach, the conventional approach used at the language institute where the study was conducted. The teacher introduced the vocabulary items by writing them on the board and asked the participants whether they knew the meanings of the words. The teacher then clarified the meanings through techniques such as role-playing, drawing pictures, and explanation. The participants were asked to copy the words into their notebooks and to make both written and oral sentences using them. Finally, they were asked to work in groups and to ask and answer questions that required using the newly learnt words in the response.
The participants in the experimental group went through a different procedure. In each session, the teacher introduced a real-life scenario. For example, in session two, in which the target vocabulary items were leaky, destroy and life expenses, the following scenario was given to the participants: Imagine the taps in your apartment are dripping. You want your landlord to solve the problem. What would you do?
Students in groups of 2 to 3 thought about the problem and generated hypotheses about how to solve it. Some decided to write a letter to the landlord; some preferred to make a phone call. Next, they prepared a list of problems to be mentioned in the conversation. They searched online with their cell phones for other required words and also elicited vocabulary items from each other. Finally, they created a conversation. The conversations were evaluated by the teacher and the peers in the class for clarity in representing ideas and for their potential to achieve the conversation goal. Amendments were made where necessary, and the conversations were role-played for all class members. This procedure was conducted in the same way in all intervention sessions of the experimental group. Following the treatment, all participants received the researcher-made post-test.
DATA ANALYSIS
The Statistical Package for the Social Sciences (SPSS), version 22, was used to answer the research questions. Before the main analysis, the distribution of the data was checked. Table 1 shows the ratios of skewness and kurtosis over their respective standard errors for all tests.
As can be observed in Table 1, the ratios of skewness and kurtosis are within the range of ±1.96, indicating a normal distribution of the data (Strevens, 2009); normal distribution was therefore assumed for all tests.
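The normality criterion used here can be reproduced as follows; the standard errors of skewness and kurtosis are approximated by the common large-sample formulas √(6/n) and √(24/n), an assumption on our part since the paper does not state which estimator it used.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def normality_ratios(x):
    """Skewness and excess kurtosis divided by approximate standard errors.

    Ratios within +/-1.96 are taken as compatible with normality,
    following the criterion applied to Table 1.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    se_skew = np.sqrt(6.0 / n)   # large-sample approximation
    se_kurt = np.sqrt(24.0 / n)  # large-sample approximation
    return skew(x) / se_skew, kurtosis(x) / se_kurt

rng = np.random.default_rng(0)
print(normality_ratios(rng.normal(16.7, 1.5, size=47)))  # simulated scores
```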
Next, the researcher verified that the difference between the participants' vocabulary scores in the two groups was insignificant prior to the study by running an independent samples t-test. As can be seen in Table 2, the difference between the experimental group's mean (M = 16.66, SD = 1.46) and the control group's mean (M = 16.68, SD = 1.44) prior to the main study is very small.
As Table 2 shows, the results of the independent samples t-test (t(45) = -0.35, Sig = .972) indicate that the difference between the groups is not significant.
Q1: Do authentic problem-based tasks have any effect on recall of vocabulary items by Iranian EFL learners at intermediate level?
The first research question was investigated through an independent samples t-test comparing the two groups' results on the immediate posttest.
As shown in Table 3, there is a difference between the mean score of the experimental group (M = 19.62, SD = 1.99) and that of the control group (M = 18.13, SD = 1.54). The independent samples t-test results (t(45) = 2.861, Sig = .006) indicate a significant difference between the posttest scores of the two groups; therefore, the first null hypothesis was rejected, and it can be concluded that authentic problem-based tasks have an effect on the recall of vocabulary items by Iranian EFL learners at the intermediate level.
The Second Research Question
Q2: Do authentic problem-based tasks have any effect on retention of vocabulary items by Iranian EFL learners at intermediate level?
To answer the second research question, an independent samples t-test was run between the control group's and the experimental group's delayed posttest results.
As shown in Table 4, there was a difference between the delayed posttest scores of the experimental group (M = 19.20, SD = 1.26) and the control group (M = 17.39, SD = 1.67). In addition, the results of the independent samples t-test (t(45) = 3.11, Sig = .003) indicate a significant difference. Therefore, the second null hypothesis was rejected: authentic problem-based tasks have an effect on the retention of vocabulary items by Iranian EFL learners at the intermediate level.
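Both comparisons are standard independent-samples t-tests, which can be reproduced as sketched below; the scores are simulated from the reported means and standard deviations, so the resulting statistics will only approximate those in Tables 3 and 4.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated delayed-posttest scores matching the reported means/SDs
experimental = rng.normal(19.20, 1.26, size=24)
control = rng.normal(17.39, 1.67, size=23)

t, p = stats.ttest_ind(experimental, control)  # pooled-variance t-test
df = len(experimental) + len(control) - 2      # 45, as reported
print(f"t({df}) = {t:.2f}, p = {p:.3f}")
```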
DISCUSSION AND CONCLUSION
This study revealed that authentic problem-based tasks can have a significant positive effect on the vocabulary learning of Iranian EFL learners in terms of both recall and retention. Many previous research findings in the field of PBL and language teaching are in line with these results. Laufer and Hulstjin (2001), who discussed the significance of involvement with the learning content through the involvement load hypothesis, believed that cognitive involvement is a significant asset to the depth-of-processing hypothesis; in other words, learning may be more or less effective depending on the approach learners adopt to learn vocabulary. Bloom's higher-order thinking model, implemented in the treatment through authentic problem-based tasks, makes use of students' cognitive and metacognitive skills to solve problems. It can therefore be concluded that authentic problem-based tasks increase language learners' cognitive involvement with the learning content and result in more effective vocabulary learning. In terms of retention, this study revealed that authentic problem-based vocabulary tasks lead to greater retention of knowledge than conventional input-based vocabulary learning tasks. Boud (1995) asserted that students forget 90% of whatever they learn in classes when memorization is the dominant approach to learning; by contrast, language learning through problem-based tasks can increase retention of the content (Larsson, 2001). In line with such findings, this study also revealed that authentic problem-based tasks increase the retention of vocabulary items learned by EFL learners.
One of the main problems with conventional language classes, as noted by Krashen (1985) and many other scholars, is the lack of opportunity to use the language, which can endanger learners' uptake from classes and long-term retention of knowledge. Kumaravadivelu (2006) argued that the advent of learner-centered approaches to learning is a response to this need. PBL is a learner-centered approach that attempts to provide learners with the opportunity to socialize (Savery, 2006), giving them more chances to practice the language; therefore, authentic problem-based tasks can be suitable tasks for language classes. Legg (2007) showed that implementing PBL in contexts where the linguistic system used in class differs from the learners' first language is more difficult for learners, an issue also mentioned elsewhere (e.g., Larsson, 2001; Mathews-Aydinli, 2007). As this study did not use qualitative methods, this issue was not investigated; however, no particular difficulty was observed by the researchers while implementing the authentic problem-based tasks.
Abdullah (1998) argued that PBL is a suitable approach for enhancing learners' social and communicative skills. Elsewhere, Hmelo-Silver (2004) and Mathews-Aydinli (2007) noted that learners in collaborative learning approaches must communicate to convey ideas and elicit information. Wilkins (1972), on the other hand, considered vocabulary an essential component of effective communication. In this sense, vocabulary learning can be linked to effective communication, and it can be claimed that PBL can foster effective communication among EFL learners.
This study was limited by a number of issues, such as the availability of problem-based materials for language classes, the lack of a step-by-step procedure for implementing PBL in language classes, and the scarcity of previous studies to help interpret the findings. To overcome these problems, the researcher designed problem-based vocabulary tasks based on Hmelo-Silver (2004), and the main concepts of PBL as stated by Ansarian et al. (2016) and Hmelo-Silver (2004) were used to design the authentic problem-based tasks. Finally, to compensate for the scarcity of previous research, the researcher drew on PBL studies conducted in other disciplines to enrich the discussion.
The researcher also attempted to control some of the confounding variables that could affect the findings. Only intermediate learners were selected, as they usually show the greatest effect with regard to vocabulary learning (Boer, 2001), and only adult EFL learners were included, since mixing young and adult learners could reduce the accuracy of the findings. The researcher strongly suggests that future studies focus on other proficiency levels and learning styles with regard to problem-based vocabulary learning.
Table 1 .
Distribution of scores in all tests
Table 2 .
Independent samples t-test; pretest of groups
Table 3 .
Independent samples t-test; immediate posttest results of the groups
Table 4 .
Independent samples t-test; delayed posttest of vocabulary
"Linguistics"
] |
Information Retrieval System for Determining the Title of Journal Trends in Indonesian Language Using TF-IDF and Naïve Bayes Classifier
The journal is known as one of the relevant serial literature sources that can support a researcher in doing research. In its development, the journal has come to exist in two formats that library users can access: printed and digital. The growing number of published journals has not been accompanied by a corresponding growth in the information and knowledge that can be retrieved from these documents. The TF-IDF method is one of the fastest and most efficient text mining methods for extracting useful words as the information value of a document. This method combines two weighting concepts: the frequency of a word's appearance in a particular document and the inverse frequency of documents containing the word. The journal title data are then analyzed with the Naïve Bayes Classifier method. The purpose of the research is to build a website-based information retrieval system that can help classify and determine trends in Indonesian journal titles. This research produces a system that can be used to classify journal titles in the Indonesian language, with a system accuracy of 90.6% and an error rate of 9.4%. The category with the highest percentage, and thus the trend in title classification, was decision support systems at 24.7%.
INTRODUCTION
The journal is known as one of the relevant serial literature sources that can support a researcher in doing research. In its development, the journal has come to exist in two formats that library users can access: printed and digital [1]. However, the many research results published online as journals have not been matched by growth in the information and knowledge that can be extracted from these documents.
Text mining, in general, is the theory of processing large collections of documents that accumulate over time by means of various analyses. The purpose of text processing is to identify and extract useful information from data sources through the identification and exploration of interesting patterns; in the case of text mining, the data consist of collections of unstructured documents, and grouping is required so that similar information can be recognized [2].
In this research, the TF-IDF and Naïve Bayes Classifier methods are used to determine the trend of journal titles in Indonesia. The TF-IDF (Term Frequency-Inverse Document Frequency) method is one of the fastest and most efficient text mining methods for extracting useful words as the information value of a document [3]. TF-IDF weights the relationship of a term to a document by combining two concepts: the frequency of occurrence of a word within a particular document and the inverse frequency of documents containing the word. The frequency of a word's occurrences in a given document indicates how important the word is in that document [4].
Furthermore, the final stage of the data mining analysis in this research uses the Naïve Bayes Classifier method. Naïve Bayes is one of the methods of probabilistic reasoning; the algorithm aims to classify data into particular classes, here the journal title classes whose trends are sought. The advantage of the Naïve Bayes Classifier is that it is simple yet highly accurate. Based on experimental results [5], the Naïve Bayes Classifier proved effective for categorizing Indonesian text in previous research, with an accuracy of 90%. The simplicity of the algorithm and its high speed in training and classification make it appealing as a classification method. The purpose of the research is to build a website-based information retrieval system that can help classify and determine trends in Indonesian journal titles. In addition, this study aims to apply the TF-IDF and Naïve Bayes Classifier methods to determining the trend of Indonesian journal titles.
Information Retrieval System
The information retrieval system for determining the trend of Indonesian journal titles uses the waterfall system development model, a systematic and sequential approach to software development that progresses through analysis, design, coding and testing [6]. Information retrieval is divided into the following parts [7]: 1. Text operations, including the selection of words in the query or document (term selection) in the process of transforming the document or query into an index of terms.
2. Query formulation: assigning weights to the query's index terms.
3. Ranking: retrieving documents relevant to the query and ordering them according to their compatibility with the query.
4. Indexing: building the index database from the document collection; this is done first, before document searches are carried out.
Text Mining
Text mining, or text analytics, is a term describing technology capable of analyzing semi-structured and unstructured text data, which distinguishes it from data mining, where the data processed are structured. In essence, text mining is an interdisciplinary field drawing on information retrieval, data mining, machine learning, statistics, and computational linguistics [8]. In general, text mining works much like data mining, combining predictive and descriptive mining: it extracts information from text and converts it into meaningful numerical indices [9]. To achieve the final goal of text mining, several processing stages are required, as shown in Figure 1 [10]. The data selected for analysis first pass through the pre-processing and text representation stages before knowledge discovery can be performed.
TF-IDF Method
The needs analysis in this section consists of two parts: the TF-IDF method and the Naïve Bayes Classifier method. The TF-IDF method is used to determine the weight of each word (term) contained in each document. The TF-IDF weighting scheme is represented by the following equation [11]: w_{m,i} = tf_{m,i} × IDF_m, where tf_{m,i} is the frequency of term m in document i and IDF_m = log10(n/n_m) is the inverse document frequency, with n the total number of documents and n_m the number of documents containing term m.
Naïve Bayes Classifier
The Naïve Bayes Classifier procedure involves two stages in the text classification process. The first stage is training on the set of journal titles (training data). The second stage is the classification of documents whose category is not yet known (testing data). The Naïve Bayes method computes, for each sample, the probability of each value of the target attribute, and the Naïve Bayes Classifier then assigns the sample to the class with the highest probability value [12]. The method rests on Bayes' rule [13]: P(c_i | d) ∝ P(c_i) × ∏_k P(w_k | c_i), where P(c_i) is the prior probability of category c_i and P(w_k | c_i) is the probability of word w_k given that category.
Preprocessing
Before the classification process, the journal title data pass through the tokenization stage, which splits each title into individual words; stopwords are then removed from each title. Stopwords are found by counting word occurrences: the counts are sorted in descending order, and the most frequent words become stopword candidates. Examples of the stopwords found are shown in Table 1. In addition to high-frequency words, the selected stopwords include function words such as "dalam", "di", "ke", "dari", connecting words such as "the" and "and", and subjective choices suited to the journal title data; the full list of stopwords used is in Appendix 2. After all stopwords found in a title are removed, the next process is stemming, which reduces each word to its base form. An example of the stemming results in this study is shown in Table 2.
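A minimal sketch of this preprocessing pipeline is shown below. The stopword list is a small sample drawn from the paper's examples (the full list is in its Appendix 2), and the stemmer is a toy suffix-stripper standing in for a proper Indonesian stemmer.

```python
import re

# Sample of Indonesian stopwords from the paper; the full list is in Appendix 2
STOPWORDS = {"dalam", "di", "ke", "dari", "dan", "yang", "pada", "untuk"}

def stem_id(word):
    """Toy stand-in for a real Indonesian stemmer; actual stemming
    is considerably more involved than suffix stripping."""
    for suffix in ("nya", "kan", "an", "i"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def preprocess(title):
    tokens = re.findall(r"[a-z]+", title.lower())       # tokenization
    tokens = [t for t in tokens if t not in STOPWORDS]  # stopword removal
    return [stem_id(t) for t in tokens]                 # stemming

print(preprocess("Sistem Pendukung Keputusan dalam Pemilihan Jurusan"))
```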
Weighted Phase of TF-IDF
After the words are reduced to their base forms, weighting is performed. The first step is to compute the term frequency (TF) of each base word in each document. For example, if the term "sistem pakar" appears once in title 1, the number 1 is written in the title 1 column; since "sistem pakar" does not occur in title 3, the number 0 is written in the title 3 column. Example term frequencies (TF) can be seen in Table 3. The next step is to compute the inverse document frequency (IDF), which normalizes the word frequency: IDF_m = log10(n / n_m). For example, for the term "sistem pakar", which occurs in n_m = 2 titles out of n = 5 titles in total, IDF = log10(5/2) = 0.39794. The final step is to compute the TF-IDF value for each document as the term frequency (freq_{m,i}) multiplied by the IDF value: w_{m,i} = freq_{m,i} × IDF_m. The weight of "sistem pakar" in title 1, with freq = 0.4 and IDF = 0.39794, is w = 0.4 × 0.39794 = 0.159176. The weighting results using TF-IDF are shown in Table 4.
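The worked example can be reproduced directly; the sketch below uses base-10 logarithms, which match the paper's numbers.

```python
import math

def idf(n_docs, n_containing):
    # Inverse document frequency with base-10 logarithm
    return math.log10(n_docs / n_containing)

def tf_idf(tf, n_docs, n_containing):
    return tf * idf(n_docs, n_containing)

# "sistem pakar" occurs in 2 of 5 titles ...
print(round(idf(5, 2), 5))          # 0.39794
# ... and has a (normalized) term frequency of 0.4 in title 1
print(round(tf_idf(0.4, 5, 2), 6))  # 0.159176
```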
Probability Calculation Phase
The Naïve Bayes method uses probability theory as its theoretical basis. The probability calculation produces the values that assign a document to a class or category. There are two stages in the text classification process: the first is training on the set of journal titles (training data); the second is the classification of documents whose category is not yet known (testing data).
Here f(c_i) is the number of training titles in category c_i, |W| is the number of words (features) used, f(w_kj, c_i) is the number of occurrences of word w_kj in category c_i, and n is the total number of words in the category. The following illustrates how the Naïve Bayes Classifier determines a category. First, journal title data are chosen as training data, and their categories are assigned manually; Table 5 shows the training titles with their assigned categories. Once the training data and categories are available, testing begins on data whose category is unknown; these testing titles are listed in Table 6, and it is their categories that the system determines automatically: Sistem Pakar (1), Penyakit (1). Next, the occurrences of each word (term) in title 1 and title 2 are counted: in title 1, only the word "AHP" appears (once); in title 2, "sistem pakar" appears once and "penyakit" once. Keywords that do not appear in a title are recorded as 0; the term occurrence counts for the testing data can be seen in Table 7. The next step is to compute, from the training data, the probability of each word appearing in each category, using P(w_kj | c_i) = (f(w_kj, c_i) + 1) / (n + |W|); the resulting word probabilities for each category can be seen in Table 8. The final step in category determination is to compare the category scores for each title. The category of title 1 is determined in the same way; for title 2, because P(sistem pakar | judul 2) > P(sistem pendukung keputusan | judul 2), the category of title 2 is sistem pakar.
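The classification rule above amounts to multinomial Naïve Bayes with Laplace (add-one) smoothing. A self-contained sketch follows; the two training titles and their tokens are invented for illustration and are not the paper's Table 5 data.

```python
import math
from collections import Counter

def train_nb(docs):
    """docs: list of (tokens, category) pairs from the training titles."""
    cat_counts = Counter(cat for _, cat in docs)
    word_counts = {cat: Counter() for cat in cat_counts}
    vocab = set()
    for tokens, cat in docs:
        word_counts[cat].update(tokens)
        vocab.update(tokens)
    priors = {c: cat_counts[c] / len(docs) for c in cat_counts}
    return priors, word_counts, vocab

def classify(tokens, priors, word_counts, vocab):
    scores = {}
    for cat, prior in priors.items():
        n = sum(word_counts[cat].values())  # total words in this category
        log_p = math.log(prior)
        for w in tokens:
            # Laplace smoothing: P(w|c) = (f(w, c) + 1) / (n + |W|)
            log_p += math.log((word_counts[cat][w] + 1) / (n + len(vocab)))
        scores[cat] = log_p
    return max(scores, key=scores.get)

train = [(["sistem", "pakar", "penyakit"], "sistem pakar"),
         (["sistem", "dukung", "putus", "ahp"], "sistem pendukung keputusan")]
model = train_nb(train)
print(classify(["sistem", "pakar"], *model))  # -> "sistem pakar"
```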
Discussion
Based on the results of applying the Naïve Bayes Classifier method, the titles of Indonesian computer science journals from the DOAJ website can be classified. Journal titles are divided into 13 classes: decision support systems, expert systems, information systems, e-commerce, e-learning, computer networks, artificial neural networks, image processing, cryptography, artificial intelligence, geographic information systems, educational games, and mobile applications. The classification process comprises several stages: data retrieval, division into training and testing data, preprocessing, and classification.
The journal data retrieval process uses the DOAJ API, after which the data are divided into training data and testing data. The preprocessing stage then performs tokenization, stopword removal and stemming. After preprocessing, the classification process computes the probabilities for the training and testing data.
After each class or category is obtained, the trends are counted from the title classifications. The following describes the data used for the trend calculation in the system. The system was tested with 281 training titles, 85 testing titles, and the 13 predetermined classification categories listed above. From this test scenario, the system produced the following trend percentages for the title classifications: sistem pendukung keputusan 24.7%, sistem informasi 21.7%, sistem pakar 14.6%, e-learning 9.8%, e-commerce 3.5%, kriptografi 6.7%, game 5.2%, kecerdasan buatan 5.3%, sistem informasi geografis 1.8%, jaringan komputer 1.6%, pengolahan citra 2.1%, jaringan syaraf tiruan 1.6%, aplikasi mobile 1.4%. The percentage trend of the journal title classification can be seen in Figure 2.
Figure 2. Percentage of Classified Trends of Journal Title Classification
After the percentage classification of journal titles is obtained, the classification of the testing data is evaluated, yielding the accuracy and error rate of the classification process: Accuracy = (number of correctly classified test items) / (total number of test items) = 0.906 = 90.6%, and Error Rate = 1 - Accuracy (8) = 1 - 0.906 = 0.094 = 9.4%. The accuracy of 90.6% is obtained by dividing the number of correctly predicted test items by the total number of test items, and the error rate of 9.4% follows from subtracting the accuracy from 1. The advantage of this system is that it can find trends in journal titles in the field of computer science available on the DOAJ website. A shortcoming is that the training data are still labeled manually, which is difficult when a large amount of training data must be labeled.
CONCLUSION
Based on the results and discussion of this research, it can be concluded that applying the TF-IDF and Naïve Bayes Classifier methods in a system to determine the trend of Indonesian journal titles involves several stages. First, data retrieval using the DOAJ API. Second, the preprocessing stage, consisting of tokenization, stemming and filtering. Third, word weighting with TF-IDF. Finally, the testing data are classified using the Naïve Bayes Classifier by calculating the probability of each text. From the classification of the training and testing data, trends in journal titles are obtained by ordering the classes or categories from largest to smallest membership. The five classes or categories with the highest trends are decision support systems 24.7%, information systems 21.7%, expert systems 14.6%, e-learning 9.8%, and cryptography 6.7%.
"Computer Science"
] |
Extension Error Set Based on Extension Set
This paper gives the concepts of the extension error set and the fuzzy extension error set, discusses various extension error sets and fuzzy extension error sets based on the extension set and the error set, and puts forward the relevant propositions and operations. Finally, it provides proofs of the soundness and completeness of the propositions and operations.
Introduction
In the field of fuzzy mathematics, research on sets has mainly concentrated on the static form of the fuzzy set and its effective forms of reasoning and rules. However, the dynamic changes of fuzzy sets are an important part of set research. In this paper, we first study the dynamic concepts of the extension error set and the fuzzy extension error set based on the theory of error eliminating and Extenics. Then we investigate various extension error sets and fuzzy extension error sets and put forward the relevant propositions and operations. Finally, we provide proofs of the soundness and completeness of the propositions and operations. Because of this study of the extension error set, the paper has important theoretical and practical significance in different fields. As the fundamental element for matter description, the matter-element M is referred to as a 1-dimensional matter-element, and O_m, c_m, v_m are referred to as the three key elements of the matter-element M, within which the two-tuple composed of c_m and v_m is referred to as the characteristic-element of the matter. A matter with multiple characteristics, similar to a 1-dimensional matter-element, can be defined as a multi-dimensional matter-element:
Affair-Element
Interaction between matters is referred to as an affair, described by an affair-element.
Relation-Element
In the boundless universe, there is a network of relations among all matters, affairs, persons, information and knowledge. Because of the interaction and interplay among these relations, the matter-elements, affair-elements and relation-elements describing them also have various relations with other matter-elements, affair-elements and relation-elements, and changes in these relations also interact and interplay. The relation-element is a formalized tool for describing this kind of phenomenon.
The Research of Extension Error Set
We study the extension error set based on the theory of Extenics, exploring the classical extension error set, the fuzzy extension error set and the multivariate extension error set. Moreover, we put forward the relevant propositions and operations and, based on them, provide some proofs.
The Definition of Extension Error Set
Suppose U is an object set (the domain) and S is a set of association rules; if a correlation function assigns, for each time t in T, a real value to every element of U with respect to S, we call E an extension error set for the association rule S in the domain U. Here M refers to the matter-element, A to the affair-element and R to the relation-element; the correlation functions of the extension error set take values in the real number field R, and T refers to time. In this paper, we take the extension error set as a complex system and its elements as subsystems. The corresponding regions are called the extension error set's extension domain, negative extension field, extension field, stable domain, negative stable region and critical region, respectively. From the definition, this contradicts E_1 ⊆ E_2; so f_1 ≤ f_2, which completes the proof.
According to the features of their elements, extension error sets can be divided into several types, including classical extension error sets (type 1) and extension error sets with critical points (type 3).
The Research of Fuzzy Extension Error Set
This section mainly researches the definition, relations and operations of the fuzzy extension error set.
The Definition of Fuzzy Extension Error Set
Definition: Suppose U is a domain and S is a set of association rules; if the membership degree of each element of U with respect to S lies in [0,1], we call E a fuzzy extension error set for S in U.
The Relations between Fuzzy Extension Error Sets
4.2.1. Equality
One fuzzy extension error set may be a subset of another for the association rule S. Proposition 3.1.2.4: Suppose E_1 and E_2 are fuzzy extension error sets for the association rule S.
The Operations between Fuzzy Extension Error Sets
Extenics and error-eliminating theory have attracted attention, especially in the fields of management and decision-making, so we study the extension error set and the fuzzy extension error set. However, what we have done is not enough, and there is still much to do before the theory is perfect. We therefore call on more scholars from all over the world to research Extenics and error-eliminating theory; only in this way can these theories gain wider application value in more fields.
The Intersection of Fuzzy Extension Error Sets
An ordered triple M = (O_m, c_m, v_m), composed of the measure v_m of O_m about c_m, with the matter O_m as object and c_m as characteristic. For convenience, the whole matter-element is expressed as £M, and the whole matter is expressed as the domain of the measure of c_m.
From the above, the following proposition is clearly established:
"Computer Science"
] |
Reflected Entropy in Double Holography
Recently, the reflected entropy is proposed in holographic approach to describe the entanglement of a bipartite quantum system in a mixed state, which is identified as the area of the reflected minimal surface inside the entanglement wedge. In this paper, we study the reflected entropy in the doubly holographic setup, which contains the degrees of freedom of quantum matter in the bulk. In this context, we propose a notion of quantum entanglement wedge cross-section, which may describe the reflected entropy with higher-order quantum corrections. We numerically compute the reflected entropy in pure AdS background and black hole background in four dimensions, respectively. In general, the reflected entropy contains the contribution from the geometry on the brane and the contribution from the CFT. We compute their proportion for different Newton constants and find that their behaviors are in agreement with the results based on the semi-classical gravity and the correlation of CFT coupled to the bath CFT.
I. INTRODUCTION
The holographic entanglement entropy (HEE) has provided a geometric description of the entanglement of quantum matter and thus opened a new window for understanding fundamental problems in quantum information theory. Originally, it is identified with the area of the minimal surface ending on the boundary. When quantum fields in the bulk are taken into account, their contribution to the entanglement can be evaluated by considering the minimal area of the quantum extremal surface (QES). Specifically, given a d-dimensional asymptotically AdS spacetime and a region A on the boundary, the von Neumann entropy of this region can be computed by [1][2][3][4]

S(A) = min_{X_A} [ Area(X_A)/(4G^{(d)}) + S(Σ_A) ],    (1)

where G^{(d)} is the Newton constant of gravity in the d-dimensional spacetime. Area(X_A) denotes the area of the QES X_A, which stretches into the bulk with A as its boundary. Σ_A is the spatial region enclosed by X_A ∪ A, and throughout this paper we call Σ_A the entanglement wedge of A; thus S(Σ_A) denotes the entropy of the quantum fields within Σ_A. Finally, the entropy is identified with the minimal value over all possible QESs. Usually, the entanglement entropy of quantum fields is difficult to compute; however, if the fields are described by a conformal field theory (CFT) with large central charge, they enjoy holographic duality, so that their entanglement entropy can be given a geometric description by holography as well. The strategy is to further embed the considered d-dimensional spacetime into a (d+1)-dimensional spacetime and treat it as a dynamical brane living in the bulk or on the boundary. This setup is also dubbed double holography. By virtue of this setup, both terms in equation (1) have a geometrical interpretation and the formula becomes [5][6][7]

S(A) = min_{X_A} [ Area(X_A)/(4G) + Area(X_{Σ_A})/(4G^{(d+1)}) ],    (2)

where G^{(d+1)} is the (d+1)-dimensional Newton constant and G is the intrinsic Newton constant on the brane. Now, thanks to the notion of HEE, X_{Σ_A} is identified as the minimal surface associated with the entanglement wedge Σ_A in the (d+1)-dimensional bulk, which may simply be called the Ryu-Takayanagi (RT) surface of Σ_A.
When a bipartite quantum system with two subregions A and B is in a mixed state, the above setup can be generalized to consider the entanglement between A and B by purification. It has been conjectured that the holographic entanglement of purification can be evaluated by the area of the minimal cross-section of the entanglement wedge (EWCS), denoted E_{A:B} [8]. This identification is expected to capture both classical and quantum correlations between two disjoint subregions. Meanwhile, a similar concept called the holographic reflected entropy, which describes the entanglement involving the canonical purification of mixed states, has also been related to the EWCS [9]. The EWCS, as a good measure of mixed-state entanglement, has been widely studied in the recent literature [9][10][11][12][13][14][15][16][17][18][19][20][21]. Similar to the holographic dual of entanglement entropy, the holographic dual of the reflected entropy with quantum fields in the bulk is proposed as [9]

S_R(A:B) = min_{E_{A:B}} [ 2 Area(E_{A:B})/(4G^{(d)}) ] + S_R(Σ^A_{A∪B} : Σ^B_{A∪B}),    (3)

where the first term is proportional to the area of the EWCS E_{A:B} that splits the wedge Σ_{A∪B} into two parts, and the second term is the reflected entropy between the quantum fields in the bipartition Σ^A_{A∪B} : Σ^B_{A∪B}, as illustrated in Fig. 1(a). In this figure, one intuitively notices that Σ_{A∪B} = Σ^A_{A∪B} ∪ Σ^B_{A∪B} and E_{A:B} = Σ^A_{A∪B} ∩ Σ^B_{A∪B}. In what follows, for convenience, we call the second term the bulk reflected entropy.
Similar to the arguments on the holographic entanglement entropy in [2], the holographic reflected entropy in (3) does not contain quantum corrections beyond order O((G^{(d)})^0). In [2], a very elegant scheme was proposed to include quantum corrections to the HEE. The key point is to extend the notion of extremal surface to the quantum extremal surface, which is obtained by minimizing the total contribution to the entanglement entropy from both terms, as shown in (1).
Motivated by this point, in this paper we propose a generalization of the EWCS to its quantum version such that the holographic reflected entropy contains higher-order quantum corrections as well. Specifically, in the presence of quantum fields in the bulk, we propose that the reflected entropy between A and B on the boundary can be evaluated by holography as

S_R(A : B) = \min_{E_{A:B}} \left[ \frac{2\,\mathrm{Area}(E_{A:B})}{4 G^{(d)}} + S_R\!\left(\Sigma^A_{A\cup B} : \Sigma^B_{A\cup B}\right) \right]. \qquad (4)

In comparison with equation (3), the key difference is that the minimization is taken at the final step, such that the minimal cross-section E^{min}_{A:B} is influenced by the entanglement between the quantum fields in the bulk regions Σ^A_{A∪B} and Σ^B_{A∪B} as well. We therefore call it the quantum entanglement wedge cross-section (QEWCS). Obviously, when the total system A∪B is in a pure state, the holographic reflected entropy (4) recovers the holographic entanglement entropy in (1) and the QEWCS recovers the QES. In general mixed states, however, one is usually stuck by the difficulty of computing the entanglement of the quantum fields, i.e., the second term in (4). To overcome this difficulty, we investigate the reflected entropy with quantum corrections by virtue of the doubly holographic setup.
The reflected entropy was previously studied in some doubly holographic setups, focusing on the island scenario of reflected entropy [22,23]. There, the EWCS in the (d+1)-dimensional spacetime may end either on the (d−1)-dimensional RT surface in the (d+1)-dimensional spacetime or on the d-dimensional brane, so that the holographic reflected entropy of regions on the d-dimensional boundary may contain the geometric contribution of an island in the dynamical spacetime of the d-dimensional brane theory.
In contrast to the above consideration, we will utilize double holography in a quite different manner: the bulk reflected entropy in (4) is itself evaluated holographically, giving

S_R(A : B) = \min_{E_{A:B}} \left[ \frac{2\,\mathrm{Area}(E_{A:B})}{4 G} + \frac{2\,\mathrm{Area}\!\left( E_{\Sigma^A_{A\cup B} : \Sigma^B_{A\cup B}} \right)}{4 G^{(d+1)}} \right], \qquad (5)

where E_{Σ^A_{A∪B} : Σ^B_{A∪B}} is the EWCS that splits the entanglement wedge of Σ_{A∪B}, denoted as Σ(Σ_{A∪B}), into two parts in the (d+1)-dimensional spacetime. We illustrate the cartoon of the EWCS in double holography in Fig. 1. Equation (5) is the core formula proposed in the present paper. Next, we will present the details of the doubly holographic setup, and then evaluate the reflected entropy with quantum corrections for some bipartite systems in pure AdS space and in a black hole background, respectively. We consider an action for the brane which contains a tension term and a Dvali-Gabadadze-Porrati (DGP) term [25,26], so the total action of the system is given as

\begin{aligned}
I = {} & \frac{1}{16\pi G^{(d+1)}} \int_N \sqrt{-g}\,\Big( R + \frac{d(d-1)}{L^2} \Big) + \frac{1}{8\pi G^{(d+1)}} \int_M \sqrt{-h}\, K \\
& + \frac{1}{8\pi G^{(d+1)}} \int_Q \sqrt{-h}\,\big( K - \alpha \big) \\
& + \frac{1}{16\pi G} \int_Q \sqrt{-h}\, R_h + \frac{1}{8\pi G^{(d+1)}} \int_P \sqrt{-\Sigma}\, \vartheta + \frac{1}{8\pi G} \int_P \sqrt{-\Sigma}\, k ,
\end{aligned} \qquad (6)

where h is the induced metric on the boundary M and on the brane Q, and Σ is the induced metric on P. K is the extrinsic curvature scalar of h and R_h is the intrinsic curvature scalar of h, while ϑ and k are the intrinsic and extrinsic curvature scalars of P. The constant α is proportional to the tension of the brane, and for simplicity we simply call the second line the tension term. In the third line, the DGP term is introduced. To determine the metric of the background, we need to solve the equations of motion.
For this purpose, we impose a Dirichlet boundary condition on the conformal boundary M and a Neumann boundary condition on the brane Q:

M : \quad h_{\mu\nu}\big|_{M} = \frac{L^2}{\epsilon^2}\,\eta_{\mu\nu}, \qquad
Q : \quad K_{\mu\nu} - (K - \alpha)\,h_{\mu\nu} + \lambda L \Big( R_{h\,\mu\nu} - \tfrac{1}{2} R_h\, h_{\mu\nu} \Big) = 0,

where ε is the length cutoff of the theory on the conformal boundary and λ is the dimensionless coupling of the DGP term. We will take the semi-classical limit L^{d−1}/G^{(d+1)} → ∞, such that the background is described by the classical solutions of the Einstein equation on N.
The above system can be viewed from the following three perspectives [5,7]: Bulk perspective: the pure gravity theory in the asymptotically AdS space N with the above boundary conditions on the conformal boundary M and the brane Q.
Brane perspective: The gravity near the brane Q is localized by the (d+1)-dimensional negative curvature [27]. After imposing the Einstein equation in N, the bulk theory is dual to the theory of the induced metric on the brane Q together with the CFT living on both Q and M [28]. One may think of it as a gravity-plus-CFT theory on Q coupled to the CFT on the flat half-space M at the intersection P, where the former is the system we are interested in and the latter may be treated as a bath.
Boundary perspective: The geometry on the brane is also an asymptotic AdS space. The gravity-plus-CFT theory is dual to the (d − 1)-dimensional theory without gravity on its boundary, namely, the intersection P [5]. With the language of boundary conformal field theory (BCFT) [24], the intersection P is the boundary of the CFT on M , and the theory on P forms a conformal defect [7].
The gravity-plus-CFT theory in the brane perspective exhibits the following advantages in the study of the reflected entropy (4).
• The CFT has a semi-classical gravity duality characterized by large central charge and entropy.
• The quantity in the square bracket in (4) can be computed by the RT formula in (d + 1)-dimensional bulk, with the form of that in (5).
• The state in the gravity-plus-CFT theory on Q is mixed, caused by its interaction with the bath CFT.
Next, we will consider pure AdS space and a black hole as two specific states of the bulk N and compute the reflected entropy for a simple bipartition of the region P.
A. Background
The AdS spacetime with a brane is considered as the ground state. We first take the bulk N to be an AdS_{d+1} spacetime, with the conformal boundary M and the brane Q located at ρ = ρ₀ in the AdS_d slicing coordinates

ds^2 = L^2 \left[ d\rho^2 + \cosh^2\!\rho\; \frac{d\zeta^2 - dt^2 + d\vec{y}^{\,2}}{\zeta^2} \right]. \qquad (9)

From (9), the induced metric on the brane Q is an AdS_d spacetime.
The Neumann boundary condition on the brane gives rise to

K_{\mu\nu} - (K - \alpha)\, h_{\mu\nu} + \lambda L \left( R_{h\,\mu\nu} - \tfrac{1}{2} R_h\, h_{\mu\nu} \right) = 0, \qquad (12)

which should be satisfied by the above geometry and embedding.
For later convenience, we apply the coordinate transformation

z = \frac{\zeta}{\cosh\rho}, \qquad x = -\zeta \tanh\rho, \qquad (13)

and rewrite the metric in the Poincare coordinate system (z, x, y, t) as

ds^2 = \frac{L^2}{z^2}\left( dz^2 + dx^2 + d\vec{y}^{\,2} - dt^2 \right). \qquad (14)

We denote the inner angle between the brane Q and the conformal boundary M as π − θ, where 0 ≤ θ ≤ π; then the location of the brane Q can be described by

x = -z \sinh\rho_0, \quad \text{i.e.,} \quad z = -x \tan\theta . \qquad (15)

It is easy to see that θ is related to ρ₀ by cot θ = sinh ρ₀, or csc θ = cosh ρ₀. Thus the boundary condition in (12) becomes αL + λ sin²θ − 2 cos θ = 0. In general, one can freely choose the values of θ ∈ [0, π] and λ ∈ R, with αL then determined by the above equation. We will mainly consider the case 0 < θ ≤ π/2, for which the boundary entropy is always positive [24].
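For a quick numerical check of these embedding relations, the following is a minimal sketch (assuming only the relations quoted above; the values of θ and λ are arbitrary examples, not values singled out in the text):

```python
import numpy as np

# Brane angle theta vs. AdS slicing coordinate rho_0: cot(theta) = sinh(rho_0).
theta = np.pi / 4                       # example angle (also used in Sec. IV)
rho0 = np.arcsinh(1.0 / np.tan(theta))  # invert cot(theta) = sinh(rho_0)

# Consistency check of the equivalent relation csc(theta) = cosh(rho_0)
assert np.isclose(1.0 / np.sin(theta), np.cosh(rho0))

# The boundary condition alpha*L + lambda*sin^2(theta) - 2*cos(theta) = 0
# fixes the dimensionless tension alpha*L once theta and lambda are chosen.
lam = 0.5                               # example DGP coupling lambda
alphaL = 2.0 * np.cos(theta) - lam * np.sin(theta) ** 2
print(f"rho_0 = {rho0:.4f}, alpha*L = {alphaL:.4f}")
```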
B. RT surface
Throughout this paper, we only consider time-independent states. So we will work on a specific time slice of {N, M, Q, P } and denote them with the same notations for convenience.
Rather than considering the bipartition with finite intervals in Fig. 1(b), whose entanglement wedge is in general rather complicated for numerical simulations, we will consider the bipartition A : B, where A and B are two half-infinite intervals satisfying P = A ∪ B, as shown in Fig. 2. In the Poincare patch, we let n = d − 2, \vec{y} = (y, \vec{w}) and \vec{w} = (w_1, ..., w_{d−3}).
The regions {P, A, B} are defined as

P = \{(x, y) \,|\, x = 0,\ y \in \mathbb{R}\}, \qquad A = \{(x, y) \,|\, x = 0,\ y > 0\}, \qquad B = \{(x, y) \,|\, x = 0,\ y < 0\},

which always cover all the space along the transverse directions \vec{w}; their dependence on \vec{w} is suppressed owing to the translational symmetry.
Our goal is to calculate the reflected entropy of A : B by finding its minimal QEWCS.
First, we need to figure out the entanglement wedge of P. Notice that P is a codimension-3 manifold. To apply the RT formula here, we may imagine that P has a finite width along the x direction on the boundary M, which scales as the UV cutoff of the boundary theory.
Technically, we will consider codimension-2 counterparts {p, a, b} near the brane, defined as

p = \{(x, y) \,|\, 0 \le x \le x_b\}, \qquad a = \{(x, y) \,|\, 0 \le x \le x_b,\ y > 0\}, \qquad b = \{(x, y) \,|\, 0 \le x \le x_b,\ y < 0\}.

We can obtain {P, A, B} from {p, a, b} by sending x_b → ε. Thanks to this limiting process, the RT surface of P can be obtained from the RT surface of p by taking the limit. The above setup is illustrated in Fig. 2. Next, we turn to the entanglement in {p, a, b}.
In this subsection, we will focus on the entanglement entropy S(p) associated with the region p, but leave the reflected entropy S R (a : b) for investigation in the next subsection.
Now to compute S(p), it is essential to figure out the RT surface X p and the entanglement wedge Σ p of p.
According to (2), the entropy S(p) is the minimum of

\tilde S(p) = \frac{\mathrm{Area}(\tilde X_P)}{4 G} + \frac{\mathrm{Area}(\tilde X_p)}{4 G^{(d+1)}} \qquad (18)

with respect to the surface X̃_p anchored on the line ∂p = {(x, y) | x = x_b, y ∈ R} and the line X̃_P = X̃_p ∩ Q on Q, where the tildes refer to quantities before minimization. The minimization can be achieved in two steps. Firstly, given an X̃_P, we find the minimal surface X̃_p anchored on ∂p and X̃_P. Secondly, we minimize the entropy with respect to X̃_P and determine X_P.
In general, there are two candidates for the RT surface X_p: one ends on the brane Q (X_P ≠ ∅) and the other does not (X_P = ∅), as shown in Fig. 2. We call the former the island phase and the latter the trivial phase. The island phase depends on the action on the brane Q, while the trivial phase is a surface at x = x_b stretching into the bulk, which is independent of the brane. The entanglement wedge Σ_p is the region enclosed by the RT surface and the brane.
In the first step, we determine the minimal surface X̃_p. We work in the (z, x) coordinates of (14) and parameterize X̃_p as (z(x), x) or (z, x(z)). Then the area of X̃_p is proportional to the integral

\int dx\, \frac{\sqrt{1 + z'(x)^2}}{z^{\,d-1}}, \qquad (19)

where the undetermined coefficient z_* is the value of z at the turning point, z'(x) = 0.
Treating the integral as an action, we find that the corresponding equation of motion, derived from the conserved Hamiltonian, is

z'(x)^2 = \left( \frac{z_*}{z} \right)^{2(d-1)} - 1, \qquad (20)

so that x(z) has the two solutions x_±(z),

x_\pm(z) = x_b \pm \int_0^{z} \frac{dz'}{\sqrt{(z_*/z')^{2(d-1)} - 1}}. \qquad (21)

The minimal surface X̃_p parameterized by (21) intersects Q at X̃_P. Denote the location of X̃_P as (z₀, x₀), which satisfies (15). Then the areas of X̃_P and X̃_p are given by

\mathrm{Area}(\tilde X_P) = V_n \left( \frac{L}{z_0} \right)^{n}, \qquad
\mathrm{Area}(\tilde X_p) = V_n L^{d-1} \left( \int_{\tilde\epsilon z_*}^{z_*} + \sigma \int_{z_0}^{z_*} \right) \frac{dz}{z^{d-1} \sqrt{1 - (z/z_*)^{2(d-1)}}}, \qquad (22, 23)

where V_n = \int d^n y, σ = sgn(x_+(z_*) − x₀), ε̃ = ε/z_*, and z̃(x) is the inverse function of (21).
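For illustration, the branches x_±(z) can be traced by numerically integrating the first-order equation above. The following is a minimal sketch assuming d = 3 and example values of z_* and x_b (not values used in the paper):

```python
import numpy as np
from scipy.integrate import quad

d, z_star, x_b = 3, 1.0, 0.1   # dimension, turning point, strip width (examples)

def dxdz(z):
    # dx/dz = 1/sqrt((z_*/z)^{2(d-1)} - 1), from the conserved Hamiltonian (20)
    return 1.0 / np.sqrt((z_star / z) ** (2 * (d - 1)) - 1.0)

def x_plus(z, eps=1e-12):
    # branch anchored at (z -> 0, x = x_b), integrated toward the turning point
    val, _ = quad(dxdz, eps, z)
    return x_b + val

for z in np.linspace(0.1, 0.999 * z_star, 5):
    print(f"z = {z:.3f}  x_+(z) = {x_plus(z):.4f}")
```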
In the second step, we determine the location of X_p. In the coordinate system (ρ, ζ) of (9), the candidate surface X̃_p anchored at ∂p and X̃_P on its two ends can be parameterized as (ρ, ζ(ρ)). As a result, the areas of X̃_p and X̃_P are separately given by

\mathrm{Area}(\tilde X_p) = V_n L^{d-1} \int d\rho \left( \frac{\cosh\rho}{\zeta} \right)^{n} \sqrt{1 + \frac{\cosh^2\!\rho\; \zeta'(\rho)^2}{\zeta^2}}, \qquad
\mathrm{Area}(\tilde X_P) = V_n L^{d-1} \left( \frac{\cosh\rho_0}{\zeta(\rho_0)} \right)^{n}.

So, before the minimization, the dimensionless density of entropy in (18) is

\tilde s_p = \lambda \left( \frac{\cosh\rho_0}{\zeta(\rho_0)} \right)^{n} + \int d\rho \left( \frac{\cosh\rho}{\zeta} \right)^{n} \sqrt{1 + \frac{\cosh^2\!\rho\; \zeta'(\rho)^2}{\zeta^2}}.

By requiring δs̃_p/δζ(ρ) = 0, we obtain the boundary condition of ζ(ρ) as [7]

\frac{\cosh^2\!\rho_0\; \zeta'(\rho_0)}{\sqrt{\zeta(\rho_0)^2 + \cosh^2\!\rho_0\; \zeta'(\rho_0)^2}} = n\lambda. \qquad (29)

So for the island phase it is necessary that n|λ| ≤ csc θ, but this is not sufficient. We will come back to this point soon.
By utilizing the coordinate relation (13), we can numerically find the value of z_* such that the surface (21) satisfies the boundary condition (29) at the intersection (15). This yields the dimensionless entropy density at the extremum (30). The surface X_p and the entanglement entropy density s_p in the island phase are illustrated in Fig. 3. In the trivial phase, the entropy density is simply s_p = 1/(n ε^n), which matches (30) in the limit z₀, z_* → ∞ with finite λ.
Let us compare the two phases for different λ. Firstly, to avoid the induced gravity on the brane Q becoming unstable [7], we require the lower bound nλ > −1, which is stronger than nλ ≥ −csc θ. Secondly, when nλ is slightly above −1, the island phase is preferred, since its entropy is smaller than that of the trivial phase, as shown in Fig. 3. Thirdly, when λ grows, the first term in (18) grows with λ; at the same time, the RT surface X_p in the island phase stretches into the bulk in order to alleviate the growth of this term. Fourthly, when the scale of the RT surface X_p becomes large enough at some value of λ, the finite width x_b becomes negligible, so we can take the limit x_b/z_* → 0 in (21) and find the ratio γ = z₀/z_* ∈ [0, 1], which vanishes only at θ = 0, π/2, as shown in Fig. 4.
We can further calculate ζ(ρ₀)/ζ'(ρ₀) and λ from (13) and (29) in this limit. The value of λ in this limit, denoted λ_c, is the upper bound on λ for the island phase. We plot λ_c as a function of θ in Fig. 4 and find that n|λ_c| ≤ 1 always holds. In this limit, s_p → 1/(n ε^n), approaching the value of the trivial phase from below. Fifthly, when λ ≥ λ_c, the island phase does not exist and the RT surface X_p is in the trivial phase.
C. EWCS ending on the brane
After figuring out the RT surface of the region p, it is straightforward to define the QEWCS between the subregions a and b. Thanks to the translational invariance along the y directions, the EWCS between a and b is the surface at y = 0 inside the entanglement wedge,

E_{a:b} = \{(x, y, z) \in \Sigma_p \,|\, y = 0\},

which intersects the brane Q along the segment {y = 0, z = −x tan θ, ε ≤ z ≤ z₀}. According to the proposal, the reflected entropy S_R(a : b) can be evaluated by the minimal area of the QEWCS. We therefore obtain the density of the reflected entropy in the form of (34), where Θ(σ) is the step function and f_n(z₀/z_*, Θ(σ)) are some complicated functions. The cutoff is chosen as z = ε, i.e., constant ζ = ε cosh ρ₀. The dependence of the reflected entropy density s_R^{a:b} on λ is shown in Fig. 3. We remark that each term in the final expression has its own geometric correspondence, which we demonstrate in Fig. 5. One issue deserves attention: as x_b → 0 the QES shrinks toward the conformal boundary, z₀ → 0. This issue occurs since, for pure AdS, the CFT stays in the ground state with long-range correlations; as a consequence, the CFT on the brane Q is highly entangled with the CFT on M. From (1), we notice that the QES X_P tends to contain a small Σ_P to resist the high entanglement of the CFT, which leads to z₀ → 0 when x_b → 0. To cure this issue, one may increase the proportion of the first term in (1). Here we propose two prescriptions: increasing the value of λ, or adding a black hole. The former prescription will be discussed immediately; the latter breaks the scaling symmetry and will be considered in the next section.
After all, if we send x_b → ε, the bipartition a : b effectively approaches A : B. Meanwhile, to avoid z₀ ∼ ε, one can further choose a large λ approaching λ_c from below. According to the analysis in the last subsection, the RT surface is then subject to z₀ = γ z_* ≫ x_b; it stretches into the bulk and stays away from the conformal boundary M even as we send x_b → ε. This tendency has been checked numerically in Fig. 3.
When λ ≥ λ_c, i.e., in the trivial phase, the cross-section E_{a:b} becomes the surface {(x, y, z) | x < 0, y = 0, z > −x tan θ}, and the density of the reflected entropy is proportional to

\cot\theta \int_{\epsilon} \frac{dz}{z^{n}}, \qquad (35)

which encounters an IR divergence for n = 1.
Let us discuss the reflected entropy from the boundary perspective. In the island phase, the behavior of the reflected entropy reflects the finite correlation length ξ of the conformal defect on P. In the presence of the QES at ζ₀, the correlation is suppressed by the entanglement between the conformal defect on P and the bath CFT_d on M, whose correlation length along the y axis scales as ξ ∼ ζ₀ [29]. In other words, the reflected entropy is dominated by the correlation within the smaller region {(x, y) | 0 ≤ x < x_b, −ξ < y < ξ} in the CFT. When d = 3, this is similar to the situation of Fig. 1(a), where A ∪ B is a subregion of P with length scaling as ξ. So the reflected entropy scales as ln(ξ/ε) ∼ ln(z₀/ε) [9], in agreement with (34).
In the trivial phase, we have ζ₀ → ∞, and the correlation within P is no longer suppressed by the entanglement between the defect P and the bath CFT on M. The reason is that the central charge c_P of the defect on P is comparable to the central charge c_M of the bath CFT on M; more precisely, c_P/c_M ∼ (1 + nλ) csc^n θ [7]. If we neglect the influence of the bath CFT, the defect on P behaves as a CFT_{d−1}, whose reflected entropy between its two half-spaces scales exactly as (35), where n = d − 2.
IV. THE REFLECTED ENTROPY IN THE BLACK HOLE BACKGROUND
At finite temperature, the (d+1)-dimensional bulk geometry is a neutral black hole with a brane, which bends toward the interior of the bulk due to gravity and touches the horizon.
Unlike the case of pure AdS space, the background is now characterized by the finite parameters {θ, λ, T}, and the RT surface in the bulk does not shrink to zero size even for x_b → 0. On the other hand, with the increase of λ, the QES approaches the horizon. Following the framework proposed in [5,6,30,31], in this section we numerically construct the black hole background with a brane and further evaluate the reflected entropy S_R(A:B) by the minimal cross-section E_{A:B}. Its behavior for different λ is analyzed as well.
A. Background
To discuss the reflected entropy at finite temperature in the doubly holographic setup, the first task is to construct a black hole background with a brane. In particular, once the backreaction of the brane is taken into account, one usually needs to solve the equations of motion numerically. Such static backgrounds in higher dimensions have previously been investigated by virtue of the Einstein-DeTurck method [5,32,33]. In this paper, we consider the specific case of d = 3 and, for later convenience, introduce a coordinate system (t, w, r, y) by a suitable transformation (36) of the Poincare coordinates. In this coordinate system, the metric ansatz for the black hole background is given as (37), where {F_i | i = 1, 2, ..., 5} are functions of (r, w) in the domain {0 < w < 1, 0 < r < 1}.
The configuration of the background is given by the following setup. The brane Q is located at w = 0. The infinity I far from the brane is located at w = 1. The boundary M is located at r = 1 and the horizon H is located at r = 0.
Instead of solving the Einstein equation directly, we will solve the Einstein-DeTurck equations [5,32,33]

R_{\mu\nu} - \nabla_{(\mu}\xi_{\nu)} + \frac{d}{L^2}\, g_{\mu\nu} = 0, \qquad \xi^\mu = g^{\nu\rho}\left( \Gamma^\mu_{\nu\rho}[g] - \Gamma^\mu_{\nu\rho}[\bar g] \right), \qquad (39)

where ξ^μ is the DeTurck vector and ḡ is the reference metric. Boundary conditions are imposed on the surfaces {I, H, M} and on the brane Q as described above. The reference metric ḡ should be subject to the same boundary conditions as g on the surfaces {I, H, M}; thus we choose it to be an AdS-Schwarzschild black hole. With the general metric ansatz (37), the background is obtained numerically via the Newton-Raphson method, where we discretize (39) in the r and w directions with the Chebyshev pseudospectral method.
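As a sketch of the numerical machinery (not the authors' code), the Chebyshev differentiation matrix underlying such a pseudospectral Newton-Raphson solver can be constructed as follows, here verified on a simple test function:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and grid on [-1, 1] (Trefethen's recipe)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)          # Chebyshev-Gauss-Lobatto points
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # negative-sum trick for the diagonal
    return D, x

D, x = cheb(16)
# Sanity check: differentiate sin(x) and compare against cos(x)
print(np.max(np.abs(D @ np.sin(x) - np.cos(x))))      # ~1e-12 at this resolution
```

In a full solver, the DeTurck residuals of the functions F_i would be evaluated on the (r, w) tensor-product grid built from two such matrices, and the resulting nonlinear system iterated with Newton-Raphson until the residual converges.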
To take the backreaction of the brane into account, hereafter we fix θ = π/4 and vary λ. The numerical solutions of the induced metric on the brane (w = 0) for different λ are illustrated in Fig. 6. Note that in the original coordinates (t, x, y, z) the component h_zz diverges on the horizon z = 1, but this apparent divergence disappears at r = 0 in the new coordinates (t, r, w, y).
B. RT surface
Next, for a given bipartite system, we intend to determine the RT surface over the black hole background. Following the scheme in [30,31], we divide the RT surface into two segments at the turning point, each of which can be parameterized by

r = \tilde r(w) \quad \text{and} \quad w = \tilde w(r),

respectively. Then the entropy density is the minimum of the sum of the two area terms,

\tilde s_p = (\tilde s_p)_{\rm DGP} + (\tilde s_p)_{\rm bulk}, \qquad (\tilde s_p)_{\rm bulk} := \frac{\mathrm{Area}(\tilde X_p)}{4 G^{(d+1)}}, \quad (\tilde s_p)_{\rm DGP} := \frac{\mathrm{Area}(\tilde X_P)}{4 G},

where s̃_p, as well as the intersection r = r̃(0), changes with the shift of the turning point {r_t, w_t}.

[Fig. 7 caption: here L is fixed to 1 and the curves from violet to red are plotted for λ = 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1, 1.6, 2, 7, 14, 28.]
When s̃_p reaches its minimum s_p, we denote the corresponding solutions as r_c(w) and w_c(r) for each segment, and the intersection as r = r_c(0).
In the limit x_b → 0, the region p becomes P, and the corresponding entropy S_p becomes S_P, which is the quantity we are really concerned with. The configurations of the RT surfaces for different values of λ are shown in Fig. 7. Since the black hole breaks the scaling symmetry of the AdS₄ space, in general the RT surface no longer shrinks to the boundary of the brane.
For small λ, the RT surface is located near the boundary and the configuration is similar to the vacuum case, while for large λ the RT surface is stretched along the x direction near the horizon. Moreover, with the growth of λ, the increase of the entanglement entropy density s_P comes almost entirely from the increase of (s_P)_DGP, while (s_P)_bulk stays almost constant, as shown in Fig. 8.
This tendency can also be understood from the brane perspective. In the presence of a black hole at finite temperature, the CFT₃ on Q ∪ M is characterized by correlations of finite length. When one searches for the QES of P by utilizing (1), the entropy density of the CFT within the wedge Σ_P is bounded by the thermal correlation length. For large λ, i.e., small G^{(d)}, the entropy is dominated by the geometric term in (1); therefore, the QES approaches the horizon.
C. EWCS ending on the brane
Now we consider the reflected entropy between the two subsystems by evaluating the area of the EWCS. As x_b → 0, the bipartition a : b becomes exactly A : B, and the reflected entropy S_R(a : b) becomes S_R(A : B). Similarly, the cross-section E_{a:b} is parameterized in two parts by the line w/w_t = (1 − r)/(1 − r_t).
The numerical results for the reflected entropy and its behavior under the change of λ are illustrated in Fig. 9. Similar to the case of the entanglement entropy, the reflected entropy grows with λ. From the brane perspective, due to the finite thermal correlation length on the brane, the reflected entropy contributed by the CFT in (4) is bounded above, so for large λ the geometric term becomes dominant. With the increase of λ, the geometry on the brane changes slowly, as shown in Fig. 6, and so does the area of E_{A:B}, while 1/G^{(d)} increases linearly for large λ. So the reflected entropy S_R(A : B) increases linearly, as shown in Fig. 9.
In this paper, we have investigated the reflected entropy including the entanglement of quantum matter via the doubly holographic setup. We have proposed the notion of the quantum entanglement wedge cross-section (QEWCS), which minimizes the sum of the geometric contribution and the quantum matter contribution in (4) and may describe the reflected entropy with higher-order quantum corrections. Specifically, we have considered a (d+1)-dimensional gravity theory in AdS with a brane anchored on the conformal boundary, which is dual to a gravity-plus-CFT theory living on the brane together with a bath CFT living on the conformal boundary. Taking the tension and the DGP term on the brane into account, we have obtained the reflected entropy for a bipartition of the boundary in the gravity-plus-CFT theory by calculating the minimal area of the corresponding entanglement wedge cross-section in the (d+1)-dimensional space. In general, the reflected entropy consists of two parts, one contributed by the geometry on the brane and the other by the CFT on the brane. We have computed their proportions for different Newton constants in the DGP term and found that their behavior agrees with the analysis based on semi-classical gravity and the correlations of the CFT coupled to the bath CFT.
It is worthwhile to point out that due to the parity y → −y of the bipartition A : B chosen in this paper, the configurations of the QEWCS in (4) and the EWCS in (3) happen to be the same. Nevertheless, we intend to stress that their definitions are quite different.
It is worth further studying the QEWCS for bipartitions without this parity, for which the configurations of the QEWCS and the EWCS are expected to differ.
The reflected entropy in double holography provides a way to compute the entanglement contributed by quantum matter in the bulk of spacetime. Our setup may also be applied to an eternal black hole coupled to baths, which has recently played a key role in the understanding of the black hole information loss paradox [5][6][7][33][34][35].
| 7,275.8 | 2021-09-19T00:00:00.000 | ["Physics"] |
Production of Abrasive Sandpaper using Periwinkle Shells and Crab Shells
In this study, the properties of periwinkle shell and crab shell grains, such as hardness, compressive strength and wear resistance, were examined for their suitability as abrasive materials. The binding effect of polyester resin at high concentration was also considered. Through crushing, grinding and subsequent sieving with an ASTM E11 set of sieves, the shells were processed into standard grit grain sizes of P40 and P60. Furthermore, by mixing and mould compression using a hydraulic press, polymer matrix composites were developed from the grits, with particle content varying from 96 wt.% to 92 wt.% and resin from 3 wt.% to 7 wt.%, together with 1 wt.% each of cobalt naphthalene and methyl ethyl ketone peroxide hardener. It was found that, with an increase in polyester resin content, the hardness and compressive strength increased, while the wear rate decreased. The composition of 92 wt.% periwinkle shell grains to 7 wt.% polyester resin was found to have the most improved abrasive properties.
INTRODUCTION
Abrasive materials, which are very rigid mineral materials, are used in shaping, finishing and polishing other materials. Abrasive materials are processed in a furnace, after which they can be further crushed and sifted into different grain sizes called grits [1]. Hardness, brittleness, character of fracture, toughness, grain shape and grain size, purity and uniformity of the grains are considered the most significant physical properties of abrasive materials [2]. Sandpaper is an abrasive grain material that makes the surface of a work-piece smoother or rougher when rubbed against it; small amounts of the work-piece surface material are removed in the process. This is useful for removing an unwanted layer or coating, such as dirt, paint or stain, from the surface of a material [3][4] in order to attain precision and a good surface appearance. Sandpaper consists of abrasive grain material fixed to a flexible backing material by an adhesive [5].
In Nigeria, the available synthetic abrasive materials are either very scarce or very expensive, mainly due to the non-availability or high cost of the materials (such as silicon carbide, aluminium oxide and aluminium silicate) used in their production [6]. Meanwhile, the available natural materials, like lime, chalk and silica, aluminium silicate, kaolinite, diamond and diatomite, are found to be less effective due to the impure nature of the materials [7][8]. In developing countries like Nigeria, bio-wastes (such as shells from crabs, periwinkles, snails, etc.), which come from biological organisms, are dumped indiscriminately and thus result in environmental nuisance. The need to convert these bio-wastes into industrial tools like sandpaper is therefore essential for economic development and sustainability, and will assist in addressing the much-needed economic diversification in Nigeria to cushion the effect of COVID-19 on the nation's economy.
Periwinkles are marine molluscs (gastropods) with thick spiral shells. The periwinkle shells are the outer casings of the sea snails; they are hard, with swollen rough surfaces, and are usually regarded as waste after the inner fleshy part of the snail is consumed. As they grow, gastropod shells follow a mathematically ordered pattern; thus, as they increase in size, they retain their basic form. It is the presence of a high concentration of calcium carbonate (CaCO3) that makes the shell hard. In an experiment on the chemical content of the periwinkle shell and its suitability for thin layer chromatography carried out by Orji et al. [9], the shells were found to contain a very high percentage of CaCO3, making them a probable source of CaCO3. This high content is why they were considered for the present experiment.
Crabs are crustaceans that are not readily available to people in landlocked areas but are available in high quantities in coastal areas. The swimming crab, Callinectes amnicola, is an important food item in the coastal waters of West Africa; it belongs to the phylum Arthropoda, order Decapoda and family Portunidae. Crabs are high in calcium, potassium, magnesium and manganese, which gives them a hard and rigid exoskeleton [10]. Other functions of these minerals include the formation of the endoskeleton structure and the maintenance of the colloidal system. The crab shell contains 40-70% calcium carbonate, varying according to the species; the calcium carbonate can be further processed into calcium hydroxyapatite [11].
In engineering applications, these sea shells are used as substitutes for aggregates (chippings), especially in coastal areas where aggregates are lacking. Some researchers have conducted exploratory studies on the partial or total substitution of coarse aggregate with waste sea shells for the production of mortar and concrete used in civil construction. In the study carried out by Adewuyi and Adegoke [12], it was found that the replacement of granite with 35.4-42.5% waste periwinkle shells did not compromise the compressive strength of the resulting concrete and yielded a 14.8-17.5% saving in material cost. In glass manufacturing, the suitability of periwinkle shells as a substitute for lime was examined by Malu and Bassey [13]. Their proximate analysis of periwinkle shell showed that it contained calcium oxide (38.4%), silicon (IV) oxide (0.014%), magnesium oxide (18.70%), aluminium trioxide (0.211%) and iron oxide (0.019%), which are important minerals suitable for glass production [13]. Emery cloth/sandpaper was manufactured from locally sourced materials by Wai and Lily [14]: silicon sand (quartz), sieved into a fine grit of 180 µm and a coarse grit of 50 µm, was used as the abrasive grit with epoxy resins as the binder to produce sandpaper by the hand spray method, and the researchers recommended the manufacturing process for small-scale industries. Kishore also investigated the mechanical properties of crab shell; the results showed that, under tensile loading conditions, the whole crab shell exhibited a mode of fracture that does not include the delamination of lamellae common to other arthropods [15].
It is of interest to further study some properties of periwinkle shells and crab shells and to determine their usefulness and relevance in engineering applications, especially in the production of abrasive materials. Therefore, periwinkle and crab shells, which are regarded as waste products, were considered for the production of abrasive sandpaper in this study. This will help to domesticate the production of sandpaper in Nigeria, assist in the economic diversification of the nation through the establishment and sustainability of Small and Medium Scale Enterprises (SMSEs), and proffer a sustainable solution to the issues of unemployment among the nation's youths and the economic downturn that may arise as a result of the COVID-19 lockdown.
METHODOLOGY 2.1 Materials Preparation
The periwinkles and crabs were commercially obtained from a market at Egbeda, Lagos State, Nigeria. The periwinkle and crab shells (Figures 1a and b) were removed and washed in water to remove all traces of dirt. Approximately 3.5 kg of shell samples were sun dried for 2 weeks and further oven dried until the moisture content was fully removed. The dried samples were then charged into a local grinding machine, where they were ground into powder. The powdery materials were then separately sieved (following ASTM E11 guidelines) with two sieve sizes of 250 μm (P60 abrasive grits) and 420 μm (P40 abrasive grits) [16][17]. The other substances used in this study, which include cobalt naphthalene, methyl ethyl ketone peroxide and polyester resin, were classified and labelled for easy identification. The test samples were produced from each weight composition of resin by compressing the pastes into solid shapes in a metal mould of height 30 mm and diameter 52 mm using a hydraulic press (Model P100T, Serial No. 38280). The samples were kept in a well-aerated environment for 21 days, as practiced by Ibrahim et al. [6]. Some of the samples produced from the formulation are presented in Figure 4, while Table 1 shows the formulation used in the production of the sandpaper samples.
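To make the mixing ratios concrete, a small helper can convert a wt.% formulation into batch masses; the 100 g batch mass and the example composition below are hypothetical illustrations, not values taken from Table 1:

```python
def batch_masses(composition, batch_mass_g=100.0):
    """Convert a wt.% composition dict into component masses (g) for one batch."""
    total = sum(composition.values())
    if abs(total - 100.0) > 1e-9:
        raise ValueError(f"composition sums to {total} wt.%, expected 100")
    return {name: batch_mass_g * pct / 100.0 for name, pct in composition.items()}

# Hypothetical mix in the spirit of Table 1 (grains : resin : accelerator : hardener)
mix = {"shell grains": 92, "polyester resin": 6,
       "cobalt naphthalene": 1, "MEKP hardener": 1}
print(batch_masses(mix, batch_mass_g=100.0))
```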
Samples' Characterization
The samples of the sandpapers produced were characterized through determination of some of the samples' physical and mechanical properties, such as water absorption, wear rate, hardness and strength.
Water Absorption Test
The water absorption test was carried out by determining the weight of the sample (W_o) using a weighing machine and subsequently placing the sample in a container of water. It was re-weighed after 24 hours to obtain W_i. For each sample, the percentage weight gained was calculated and recorded using the relationship in Equation 1 [18]:

Water absorption (%) = (W_i − W_o)/W_o × 100.  (1)
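A minimal computation of Equation 1 (the sample weights below are hypothetical):

```python
def water_absorption_pct(w_o, w_i):
    """Percentage weight gain after 24 h immersion, Equation 1: (Wi - Wo)/Wo * 100."""
    return (w_i - w_o) / w_o * 100.0

# Hypothetical sample: 40.0 g dry, 41.2 g after 24 h immersion
print(f"{water_absorption_pct(40.0, 41.2):.2f} %")   # -> 3.00 %
```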
Hardness Test
The hardness values of the composites of different grain sizes were determined by the Brinell hardness method using a Testometric Materials Testing Machine (Type DBBMTCL-5000 kg, Serial No. 17819) at the National Centre for Agricultural Mechanization Universal Testing Machine (NCAM UTM) Laboratory, Ilorin, Nigeria, based on ASTM E10-18 guidelines [19]. A 10 mm steel ball indenter was pressed into the specimen for a test duration of 5 min under a load of 50 kg to a penetration depth. The Brinell hardness number was calculated using Equation 2:

BHN = 2F / (πD(D − √(D² − d²))).  (2)
Compressive Test
The compressive strength test was carried out using the Testometric Materials Testing Machine (Type DBBMTCL-5000 kg, Serial No. 17819) at the National Centre for Agricultural Mechanization Universal Testing Machine (NCAM UTM) Laboratory, Ilorin, Nigeria, based on the ASTM C617 guidelines [20].
Wear Test
The wear test was carried out using a Europec Bench Grinder (MD-250F). The grinding wheel was coated with an abrasive material, and a load of 2 kg was applied to each specimen for 2 minutes at 2950 rev/min. The difference in weight measured before and after the test gives the wear of the sample. The formula (Equation 3) used to convert the weight loss into wear rate, as practiced by Bashar et al. and Edokpia et al. [21][22], is

Wear rate = (w_x − w_y)/s,  (3)

where W_i is the weight after immersion and W_o is the weight before immersion (Equation 1); F is the applied load in kg, D is the diameter of the indenter in mm and d is the diameter of the indentation in mm (Equation 2); and w_x is the weight before the test, w_y is the weight after the test and s is the sliding distance (Equation 3).
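A sketch of the two calculations follows. The Brinell formula is the standard form quoted in Equation 2; for the wear rate, a density input is assumed here in order to express the result in m³/Nm as reported later, since Equation 3 is stated only in terms of the weights and sliding distance:

```python
import math

def brinell_hardness(F_kgf, D_mm, d_mm):
    """Brinell hardness number, Equation 2: BHN = 2F / (pi*D*(D - sqrt(D^2 - d^2)))."""
    return 2.0 * F_kgf / (math.pi * D_mm * (D_mm - math.sqrt(D_mm**2 - d_mm**2)))

def wear_rate(w_x_kg, w_y_kg, load_N, distance_m, density_kg_m3):
    """Specific wear rate in m^3/(N*m): volume loss / (load * sliding distance).
    The density is an assumed extra input used to convert the measured weight
    loss (w_x - w_y) into a volume loss; it does not appear in Equation 3."""
    volume_loss = (w_x_kg - w_y_kg) / density_kg_m3
    return volume_loss / (load_N * distance_m)

# Hypothetical test: 50 kgf on a 10 mm ball leaving a 3.2 mm indentation
print(f"BHN = {brinell_hardness(50.0, 10.0, 3.2):.1f}")
```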
RESULTS AND DISCUSSIONS 3.1 Water Absorption Test
The results of the water absorption test are presented in Figure 5. The results showed that the water absorption rate of the developed abrasive sandpaper samples increased as the sieve size increased from 250 µm to 420 µm in the formulation. The increase in water absorption rate is likely due to poor interfacial bonding between the binder and the filler particles, which increased porosity.
The highest water absorption value for the 250 µm particle size was 5.21%, as against the 420 µm particle size, which showed a highest absorption value of 13.186%. The composites with low resin concentration are prone to water absorption due to the poor interaction between the grain surface area and the resin binder [22]. The 250 µm particle size thus shows the better (lower) water absorption property; overall, the 420 µm particle size of abrasive sandpaper shows a higher water absorption rate compared to the 250 µm particle size. This result is in line with the result obtained by Obot et al. [23].
Hardness Test
Figure 6 shows the results of the Brinell hardness tests carried out on the samples made from the 250 µm and 420 µm sieve sizes. The results showed that the hardness values increased proportionally as the percentage weight composition of the polyester resin increased from 3 to 7 wt.%. The interfacial bonding of the polyester resin holding the periwinkle shell and crab shell grain particles together was responsible for the increase in hardness, and also contributes to the hardness of the parent composite material [ ]. The samples displayed this hardness increment across the sieve sizes, with hardness values of 18437.263 kg/m² and 21650.901 kg/m² for the 250 µm and 420 µm sieve sizes with 7 wt.% polyester resin, respectively. In line with the findings of Obot et al., the samples' hardness increased with reducing particle size [24].
Wear Test
The results of the wear test, carried out for 2 minutes per specimen with a fixed applied load of 2 kg, are presented in Figure 7. It was observed that the 250 μm grain sample had the least wear rate (1.67 m³/Nm), at a grain content of 92 wt.% (7 wt.% resin). It was also observed that the wear rate decreased with increasing resin content, which is due to the increase in hardness and compressive strength of the samples. The relationship between compressive strength, hardness and wear resistance with increased polyester resin content is in agreement with the findings of previous researchers [6,[25][26][27]]. The decrease in wear rate with increase in resin content may also be attributed to the interfacial bonding between the resin and the grains. Therefore, the wear properties of the agro-waste sandpaper can be improved by increasing the polyester resin content. From the results (Figure 7), it can be proposed that the 250 μm sample, with a wear rate of 1.67 m³/Nm, is an alternative to common sandpaper.
Compressive Test
The compressive strength test results in Figure 8 show the compressive strength across the sieve sizes. The 250 μm sieve size samples displayed the lower compressive strengths (1.650 N/mm² and 39.869 N/mm² for the 3% and 7% resin compositions, respectively), while the 420 μm sieve size displayed compressive strengths of 16.429 N/mm² and 45.477 N/mm² for the 3% and 7% resin compositions, respectively. This may be due to the powder fineness inducing brittleness in the composite, as seen from the reduced strength with reducing sieve size. There was an increase in compressive strength with increasing polyester resin concentration from 3 to 7 wt.%. The highest ultimate compressive strength obtained was 45.477 N/mm² for the 420 μm particle size with 7 wt.% polyester resin, while 39.869 N/mm² was recorded for the 250 µm particle size.

CONCLUSIONS

The hardness and compressive strength of the samples increased with the increase in the resin concentration from 3 to 7 wt.%. Meanwhile, with the increase in resin concentration and reduction in grain composition, the wear reduced. The interfacial bonding between the combination of periwinkle and crab shell grains and the polyester resin accounted for the improvement in the tested properties of the composite. 4. The composition of grains at 92 wt.% and resin at 7 wt.%, with 1 wt.% each of methyl ethyl ketone peroxide hardener and cobalt naphthalene, exhibited the most suitable abrasive properties (hardness, wear resistance and compressive strength) and is considered an appropriate alternative material for abrasive sandpaper production. 5. The properties of the agro-waste sandpaper show possible applications as abrasive grits with further improvement.
RECOMMENDATIONS
From the outcomes of this study, it is recommended that: 1. Periwinkle shells and crab shells can be used to convert waste to wealth. 2. Since the performance evaluation of the produced abrasive paper showed it to be suitable for use, the importation of sandpaper into Nigeria can be greatly reduced by producing the abrasive locally using agricultural wastes. 3. Further research can be carried out to discover other hitherto waste materials that could be recycled for meaningful purposes rather than being disposed of.
| 3,516.8 | 2020-06-25T00:00:00.000 | ["Materials Science"] |
Generalized Fuzzy Soft Power Bonferroni Mean Operators and Their Application in Decision Making
: In the decision-making process, decision-makers may make different decisions because of their different experiences and knowledge. The abnormal preference values given by a biased decision-maker (values that are too large or too small relative to the original data) may affect the decision result. To make the decision fair and objective, this paper combines the advantages of the power average (PA) operator and the Bonferroni mean (BM) operator to define the generalized fuzzy soft power Bonferroni mean (GFSPBM) operator and the generalized fuzzy soft weighted power Bonferroni mean (GFSWPBM) operator. The new operators consider not only the overall balance between data and information but also the possible interrelationships between attributes. The excellent properties and special cases of these ensemble operators are studied. On this basis, the idea of the bidirectional projection method based on the GFSWPBM operator is introduced, and a multi-attribute decision-making method with correlations between attributes is proposed. The decision method proposed in this paper is applied to a software selection problem and compared with existing methods to verify its effectiveness and feasibility.
Research Background
Since decision-making problems exist in every field of life, they have always received close attention from scholars. As the social environment becomes increasingly complex, more and more factors are involved in decision-making problems, which leads to uncertainty, hesitation and fuzziness when decision makers (DMs) give evaluation opinions. Therefore, fuzzy set theory, which is used to fit people's fuzzy opinions, has become one of the most commonly used tools for solving decision-making problems [1][2][3]. At the same time, as a more general form of fuzzy sets, soft sets have also received close attention from many scholars and have been applied to uncertain decision-making problems in various fields [4][5][6][7][8]. The concepts of fuzzy soft sets (FSS) [9] and generalized fuzzy soft sets (GFSS) [10] have also been proposed. In recent years, an increasing number of scholars have used generalized fuzzy soft sets to express people's fuzzy views in order to solve decision-making problems [5][6][7][8][11][12]. Considering the complexity of practical problems, DMs with different backgrounds and different levels of professional knowledge and experience are often required to participate in the decision-making process, and different DMs often give different opinions. Therefore, in order to obtain a comprehensive opinion that is accepted by everyone, it is necessary to aggregate the different opinions or carry out corresponding operations. At present, scholars have studied various forms of operators (such as the power average (PA) operator [13], the Bonferroni mean (BM) operator [14] and the power Bonferroni mean (PBM) operator [15]) to perform corresponding operations on different opinions, in order to obtain as accurate a decision scheme as possible. These operators effectively integrate the information of individual DMs into overall information and better account for the possible correlations between different attribute variables. The existing literature on GFSS integration methods is mostly proposed under the condition that the attributes are independent of each other. In fact, there may be different degrees of correlation between different attributes. The PA operator and the BM operator can address these problems: the PA operator can determine the attribute weights according to the support relationships between the attributes, so as to reduce the influence of a biased decision-maker's abnormal preference values on the decision results, while the BM operator can fully consider the correlations between the attributes. However, there has been no research on integrating GFSS using PBM operators. This paper proposes the generalized fuzzy soft power Bonferroni mean (GFSPBM) operator and the generalized fuzzy soft weighted power Bonferroni mean (GFSWPBM) operator by combining the advantages of the PA operator and the BM operator. The new operators proposed in this paper enrich the integration methods for GFSS, expand the application field of the PBM operator, and provide a new method for multi-attribute decision-making problems. The decision-making method proposed in this paper can be applied to fields such as supplier selection and evaluation, product program selection and evaluation, and the recommendation of talent by human resources departments.
Literature Review
In 1965, Zadeh [16] proposed fuzzy set theory. This theory regards the object under investigation and the fuzzy concept reflecting it as a certain fuzzy set, establishes an appropriate membership function for it, and analyzes the fuzzy object through the related operations and transformations of fuzzy sets. In 1999, Molodtsov [4] introduced soft set theory, a mathematical tool for solving uncertain problems which can be widely used in economics, engineering, physics and other fields. Maji et al. further studied the theory of soft sets: they defined the operations of soft sets [17], applied soft set theory to the solution of mathematical decision-making problems [18], and combined fuzzy sets with soft sets to introduce FSS [9]. In order to capture the influence of decision makers' cognition on the effectiveness of the information they provide, Majumdar [10] further proposed GFSS on the basis of fuzzy soft sets. In recent years, the theoretical study and application of generalized fuzzy soft sets have attracted the attention of many scholars. Among them, Chen et al. [5] applied the Bonferroni mean operator to GFSS and proposed the generalized fuzzy soft set Bonferroni mean (GFSSBM) operator, which solves the problem of group decision-making under the limited cognition of decision-makers. Dey and Pal [11] introduced the concept of generalized multi-fuzzy soft sets and applied it to decision-making problems. Agarwal et al. [12] extended the intuitionistic fuzzy soft set (IFSS) using intuitionistic fuzzy sets (IFS) and defined the generalized intuitionistic fuzzy soft set (GIFSS), which can provide both the evaluation against given criteria and the host's evaluation of the data. Xu et al. [6,7] combined the extreme learning machine and GFSS to establish an ensemble credit scoring model. Li et al. [8] combined GFSS with hesitant fuzzy sets and proposed the generalized hesitant fuzzy soft set (GHFSS). Due to the dynamic development of multi-criteria assessment methods, fuzzy criteria are being increasingly considered by scholars. Bazzocchi et al. [19] proposed a method to prioritize space debris by using multi-criteria decision-making (MCDM) methods and fuzzy logic. Dong et al. [20] proposed a new fuzzy best-worst method (BWM) based on triangular fuzzy numbers for MCDM, which is very useful for solving multi-attribute decision-making problems in a fuzzy environment. Thakur et al. [21] depicted an MCDM problem and presented the steps of the VIKOR approach within the Pythagorean fuzzy framework. The information fusion operator for fuzzy numbers is a useful tool to integrate all independent input variables into a composite total value. The existing literature on the integration methods and applications of GFSS is mostly proposed when the attributes are independent of each other. In many actual decision-making problems, the degrees of correlation between different attributes may differ, exhibiting, for example, complementarity, redundancy, or preference relationships. The BM operator, proposed by Bonferroni [14] in 1950, is a mean-type, bounded ensemble operator; it can effectively capture the interrelationships between the input variables and aggregate multiple input variables into one. In recent years, the BM operator has been widely used in different multi-attribute decision-making problems. For example, Wei et al. [22] studied uncertain linguistic BM operators, and the generalized BM operator was proposed by Yager [23].
The intuitionistic fuzzy BM operator was defined by Xu and Yager [24]. Liu and Zhang defined four kinds of intuitionistic uncertain linguistic arithmetic Bonferroni mean (IULABM) operators [8]. In addition, the PA operator [13] is also an integration operator that can capture the correlations between the existing data and cognitive information. It uses the support relationships between the input data to calculate the attribute weights, which can effectively reduce the impact of abnormal data on decision results and make the processing of decision information more objective and fair; it has therefore received widespread attention. For example, Liu et al. [25] proposed the generalized neutrosophic number weighted power average (GNNWPA) operator to solve multi-attribute group decision-making (MAGDM) problems, and a PA integration operator was applied to the intuitionistic fuzzy number (IFN) environment by Xu [26]. To comprehensively utilize the advantages of the BM operator and the PA operator, He et al. [15] combined the two and proposed the PBM operator. PBM operators are now used in various fuzzy environments, such as hesitant fuzzy sets [15], intuitionistic fuzzy sets [27,28], interval-valued intuitionistic fuzzy sets [29] and linguistic intuitionistic fuzzy sets [30]. However, to the best of our knowledge, there is no research on how to use the PBM operator to integrate GFSS. Therefore, to enrich the GFSS integration methods and expand the application field of the PBM operator, this paper studies the GFSS integration method based on the PBM operator and proposes two new GFSS integration operators, namely, the GFSPBM operator and the GFSWPBM operator. We carefully discuss and study the excellent properties of these operators. On this basis, the idea of the bidirectional projection method is introduced, and a new GFSS multi-attribute decision-making method is given. The combination of different operators and different environments forms new environment operators, which inherit the advantages of their component operators. We classify the environment operators according to their component operators as follows: (1) the PA operator class [25,26], which can reduce the influence of biased decision-makers' abnormal preference values on decision results; (2) the BM operator class [22,24,31], which can fully consider the interrelationships between attributes; (3) the PBM operator class [15,[27][28][29][30]], which can effectively reduce the influence of abnormal data on decision results and fully consider the correlations between attributes.
In decision-making problems, when we need to eliminate the influence of abnormal data on the decision results, we can use operators of the PA operator class. When we need to consider the correlations between attributes, we can use operators of the BM operator class. When we need to eliminate the influence of abnormal data on decision results while also considering the correlations between attributes, we can use operators of the PBM operator class. When, in addition, we need to account for the uncertainty of fuzzy evaluation information while reducing the influence of abnormal preference values and considering the correlations between attributes, the GFSPBM operator proposed in this paper can be used. The rest of this paper is arranged as follows. To ease the discussion, we first introduce the definitions of GFSS and of the PA, BM and PBM operators in Section 2. In Section 3, we propose the GFSPBM and GFSWPBM operators and carefully analyze and discuss their excellent properties. In Section 4, we introduce the idea of the bidirectional projection method and provide a multi-attribute decision-making method based on the GFSWPBM operator. In Section 5, we provide a practical application example of software selection, which shows the feasibility of our decision-making method; we compare and analyze different operators to verify the applicability of the proposed method, and analyze the sensitivity of the decision-making process. Finally, the conclusion is given in Section 6.
Preliminaries
Definition 1. (Ref. [10]) Suppose U = {x₁, x₂, ..., x_m} is the universal collection of elements and E = {e₁, e₂, ..., e_n} is the universal set of parameters; F(U) is the set of all fuzzy soft sets over U. The pair (F, E) is called a soft universe. Let F : E → I^U, and suppose γ is a fuzzy subset of E, that is, γ : E → I = [0, 1], where I^U is the set of all fuzzy subsets of U. Let F_γ be the mapping F_γ : E → I^U × I defined by F_γ(e) = (F(e), γ(e)), where F(e) ∈ I^U. Then we call F_γ a generalized fuzzy soft set over the soft universe (F, E).
At this point, for every parameter e_j, F_γ(e_j) = (F(e_j), γ(e_j)) expresses not only the degree of belongingness of the elements of U in F(e_j) but also the degree of possibility of that belongingness, which is represented by γ(e_j). So F_γ(e_j) can be expressed as

F_\gamma(e_j) = \left( \left\{ \frac{x_1}{F(e_j)(x_1)}, \frac{x_2}{F(e_j)(x_2)}, \ldots, \frac{x_m}{F(e_j)(x_m)} \right\},\ \gamma(e_j) \right),

where F(e_j)(x₁), F(e_j)(x₂), ..., F(e_j)(x_m) express the degrees of belongingness and γ(e_j) indicates the degree of possibility of such belongingness.
Example 1.
We assume U = {x₁, x₂, x₃} is a collection of three cars under consideration. Let E = {e₁, e₂, e₃} be a collection of qualities: e₁ = good appearance, e₂ = cheap, e₃ = good performance.
We define a function F_γ : E → I^U × I accordingly; the membership matrix of F_γ can then be written out. Combined with the operations defined by Chen [5], we give the following concise definitions.

Definition 2. (Ref. [5]) Let F_γ(e_j) = (F(e_j), γ(e_j)) (j = 1, 2) be two GFSSs over (U, E); their operations are defined as in [5].

Definition 3. (Ref. [32]) Let F_γ(e_j) (j = 1, 2) be two GFSSs over (U, E); the degree of support between them is defined as Sup(F_γ(e₁), F_γ(e₂)) = 1 − d(F_γ(e₁), F_γ(e₂)), where d(·,·) is a normalized distance measure.

Definition 4. (Ref. [13]) Let X = (x₁, x₂, ..., x_n), x_i ∈ [0, 1], i = 1, 2, ..., n. Then the power average operator is given by the aggregation function

PA(x_1, x_2, \ldots, x_n) = \frac{\sum_{i=1}^{n} (1 + T(x_i))\, x_i}{\sum_{j=1}^{n} (1 + T(x_j))},

where T(x_i) = \sum_{j=1, j \neq i}^{n} Sup(x_i, x_j), and Sup(x_i, x_j) represents the degree of support between x_i and x_j and meets the following conditions: (1) Sup(x_i, x_j) ∈ [0, 1]; (2) Sup(x_i, x_j) = Sup(x_j, x_i); (3) Sup(x_i, x_j) ≥ Sup(x_s, x_t) if |x_i − x_j| < |x_s − x_t|.

Definition 5. (Ref. [23]) Let p, q ≥ 0 and X = (x₁, x₂, ..., x_n) be a group of nonnegative real numbers, x_i ∈ [0, 1], i = 1, 2, ..., n. Then the Bonferroni mean operator is given by

BM^{p,q}(x_1, x_2, \ldots, x_n) = \left( \frac{1}{n(n-1)} \sum_{i, j = 1,\, i \neq j}^{n} x_i^{p}\, x_j^{q} \right)^{\frac{1}{p+q}}.

When n = 2 and p = q, the Bonferroni mean and the geometric mean are equal.

Definition 6. (Ref. [15]) Let p, q ≥ 0 and X = (x₁, x₂, ..., x_n) be a group of nonnegative real numbers, x_i ∈ [0, 1], i = 1, 2, ..., n. Then the power Bonferroni mean operator is given by

PBM^{p,q}(x_1, x_2, \ldots, x_n) = \left( \frac{1}{n(n-1)} \sum_{i, j = 1,\, i \neq j}^{n} \left( \frac{n (1 + T(x_i))\, x_i}{\sum_{k=1}^{n} (1 + T(x_k))} \right)^{p} \left( \frac{n (1 + T(x_j))\, x_j}{\sum_{k=1}^{n} (1 + T(x_k))} \right)^{q} \right)^{\frac{1}{p+q}}.
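To make the three ensemble operators concrete, the following sketch evaluates them for crisp numbers in [0, 1]. The support measure Sup(x_i, x_j) = 1 − |x_i − x_j| is one common choice satisfying the three conditions above, not the only possible one:

```python
import itertools
import numpy as np

def support(x):
    """T(x_i) = sum_{j != i} Sup(x_i, x_j), with Sup(a, b) = 1 - |a - b|."""
    x = np.asarray(x, dtype=float)
    return np.array([sum(1.0 - abs(xi - xj) for j, xj in enumerate(x) if j != i)
                     for i, xi in enumerate(x)])

def power_average(x):
    x = np.asarray(x, dtype=float)
    w = 1.0 + support(x)                       # unnormalized PA weights
    return float(np.sum(w * x) / np.sum(w))

def bonferroni_mean(x, p, q):
    n = len(x)
    s = sum(x[i] ** p * x[j] ** q for i, j in itertools.permutations(range(n), 2))
    return (s / (n * (n - 1))) ** (1.0 / (p + q))

def power_bonferroni_mean(x, p, q):
    # PBM: feed the power-weighted arguments n*w_i*x_i into the Bonferroni mean
    x = np.asarray(x, dtype=float)
    w = (1.0 + support(x)) / np.sum(1.0 + support(x))
    return bonferroni_mean(len(x) * w * x, p, q)

data = [0.6, 0.7, 0.65, 0.95]   # last value mimics a biased, abnormally high rating
print(power_average(data), bonferroni_mean(data, 1, 1), power_bonferroni_mean(data, 1, 1))
```

Because the outlier 0.95 receives less support from the other ratings, the PA and PBM values are pulled less strongly toward it than a plain arithmetic or Bonferroni mean would be, which is exactly the moderating behavior described above.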
Generalized Fuzzy Soft Power Bonferroni Mean Operator
Considering that the PA operator can determine attribute weights according to the support relationships between attributes, thereby reducing the influence of biased decision-makers' abnormal preference values on the decision results, while the BM operator can consider the degrees of correlation between different attributes, this chapter combines the two operators according to their characteristics, extends them to the generalized fuzzy soft set environment, and proposes a PBM operator based on generalized fuzzy variables. Definition 7. Let p, q > 0 and let F_γ(e_j) (j = 1, 2, ..., n) be a collection of generalized fuzzy variables.
We call GFSPBM^{p,q}, given by

GFSPBM^{p,q}\big(F_\gamma(e_1), \ldots, F_\gamma(e_n)\big) = \left( \frac{1}{n(n-1)} \bigoplus_{i,j=1,\, i\neq j}^{n} \left( \frac{n\,(1+T(F_\gamma(e_i)))}{\sum_{k=1}^{n}(1+T(F_\gamma(e_k)))}\, F_\gamma(e_i) \right)^{p} \otimes \left( \frac{n\,(1+T(F_\gamma(e_j)))}{\sum_{k=1}^{n}(1+T(F_\gamma(e_k)))}\, F_\gamma(e_j) \right)^{q} \right)^{\frac{1}{p+q}}, \qquad (7)

the generalized fuzzy soft power Bonferroni mean operator, where T(F_γ(e_i)) = Σ_{j=1, j≠i}^{n} Sup(F_γ(e_i), F_γ(e_j)) (i = 1, 2, ..., n), and Sup(F_γ(e_i), F_γ(e_j)) represents the degree of support between the generalized fuzzy variables F_γ(e_i) and F_γ(e_j) and meets the conditions in Definition 4. Theorem 1. Let p, q > 0 and F_γ(e_j) (j = 1, 2, ..., n) be a collection of generalized fuzzy variables; then the aggregate value obtained by the GFSPBM operator is still a generalized fuzzy variable.
Proof. This theorem is easy to prove, and we omit the process. Note 1. If one defines ω_k as

\omega_k = \frac{1 + T(F_\gamma(e_k))}{\sum_{j=1}^{n} (1 + T(F_\gamma(e_j)))},

then ω_k ≥ 0, Σ_{k=1}^{n} ω_k = 1, and Equation (7) can be transformed into Equation (9):

GFSPBM^{p,q}\big(F_\gamma(e_1), \ldots, F_\gamma(e_n)\big) = \left( \frac{1}{n(n-1)} \bigoplus_{i,j=1,\, i\neq j}^{n} \big( n\,\omega_i\, F_\gamma(e_i) \big)^{p} \otimes \big( n\,\omega_j\, F_\gamma(e_j) \big)^{q} \right)^{\frac{1}{p+q}}. \qquad (9)

The GFSPBM operator has excellent properties such as idempotency and commutativity.
Definition 8. Let p, q > 0 and let w = (w₁, w₂, ..., w_n)^T be the weight vector of the attributes e_j (j = 1, 2, ..., n), with w_j ≥ 0 and Σ_{j=1}^{n} w_j = 1. Then the GFSWPBM operator can be defined as

GFSWPBM^{p,q}\big(F_\gamma(e_1), \ldots, F_\gamma(e_n)\big) = \left( \frac{1}{n(n-1)} \bigoplus_{i,j=1,\, i\neq j}^{n} \left( \frac{n\, w_i (1 + T(F_\gamma(e_i)))}{\sum_{k=1}^{n} w_k (1 + T(F_\gamma(e_k)))}\, F_\gamma(e_i) \right)^{p} \otimes \left( \frac{n\, w_j (1 + T(F_\gamma(e_j)))}{\sum_{k=1}^{n} w_k (1 + T(F_\gamma(e_k)))}\, F_\gamma(e_j) \right)^{q} \right)^{\frac{1}{p+q}},

where T(F_γ(e_i)) = Σ_{j=1, j≠i}^{n} w_j Sup(F_γ(e_i), F_γ(e_j)) (i = 1, 2, ..., n), and Sup(F_γ(e_i), F_γ(e_j)) represents the degree of support between the generalized fuzzy variables F_γ(e_i) and F_γ(e_j) and satisfies the conditions in Definition 4. Theorem 4. Let p, q > 0 and F_γ(e_j) (j = 1, 2, ..., n) be a collection of generalized fuzzy variables; then the aggregate value obtained by the GFSWPBM operator is still a generalized fuzzy variable.
Proof. This theorem is easy to prove, and we omit this process.
Similar to the GFSPBM operator, the GFSWPBM operator also has permutation invariance. Several special cases of the GFSWPBM operator are discussed below; it can be found that many existing operators are special cases of the GFSWPBM operator introduced in this paper.
Similarity Measure between GFSSs
To correct the shortcomings of the similarity measures defined in the existing literature, Chen [5] defined a new GFSS similarity in 2020, as shown below. Definition 9. (Ref. [5]) Let U = {x₁, x₂, ..., x_m} be the universal collection of elements and E = {e₁, e₂, ..., e_n} be the universal collection of parameters. Suppose F_γ and G_δ are two GFSSs over the parameterized universe (U, E), F_γ = (F(e_j), γ(e_j)), j = 1, 2, ..., n, and G_δ = (G(e_j), δ(e_j)), j = 1, 2, ..., n. The similarity measure between the GFSSs F_γ and G_δ is given by Equation (13), where F_{ij} = γ_{F(e_j)}(x_i) and G_{ij} = δ_{G(e_j)}(x_i).
Bidirectional Projection
Definition 10. (Ref. [33]) Let alternative A_i (i = 1, 2, ..., m) be denoted as A_i = (a_i1, a_i2, ..., a_in), where a_ij is a fuzzy number indicating the degree to which alternative A_i conforms to attribute e_j. Then the modulus length of the vector corresponding to alternative A_i is

|A_i| = ( ∑_{j=1}^{n} a_ij^2 )^{1/2}.

Definition 11. (Ref. [33]) Suppose DM = (a_ij)_{m×n} is the decision matrix, and A^+ = (a_1^+, a_2^+, ..., a_n^+) and A^- = (a_1^-, a_2^-, ..., a_n^-) are the vectors formed by the positive ideal alternative and the negative ideal alternative, where a_j^+ = max_{1≤i≤m} a_ij and a_j^- = min_{1≤i≤m} a_ij, j = 1, 2, ..., n.
Definition 13. (Ref. [33]) Let the alternatives be A_i = (a_i1, a_i2, ..., a_in), and let A^+ = (a_1^+, ..., a_n^+) and A^- = (a_1^-, ..., a_n^-) be the positive and negative ideal alternatives, respectively. Then the vectors formed by the negative and positive ideal alternatives and by the negative ideal alternative and the alternative are

A^-A^+ = (a_1^+ − a_1^-, ..., a_n^+ − a_n^-),  A^-A_i = (a_i1 − a_1^-, ..., a_in − a_n^-).

The corresponding vector modulus lengths are |A^-A^+| = ( ∑_{j=1}^{n} (a_j^+ − a_j^-)^2 )^{1/2} and |A^-A_i| = ( ∑_{j=1}^{n} (a_ij − a_j^-)^2 )^{1/2}. We then define

cos(A^-A^+, A^-A_i) = (A^-A^+ · A^-A_i) / (|A^-A^+| |A^-A_i|),

the cosine of the angle between A^-A^+ and A^-A_i.
Note 4. (Ref. [33]) The bidirectional projection has the following properties.

Definition 14. (Ref. [33]) Let the alternatives, the positive ideal alternative, and the negative ideal alternative be A_i, A^+, and A^-, respectively. We define

Prj_{A^-A^+}(A^-A_i) = |A^-A_i| cos(A^-A^+, A^-A_i) = (A^-A^+ · A^-A_i) / |A^-A^+|,
Prj_{A_iA^+}(A^-A^+) = (A^-A^+ · A_iA^+) / |A_iA^+|.

These are, respectively, the projection of the vector formed by the negative ideal alternative and the alternative onto the vector formed by the negative and positive ideal alternatives, and the projection of the vector formed by the negative and positive ideal alternatives onto the vector formed by the alternative and the positive ideal alternative.
Note 5. (Ref. [33]) The bigger Prj_{A^-A^+}(A^-A_i) and the smaller Prj_{A_iA^+}(A^-A^+), the closer the alternative A_i is to the positive ideal alternative, and hence the better it is.
Definition 15. (Ref. [33]) To obtain the optimal alternative, the closeness C(A_i) is constructed, in the spirit of the closeness formula of TOPSIS and related methods, as

C(A_i) = Prj_{A^-A^+}(A^-A_i) / ( Prj_{A^-A^+}(A^-A_i) + Prj_{A_iA^+}(A^-A^+) ).

Obviously, the larger C(A_i), the better the alternative A_i, and vice versa. The above decision-making method is defined when every attribute has the same importance. In practical decision-making, the importance of different attributes may differ, so the attributes carry different weights. We therefore define the weighted bidirectional projection method for GFSS.

Definition 16. (Ref. [33]) Suppose the attribute weight vector is w = (w_1, w_2, ..., w_n). The weighted projections Prj_{A^-A^+}(A^-A_i)_w and Prj_{A_iA^+}(A^-A^+)_w are obtained from Definition 14 by replacing each coordinate difference with its weighted counterpart, e.g., w_j(a_ij − a_j^-). Referring to Equation (23), the closeness formula considering the attribute weights, C(A_i)_w, is then defined analogously.
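Continuing the scalar sketch above, the following function illustrates one plausible reading of the weighted bidirectional-projection closeness of Definitions 13-16: coordinate differences are weighted by w_j and the closeness is the TOPSIS-style ratio mentioned in Definition 15. It is a reconstruction for illustration, not a verbatim transcription of Ref. [33].

```python
import numpy as np

def weighted_closeness(dm, w):
    """Weighted bidirectional-projection closeness C(A_i)_w.

    dm : (m, n) array; row i holds the aggregated values a_ij of alternative A_i
    w  : (n,) attribute weight vector
    """
    dm, w = np.asarray(dm, dtype=float), np.asarray(w, dtype=float)
    a_pos, a_neg = dm.max(axis=0), dm.min(axis=0)        # Definition 11
    v = w * (a_pos - a_neg)                              # weighted A-A+ vector
    scores = []
    for row in dm:
        u = w * (row - a_neg)                            # weighted A-A_i vector
        t = w * (a_pos - row)                            # weighted A_iA+ vector
        prj_fwd = (v @ u) / max(np.linalg.norm(v), 1e-12)  # Prj_{A-A+}(A-A_i)
        prj_bwd = (v @ t) / max(np.linalg.norm(t), 1e-12)  # Prj_{A_iA+}(A-A+)
        scores.append(prj_fwd / (prj_fwd + prj_bwd))     # TOPSIS-style closeness
    return np.array(scores)
```

The larger the score, the closer an alternative is to the positive ideal; an alternative equal to A^+ gets a score close to 1.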
Algorithm
In this section, we will introduce a multi-attribute decision-making method based on the GFSS environment that considers the cognition of the decision maker. The algorithm steps are as follows.
Step 1: Suppose the experts use generalized fuzzy soft sets to express their opinions, which contain decision information about the attributes of the alternatives. DM_l (l = 1, 2, ..., L) expresses a judgment for attribute e_j (j = 1, 2, ..., n), which can be indicated as F^l(e_j).
Step 2: Calculate the similarity measures between DMs and derive the DM weights: use Equation (13) to find the similarity coefficient S(F_γ^l, F_γ^k) between DMs. In this way, a consensus matrix of the preferences of all DMs is obtained.
Next, we define the weight coefficient ω_l of DM_l by normalizing the total similarity of DM_l to the other decision makers (Equation (33)), so that DMs whose judgments agree more with the group receive larger weights.

Step 3: Utilize the GFSWPBM operator to aggregate all individual GFSSs F_γ^l(e_j) (l = 1, 2, ..., L) into a comprehensive GFSS F_γ(e_j), then derive the comprehensive overall assessed value for attribute e_j (j = 1, 2, ..., n) of alternative A_i (i = 1, 2, ..., m).
Step 4: Determine the positive ideal alternative A^+ and the negative ideal alternative A^- in the decision matrix of the GFSS F_γ(e_j). Take the adjustment factors γ_j as the attribute weight vector w = (w_1, w_2, ..., w_n), compute C(A_i)_w (i = 1, 2, ..., m) according to Equation (30), and rank the alternatives.
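A compact end-to-end skeleton of Steps 1-4 might look as follows. The similarity and DM-weight formulas stand in for Equations (13) and (33), whose exact forms are given in the paper, and a simple weighted mean stands in for the GFSWPBM aggregation of Step 3; weighted_closeness is reused from the sketch above. All of this is illustrative, not the paper's exact computation.

```python
import numpy as np

def similarity(F_l, F_k):
    # Stand-in for the GFSS similarity of Equation (13): one minus the
    # mean absolute difference of the two membership matrices.
    return 1.0 - np.abs(F_l - F_k).mean()

def dm_weights(gfss_list):
    # Stand-in for Equation (33): each DM's weight is its total similarity
    # to the other DMs, normalized so that the weights sum to 1 (Step 2).
    L = len(gfss_list)
    totals = np.array([sum(similarity(gfss_list[l], gfss_list[k])
                           for k in range(L) if k != l) for l in range(L)])
    return totals / totals.sum()

def decide(gfss_list, attr_weights):
    # Step 3: aggregate individual opinions (weighted mean as a stand-in
    # for the GFSWPBM operator); Step 4: score and rank the alternatives.
    w_dm = dm_weights(gfss_list)
    aggregated = sum(w * F for w, F in zip(w_dm, gfss_list))
    scores = weighted_closeness(aggregated, attr_weights)
    ranking = np.argsort(scores)[::-1]       # best alternative first
    return ranking, scores
```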
Case
We take the group decision-making problem proposed by Wang and Lee and by Zhang [34,35] as an example. To improve work efficiency, the administrators of a university computer center need to choose computer software. Four alternatives U = {A_1, A_2, A_3, A_4} remain on the shortlist. The expert evaluation team includes three DMs, D = {d_1, d_2, d_3}. The DMs evaluate the four alternatives using a group of attributes: the hardware/software is cheap (e_1), it benefits the organization (e_2), migration from the current system is easy (e_3), and the outsourcing software developers have high reliability (e_4). Next, we use the decision-making method proposed in this paper to evaluate these four alternatives.
Step 1: The three evaluation experts (DMs) evaluated the four alternatives A_i (i = 1, 2, 3, 4) using the information provided by generalized fuzzy soft sets. Table 1 shows the results for alternative A_i evaluated by each DM under criteria e_j (j = 1, 2, 3, 4).
Step 2: We calculate the similarity between DMs and use it to calculate the weight of each DM. Using Equation (13), we calculate the similarity measure between DM1 and DM2; in the same way, we calculate the similarity measures between DM1 and DM3 and between DM2 and DM3: S_13 = 0.7426, S_23 = 0.7697.
Using Equation (33), we get the weights of the DMs: ω_1 = 0.3295, ω_2 = 0.3332, and ω_3 = 0.3373.
Step 3: The GFSWPBM operator combines the evaluation information of each DM into the comprehensive decision-making GFSS. To facilitate the calculation, we choose the parameters p = 1, q = 1; the result is shown in Table 2.

Step 4: Calculate the scores C(A_i)_w (i = 1, 2, 3, 4) by Equation (30). Taking A^+ and A^- from the comprehensive decision matrix, we calculate the score of alternative A_1.
The same calculation method gives the scores of A_2, A_3, and A_4: C(A_2)_w = 0.2980, C(A_3)_w = 0.3914, and C(A_4)_w = 0.6515. Ranking the alternatives by these scores, the best alternative is A_4.
Sensitivity Analysis
In the decision-making process, decision makers can choose appropriate parameters p and q according to their own risk preferences. The computed ranking may change as the parameter selection changes. For convenience of calculation, the above analysis was carried out with the parameters p = 1, q = 1.
To reflect the influence of different parameters on the ranking order, we conducted a sensitivity analysis. From Table 3 we find that, as the parameters p and q change, the score values of each alternative change accordingly, but the best alternative is always A_4. When the gap between p and q is large enough, the worst alternative changes from A_1 to A_2. From the structure of the operator, we can easily see that its results are symmetric in p and q (for example, the result for p = 1, q = 2 is the same as the result for p = 2, q = 1).
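The claimed symmetry in (p, q) follows because the double sum in the Bonferroni mean runs over both ordered pairs (i, j) and (j, i); it can be checked numerically with power_bonferroni_mean from the first sketch (the data vector below is invented).

```python
x = [0.3, 0.55, 0.7, 0.9]                     # invented example data
for p, q in [(1, 2), (2, 5), (0.5, 3)]:
    a = power_bonferroni_mean(x, p, q)
    b = power_bonferroni_mean(x, q, p)
    assert abs(a - b) < 1e-12                 # symmetric in (p, q)
    print(f"p={p}, q={q}: PBM = {a:.6f}")
```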
Comparative Analysis with Existing Methods
To better show the advantages of the method proposed in this paper, we further compare it with existing methods, selecting the GFSSWBM operator and the FSSWBM operator from the literature [5] (to facilitate the calculation, we choose the parameters p = 1, q = 1). The comparison results are shown in Table 4.
From the above integration results, it can be seen that the optimal alternative found by the GFSWPBM operator of this article and by the GFSSWBM and FSSWBM operators [5] is in all cases A_4, and the rankings produced by the three operators are roughly similar. However, the score values of this method differ from those of the other methods. The main factor causing the differences in the rankings is that the operators of the above models adopt different information integration mechanisms: although all are based on the idea of the arithmetic average, the operators emphasize different aspects during aggregation. The GFSSWBM and FSSWBM operators do not consider the relationships between data items. The GFSWPBM operator given in this paper combines the advantages of the PA operator and the BM operator: it considers the possible relationships between attributes and also reflects the overall balance of the data, thereby preventing a biased decision maker's anomalous preference values (values that are too large or too small in the original data) from distorting the decision result, making the decision fairer and more objective. At the same time, this article ranks A_1 and A_2 differently, which can provide a new reference angle for judging the quality of alternatives. The main reason for the difference in score values is that this paper uses the weighted bidirectional projection method to calculate the scores, while the GFSSWBM and FSSWBM operators use a weighting method. Observing the calculation results, the weighted bidirectional projection method of this paper provides better discrimination.
Conclusions
Aiming at the problem of group decision-making, this paper takes the cognition of decision makers into account. Generalized fuzzy soft sets are used to address the influence of decision makers' cognition on the validity of the information they provide, and two new aggregation methods for GFSSs are introduced, namely the generalized fuzzy soft power Bonferroni mean (GFSPBM) operator and the generalized fuzzy soft weighted power Bonferroni mean (GFSWPBM) operator. The new operators combine the excellent characteristics of the power average operator and the Bonferroni mean operator: they consider the overall balance of the available data and the possible correlations between attributes. Furthermore, some excellent properties and special cases of the new operators are discussed. On this basis, a multi-attribute decision-making method based on the GFSWPBM operator is given with detailed steps. The weighted bidirectional projection method is introduced to calculate the scores of the alternatives, and the calculation is carried out in an example. The practicability of the method is demonstrated, and its advantages are illustrated by comparison with existing methods.
The decision-making method provided in this article can be further applied in fields such as supplier selection and evaluation, product program selection and evaluation, and talent recruitment in human resources departments, and it has both theoretical and application value.
Informed Consent Statement: Not applicable.
Data Availability Statement: The data presented in this study are contained within the article.
"Mathematics",
"Computer Science"
] |
Study of the Effects of Lanthanum and the Iron Ion on the Acidic Properties of Al-pillared Vermiculite
The synthesis of porous materials has attracted interest for decades due to their applications in catalysis and gas adsorption. Mesoporous clays obtained by insertion of the oligomer [LaxAl13−xO4(OH)24(H2O)12]7+ (where x equals the molar amount of lanthanum), followed by calcination, may show variations in their acidic properties. The analyses showed that increasing the amount of La in the oligomer results in higher acidity, seen in the spectra as characteristic bands in the regions of 1445 and 1545 cm−1, assigned to Lewis and Brönsted sites respectively, with predominance of the latter. The Mössbauer spectroscopy results showed a relationship between the increased symmetry of the octahedral iron (Fe) site and increased lanthanum doping, and consequently a sharp increase in acidity.
Introduction
The first reports on obtaining mesoporous metal oxide pillared clays appeared at the beginning of the 1970s. Since then the subject has been addressed in many studies 1-5. Ions such as the Keggin ion, also called the polycationic oligomer Al13 6,7, which has the empirical formula [Al13O4(OH)24(H2O)12]7+, are intercalated into the lamellae of clays such as smectites and vermiculites by ion exchange 8, and are subsequently calcined to obtain crystalline oxide structures acting as pillars in the internal structure of the clays 9,10.
Textural and catalytic properties of mesoporous clays are related to their intrinsic characteristics: the great exposure of active sites provided by the presence of metal oxide pillars in the interlayer spacing increases the surface area and creates porosity in the material. The study of acidity in pillared clays is of great importance because it may help in understanding catalytic selectivity, which depends on the distribution of acid sites 11,12.
Clay minerals from the smectite group possess a layered structure formed by tetrahedral and octahedral sheets. The tetrahedral sheets contain mainly Si(IV) as the central atom, while the octahedral sites are occupied by Al(III), Fe(III) or Mg(II). Two types of octahedral sheets occur in clay minerals: the dioctahedral type, where two-thirds of the octahedral sites are occupied mainly by Al(III), Fe(III) or Mg(II), and the trioctahedral type, with most of the sites occupied by Mg(II) or Li(I) 13-15. Iron is another important constituent: it is a chemical species naturally occurring in clay minerals of the vermiculite group 16,17.
Generally, the properties of pillared clays depend on the position of the modifying agent. This behavior can be evaluated by different methods, such as IR, Mössbauer spectroscopy, NMR, and others 18. Processes related to the presence of iron in the clay are best understood using Mössbauer spectroscopy.
Material and Methods
Precursor clay samples (vermiculite) are from Paraíba State, Brazil. All of them were dried at 373 K, crushed and sieved to 200 mesh to obtain a material with homogeneous particle size. Vermiculite was treated with 0.8 mol L−1 HNO3 at 353 K, calcined at 873 K for 2 hours, then treated with 0.12 mol L−1 oxalic acid at 353 K under reflux for removal of exchangeable iron; next, 3 mol L−1 NaCl was added to obtain the sodium form (VNa) (Table 1). The aluminum oligomer (Al13) was prepared by addition of 0.2 mol L−1 NaOH to 0.2 mol L−1 AlCl3·6H2O under reflux. Lanthanum-doped aluminum oligomers (LaxAl13−x) were obtained from solutions of 0.2 mol L−1 NaOH, 0.2 mol L−1 AlCl3·6H2O and 0.2 mol L−1 LaCl3·6H2O, where x equals 1 or 2. The oligomers were obtained considering the ratio OH/(Al + La) = 2.4. The oligomer solution, under stirring at 353 K, was added to the VNa sample dispersed in deionized water (10% m V−1) under stirring for 24 hours, yielding the intercalated samples, which were washed with deionized water, dried at 373 K, crushed and calcined at 773 K for 3 hours.
The Lewis and Brönsted-Lowry acidity of the pillared vermiculites was studied using pyridine as a probe molecule by infrared absorption spectroscopy (FT-IR) on samples in tablet form at different temperatures (373 K-673 K). The weight W (g) and diameter D (cm) of each tablet were recorded for the measurement of the Brönsted and Lewis acid site concentrations q_B,L using the equation

q_B,L = A_B,L · π(D/2)² / (ε_B,L · W),

where A_B,L is the integrated area of the band and ε_B,L is the molar extinction coefficient 19,20. Mössbauer spectra were recorded in transmission mode at room temperature with a 57Co radioactive source in a rhodium matrix, mounted on a velocity controller operating in sinusoidal mode from −4 to +4 mm s−1, in order to observe all possible energy transitions of the hyperfine parameters of structural iron (57Fe) nuclei.
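For illustration, the site-concentration equation above can be evaluated as follows; the numerical inputs, including the Emeis-type extinction coefficients, are placeholder values and not the coefficients used in this work.

```python
import math

def acid_sites(A, D_cm, W_g, eps):
    """Acid sites probed by pyridine FT-IR, in umol per gram of sample.

    A     : integrated band area (cm^-1)
    D_cm  : tablet diameter (cm)
    W_g   : tablet weight (g)
    eps   : molar extinction coefficient (cm umol^-1)
    """
    area = math.pi * (D_cm / 2.0) ** 2        # tablet cross-section (cm^2)
    return A * area / (eps * W_g)

# Placeholder example values:
q_lewis = acid_sites(A=3.2, D_cm=1.3, W_g=0.015, eps=2.22)
q_bronsted = acid_sites(A=2.1, D_cm=1.3, W_g=0.015, eps=1.67)
print(f"Lewis: {q_lewis:.0f} umol/g, Bronsted: {q_bronsted:.0f} umol/g")
```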
Acidity characterization
Some authors, such as Pálková et al. 13 and Nunes et al. 21, suggest that the acidic properties of pillared clays are attributed to the presence of SiOH or AlOH groups. An efficient way to investigate this association is the use of organic bases such as pyridine, which, according to Chmielarz et al. 22, has frequently been used in the study of acid sites.
According to Akçay 11 and Benvenutti et al. 23, the distinction between Lewis and Brönsted acidity is based on the position of certain vibrational modes of the pyridine molecule known as 8a and 19b (Table 2). Pyridine is a strong base; proton transfer results in the formation of pyridinium ions when pyridine molecules interact with Brönsted acid sites. Gyftopoulou et al. 24 report that the acidic character of pillared vermiculite samples derives from both Brönsted sites (proton donors) and Lewis sites (electron pair acceptors). Brönsted acidity may be associated with the release of protons during dehydroxylation of the pillars and the vermiculite lamellae, whereas Lewis acidity is assigned to the metal oxide pillars. Layman et al. 25, Shimizu et al. 26 and Yazıcı et al. 27 noted that the region at 1490 cm−1 corresponds to both Lewis and Brönsted acid sites, which also undergo changes in their intensities. However, these changes and the thermal degradation of the Lewis and Brönsted acid sites (Figure 4) are not as significant, and appear to be related to the fact that the lanthanum is not in a central position in the oligomer, and also to the greater stability that Lewis sites acquire when located in this species.
In Figures 1, 2 and 3, the band at 1445 cm−1 of the Lewis sites increases significantly as the amount of incorporated lanthanum increases. Figure 4 shows a significant increase in Lewis acidity related to the higher percentage (% w/w) of lanthanum in the oligomer: the concentration of Lewis sites triples. With twice the lanthanum in the oligomer, the concentration of sites increases approximately eight times.
Mössbauer spectroscopy
The hyperfine parameter values obtained by fitting the Mössbauer spectra are shown in Table 3. All Mössbauer spectra (Figure 5a-d) obtained from powder samples of the natural clay and of the clays pillared with aluminum and aluminum-lanthanum show the presence of Fe3+; only the natural vermiculite (VERM) also presented Fe2+ (Figure 5a).
The Mössbauer spectra (Figure 5) may clarify whether iron affects the thermal stability of the obtained material and whether iron interacts only with the pillared aluminum oligomer (Al13) and the lanthanum-doped aluminum oligomers (LaAl12 and La2Al11).
Al13 PILV, LaAl12 PILV and La2Al11 PILV showed significantly different quadrupole splitting values (Δ) of 1.28, 1.21 and 1.14 mm s−1, respectively, as shown in Table 3. The decreasing values indicate an increase in the order of the octahedral iron site, which could be related to the growing lanthanum doping of the pillars and to the increased acidity.
According to Carriazo et al. 28, there may be slight variations in the parameters δ and Δ_EQ, especially considering that the chemical environment of Fe3+ in the mineral structure may vary slightly, and distortions may occur in the geometry of the Fe3+ sites due to the loss of water or OH groups in the calcination step.
Conclusions
The lanthanum-doped aluminum oligomer samples (LaAl12 PILV and La2Al11 PILV) showed a predominance of Lewis and Brönsted sites at temperatures up to 373 K and 573 K, respectively. The Brönsted sites showed greater thermal stability in all modified clay samples. The Mössbauer spectra indicated a possible relationship between the increased symmetry of the octahedral Fe3+ site, the lanthanum/aluminum doping of the oligomer, and the higher acidity.
Figures 1, 2 and 3 show the spectra of pyridine adsorbed on the vermiculite samples modified with the oligomers Al13, LaAl12 and La2Al11, respectively, at different temperatures: 373 K, 473 K, 573 K and 673 K. The stretch region at 1580-1550 cm−1 and the 19b band at 1455-1440 cm−1 are characteristic of the Brönsted and Lewis sites, respectively (Table 2).
Figure 1. FT-IR spectra of pyridine adsorbed on the sample Al13 PILV.
Figure 3. FT-IR spectra of pyridine adsorbed on the sample La2Al11 PILV.
Table 1. Description of the samples.
Table 2. Pyridine bands adsorbed on the solid surface according to Benvenutti et al. 23.
Table 3. Hyperfine parameters of vermiculite at different preparation steps.
"Materials Science"
] |
MicroRNA Profiling of Activated and Tolerogenic Human Dendritic Cells
Dendritic cells (DCs) belong to the immune system and are particularly studied for their potential to direct either an activated or a tolerogenic immune response. The roles of microRNAs (miRNAs) in the posttranscriptional regulation of gene expression are being increasingly investigated. This study's aim is to evaluate changes in miRNA expression in prepared human immature (iDC), activated (aDC), and tolerogenic (tDC) dendritic cells. The dendritic cells were prepared using GM-CSF and IL-4 (iDC) and subsequently maturated by adding LPS and IFN-γ (aDC) or IL-10 and TGF-β (tDC). Surface markers, cytokine profiles, and miRNA profiles were evaluated in iDC, tDC, and aDC at 6 h and 24 h of maturation. We identified 4 miRNAs (miR-7, miR-9, miR-155 and miR-182) that were consistently overexpressed in aDC after 6 h and 24 h of maturation, and 3 miRNAs (miR-17, miR-133b, and miR-203) plus the miR-23b cluster solely expressed in tDC. We found 5 miRNAs (miR-10a, miR-203, miR-210, miR-30a, and miR-449b) upregulated and 3 miRNAs (miR-134, miR-145, and miR-149) downregulated in both tDC and aDC. These results indicate that miRNAs are specifically modulated in human DC types. This work may contribute to identifying specific modulating miRNAs for aDC and tDC, which could in the future serve as therapeutic targets in the treatment of cancer and autoimmune diseases.
Introduction
Different cell types constitute a group termed dendritic cells (DCs) that modulates the balance between innate and adaptive immunity. Basically, DCs direct T lymphocytes either to activate or to suppress a specific immune response in the body. Novel medical therapy strategies aim to exploit the abilities of DCs to restore body homeostasis. The particular DC subsets are shaped by various maturation stimuli that affect DCs during their life cycle. Recently, in vitro prepared DCs have been cultured with different immunomodulatory agents, including immunoactivators associated with microbial patterns (e.g., bacterial peptidoglycan or lipopolysaccharide (LPS)) and proinflammatory mediators (e.g., IL-6, IFN-γ) [1,2]. In contrast, tolerogenic DC (tDC) induction protocols include a combination of IL-10 and/or TGF-β or agents such as 1α,25-dihydroxyvitamin D3 or dexamethasone [3,4]. As a result, fully matured activated DCs (aDCs) produce high levels of proinflammatory cytokines such as IL-6, IL-12, and IFN-γ, upregulate the coreceptors CD80/CD86, and have become promising candidates for modern anticancer therapies.
On the other hand, tDCs perpetuate a steady state characterized by antigen presentation without T cell activation. In cell-to-cell interactions, tDCs convert naïve T cells to regulatory T lymphocytes, induce anergy in autoreactive T cells, and expand naturally occurring T regulatory and T suppressor lymphocytes [5,6]. For these reasons, tDCs may be used in autoimmune disorders or graft rejection therapies. Since DCs were first identified [7,8], significant progress [9][10][11] has been achieved in understanding their biology.
In our study we focused on the first 6 h of maturation, as it has been shown that DCs cease secretion of crucial immunostimulatory factors after 24 h of cultivation [12,13]. External stimuli are transduced into the intracellular space of the DCs, and the end point of this intracellular process is gene expression that is posttranscriptionally regulated by microRNAs (miRNAs).
The miRNAs are small noncoding RNAs that inhibit the translation of their target mRNAs by binding to them [14]. They are essential in a variety of developmental and physiological processes [14] and play a crucial role in cancerogenesis, the maintenance of homeostasis, immune cell development and differentiation, antibody production, and inflammatory mediator release [15-18]. Each miRNA may control the expression of hundreds of target genes, and several miRNAs may directly regulate one mRNA. Therefore, different miRNAs may influence target genes coordinately or synergistically [19,20] and fine-tune the immune response, including DC function [15-18,21].
In this study, we focused on the first 6 h of DC maturation and compared DC phenotypes using cytokine production and cell surface markers as well as miRNA profiles. As in previous reports [1,2,22], our results indicate that LPS and IFN-γ induce DC activation whereas TGF-β and IL-10 lead to a tolerogenic DC character during maturation in vitro. Upon maturation, we identified 4 miRNAs (miR-7, miR-9, miR-155, and miR-182) consistently upregulated in aDCs and 4 other miRNAs (miR-17, miR-133b, miR-203, and miR-23b) in tDCs. We also found 4 miRNAs (miR-10a, miR-203, miR-210, and miR-449b) upregulated and 3 miRNAs (miR-134, miR-145, and miR-149) downregulated in both tDCs and aDCs. To the authors' best knowledge, this is the first time the miRNA profile of human tDCs generated with IL-10 and TGF-β has been described.
In conclusion, this work may contribute to identifying key tolerogenic miRNAs as potential therapeutic targets in the treatment of autoimmune disease or as immunomodulators after organ transplantation, and activating miRNAs to characterize and/or activate DCs used for cancer immunotherapy.
Preparation of Human Dendritic Cells.
Buffy coats from healthy donors were obtained from the Department of Transfusion Medicine and Blood Bank, University Hospital (Brno, Czech Republic). All subjects' blood samples were taken after the subjects signed an informed consent form approved by the local ethics committee. Peripheral blood mononuclear cells (PBMCs) were isolated by density gradient centrifugation on Histopaque (Nycomed Pharma, Oslo, Norway). The PBMCs (1 × 10^6 cells/mL) were placed in a 5 mL Petri dish (Nunclon) for 2 h of plastic adherence. To obtain iDCs, adherent cells were cultured in 5 mL of complete X-VIVO 10 medium (BioWhittaker, Walkersville, USA) supplemented with 2 mM glutamine (BioWhittaker), 3% heat-inactivated human AB serum (Sigma-Aldrich, USA), 800 IU/mL GM-CSF (PeproTech, USA), and 500 IU/mL IL-4 (ProSpec, Israel) for 6 days; the medium was changed after 3 days. On day 6, iDCs were matured for a further 6 h or 24 h with 50 ng/mL IFN-γ (ProSpec, Israel) and 200 ng/mL LPS (Calbiochem, MA, USA); the resulting DCs were called aDCs. tDCs were matured using 1 ng/mL IL-10 (PeproTech, USA) and 2 ng/mL TGF-β (PeproTech, USA) for the same time periods as the aDCs. DC measurements and harvests were carried out on day 6 of cultivation (0 h) and at 6 h and 24 h of maturation.
Cytokine Profiles in DC Culture Media
The DC culture media supernatants collected after 6 days of cultivation (0 h) and after 6 h and 24 h of maturation were assayed for the following cytokines: IL-6, IL-10, IL-12p70, IFN-γ, and TNF-α. Human simplex kits and a human basic kit, i.e., bead-based analyte detection assays, were used for the quantitative detection of these cytokines (Bender MedSystems GmbH, Austria). The samples were assessed according to the manufacturer's protocol. Analyte concentrations were proportional to the fluorescence intensity measured on a FACSArray flow cytometer system (BD Biosciences, NJ, USA). Data were acquired using BD FACSArray System Software version 1.0.4 (BD Biosciences, NJ, USA) and analyzed using FlowCytomix Pro 2.4.
RNA Isolation and TaqMan Low Density Array (TLDA).
Total RNA enriched with small RNAs was isolated using the mirVANA miRNA Isolation Kit (Ambion Inc., Austin, TX, USA) according to the manufacturer's protocol. Total RNA concentration and purity were controlled by UV spectrophotometry (A260/A280 < 2.0) on a NanoDrop ND-1000 (Thermo Scientific, Wilmington, DE, USA). 100 ng of total RNA was reverse-transcribed into cDNA using a Multiplex RT set pool (Applied Biosystems, CA, USA) and loaded onto a TLDA Human miRNA Panel containing 384 wells (368 TaqMan MicroRNA Assays enabling the simultaneous quantification of 365 human miRNAs and 3 endogenous controls) according to the manufacturer's protocol. Quantitative miRNA expression data were acquired and normalized using the ABI Prism 7900HT Sequence Detection System (Applied Biosystems, CA, USA).
Expression Data Analysis.
Data are presented as the mean values ± SD of individual experiments. Because of the nonparametric distribution of the data, the Mann-Whitney test was used to assess statistical differences between two experimental groups; in all cases a p value < 0.05 was considered significant. The miRNA gene expression values were normalized to the endogenous control RNU6B, and relative expression values were obtained using the ΔCt method with SDS software v 2.3 (Applied Biosystems, CA, USA). The relative expression levels of the target miRNAs were then compared between DC types and time points. The Spearman correlation test was used to correlate miRNA expression data with interleukin production.
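As a small illustration of the ΔCt normalization and the twofold-change criterion used here, consider the following sketch; the Ct values are invented.

```python
import numpy as np

def relative_expression(ct_target, ct_control):
    # Delta-Ct normalization against the endogenous control (RNU6B),
    # converted to a linear scale as 2 ** (-dCt).
    return 2.0 ** -(np.asarray(ct_target, float) - np.asarray(ct_control, float))

# Invented Ct values for one miRNA in iDC vs. aDC replicates:
idc = relative_expression([28.1, 27.8, 28.4], [22.0, 21.9, 22.1])
adc = relative_expression([25.9, 26.2, 25.7], [22.0, 22.1, 21.9])
fold_change = adc.mean() / idc.mean()
print(f"fold change aDC/iDC: {fold_change:.2f}")   # > 2 counts as a change
```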
Maturation with IL-10 and TGF-β Leads to Decreased CD80 and CD83 Expression.
To measure the effect of the maturation cocktails on DCs, we analyzed the profiles of the T cell stimulator molecule HLA-DR and the induced costimulatory (CD80/CD86) and maturation (CD83) markers using FACS analysis. The aDCs treated with LPS and IFN-γ showed a classical mature DC phenotype that was already evident at 6 h of maturation. The aDCs expressed significantly higher levels of the DC differentiation marker CD83 (p < 0.001) and the costimulatory molecule CD80 (p < 0.001) compared with untreated iDCs, and both markers were even more strongly expressed in 24 h aDCs (Figure 1). IL-10 and TGF-β caused significant CD80 downregulation at 24 h of maturation in tDCs compared to iDCs (p < 0.05). We found a significant decrease of CD83 in tDCs compared to iDCs only at 6 h (p < 0.05, Figure 1). CD83 in tDCs remained downregulated in comparison to both 6 h and 24 h aDCs (p = 0.007, Figure 1). Moreover, we found a gradual reduction of CD80 expression from 6 to 24 h in tDCs compared to 6 h (p < 0.0001) and 24 h (p < 0.000001) aDCs (Figure 1). The CD86 costimulatory receptor and the HLA-DR molecule did not show significant differences between tDCs and aDCs (data not shown).
Cytokine Profiles.
It has been shown that DCs fundamentally affect the development of naïve T lymphocytes into different well-described subpopulations such as T helper cells, including the Th1, Th2, and Th17 types, T cytotoxic cells (Tc), and T regulatory cells (Tregs) [13]. T cell differentiation is conducted by DC cytokine production, and the cytokines IL-12 and IL-10 are critical in Th1 and Treg cell differentiation. Considering that a predominance of either IL-12p70 or IL-10 is indicative of activatory or tolerogenic immune response mechanisms, we estimated the IL-12p70/IL-10 ratio from the absolute concentrations obtained in each experiment for each particular DC subtype. In aDCs, IL-12p70 expression constantly increased from 6 h aDCs (p < 0.05) to 24 h aDCs (p = 0.0005) when compared to iDCs (Figure 2(a)), and the IL-12p70/IL-10 ratio reflected the upregulation of IL-12p70 over IL-10 (Figure 2(c)). Nevertheless, in tDCs a reverse proportion between IL-12p70 and IL-10 was found: the IL-12/IL-10 ratio clearly showed suppression of IL-12p70 production and upregulation of IL-10. The proinflammatory cytokines TNF-α, IL-6, and IFN-γ were higher at both 6 h and 24 h in aDCs compared with tDCs and the untreated iDC control (Figure 2(d)). Taken together, these data indicate that LPS and IFN-γ trigger activation of human DCs while TGF-β and IL-10 trigger a tolerogenic character in human DCs during maturation in vitro.
miRNA Profiles in Human aDCs and tDCs in Comparison to Untreated DC Control

To study miRNA level changes in the DCs generated in vitro, we isolated miRNA-enriched total RNA and performed quantitative PCR based on the TaqMan low density array for miRNA expression analysis. Only changes in miRNA expression with a fold change higher than twofold in all samples were analyzed.
We then compared the miRNA profiles of aDCs and tDCs after 6 and 24 h of maturation. After 6 h of maturation, we found expression changes in 31 different miRNAs; among them, 27 miRNAs were upregulated and only 4 downregulated in tDCs in comparison to aDCs (Figures 4(a) and 4(c)).
In total, these data indicate different miRNA profiles in aDCs and tDCs upon maturation in vitro. Four upregulated miRNAs (miR-7, miR-9, miR-155, and miR-182) are important for activation in aDCs compared to tDCs and iDCs after 6 h and 24 h of maturation. On the other hand, 4 different upregulated miRNAs (miR-17, miR-133b, miR-203, and miR-23b) are more important for tDC induction than for aDCs and iDCs after 6 h of maturation.
Correlation of miRNA Expression with Cytokine Production.
According to the known roles of the selected miRNAs, we analyzed the correlation between miRNA expression and cytokine production. We found a significant correlation between IL-12 production and miR-221 expression (p = 0.0003; r = −0.9273) and a clear trend between IL-12 production and miR-155 expression. We did not find any correlation between IFN-γ production and the expression of miR-29c.
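A correlation of this kind can be computed with SciPy as shown below; the paired measurements are invented stand-ins for the per-sample IL-12 concentrations and miR-221 expression values.

```python
from scipy.stats import spearmanr

# Invented paired per-sample measurements:
il12 = [12.1, 48.3, 95.0, 150.2, 210.7, 305.4]   # IL-12, pg/mL
mir221 = [0.91, 0.62, 0.41, 0.28, 0.17, 0.09]    # relative expression

rho, p = spearmanr(il12, mir221)
print(f"Spearman rho = {rho:.4f}, p = {p:.4f}")  # strong negative correlation
```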
Discussion
In vitro culture of DCs with desired characteristics has made marked progress during the last decade. Not only different cultivation strategies and monitored DC phenotype characteristics but also novel molecular evaluation methods have been developed. In this study, we aimed to connect phenotypes with miRNA profiling of in vitro prepared human blood monocyte-derived DCs from healthy subjects. To the knowledge of the authors, this is the first time human immature DCs subsequently treated with LPS and IFN-γ or with IL-10 and TGF-β for 6 or 24 hours have been evaluated using miRNA profiling.
At 6 h, aDCs already strongly express the costimulatory receptors CD80/CD86 together with high levels of the CD83 maturation marker necessary for T cell activation and survival [23]. The activatory phenotype of aDCs is reinforced by the gradual growth of the Th1 response promoter IL-12 and the proinflammatory cytokines IFN-γ, IL-6, and TNF-α. To date, miRNAs have been described as regulating innate and adaptive immune system responses, immune cell development and differentiation, and the prevention of autoimmune diseases [24-28]. By analyzing miRNA expression profiles in DCs using qRT-PCR, we identified 4 miRNAs (miR-7, miR-9, miR-155, and miR-182) uniquely overexpressed in aDCs treated for 6 h or 24 h with LPS and IFN-γ compared to untreated immature iDCs and to tDCs cultured with IL-10 and TGF-β. Recent investigations in different immune cell types have shown that TLR and TNF receptor activation results in the rapid expression of miRNAs including miR-9, miR-99b, miR-146a, miR-146b, and miR-155 [29]. Specifically, studies of LPS-induced miR-9, miR-146a, and miR-155 expression demonstrate a central role for the proinflammatory nuclear transcription factor NF-κB [29,30]. Upregulation of miR-182, connected with the immune system, was reported in the leukocytes of sepsis patients and in the clonal expansion of activated helper T cells [31,32].
Currently the most studied miRNA in DCs is miR-155. In our experiments, aDCs upregulated miR-155, which agrees with the results obtained in previous studies [33,34]. Lu et al. found a correlation between the levels of miR-155 and miR-221 and IL-12 production, cell development, and a proapoptotic effect in maturated DCs [33]. In our work we also found a significant correlation between IL-12 production and the expression level of miR-221, and a trend with the miR-155 level.
Furthermore, our data show that tDCs cultured for 6 h and 24 h with IL-10 and TGF-β have decreased expression of the surface markers CD80 and CD83 compared to aDCs and iDCs. The expression of the HLA-DR molecule, a potential stimulator of T cell activation, remained unaffected in our experiments in both tDCs and aDCs, in agreement with other studies [35-37]. The T cell stimulator molecule CD86 showed a trend toward downregulation in tDCs compared to aDCs. Both 6 h and 24 h tDCs produce only small amounts of the proinflammatory cytokines TNF-α, IL-6, and IFN-γ together with restrained IL-12, which affects the Th1 response in a negative manner and facilitates Th2 balance and T regulatory cells [38]. To the best of our knowledge, we describe here, for the first time, the miRNA profiles of human tDCs stimulated for 6 h and 24 h with IL-10 and TGF-β. We found 3 miRNAs uniquely elevated in tDCs compared to iDCs and aDCs (miR-17, miR-133b, and miR-203) and the miR-23b cluster (miR-23b and miR-27b). To date, miR-17 has been studied in the context of the autoimmune disease multiple sclerosis (MS), in which its expression is significantly reduced in peripheral leukocytes and overexpressed miR-17 target genes are involved in activating the immune system [39].
Our results also showed an elevated miR-133b level in tDCs. Knowledge of miR-133b function in immunocompetent cells is still very limited, and the only reference deals with miR-133b expression in correlation with Th17 cell differentiation [40]. miR-379 was studied by the Kallioniemi group in bone metastasis of breast cancer; ectopic miR-379 expression in the breast carcinoma cell line MDA-231 decreases the expression of genes including some involved in the TGF-β signaling pathway [41]. In 6 h maturated tDCs we found decreased miR-23b, similar to Zheng's results: that study of the tolerogenic character of miR-23b in DCs shows that its elevation is stimulated by ovalbumin, and that miR-23b expression inhibits DC maturation and reduces DC antigen uptake as well as the expression of DC surface markers [42]. Of note, our results show elevated miR-27b, expressed from the same cluster as miR-23b, in 24 h tDCs.
Furthermore, upregulation of the miR-148 family (miR-148a, miR-148b, and miR-152) was observed in DCs stimulated by LPS. The role of the miR-148 family in activated DCs is to inhibit MHC II expression, the production of proinflammatory cytokines, and DC-mediated CD4+ T cell expansion [46]. Let-7f was reported as a potential regulator of IL-23 receptor expression in memory CD4+ T cells, which subsequently produce higher levels of IL-17 in comparison to naïve T cells [47].
On the other hand, we identified 4 miRNAs (miR-99b, miR-135a, miR-147, and miR-214) that were downregulated in 24 h tDCs when compared to 24 h aDCs; these miRNAs were upregulated in 24 h aDCs and iDCs. Upregulated miR-147 was identified in activated macrophages after stimulation of multiple TLRs, and its expression is probably capable of downregulating excessive inflammatory responses [48].
These miRNAs fine-tune the immune response through cell cycle arrest and the induction of apoptosis of activated or tolerogenic cells, thereby maintaining the homeostasis of the immune system.
In conclusion, aDCs generated in the presence of LPS and IFN-γ for 6 h and 24 h displayed an immunoactivatory phenotype, and their miRNA profiling showed data similar to previous studies of proactivatory miRNAs [34]. Our results show that a minimum of 6 h of treatment is sufficient to generate aDCs able to produce high levels of factors leading to a Th1 response, and that this capacity extends over the next 24 h. We describe here the set of miRNAs induced in human blood monocyte-derived DCs cultured with IL-10 and TGF-β. Finally, our results suggest that DC miRNA profiling profitably supports DC phenotype and functional studies. More intensive investigation of miRNA function in the future might bring greater insight into the regulation of the immune system and potential therapeutic possibilities for oncological or autoimmune diseases.
"Biology",
"Medicine"
] |